API Fortress Command Line Tool

Welcome to the API Fortress Command Line Tool!

The tool itself: https://github.com/apifortress/afcmd/releases

The documentation for the API that the tool leverages: https://apifortressv3.docs.apiary.io/

The tool, or rather pair of tools, is designed to reduce the legwork that goes into executing or uploading API Fortress tests. The following readme explains each part of the process.

AFCMD allows a user to easily integrate API Fortress testing into other workflows. Example use cases are:

  • Executing API Fortress tests from a CI/CD tool
  • Incorporating API Fortress tests into a Git version control workflow.
  • Pushing test code from an IDE to the API Fortress platform.

All of these scenarios, and more, can be accomplished with the tool.

Let’s take a look at the two major components of the tool:

APIF-RUN

Run allows us to execute tests on the platform and act on the resulting data. We can run tests via the API in either an authenticated or unauthenticated state. By passing credentials, we receive a more verbose test result, which we can output to a file. We also have access to all of the standard options that API Fortress provides in its API (silent run, dry run, etc.).

RUN EXECUTION FLAGS

  • run-all – RUN ALL – This will execute all of the tests in a chosen project.
  • run-by-tag – RUN BY TAG – This will execute all tests with a selected tag (requires the -t flag to set tag)
  • run-by-id – RUN BY ID – This will execute a test with a specific ID (requires the -i flag to set id)
  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL or the key from a configuration file (set the path to the config file with the -c flag)

For example, to run all of the tests in a specific project, we would use the following command string:

python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook

RUN OPTION FLAGS

  • -S – SYNC – This will provide a response body with the result of the test.
  • -f – FORMAT – This will determine the format of the test result output (JSON, JUnit, Bool). REQUIRES SYNC MODE (-S)
  • -d – DRY – This will cause the test run to be a dry run.
  • -s – SILENT – This will cause the test to run in silent mode.
  • -o – OUTPUT – This will write the result of the test to a local file. You must provide the path to the file to be created. Remember your filetype! (.json/.xml)
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as the script (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -t – TAG – This is how you pass a tag for RUN BY TAG mode.
  • -i – ID – This is how you pass an ID for RUN BY ID mode.
  • -e – ENVIRONMENT – This is how you pass environmental/override variables. The format is key:value. You can pass multiple sets of environmental variables like so: key:value key1:value1 key2:value2
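
For example, a hypothetical tag-based run that waits for the result and writes a JUnit report would combine the flags above (the webhook URL, tag, and output path are placeholders):

python apif-run.py run-by-tag http://mastiff.apifortress.com/yourWebHook -t smoke -S -f junit -o some/route/results.xml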

APIF-PUSH

Push allows us to push tests into API Fortress. When tests are downloaded from the platform, they come as 2 XML files (unit.xml & input.xml). We can use this tool to push those files back to an API Fortress project, either individually or in bulk.

PUSH EXECUTION FLAGS

  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL or the key from a configuration file (set the path to the config file with the -c flag)

PUSH OPTION FLAGS

  • -p – PATH – This provides the path to the test file you wish to upload. You can pass multiple paths.
  • -r – RECURSIVE – This flag will make the call recursive; It will dive through the directory passed with -p and grab every test in all of its subdirectories.
  • -b – BRANCH – This allows you to specify a Git branch that these test files are attached to. Default is master.
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as the script (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -T – TAG – This allows you to pass tags to be appended to the test after it is pushed. This will OVERWRITE ANY EXISTING TAGS. Multiple tags can be passed.
  • -t – ADD TAG – This will allow you to add additional tags to a test that already has tags attached.
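
For instance, a hypothetical push of a single test’s pair of files with replacement tags might look like this (the webhook, paths, and tags are placeholders; we assume multiple tags are passed space-separated, as with -e):

python apif-push.py http://mastiff.apifortress.com/yourWebHook -C my@username.com:password1 -p tests/unit.xml -p tests/input.xml -T smoke nightly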

CONFIGURATION FILE

A configuration file is a YAML file that is formatted as follows:

hooks:
  - key: cool_proj1
    url: https://mastiff.apifortress.com/app/api/rest/v3/A_WEBHOOK
    credentials:
      username: (your username)
      password: (your password)
  - key: uncool_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/ANOTHER_WEBHOOK
    credentials:
      username: (another username)
      password: (another password)
  - key: unauth_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/JUST_A_WEBHOOK_WITHOUT_CREDENTIALS
test_directory: /tests

Once you create a configuration file, you can pass its path with -c and use a key from the file in place of the normal webhook URL. If you also pass credentials, they’ll override the credentials in the configuration file. If you don’t include credentials in the config file, you can pass them manually or leave them out entirely.
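
For example, using the cool_proj1 key from the sample file above, an authenticated run of a whole project becomes:

python apif-run.py run-all cool_proj1 -c path/to/config.yml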

EXAMPLES

Execute all of the tests in a project and output the results to a JUnit/XML file via an authenticated route:

python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook -S -C my@username.com:password1 -f junit -o some/route/results.xml

Push all of the tests from a directory and all of its subdirectories to a project:

python apif-push.py http://mastiff.apifortress.com/yourWebHook -C my@username.com:password1 -r -p some/directory/with/tests

Execute one test in a project by ID, using a config file for credentials and webhook:

python apif-run.py run-by-id config_key -c path/to/config/file -i testidhash8924jsdfiwef891

NOTES

  • The order of the optional arguments passed does not matter.
  • Remember, in a bash environment, anything that has a space in it needs to be wrapped in quotes. This goes for paths, filenames, etc.

POST-RECEIVE SCRIPT FOR GIT

This post-receive script is meant to assist in incorporating API Fortress into your Git workflow. Dropping the file into the hooks directory of your .git directory will cause newly committed API Fortress test code to be pushed to the API Fortress platform. The test_directory key in the config.yml lets the script know which folder the tests themselves are located in. It will then watch for commits to this folder and push the appropriate code to the platform.
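
As an illustration only, a minimal hook along these lines would push every test under a directory after each receive (all paths and the cool_proj1 key are placeholders; the actual provided script may behave differently):

#!/bin/bash
# Hypothetical post-receive hook: after a push, send the tests in the
# repository's test directory to API Fortress. Paths and the config key
# are placeholders.
CONFIG=/path/to/config.yml
TEST_DIR=/path/to/repo/tests
python /path/to/apif-push.py cool_proj1 -c "$CONFIG" -r -p "$TEST_DIR"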

Updating an On Premises Instance

Updating an On Premises instance of API Fortress is done as follows (the complete command sequence is shown after the list):

  • Back up the databases. (Optional, but recommended) 
  • Stop the containers
    • From the ‘core’ directory, issue a docker-compose stop command and wait for the operation to complete. This command stops the currently-running Docker containers.
  • Pull the updated containers
    • From the ‘core’ directory, issue a docker-compose pull command and wait for the operation to complete. This command pulls updated images from the API Fortress Docker repository.
  • Restart the containers
    • From the ‘core’ directory, issue a ./start_all.sh command to restart the containers and wait for the operation to complete. This script restarts the containers in the proper order.
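
Taken together, and assuming sudo is required for Docker on your host, the update amounts to:

cd core
sudo docker-compose stop
sudo docker-compose pull
sudo ./start_all.sh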

Once the preceding steps are completed, the On Premises version of API Fortress will be fully updated.


Best Practices for Disaster Recovery

Note: This document is referential only to the API Fortress-HA (High Availability) deployment.

Components:

Databases:

  • PostgreSQL
  • MongoDB

Message queues:

  • RabbitMQ

API Fortress:

  • API Fortress Dashboard
  • Microservices (mailer, scheduler, connector)
  • Remote workers (downloaders, core-server)

Resiliency / High availability

Databases can be replicated using their specific mechanisms, and the systems will connect to the clusters. Each replica carries the full database in a streaming-replication fashion.

Therefore, a failure (software, hardware, network) of any of the instances will not cause a service disruption.

When a replica is brought back to life, whether on the same server or another, its specific replication system will synchronize the new instance.

Databases are the only components in need of a persistent state; therefore, the machines running them need to provide persistent storage.

The message queue is stateless (therefore does not require persistent storage) and queues and exchanges are replicated using the high availability internal mechanism. Services can connect to both so that if one replica goes down, the other will take care of the work without service disruption.

The API Fortress dashboards are stateless (with the exclusion of in-memory web sessions) and can be scaled horizontally and load balanced.

The API Fortress microservices are stateless single-instance services that can be respawned on any server without any specific concern.

The API Fortress remote workers are stateless multi-instance services that can be scaled horizontally and load balanced.

Backup and Restore

Backup

There are two primary types of backups:

  • Taking snapshots of the persisted database disks.
    The procedure is runtime-dependent (AWS, GCloud, OpenShift, etc.).
  • Dumping databases to files for classic restore.
    These procedures are described here. The actual commands may vary based on the runtime.

Restoration

  • Given the snapshot of a disk, the runtime should provide the ability to create a new disk from it.
  • Given the dump files, you can follow the procedure described here. The actual commands may vary based on the runtime.

Note: No service except the two databases requires access to persistent storage.

Disaster recovery

Databases:

  • In case of a database being unreachable for connectivity issues, the system will continue working using a replica. When the issue is solved, the system will sync itself automatically. No service degradation is expected.
  • In case of a system failure, disk failure, or data corruption, spin a new server in the same cluster with the same hostname. This will trigger the database automatic replication. No service degradation is expected.
  • In case of a global failure of all replicas, API Fortress will stop working. Spin a new database cluster starting from a backup and restart all services. Service degradation is expected. Data loss may occur, depending on your backup strategy.

Message queues:

  • In case of a message queue being unreachable for connectivity issues, the system will continue working using a replica. A respawn of the failing message queue will bring it back to the cluster. No service degradation is expected.
  • In case of a system failure, spin a new server in the same cluster with the same hostname. No service degradation is expected.
  • In case of a global failure of all replicas, API Fortress will stop executing scheduled tests and will not send notifications. Start a new message queue cluster. A restart of all services is not required but recommended. Service degradation is expected.

Load Agent Deployment

A Load Agent is a server instance that provides the simulated users in a load test. Load Testing cannot function without at least one Load Agent.

The provided files (contained in core-server.tgz) are all that you need in order to deploy a Load Agent. This tutorial will explain what changes need to be made to the files within in order to properly deploy the Load Agent.

Before starting the process, there is a step that needs to be taken for clients who received their API Fortress containers before the introduction of Load Testing.

Step 0 (Not for all users) – Activate the Node Container

Open the docker-compose.yml in the main API Fortress directory. It can be located at /core/bin/docker-compose.yml

  • Paste the following code snippet in after the #RABBITMQ section and before the #APIFORTRESS DASHBOARD section:
#NODE
apifortress-node:
   image: theirish81/uitools
   hostname: node.apifortress
   networks:
      - apifortress
   domainname: node.apifortress
   labels:
      io.rancher.container.pull_image: always
  • In the links section of the #APIFORTRESS DASHBOARD configuration, add the following line:
- apifortress-node:node.apifortress
  • Save and close the docker-compose.yml.
  • Open the start_all.sh file in a code editor. It is also located in /core/bin.
  • Copy and paste the following and overwrite the entire contents of the file:
#!/bin/bash
sudo docker-compose up -d apifortress-postgres
sleep 5s
sudo docker-compose up -d apifortress-mongo
sleep 5s
sudo docker-compose up -d apifortress-rabbit
sudo docker-compose up -d apifortress-node
sleep 30s
sudo docker-compose up -d apifortress
sleep 1m
sudo docker-compose up -d apifortress-mailer
sudo docker-compose up -d apifortress-scheduler
sudo docker-compose up -d apifortress-connector
  • Your API Fortress instance can now utilize the API Fortress Node Container, which powers Load Testing.

Step 1 – Unzip the provided file (core-server.tgz)

First, unzip the provided file.


Step 2 – Define the maximum users per Load Agent

Users per agent is the maximum number of virtual users that each Load Agent can provide.

It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation. 

  • Locate and open the file named application.conf. It is located in core-server/etc.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired maximum number of users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to the same number. These two values should match (see the sketch after this list).
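
As a sketch only, and assuming the file uses standard HOCON key = value syntax with a target of 150 virtual users (both the syntax and the number are assumptions; verify against your copy of application.conf), the edit could be scripted like this:

# Hypothetical: set both pool values to 150 virtual users
sed -i 's/fixed-pool-size = .*/fixed-pool-size = 150/' core-server/etc/application.conf
sed -i 's/nr-of-instances = .*/nr-of-instances = 150/' core-server/etc/application.conf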

Step 3 – Configure Config.yaml

  • Locate and open config.yaml. It is located at core-server/etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same server, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different servers, you can replace the baseURL with the actual URL of the Dashboard; that is to say, the URL you would use to access it via a web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu.
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key”


  • Copy the API Key to line 5 of config.yaml.
  • Copy the Secret to line 6 of config.yaml.

Step 4 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. You must change it to something human-readable. This is the internal name of the engine.
  • After modifying the CRN, copy the value to line 11 of config.yaml.
  • Copy the secret to line 12 of config.yaml.
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.


Step 5 – Deploy the Load Agent

At the desired server location, use the “docker-compose up -d” command to deploy the Load Agent container. After the operation is complete, the Load Agent will be visible to your API Fortress Load Tests. 
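
Assuming core-server.tgz was unpacked into a core-server/ directory containing its own docker-compose.yml, the deployment might look like this:

cd core-server
sudo docker-compose up -d
sudo docker-compose logs -f   # optional: follow the startup logs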

On-Premises: Backing Up Your Data

When running an on-premises installation, you will certainly want to run periodic backups of all your data.

In this article, we will provide you with the scripts to perform a data dump of API Fortress. You will then need to wire them up in your scheduled operations system, such as cron.

We will assume you have a running API Fortress installation, ability to sudo to root privileges and a general idea of how Docker works.

Backup

1. On the host server, create a directory that will host your backup. In this example it’s /var/local/backups, but it can be anything. Make sure the directory has read/write permissions Docker can use.

2. Run (change the directory according to your needs):

sudo docker run --rm --net apifortress --link core_apifortress-mongo_1:mongo.apifortress -v /var/local/backups:/backup mongo:3.0.14 bash -c 'mongodump --out /backup --host mongo.apifortress'

3. Run (change the directory according to your needs):

sudo docker run --rm --net apifortress --link core_apifortress-postgres_1:postgres.apifortress -v /var/local/backups:/backup postgres:9.5.5 bash -c 'pg_dump --dbname postgresql://apipulse:jk5112@postgres.apifortress:5432/apipulse > /backup/postgres.sql'

4. Access the /var/local/backups directory. You will find both an “apipulse” directory and a “postgres.sql” file. This is your entire backup. You can now zip it and copy it wherever your backup procedures require. At this point, we suggest clearing the backup directory so it is empty for the next backup iteration.
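
To schedule these dumps, as suggested above, you could wrap the two docker run commands in a script and register it with cron. A hypothetical crontab entry (the script and log paths are placeholders):

# Hypothetical crontab entry: run the backup nightly at 2:00 AM
0 2 * * * /usr/local/bin/apif_backup.sh >> /var/log/apif_backup.log 2>&1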

Backup restore

1. In the core/ directory, stop all services by issuing:

sudo docker-compose stop

2. Remove all data files from your persistent volume on the host machine. EXTREME CAUTION: this will erase all your current data. Make sure that the backup you are going to restore is available. If unsure, just MOVE the current data to another location.

3. Activate MongoDB and PostgreSQL by issuing:

sudo docker-compose up -d apifortress-postgres
sudo docker-compose up -d apifortress-mongo

4. We will assume your backup is located in /var/local/backups. Run the following commands:

sudo docker run --rm --net apifortress --link core_apifortress-mongo_1:mongo.apifortress -v /var/local/backups:/backup mongo:3.0.14 bash -c 'mongorestore /backup --host mongo.apifortress'
sudo docker run --rm --net apifortress --link core_apifortress-postgres_1:postgres.apifortress -v /var/local/backups:/backup postgres:9.5.5 bash -c 'psql -h postgres.apifortress --dbname postgresql://apipulse:jk5112@postgres.apifortress:5432/apipulse < /backup/postgres.sql'

5. Verify that files are now present in the persistent volume location of your host machine.

6. You can now start the platform by running the ./start_all.sh script.

On-Premises: Deployment Using Docker

Introduction

This manual will describe a normal deployment procedure for API Fortress on-premises, using a Docker container. It is important to remember that the goal of this guide is to be as thorough as possible. It may seem long but the process is fairly straightforward.

Also, don’t fret as we can provide as much help and guidance as you need. We are just a video conference away!

You have been provided with apifortress_starter.zip, which contains the following files:
/create_network.sh
/core/docker-compose.yml
/core/tomcat_conf/conf/
/core/start_all.sh
/downloader/docker-compose.yml
/data/connectors.tgz
/data/help.tgz
/data/import_help.sh
/data/import_connectors.sh

1. Copy the Provided Script Files

Copy the provided core and downloader directories to the server and then type cd core/.

2. Configure the Core Services

Before anything else, let’s configure each service and prepare the environment.
Most configuration keys are stored within the core/docker-compose.yml file.

PostgreSQL
The only special configuration is the storage location on the host machine.
Create a directory that will host the PostgreSQL data on the host machine, and edit the configuration file with that location. Replace “/data/postgres” with your path:

volumes:
  - /data/postgres:/var/lib/postgresql/data

MongoDB
As with PostgreSQL, you are required to provide a storage location and edit the volumes key accordingly. Replace “/data/mongodb” with your path:

volumes:
  - /data/mongodb:/data/db

API Fortress
There are a lot of configuration keys here. None of them should be left empty (a fake value is fine if you’re not using a certain feature). See the API Fortress Configuration Guide below for an explanation of each key.

The essential keys for bootstrap (with dummy values) are:

Admin User Creation
adminEmail: patrick@company.com
adminFullName: Patrick Poulin

Company Creation
defaultCompanyName: Your Company

Base URL that will respond to HTTP requests
grailsServerURL: http://yourcompany.com/app

API Fortress Mailer
See the API Fortress Mailer section of the configuration guide below.

API Fortress Downloader
To be configured after the dashboard bootstrap; see the API Fortress Downloader section of the configuration guide below.

3. Install Docker

Install Docker on a supported Linux distribution following the official instructions:
https://docs.docker.com/engine/installation/
The API Fortress stack runs successfully on Docker 1.12.

4. Install Docker Compose

Docker Compose is a utility that simplifies the deployment and management of complete stacks. Follow the official instructions for installation:
https://docs.docker.com/compose/install/

5. Provide API Fortress with Your DockerHub Username

For API Fortress to grant you access to the API Fortress registries, your DockerHub username is required. If you don’t have a DockerHub account, create one at https://hub.docker.com/

6. Login

Type sudo docker login and input your DockerHub credentials.

7. Create the API Fortress network

The default API Fortress subnet is 172.18.0.0/16. Make sure this subnet is not already in use; if it is, edit it in the create_network.sh script. Issue sudo ./create_network.sh to create a virtual subnet for API Fortress.
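
For reference, the script presumably wraps a Docker network creation along these lines (the exact contents of create_network.sh may differ; the apifortress network name matches the one used by the backup commands later in this document):

sudo docker network create --subnet=172.18.0.0/16 apifortress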

8. Launch the Services

Before you launch any service, we strongly recommend running docker-compose pull from the “core” and “downloader” directories to download all packages and preemptively surface any connection issues.

To launch all core services, just run the start_all.sh script. It will take some time, but it will ensure every dependency is up and running before launching API Fortress.

9. Verify the Deployment

At the end of the process, the API Fortress dashboard should be up and running on the host server on port 80. You can also check for errors in the logs by issuing the sudo docker-compose logs command.

The admin user login details are as follows:

  • username: the email address provided in the docker-compose configuration, in the adminEmail field;
  • password: ‘foobar’; change it as soon as you log in.

10. Configure and Run the Downloader

The API Fortress downloader is the agent that retrieves the resources to be tested. Downloaders can be installed in various locations, so factors such as latency and download time can be measured by remote consumers.

In this configuration path, we are deploying a downloader in the same server as API Fortress, and it will serve as the default downloader.

1. Edit the downloader/docker-compose.yml file and take note of the value of the ipv4_address configuration key.

2. Log in to API Fortress with the admin user, access the API Fortress admin panel by clicking the “user” icon in the top right, then click Admin Panel.

3. Choose “Downloaders” from the list of actions and click on the “Add Downloader” button.

4. Fill in the fields:
Name: Write a recognizable name.
Location: A representation of where the downloader is, e.g. Chicago.
Latitude / Longitude: The geographical position of the downloader.
Last Resort: Check this to make it the default downloader used.
URL: The address of the downloader, followed by the port (default 8819) and the path /api. In our example, given the ipv4_address above, the downloader address would be https://172.18.1.1:8819/api
API Key, API Secret: Write these two values down for later use.

5. Edit the downloader/docker-compose.yml file and enter the API Key and API Secret.

6. Go to the downloader/ directory and issue the sudo docker-compose up -d command.

API Fortress Configuration Guide

A description of each configuration field you may need to alter.

API Fortress Dashboard

Bootstrap

– adminEmail: The admin user email address, also used as login.
– adminFullName: The admin’s full name.
– defaultCompanyName: The company name.

System

– grailsServerURL: the URL the server will respond to.
– dbHost: MongoDB host.
– psqlhost: PostgreSQL host.
– rabbitHost: RabbitMQ host.

Note: in case you’re considering using an external PostgreSQL provider, the psqlUsername and psqlPassword parameters are also available. The database name is fixed and it’s apipulse.

Email

– apifortressMailUseSES: set to ‘true’ if you will use Amazon SES to send emails. When set to ‘false’, SMTP is used instead.
– apifortressMailFrom: the email address that will be used to dispatch administrative emails.
– apifortressMailSmtpHost: SMTP host to dispatch administrative emails.
– apifortressMailSmtpUsername: SMTP username.
– apifortressMailSmtpPassword: SMTP password.
– apifortressMailSmtpPort: SMTP port.
– amazonkey: Amazon key, if you’re using Amazon SES to send emails.
– amazonsecret: Amazon secret, if you’re using Amazon SES to send emails.
– apiaryClientId: client ID, if you’re using Apiary services.
– apiarySecret: secret, if you’re using Apiary services.
– license: the license string.

API Fortress Mailer

– twilioSid: SID, if you’re sending SMSes via Twilio.
– twilioToken: token, if you’re sending SMSes via Twilio.
– smsFrom: the phone number of the SMS sender, if you’re sending SMSes via Twilio.
– mailFrom: the email address that will be sending notification emails.
– mailUseSES: ‘true’ if you’re sending emails via Amazon SES; ‘false’ if you’re using SMTP.
– amazonKey: the Amazon key, if you’re sending emails via Amazon SES.
– amazonSecret: the Amazon secret, if you’re sending emails via Amazon SES.
– mailSmtpHost: the SMTP host.
– mailSmtpPort: the SMTP port.
– mailSmtpUsername: the SMTP username.
– mailSmtpPassword: the SMTP password.
– apifortressServerURL: the URL the server will respond to.

API Fortress Downloader

– apikey: the API key, as shown in the admin panel.
– secret: the API secret, as shown in the admin panel.
– port: the HTTP port the server will listen on, in HTTP mode.
– rabbitHost: the RabbitMQ host, when running in active mode.
– rabbitPort: the RabbitMQ port, when running in active mode.
– rabbitSsl: ‘true’ if RabbitMQ will need to communicate over SSL when running in active mode.
– rabbitUsername: the RabbitMQ username, when running in active mode.
– rabbitPassword: the RabbitMQ password, when running in active mode.
– use_rabbit: ‘true’ to run in active mode.
– use_http: ‘true’ to use the internal HTTP server (passive mode).
– use_ssl: ‘true’ if the internal HTTP server has to run over SSL.

The network configuration is also important as the IP address may be used for internal communication.

networks.apifortress.ipv4_address: the reserved IP address in the API Fortress subnet.

Appendix: Importing help tools and connectors

The API Fortress database starts out empty, but the provided package gives you the option to import the help tools and the connectors. These operations are meant to be run once the API Fortress stack is fully functional.

Import Help: From the /data directory, run the import_help.sh script.
Import Connectors: From the /data directory, run the import_connectors.sh script.
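
Assuming both scripts are executable, this amounts to:

cd /data
sudo ./import_help.sh
sudo ./import_connectors.sh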

Appendix: Tweaking Tomcat Configuration

If you need to tweak the Tomcat configuration, you will need to mount the Tomcat conf/ directory in your system.
1. Edit the configuration files you need to change in the core/tomcat_conf/conf directory.
2. Mount the directory by uncommenting the following lines in the core/docker-compose.yml file:

# volumes:
# - ./tomcat_conf/conf:/usr/local/tomcat/conf

Dashboard over SSL

To have Tomcat running over SSL:
1. Copy your JKS keystore containing your certificate in the core/tomcat_conf/conf directory
2. Edit the core/tomcat_conf/conf/server.xml file and uncomment the following block:

<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" />

3. Edit the block by adding the following attributes:

keystoreFile="/usr/local/tomcat/conf/keystore.jks"
keystorePass="thePasswordHere"

4. Mount the directory by uncommenting the following lines in the core/docker-compose.yml file:

# volumes:
# - ./tomcat_conf/conf:/usr/local/tomcat/conf

5. In the core/docker-compose.yml file, change the port declaration to:

ports:
  - 443:8443/tcp
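
After these changes, the dashboard container must be recreated for the new port mapping to take effect. A sketch, assuming the dashboard service is named apifortress as in start_all.sh:

cd core
sudo docker-compose up -d apifortress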

On-Premises: System Requirements

The one-server setup for API Fortress on-premises is a quick way to get things started in a protected environment. While not ideal for availability or performance, it works exactly as expected and provides all the features of the cloud version.

Minimum Hardware Requirements
CPU: Intel-based high-frequency quad-core processor
Memory: 16 GB RAM. Memory significantly impacts the speed of queries on big data sets; 32 GB is the recommended setup.
HDD: 250 GB. All API Fortress reports and metrics are stored; 10 million reports plus 30 million metrics can require up to 250 GB of disk space.

Software Requirements
OS: a recent Linux distribution

Classic Deployment
Java: Oracle JDK 1.8 series
Tomcat: 7 series
PostgreSQL: 9.5 series
MongoDB: 3.2 series
RabbitMQ: 3.5 series

Docker Deployment
Docker: 1.12

Processes
PostgreSQL: relational database for structured data
MongoDB: document database for reports and metrics
RabbitMQ: message queue
Tomcat: dashboard and engine application
AFScheduler: the API Fortress scheduler
AFMailer: the API Fortress mailer
AFConnector: dynamic data dispatcher for notifications
AFDownloadAgent: the downloader agent (actually performing HTTP calls)

Networking
We assume this deployment will be able to access the services to be tested.

Further Connections
HTTP(80) and/or HTTPS(443) inbound traffic enabled for every location that will need access to the dashboards. Ports and services may vary based on system requirements.

Docker
For the Docker deployment to succeed and to ease further updates, the server has to be able to communicate with https://hub.docker.com

The On-Premises Engine

What Is It

API Fortress can also come in an on-premises version. On-premises means that an API Fortress engine will live inside your infrastructure and will interact with your APIs from the inside, as opposed to the cloud solution where everything resides on the API Fortress infrastructure at apifortress.com.

Why

There are multiple reasons for having an on-prem engine, and these are some of the most common:

  • Security Restrictions
  • Access to Private / Sensitive Information
  • Large Companies That Want an Unlimited Number of Deployments, Tests, and Users

But there’s another reason that makes it suitable for a number of users: customization.

Customization

API Fortress is extremely modular, and most functionalities can be replaced with different code behaving in a different way. Some use cases are:

  • Storing the results of the tests in a dedicated archive, such as DynamoDB, a private MongoDB instance, or object storage.
  • Customizing the chain of alerts with internal tools.
  • Storing the code of the tests in a location that is not the API Fortress cloud.
  • Adding the ability to ingest and analyze exotic data types.

All of this is done with a few lines of Java. The engine itself can work as an SDK to build what you need. Or you can ask our team; we are glad to help.

Deployment

A simple Docker deployment. We ask a handful of questions, set up a configuration file for you, and then you deploy with Docker. The system requirements are described here.

Operations

The engine operates exactly the same way API Fortress does in the cloud.