Backing Up Your Data – Kubernetes (On-Premises)

When running an on-premises installation, you will want to take periodic backups of all your data.

In this article, we will provide the commands to perform a data dump of API Fortress. You will then need to wire them into your scheduled operations system, such as cron.

We will assume you have a running API Fortress installation, the ability to sudo to root, and a general idea of how Kubernetes works.

If you are using EKS, you can simply take snapshots of the PostgreSQL and MongoDB disks directly through AWS (EBS snapshots). If you would like to take data dumps via Kubernetes instead, please see the instructions below.

On the machine where you have Kubernetes installed and running, execute the following command to list the running pods:

kubectl get pods

We will start by backing up the PostgreSQL data. Please run the following two commands in order:

kubectl exec -ti postgres-0 -- bash -c "pg_dump -U apipulse > apifortress_postgres.sql"
kubectl cp postgres-0:apifortress_postgres.sql apifortress_postgres.sql

Next, we will back up the MongoDB data. Please run the following two commands in order:

kubectl exec -ti mongodb-0 -- mongodump
kubectl cp mongodb-0:dump dump

Note that MongoDB dumps can become quite large, so we recommend writing these dumps to a dedicated volume, or to a separate disk mounted solely for backups.
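The commands above can be wrapped into a single script suitable for cron. This is a minimal sketch: the pod names match the defaults used in this guide, while the backup directory, timestamp format and 7-day retention are assumptions you should adapt to your environment.

```shell
#!/bin/sh
# Sketch of a nightly API Fortress backup job (adapt pod names and paths).
set -e
BACKUP_DIR="${BACKUP_DIR:-/var/backups/apifortress}"
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"

# PostgreSQL dump
kubectl exec postgres-0 -- bash -c "pg_dump -U apipulse > apifortress_postgres.sql"
kubectl cp postgres-0:apifortress_postgres.sql "$BACKUP_DIR/apifortress_postgres-$STAMP.sql"

# MongoDB dump
kubectl exec mongodb-0 -- mongodump
kubectl cp mongodb-0:dump "$BACKUP_DIR/mongo-$STAMP"

# Prune backups older than 7 days (retention policy is an assumption)
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} +
```

A crontab entry such as `0 2 * * * /usr/local/bin/apif-backup.sh` would then run it nightly at 2 AM.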

Updating the API Fortress License Key

If you need an updated API Fortress license please reach out to your account manager or sales@apifortress.com

The below instructions will show you where to replace the license key in the configuration file:

For Docker users:

      1. Find the “docker-compose.yml” file located in the “core” directory.
      2. Locate the section labeled “APIFORTRESS DASHBOARD”.
      3. Towards the bottom of the section you will find the key “license:”.
      4. Replace the string to the right of the “:”, being mindful to keep the single quotes around the license key.

For Kubernetes users:

      1. Find the “apifortress.yml” file located in the “root” directory.
      2. Locate the section labeled “API Fortress Dashboard”.
      3. Towards the bottom of the section you will find “- name: license”.
      4. Below that you will see “value:”. Replace the string to the right of the “:”, being mindful to keep the single quotes around the license key.
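For illustration, the relevant fragment of apifortress.yml looks roughly like this (the key value is a placeholder):

```yaml
        - name: license
          value: 'XXXX-XXXX-XXXX-XXXX'   # replace with your license key, keeping the quotes
```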

Deployment – Configure the DNS for the Mocking service

Regardless of the deployment method used, to use the Mocking service you will need to make one change in your DNS.

Assuming your API Fortress dashboard is mapped to the domain:

apif.yourcompany.com

A new CNAME entry needs to be created, as in:

CNAME *.apif.yourcompany.com > apif.yourcompany.com

This is necessary because mocked services are accessed via subdomains of the dashboard.
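In a BIND-style zone file for yourcompany.com, the entry would look roughly like this (a sketch; the exact syntax varies by DNS provider):

```
*.apif   IN   CNAME   apif.yourcompany.com.
```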

Deployment – Kubernetes (On-Premises)

Before we start:

  • This tutorial assumes that the reader is familiar with some standard procedures in Kubernetes (creating secrets, creating config-maps etc.) If you are not familiar with these processes, please refer to the Kubernetes documentation.
  • The memory settings configured for each container should be treated as the minimum for a production environment. Wherever applicable, this document provides both a minimum setting for a test-drive environment and an optimal setting for a larger-scale production environment.
  • If your cluster is not allowed to communicate with DockerHub or is incapable of logging in, you will need to manually pull (from DockerHub) and push (to your private repository) images.
  • This guide and the provided starter configuration files assume the deployment will occur in the apifortress project/namespace. If this is not the case for your setup, please update all hostname references that include apifortress, such as postgres.apifortress.svc or tools.apifortress.svc.
  • The whole guide and annexed configuration files have been built upon hands-on experience with the Google GCloud Kubernetes service. Some tweaking may be required if using a different provider.

Starting the Main Services

Step 1 – Accessing a private Repository:

Create a secret in Kubernetes that contains the DockerHub user credentials for the account shared with API Fortress. As the repositories on the APIF side are private, you must use the same account that was submitted with the configuration survey. You can find further information at https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Step 2 – Configure apifortress.yml:

    1. Ensure that the cluster is capable of supporting the default image memory limits. The apifortress container is set for 8GB of memory. The optimal memory setting is 16GB, the minimum memory setting is 4GB.
    2. memorySettings (optional parameter) describes the minimum and maximum heap memory the process can use. Xmx should be set to 1/2 of the available memory of the container. You don’t need to tweak these values if you don’t change the overall available memory.
      This is an example of the setting to be placed among the environment variables:
        #- name: memorySettings
        #  value: '-Xms1024m -Xmx4098m'

3. Ensure that all critical key/value pairs have been defined. The configuration files should be populated with the values submitted with the pre-configuration survey, but for safety’s sake ensure that grailsServerUrl has been passed the URL that the instance will be reached through, that license has been passed a license key, and that adminEmail, adminFullName and companyName have been defined. These values are all found in the env section of the apifortress.yml file. While it is not critical to deployment, it is strongly recommended that the user configures the mailer service as well. The relevant section in env:

        - name: apifortressMailEnabled
          value: 'true'
        - name: apifortressMailFrom
          value: info@example.com
        - name: apifortressMailSmtpHost
          value: ''
        - name: apifortressMailSmtpPassword
          value: ''
        - name: apifortressMailSmtpPort
          value: '25'
        - name: apifortressMailStartTLS
          value: 'true'
        - name: apifortressMailSmtpUsername
          value: info@example.com
        - name: apifortressMailUseSES
          value: 'false'

The settings in the AFMAILER microservice section should also be completed to allow the platform to send emails.
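As a reference, the critical keys mentioned above appear in env roughly as follows (all values are placeholders to be replaced with your own):

```yaml
        - name: grailsServerUrl
          value: 'https://apif.yourcompany.com'
        - name: license
          value: 'XXXX-XXXX-XXXX-XXXX'
        - name: adminEmail
          value: 'admin@yourcompany.com'
        - name: adminFullName
          value: 'Jane Admin'
        - name: companyName
          value: 'Your Company'
```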

4. The Load Balancer is the mechanism for communicating with the platform. This can be replaced with a NodePort or Ingress if required, according to the configuration of your system.

# >>> APIFORTRESS loadBalancer service >>>
apiVersion: v1
kind: Service
metadata:
  name: apifortress
spec:
  type: LoadBalancer
  selector:
    app: apifortress
  ports:
  - port: 8080
  loadBalancerIP: '[cluster-ip-change-it]'
  sessionAffinity: ClientIP
---

5. Ensure that all the ports exposed in the descriptor match your expectations. As a default, the dashboard will run on port 8080 and the liveness probe will test that to determine the service availability.

Step 3 – Configure dependencies.yml

Each of the database services in dependencies.yml has a preconfigured definition for the amount of disk space allocated to the service. These values can be edited to match the available disk space that you wish to provide for said services.
For MongoDB the proposed memory setting is 8Gi; the minimum is 1Gi and the optimal is 16Gi. Because of how MongoDB caches data in memory, any increase in memory will result in better performance.
For PostgreSQL the proposed memory setting is 1Gi, which is also considered optimal. The minimum is 512Mi.

NOTE: volume claims may need to be tweaked based on your service provider.
NOTE: MongoDB will store most of the data produced by the platform, so make sure the disk size is reasonable for your use case

  volumeClaimTemplates:
  - metadata:
      name: mongovol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
  volumeClaimTemplates:
  - metadata:
      name: psqlvol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Step 4 – Start the main services

Start the dependency services by typing:

kubectl create -f dependencies.yml

Once these services have spun up, you can start the main API Fortress platform with:

kubectl create -f apifortress.yml
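Before moving on, you can watch the pods come up (assuming the apifortress namespace); all pods should eventually reach the Running state:

```shell
kubectl get pods -n apifortress --watch
```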

Step 5 – Verify

Access the platform at the URL provided in the apifortress.yml file. Log in using the default admin username and the default password (“foobar” – change it ASAP). You should now be able to access the API Fortress Dashboard.

Configure the Downloader

The API Fortress downloader is the agent that retrieves the resources to be tested. Downloaders can be installed in various locations, so factors such as latency and download time can be measured by remote consumers.

Step 1 – Create a Downloader in API Fortress

Login to API Fortress with the admin user, access the API Fortress admin panel by clicking the “user” icon in the top right, then click Admin Panel.


Choose “Downloaders” from the list of actions and click on the “Add Downloader” button.

Step 2 – Configure the Downloader

Fill in the following fields:
Name: Write a recognizable name.
Location: A representation of where the downloader is, e.g. Chicago.
Latitude / Longitude: The geographical position of the downloader.
Last Resort: Check this to make it the default downloader used.
URL: The address of the downloader, followed by port (default 8819) and path /api. In our Kubernetes deployment, our downloader address would be https://downloader.apifortress.svc:8819/api
API Key, API Secret: Write these two values down for use later.

Step 3 – Move the key and secret values to downloader.yml

Edit the downloader.yml file and enter the API Key and API Secret provided by the platform in the previous step.

Step 4 – Start the Downloader

Start the downloader with:

kubectl create -f downloader.yml

Step 5 – Verify the Downloader

Open the HTTP client from the Tools drop-down menu in API Fortress. Attempt to contact a site that is accessible from this server environment. API Fortress should now be able to successfully communicate with other websites.

Configure the Load Agent

Step 1 – Define the maximum users per Load Agent

Users per agent is the maximum number of virtual users that each Load Agent can provide.

It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation. 

  • Locate and open the file named application.conf. It is located in the core-server-etc directory.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired number of maximum users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to match the desired number of maximum users per agent. These two values should match.
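For example, to allow up to 50 virtual users per agent, the two settings would look like this (a sketch of only the relevant lines of application.conf; everything else stays unchanged):

```
# line 14
fixed-pool-size = 50

# line 48 (must match fixed-pool-size)
nr-of-instances = 50
```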

Step 2 – Configure config.yml

  • Locate and open config.yml. It is located in core-server-etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same cluster, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different clusters, you can replace the baseURL with the actual URL of the Dashboard. That is to say, the URL you would use to access it via web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key” 


  • Copy the API Key to line 5 of config.yml.
  • Copy the Secret to line 6 of config.yml.

Step 3 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. It is strongly recommended that you change it to something human-readable but unique in the list. This is the internal name of the Engine.
  • After modifying the CRN, copy the value to line 11 of config.yml
  • Copy the secret to line 12 of config.yml
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.

Step 4 – Creating the Config-Map

Create a config-map called ‘core-0’ from the core-server-etc directory.
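Assuming the apifortress namespace, the config map can be created directly from the directory, e.g.:

```shell
kubectl create configmap core-0 --from-file=core-server-etc/ -n apifortress
```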

Step 5 – Tweak the memory settings if necessary

The memory settings may vary a lot based on the number of virtual users the load agent is meant to support. The default 2Gi is generally OK for up to 50 virtual users. Note that as the process is memory, CPU and network intensive, better results are achieved by introducing more load agents rather than increasing the size of each one.
For the very same reason, it’s generally pointless to run multiple load agents on the same server.

Step 6 – Start the Load Agent service

Start the load agent service with:

kubectl create -f core-server.yml

Step 7 – Verify the deployment

Access the Load Testing tool by clicking on the Tools dropdown at the top of the view in API Fortress. The Load Agent that you just deployed should be visible on the right side of the screen.

General tweaks

HTTPS to HTTP

If you’re having the dashboard go through a gateway, it is likely that you will want to run the container over HTTP and the gateway over HTTPS.
In that case the grailsServerUrl in the configuration will need to use HTTPS. The API Fortress dashboard performs a hard check on the protocol at each request, which will always appear to be HTTP, causing an illegal redirect. This is done for security reasons.

To overcome this issue you will need to override one configuration file in the Tomcat configuration via a configMap. This is not the default in the API Fortress Dashboard image on purpose, again, for security reasons.

We will assume that the gateway will forward the x-forwarded-proto header.

The file to be added is located here: https://github.com/apifortress/containers/blob/master/kubernetes_gcloud/tomcat_conf/context.xml

  1. Tweak the file according to your needs
  2. Create a config map for the single file named tomcat-context
  3. Change the apifortress service in the apifortress.yml file as follows:
    Add this fragment within the containers element:

    volumeMounts:
    - name: tomcat-context
      mountPath: /usr/local/tomcat/conf/context.xml
      subPath: context.xml
  4. Add this fragment in the spec element:
    volumes:
    - name: tomcat-context
      configMap:
        name: tomcat-context

By doing so, API Fortress will accept the original protocol as the actual protocol being used.
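For reference, the tomcat-context config map from step 2 can be created from the downloaded file like so (assuming the apifortress namespace):

```shell
kubectl create configmap tomcat-context --from-file=context.xml -n apifortress
```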

Deployment – Red Hat OpenShift (On-Premises)

Before We Start:

  • This tutorial assumes that the reader is familiar with some standard procedures in OpenShift (creating secrets, creating config-maps.) If you are not familiar with these processes, please refer to the OpenShift documentation.
  • The memory settings configured for each container should be treated as the minimum for a production environment. Wherever applicable, this document provides both a minimum setting for a test-drive environment and an optimal setting for a larger-scale production environment.
  • If your cluster is not allowed to communicate with a server on the internet, the “Create ImageStream” process will need to be performed by manually pulling (from DockerHub) and pushing (to your image streams) images.
  • This guide and the provided starter configuration files assume the deployment will occur in the apifortress project/namespace. If this is not the case for your setup, please update all hostname references that include apifortress, such as postgres.apifortress.svc or tools.apifortress.svc.

Starting the Main Services

Step 1 – Creating the ImageStream:

  1. Create a secret in OpenShift that contains the DockerHub user credentials for the account shared with API Fortress. As the repositories on the APIF side are private, you must use the same account that was submitted with the configuration survey.
  2. Create the API Fortress OpenShift image streams with the provided apifortress-imagestream.yml with:
oc create -f apifortress-imagestream.yml

3. Configure apifortress.yml, downloader.yml and core-server.yml to point at the established image stream. Changing the bracketed value in the example below changes the selected imagestream.

spec:
      containers:
      - name: apifortress
        image: '[imagestream-changeit]/apifortress/apifortress:16.5.3'
        resources:
          limits:
            memory: 8Gi

Step 2 – Configure apifortress.yml:

    1. Ensure that the cluster is capable of supporting the default image memory limits. The apifortress container is set for 8GB of memory. The optimal memory setting is 16GB, the minimum memory setting is 4GB.
    2. memorySettings (optional parameter) describes the minimum and maximum heap memory the process can use. Xmx should be set to 1/2 of the available memory of the container. You don’t need to tweak these values if you don’t change the overall available memory.
      This is an example of the setting to be placed among the environment variables:
        #- name: memorySettings
        #  value: '-Xms1024m -Xmx4098m'

3. Ensure that all critical key/value pairs have been defined. The configuration files should be populated with the values submitted with the pre-configuration survey, but for safety’s sake ensure that grailsServerUrl has been passed the URL that the instance will be reached through, that license has been passed a license key, and that adminEmail, adminFullName and companyName have been defined. These values are all found in the env section of the apifortress.yml file. While it is not critical to deployment, it is strongly recommended that the user configures the mailer service as well. The relevant section in env:

        - name: apifortressMailEnabled
          value: 'true'
        - name: apifortressMailFrom
          value: info@example.com
        - name: apifortressMailSmtpHost
          value: ''
        - name: apifortressMailSmtpPassword
          value: ''
        - name: apifortressMailSmtpPort
          value: '25'
        - name: apifortressMailStartTLS
          value: 'true'
        - name: apifortressMailSmtpUsername
          value: info@example.com
        - name: apifortressMailUseSES
          value: 'false'

The settings in the AFMAILER microservice section should also be completed to allow the platform to send emails.

4. The NodePort is the mechanism for communicating with the platform. This can be replaced with a LoadBalancer if required. When creating an OpenShift Route, this is where the Route should point.

# >>> API Fortress NodePort >>>
apiVersion: v1
kind: Service
metadata:
  name: apifortress
  labels:
    app: apifortress
spec:
  type: NodePort
  selector:
    app: apifortress
  ports:
  - port: 8080
    name: http
  sessionAffinity: ClientIP
---

Step 3 – Configure dependencies.yml

Each of the database services in dependencies.yml has a preconfigured definition for the amount of disk space allocated to the service. These values can be edited to match the available disk space that you wish to provide for said services.
For MongoDB the proposed memory setting is 8Gi; the minimum is 1Gi and the optimal is 16Gi. Because of how MongoDB caches data in memory, any increase in memory will result in better performance.
For PostgreSQL the proposed memory setting is 1Gi, which is also considered optimal. The minimum is 512Mi.

NOTE: MongoDB will store most of the data produced by the platform, so make sure the disk size is reasonable for your use case

  volumeClaimTemplates:
  - metadata:
      name: mongovol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
  volumeClaimTemplates:
  - metadata:
      name: psqlvol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Step 4 – Start the main services

Start the dependency services by typing:

oc create -f dependencies.yml

Once these services have spun up, you can start the main API Fortress platform with:

oc create -f apifortress.yml

Step 5 – Verify

Access the platform at the URL provided in the apifortress.yml file. Log in using the default admin username and the default password (“foobar” – change it ASAP). You should now be able to access the API Fortress Dashboard.

Configure the Downloader

The API Fortress downloader is the agent that retrieves the resources to be tested. Downloaders can be installed in various locations, so factors such as latency and download time can be measured by remote consumers.

Step 1 – Create a Downloader in API Fortress

Login to API Fortress with the admin user, access the API Fortress admin panel by clicking the “user” icon in the top right, then click Admin Panel.


Choose “Downloaders” from the list of actions and click on the “Add Downloader” button.

Step 2 – Configure the Downloader

Fill in the following fields:
Name: Write a recognizable name.
Location: A representation of where the downloader is, e.g. Chicago.
Latitude / Longitude: The geographical position of the downloader.
Last Resort: Check this to make it the default downloader used.
URL: The address of the downloader, followed by port (default 8819) and path /api. In our OpenShift deployment, our downloader address would be https://downloader.apifortress.svc:8819/api
API Key, API Secret: Write these two values down for use later.

Step 3 – Move the key and secret values to downloader.yml

Edit the downloader.yml file and enter the API Key and API Secret provided by the platform in the previous step.

Step 4 – Start the Downloader

Start the downloader with:

oc create -f downloader.yml

Step 5 – Verify the Downloader

Open the HTTP client from the Tools drop-down menu in API Fortress. Attempt to contact a site that is accessible from this server environment. API Fortress should now be able to successfully communicate with other websites.

Configure the Load Agent

Step 1 – Define the maximum users per Load Agent

Users per agent is the maximum number of virtual users that each Load Agent can provide.

It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation. 

  • Locate and open the file named application.conf. It is located in the core-server-etc directory.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired number of maximum users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to match the desired number of maximum users per agent. These two values should match.

Step 2 – Configure config.yml

  • Locate and open config.yml. It is located in core-server-etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same cluster, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different clusters, you can replace the baseURL with the actual URL of the Dashboard. That is to say, the URL you would use to access it via web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key” 


  • Copy the API Key to line 5 of config.yml.
  • Copy the Secret to line 6 of config.yml.

Step 3 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. It is strongly recommended that you change it to something human-readable but unique in the list. This is the internal name of the Engine.
  • After modifying the CRN, copy the value to line 11 of config.yml
  • Copy the secret to line 12 of config.yml
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.

Step 4 – Creating the Config-Map

Create a config-map called ‘core-0’ from the core-server-etc directory.
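With the oc client, the config map can be created directly from the directory, e.g.:

```shell
oc create configmap core-0 --from-file=core-server-etc/
```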

Step 5 – Tweak the memory settings if necessary

The memory settings may vary a lot based on the number of virtual users the load agent is meant to support. The default 2Gi is generally OK for up to 50 virtual users. Note that as the process is memory, CPU and network intensive, better results are achieved by introducing more load agents rather than increasing the size of each one.
For the very same reason, it’s generally pointless to run multiple load agents on the same server.

Step 6 – Start the Load Agent service

Start the load agent service with:

oc create -f core-server.yml

Step 7 – Verify the deployment

Access the Load Testing tool by clicking on the Tools dropdown at the top of the view in API Fortress. The Load Agent that you just deployed should be visible on the right side of the screen.

General tweaks

HTTPS to HTTP

If you’re having the dashboard go through a gateway, it is likely that you will want to run the container over HTTP and the gateway over HTTPS.
In that case the grailsServerUrl in the configuration will need to use HTTPS. The API Fortress dashboard performs a hard check on the protocol at each request, which will always appear to be HTTP, causing an illegal redirect. This is done for security reasons.

To overcome this issue you will need to override one configuration file in the Tomcat configuration via a configMap. This is not the default in the API Fortress Dashboard image on purpose, again, for security reasons.

We will assume that the gateway will forward the x-forwarded-proto header.

The file to be added is located here (works for both OpenShift and Kubernetes): https://github.com/apifortress/containers/blob/master/kubernetes_gcloud/tomcat_conf/context.xml

  1. Tweak the file according to your needs
  2. Create a config map for the single file named tomcat-context
  3. Change the apifortress service in the apifortress.yml file as follows:
    Add this fragment within the containers element:

    volumeMounts:
    - name: tomcat-context
      mountPath: /usr/local/tomcat/conf/context.xml
      subPath: context.xml
  4. Add this fragment in the spec element:
    volumes:
    - name: tomcat-context
      configMap:
        name: tomcat-context

By doing so, API Fortress will accept the original protocol as the actual protocol being used.
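For reference, the tomcat-context config map from step 2 can be created from the downloaded file like so:

```shell
oc create configmap tomcat-context --from-file=context.xml
```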

Updating an On-Premises Instance

Updating an On-Premises instance of API Fortress is done as follows:

  • Back up the databases. (Optional, but recommended) 
  • Stop the containers
    • From the ‘core’ directory, issue a docker-compose stop command and wait for the operation to complete. This command stops the currently-running Docker containers.
  • Pull the updated containers
    • From the ‘core’ directory, issue a docker-compose pull command and wait for the operation to complete. This command pulls updated images from the API Fortress Docker repository.
  • Restart the containers
    • From the ‘core’ directory, issue a ./start_all.sh command to restart the containers and wait for the operation to complete. This script restarts the containers in the proper order.
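The steps above can be sketched as a single command sequence (run from the ‘core’ directory; sudo may or may not be required depending on your Docker setup):

```shell
cd core
sudo docker-compose stop   # stop the running containers
sudo docker-compose pull   # pull updated images from the repository
./start_all.sh             # restart the containers in the proper order
```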

Once the preceding steps are completed, the On-Premises version of API Fortress will be fully updated.

 

Best Practices for Disaster Recovery (On-Premises)

Note: This document is referential only to the API Fortress-HA (High Availability) deployment.

Components:

Databases:

  • PostgreSQL
  • MongoDB

Message queues:

  • RabbitMQ

API Fortress:

  • API Fortress Dashboard
  • Microservices (mailer, scheduler, connector)
  • Remote workers (downloaders, core-server)

Resiliency / High availability

Databases can be replicated using their specific mechanisms, and the systems will connect to the clusters. Each replica carries the full database in a streaming-replication fashion.

Therefore, a failure (software, hardware, network) of any of the instances will not cause a service disruption.

When a replica is brought back to life, whether it’s the same server or another, their specific replication systems will synchronize the new instance.

Databases are the only components that need a persistent state, so the machines running them must be able to provide persistent storage.

The message queue is stateless (therefore does not require persistent storage) and queues and exchanges are replicated using the high availability internal mechanism. Services can connect to both so that if one replica goes down, the other will take care of the work without service disruption.

The API Fortress dashboards are stateless (with the exclusion of in-memory web sessions) and can be scaled horizontally and load balanced.

The API Fortress microservices are stateless single-instance services that can be respawned on any server, without any specific concern.

The API Fortress remote workers are stateless multi-instance services that can be scaled horizontally and load balanced.

Backup and Restore

Backup

There are two primary types of backups:

  • Taking snapshots of the persisted database disks.
    The procedure is runtime-dependent (AWS, GCloud, OpenShift, etc.)
  • Dumping databases to files for a classic restore.
    These procedures are described here. The actual commands may vary based on the runtime.

Restoration

  • Given the snapshot of a disk, the runtime should provide the ability to create a new disk from it.
  • Given the dump files, you can follow the procedure described here. The actual commands may vary based on the runtime.
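Assuming dumps taken with the backup commands shown earlier in this document, a Kubernetes-based restore would look roughly like this (a sketch; the actual commands vary based on the runtime):

```shell
# PostgreSQL: copy the SQL dump back into the pod and feed it to psql
kubectl cp apifortress_postgres.sql postgres-0:apifortress_postgres.sql
kubectl exec -ti postgres-0 -- bash -c "psql -U apipulse < apifortress_postgres.sql"

# MongoDB: copy the dump directory back and run mongorestore
kubectl cp dump mongodb-0:dump
kubectl exec -ti mongodb-0 -- mongorestore dump
```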

Note: no services other than the two databases require access to persistent storage.

Disaster recovery

Databases:

  • In case of a database being unreachable for connectivity issues, the system will continue working using a replica. When the issue is solved, the system will sync itself automatically. No service degradation is expected.
  • In case of a system failure, disk failure, or data corruption, spin a new server in the same cluster with the same hostname. This will trigger the database automatic replication. No service degradation is expected.
  • In case of a global failure of all replicas, API Fortress will stop working. Spin a new database cluster starting from a backup and restart all services. Service degradation is expected. Data loss may occur, depending on your backup strategy.

Message queues:

  • In case of a message queue being unreachable for connectivity issues, the system will continue working using a replica. A respawn of the failing message queue will bring it back to the cluster. No service degradation is expected.
  • In case of a system failure, spin a new server in the same cluster with the same hostname.  No service degradation is expected.
  • In case of a global failure of all replicas, API Fortress will stop executing scheduled tests and will not send notifications. Start a new message queue cluster. A restart of all services is not required but recommended. Service degradation is expected.

Load Agent Deployment (On-Premises)

A Load Agent is a server instance that provides the simulated users in a load test. Load Testing cannot function without at least one Load Agent.

The provided files (contained in core-server.tgz) are all that you need in order to deploy a Load Agent. This tutorial will explain what changes need to be made to those files in order to properly deploy the Load Agent.

Before starting the process, there is a step that needs to be taken for clients who received their API Fortress containers before the introduction of Load Testing.

Step 0 (Not for all users) – Activate the Node Container

Open the docker-compose.yml in the main API Fortress directory. It is located at /core/bin/docker-compose.yml.

  • Paste the following code snippet after the #RABBITMQ section and before the #APIFORTRESS DASHBOARD section:
#NODE
apifortress-node:
   image: theirish81/uitools
   hostname: node.apifortress
   networks:
      - apifortress
   domainname: node.apifortress
   labels:
      io.rancher.container.pull_image: always
  • In the links section of the #APIFORTRESS DASHBOARD configuration, add the following line:
- apifortress-node:node.apifortress
  • Save and close the docker-compose.yml.
  • Open the start_all.sh file in a code editor. It is also located in /core/bin.
  • Copy and paste the following and overwrite the entire contents of the file:
#!/bin/bash
sudo docker-compose up -d apifortress-postgres
sleep 5s
sudo docker-compose up -d apifortress-mongo
sleep 5s
sudo docker-compose up -d apifortress-rabbit
sudo docker-compose up -d apifortress-node
sleep 30s
sudo docker-compose up -d apifortress
sleep 1m
sudo docker-compose up -d apifortress-mailer
sudo docker-compose up -d apifortress-scheduler
sudo docker-compose up -d apifortress-connector
  • Your API Fortress instance can now utilize the API Fortress Node Container which powers Load Testing.

Step 1 – Unzip the provided file (core-server.tgz)

First, unzip the provided file.
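A .tgz file is a gzipped tar archive, so it extracts with tar rather than unzip. A minimal sketch (a stand-in archive with an assumed layout is built first, so the commands run anywhere):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Build a stand-in core-server.tgz so the snippet is self-contained; in
# practice you would already have the provided archive on disk.
mkdir -p core-server/etc && touch core-server/etc/config.yaml
tar -czf core-server.tgz core-server && rm -r core-server

# The actual extraction step: recreates the core-server/ directory.
tar -xzf core-server.tgz
ls core-server/etc
```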


Step 2 – Define the maximum users per Load Agent

“Users per agent” is the maximum number of virtual users that each Load Agent can provide.

It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation. 

  • Locate and open the file named application.conf. It is located in core-server/etc.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired maximum number of users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to the same number. These two values must match.
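The two edits above can also be scripted, for example with sed. This is only a sketch: the `key = value` layout shown is an assumption about application.conf, and a stand-in temp file is used so the snippet is self-contained; in practice, point CONF at core-server/etc/application.conf.

```shell
#!/bin/sh
set -e
USERS=50                        # desired maximum users per agent
CONF=$(mktemp)                  # stand-in for core-server/etc/application.conf
printf 'fixed-pool-size = 4\nnr-of-instances = 4\n' > "$CONF"

# Set both keys to the same value, as the steps above require.
sed -i \
  -e "s/^fixed-pool-size = .*/fixed-pool-size = $USERS/" \
  -e "s/^nr-of-instances = .*/nr-of-instances = $USERS/" \
  "$CONF"
cat "$CONF"
```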

Step 3 – Configure Config.yaml

  • Locate and open config.yaml. It is located at core-server/etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same server, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different servers, you can replace the baseURL with the actual URL of the Dashboard. That is to say, the URL you would use to access it via web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu.
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key”

Create API Key


  • Copy the API Key to line 5 of config.yaml.
  • Copy the Secret to line 6 of config.yaml.

Step 4 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. You must change it to something human-readable. This is the internal name of the engine.
  • After modifying the CRN, copy the value to line 11 of config.yaml.
  • Copy the secret to line 12 of config.yaml.
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.

Add Engine

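Taken together, the config.yaml fields touched in Steps 3 and 4 form a layout roughly like the following. Every key name except baseURL is an illustrative guess; only the line positions come from the steps above:

```yaml
baseURL: http://dashboard.example.com   # line 3: Dashboard address (Step 3)

apiKey: <API key from the dashboard>    # line 5 (Step 3)
secret: <secret from the dashboard>     # line 6 (Step 3)

engineCRN: my-load-agent                # line 11: the human-readable CRN (Step 4)
engineSecret: <engine secret>           # line 12 (Step 4)
```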

Step 5 – Deploy the Load Agent

At the desired server location, use the “docker-compose up -d” command to deploy the Load Agent container. After the operation is complete, the Load Agent will be visible to your API Fortress Load Tests. 

Enabling API Fortress to Read Local Files

Using the read-file command, you can have your test read local files.

Currently, there is no GUI functionality to upload the files. However, you can set up your container to connect to a local folder on your host machine.

To do so, you have to update your docker-compose.yml file in the core/ directory.

In the “apifortress” service definition, modify the “volumes” block by adding one entry looking like this:

volumes: 
    - /var/local/data:/data

Where /var/local/data is the path on your host machine where you want to store the files.
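In context, the service definition would then look roughly like this (the surrounding keys are abbreviated placeholders; only the added volumes entry is meaningful):

```yaml
apifortress:
   # ...image, networks, and other existing settings...
   volumes:
      - /var/local/data:/data   # host path : container path
```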

Backing Up Your Data (On-Premises)

When running an on-premises installation, you will certainly want to run periodic backups of all your data.

In this article, we will provide the scripts to perform a data dump of API Fortress. You will then need to wire them into your scheduled operations system, such as cron.

We will assume you have a running API Fortress installation, the ability to sudo to root privileges, and a general idea of how Docker works.

Backup

1. On the host server, create a directory that will host your backup. In this example it is /var/local/backups, but it can be anything. Make sure the directory has read/write permissions that Docker can use.

2. Run (change the directory according to your needs):

sudo docker run --rm --net apifortress --link core_apifortress-mongo_1:mongo.apifortress -v /var/local/backups:/backup mongo:3.0.14 bash -c 'mongodump --out /backup --host mongo.apifortress'

3. Run (change the directory according to your needs):

sudo docker run --rm --net apifortress --link core_apifortress-postgres_1:postgres.apifortress -v /var/local/backups:/backup postgres:9.5.5 bash -c 'pg_dump --dbname postgresql://apipulse:jk5112@postgres.apifortress:5432/apipulse > /backup/postgres.sql'
4. Access the /var/local/backups directory. You will find both an “apipulse” directory and a “postgres.sql” file: together they make up your complete backup. You can now zip them and copy them wherever your backup procedures require. We suggest clearing the backup directory afterward so it is empty for the next backup iteration.
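To wire this into cron as suggested above, a small wrapper can archive the dumps and clear the directory for the next iteration. This is a sketch under assumptions: the two docker dump commands above have already run, and temp directories plus a stand-in dump file replace the real paths so the script runs as-is.

```shell
#!/bin/sh
set -e
# In production: BACKUP_DIR=/var/local/backups, ARCHIVE_DIR=<your archive
# destination>. Stand-ins are used here so the sketch is self-contained.
BACKUP_DIR=$(mktemp -d)
ARCHIVE_DIR=$(mktemp -d)
touch "$BACKUP_DIR/postgres.sql"          # stand-in for the real dumps

STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="$ARCHIVE_DIR/apifortress-backup-$STAMP.tgz"
tar -czf "$ARCHIVE" -C "$BACKUP_DIR" .    # archive everything in the dump dir
rm -rf "${BACKUP_DIR:?}"/*                # empty it for the next iteration
echo "$ARCHIVE"
```

In a real cron entry, the two docker dump commands would run first, followed by this archive-and-clear step.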

Backup restore

1. In the core/ directory, stop all services by issuing:

sudo docker-compose stop

2. Remove all data files from your persistent volume on the host machine. EXTREME CAUTION: this will erase all your current data. Make sure that the backup you are going to restore is available. If unsure, MOVE the current data to another location instead.

3. Activate MongoDB and PostgreSQL by issuing:

sudo docker-compose up -d apifortress-postgres
sudo docker-compose up -d apifortress-mongo

4. We will assume your backup is located in /var/local/backups. Run the following commands:

sudo docker run --rm --net apifortress --link core_apifortress-mongo_1:mongo.apifortress -v /var/local/backups:/backup mongo:3.0.14 bash -c 'mongorestore /backup --host mongo.apifortress'
sudo docker run --rm --net apifortress --link core_apifortress-postgres_1:postgres.apifortress -v /var/local/backups:/backup postgres:9.5.5 bash -c 'psql -h postgres.apifortress --dbname postgresql://apipulse:jk5112@postgres.apifortress:5432/apipulse  < /backup/postgres.sql'

5. Verify that files are now present in the persistent volume location of your host machine.

6. You can now start the platform by running the ./start_all.sh script.