Multiple Client-Cert Downloader

This mechanism allows on-premises deployments to use multiple client-side certificates for authentication, instead of the current implementation, which assigns a single certificate to each downloader.

The Image

The updates focus on the downloader and are available in the following image version or later:
apifortress/remotedownloadagent:20.2.1
If you are using the “latest” tag and have updated, you are all set.

Components

The configuration of the downloader is made of two parts:
1. The client certificates
The downloader needs to mount a volume (we suggest /certs) containing the client-side certificates.
For example:
- ./certs:/certs
2. The trust store
In case the certificates are issued by a non-trusted CA, it’ll be necessary to update the internal trust store of the image. This operation can be done in multiple ways, such as creating a derivative image of the downloader or mounting the file.
We’ll discuss the two options later.

How to build the client-side certificates

The certificates need to be in the Java KeyStore (JKS) format. Each client-side certificate (key and cert) needs to be in a separate store.
If your certificates are not in this format already, you can convert your .key and .crt files by following these steps:
a) Convert the certificate to PKCS#12 format using openssl, as in:
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12
b) Import the p12 to a JKS:
keytool -importkeystore -srckeystore client.p12 \
        -srcstoretype PKCS12 \
        -destkeystore client.jks \
        -deststoretype JKS
Once you’re done, you can copy the resulting artifact to the mounted volume.
Repeat the operation for each certificate you need to convert, changing the destination file every time, so that at the end of the process you have a separate JKS file for each certificate.
Note: both commands are interactive, but they can be made non-interactive with the appropriate switches for automation purposes.
Note: keytool is part of the default Java distribution. You will need at least a JRE to use it.
Note: other tools, some of them graphical, exist to perform this kind of operation, such as KeyStore Explorer.
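As a sketch of how the batch conversion could be automated, the helper below assembles the non-interactive command lines for each certificate (Python; file names and the password are illustrative, and the prompts are skipped via openssl's -passout and keytool's -srcstorepass/-deststorepass/-noprompt switches):

```python
# Build the non-interactive conversion commands for a set of client
# certificates, producing one JKS file per certificate.
# Base names and the store password are illustrative.
def conversion_commands(basenames, storepass="changeit"):
    commands = []
    for name in basenames:
        # Step a) certificate + key -> PKCS#12 (-passout skips the password prompt)
        commands.append(
            ["openssl", "pkcs12", "-export",
             "-in", f"{name}.crt", "-inkey", f"{name}.key",
             "-out", f"{name}.p12", "-passout", f"pass:{storepass}"])
        # Step b) PKCS#12 -> JKS (store passwords given up front, no prompts)
        commands.append(
            ["keytool", "-importkeystore",
             "-srckeystore", f"{name}.p12", "-srcstoretype", "PKCS12",
             "-srcstorepass", storepass,
             "-destkeystore", f"{name}.jks", "-deststoretype", "JKS",
             "-deststorepass", storepass, "-noprompt"])
    return commands
```

The resulting command lists can then be run with your automation tool of choice, copying each resulting .jks file to the mounted volume.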

Updating the trust store

As previously mentioned, this can be tackled in two different ways.
One is creating a derivative image with a Dockerfile similar to this:
FROM apifortress/remotedownloadagent:20.2.1
COPY ca.crt /ca.crt
COPY cert.crt /cert.crt
RUN /usr/java/latest/bin/keytool -import -trustcacerts -keystore /usr/java/latest/jre/lib/security/cacerts -storepass changeit -alias localca -file /ca.crt -noprompt
RUN /usr/java/latest/bin/keytool -import -trustcacerts -keystore /usr/java/latest/jre/lib/security/cacerts -storepass changeit -alias localcrt -file /cert.crt -noprompt
If that’s not practical for your automation and routines, you can instead:
Copy the file located at /usr/java/latest/jre/lib/security/cacerts out of the container, then add the new certificate using the keytool command, similar to what’s shown in the Dockerfile.
Assuming the cacerts file and the ca.crt file are in the current directory, you can update the store with:
keytool -import -trustcacerts -keystore cacerts -storepass changeit -alias localca -file ca.crt -noprompt
You can then mount the updated file in the remotedownloadagent container, depending on your deployment method.
In docker-compose, you can add a volume like this:
- ./cacerts:/usr/java/latest/jre/lib/security/cacerts
For Kubernetes, the most practical way since Kubernetes 1.10.0 is to create a config map from a binary file and then mount it accordingly.
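As a sketch of the config-map approach, assuming the patched trust store file is named cacerts and the map is created with kubectl create configmap java-cacerts --from-file=cacerts (all names here are illustrative):

```yaml
# Hypothetical fragment for the downloader pod spec: mount the patched
# cacerts over the image's default trust store path.
spec:
  containers:
  - name: remotedownloadagent
    volumeMounts:
    - name: java-cacerts
      mountPath: /usr/java/latest/jre/lib/security/cacerts
      subPath: cacerts
  volumes:
  - name: java-cacerts
    configMap:
      name: java-cacerts
```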

Technical Caveats

  • Whenever the trust store is altered, the service needs to be restarted for the change to take effect.
  • If a certificate is activated (see: Test writing), the certificates involved need to be fully valid. It is, in other words, impossible to skip SSL validation:
    • disable_ssl_validation must be set to false.
  • This feature is currently unavailable in load testing (but will be implemented once we receive sufficient feedback on this implementation).

    Test writing

    The test writer is required to provide configuration (if necessary) specifying which certificate to use in each call. Here’s an example:
    <get url="https://nginx.apifortress" params="[:]" var="payload" mode="text">
        <config name="client_cert_configuration" value="{&quot;keystorePath&quot;:&quot;/certs/client.jks&quot;,&quot;keystorePassword&quot;:&quot;foobar&quot;}"/>
    </get>
    The unescaped value is as follows:
    {"keystorePath":"/certs/client.jks","keystorePassword":"foobar"}
    Each call can be configured to use a different certificate, or no certificate at all.
    The value can also be parametrized as a template using the ${…} syntax.
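For example, the configuration value could reference variables (certPath and certPassword are hypothetical variable names):

```xml
<get url="https://nginx.apifortress" params="[:]" var="payload" mode="text">
    <config name="client_cert_configuration" value="{&quot;keystorePath&quot;:&quot;${certPath}&quot;,&quot;keystorePassword&quot;:&quot;${certPassword}&quot;}"/>
</get>
```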


Certificate-based (mutual) SSL/TLS Authentication

Definition: mutual SSL authentication (also known as certificate-based mutual authentication or client-side SSL authentication) refers to two parties authenticating each other by verifying the provided digital certificates, so that both parties are assured of the other’s identity.

In API Fortress, the component in charge of registering the certificate is the downloader. If your deployment has a mix of authenticated and unauthenticated endpoints, we suggest you create a specific downloader for each scenario.

Note: this feature is experimental and only available on a self-hosted instance.

Install the server’s certificates in the downloader’s trust store

This step may not be necessary, depending on the nature of the certificate and the implementation. If the certificate is signed by an internal CA, this step is certainly mandatory.

To trust the server certificate, you will need to create a derivative image of the downloader.
Dockerfile example:
FROM apifortress/remotedownloadagent:latest
COPY ca.crt /ca.crt
COPY cert.crt /cert.crt
RUN /usr/java/latest/bin/keytool -import -trustcacerts -keystore /usr/java/latest/jre/lib/security/cacerts -storepass changeit -alias localca -file /ca.crt -noprompt
RUN /usr/java/latest/bin/keytool -import -trustcacerts -keystore /usr/java/latest/jre/lib/security/cacerts -storepass changeit -alias localcrt -file /cert.crt -noprompt
Where ca.crt and cert.crt are the certification authority certificate and the server certificate itself.
If you’re unsure of what a Dockerfile is, please refer to the Docker guide or contact us.
To trigger the build of the image, simply issue:
sudo docker build -t ssldownloader .
from the directory where the Dockerfile is located. Here “ssldownloader” is the name of the derivative image; you can name it whatever you want or match it to your own Docker registry.

Create the client-side certificate

Assuming you have a client certificate file and a key file, you will need to create a Java Key Store file from them (JKS).
Steps:
a) Convert the certificate to PKCS#12 format using openssl, as in:
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12
b) Import the p12 to a JKS:
keytool -importkeystore -srckeystore client.p12 \
        -srcstoretype PKCS12 \
        -destkeystore client.jks \
        -deststoretype JKS
Note: keytool is part of the default Java distribution. You will need at least a JRE to use it.
Note: other tools, some of them graphical, exist to perform this kind of operation, such as KeyStore Explorer.

Update the configuration file

Position the client.jks file in the same directory as the docker-compose.yml file.
In the docker-compose.yml file:
If you had to go through step 1, you will need to:
– change the image name from apifortress/remotedownloadagent:latest to the name of the derivative image you created, as in:
image: ssldownloader
Mandatory steps:
– add a volume to mount the client.jks file, as in:
volumes:
- ./client.jks:/client.jks
– add an environment variable to configure the client certificate as in:
client_cert_configuration: '{"keystorePath":"/client.jks","keystorePassword":"foobar"}'
– disable_ssl_validation must be set to false: certificate validation needs to be active.

You can now restart the downloader.
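Put together, the downloader service in docker-compose.yml might look like this sketch (service name, image name and password are illustrative):

```yaml
remotedownloadagent:
  image: ssldownloader            # derivative image from step 1, if you built one
  volumes:
    - ./client.jks:/client.jks    # the client-side keystore
  environment:
    client_cert_configuration: '{"keystorePath":"/client.jks","keystorePassword":"foobar"}'
    disable_ssl_validation: 'false'
```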


Note: if you are using .pfx files, you can follow this DigiCert guide to convert them to .jks files.

Note: You can also bind multiple certificates to a single downloader, click here to learn how.




Downloader 101

What is the Downloader:

The API Fortress Downloader is the agent that retrieves the resources (payloads) to be tested. For cloud customers, we have various downloaders already available to you such as US East, US West, and Europe. 

For self-hosted (on-premises) customers, it is important to deploy the Downloaders along with the API Fortress Dashboard. Without at least one Downloader, the dashboard cannot make API calls and retrieve responses. You can deploy multiple Downloaders in various locations, so that factors such as latency and download time can be measured more precisely.


Local (Hybrid) Use of Downloader:

The API Fortress Remote Download Agent can also sit inside your infrastructure to allow the cloud (SaaS) platform to test systems that are not exposed externally. It will listen on an HTTPS port for jobs requested by an API Fortress engine. The agent will perform an HTTP(S) request to an endpoint as described in the job and, once completed, will serialize the data back to the engine, adding contextual information such as the metrics. No data is retained in the agent memory after job completion. The agent will use the DNS settings provided by the machine it’s installed on.

The Downloader Is Very Configurable:

You may disable SSL validation:
https://apifortress.com/doc/disable-ssl-validation/

It can be configured to go through a proxy:
https://apifortress.com/doc/proxy-settings-in-downloader/

The Downloader (aka: RemoteDownloadAgent) receives inbound HTTPS connections from the dashboard, encrypting everything with its own certificate. 

However, you can also install an actual certificate to a RemoteDownloadAgent:
https://apifortress.com/doc/keystores-for-downloader/

 

Proxy Settings in Downloader

*This is for Self-Hosted/On-Premises deployments*

If you need your downloader to go through a proxy to reach your API, follow the steps below to configure the proxy settings.

Downloader Configuration

You will need to modify the downloader config file, by adding an environment field for proxy settings.

Docker:
Navigate to the “downloader” folder within your installation files, and find the file named “docker-compose.yml”

At the end of the file will be a section called “environment”; add a field in this section called “proxy_configuration”. See the example below:

proxy_configuration: '{"*":{"address":"10.10.10.10","port":3128,"authentication":"basic","username":"foo","password":"bar"}}'

Kubernetes:
Navigate to the file named “downloader.yml”.

You will find a section named “env”; add a field to this section called proxy_configuration. See the example below:

- name: proxy_configuration
  value: '{"*":{"address":"10.10.10.10","port":3128,"authentication":"basic","username":"foo","password":"bar"}}'

Where “address” and “port” are, respectively, the address and port of the proxy. Authentication is optional.

Proxy Configuration Syntax

The proxy configuration syntax is as below (multiple proxy configurations should be comma separated):

{"foo.com":{"address":"172.18.0.1","port":3128,"username":"proxyuser","password":"password"},"bar.com":{"address":"172.18.0.1","port":3128,"username":"proxyuser","password":"password"}}

There is also a catch-all syntax:

{"*":{"address":"172.18.0.1","port":3128,"username":"proxyuser","password":"password"}}

In addition, you can use a wildcard in place of the lowest level of the domain, as in:

{"*.google.com":{"address":"172.18.0.1","port":3128,"username":"proxyuser","password":"password"}}

Priority

The proxy configuration now has a priority sequence: entries at the beginning of the configuration block have higher priority, and entries at the end have lower priority.

The “*” entry is not involved in the priority evaluation and is always the last to be used, regardless of where it appears.

Wildcards

In previous versions of the downloader, a wildcard covered only one level of the domain. For example, *.domain.com covered sub1.domain.com and sub2.domain.com, but not third.sub1.domain.com.

This is no longer the case with the current version of the downloader: a wildcard now covers all lower levels of the domain. This makes priority essential.

Negative selection

It is now possible to deactivate proxy settings for specific selectors.

Simply add an entry looking like this: "sub3.domain.com":{"address":"NONE"}

If sub3.domain.com is matched, then no proxy will be selected and the priority rundown will stop. Wildcards also apply here, as in "*.domain.com":{"address":"NONE"}

Again, check the order of appearance for priority!
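Putting the priority, wildcard and negative-selection rules together, the selection logic can be sketched as follows (Python; the semantics are inferred from the description above, not taken from the actual implementation):

```python
# Sketch of proxy selection: entries are evaluated in order of appearance;
# "*" is always tried last; "*.x" matches any depth of subdomain below x;
# an address of "NONE" stops the lookup with no proxy selected.
def select_proxy(config, host):
    catch_all = None
    for pattern, proxy in config.items():  # insertion order = priority
        if pattern == "*":
            catch_all = proxy              # held back until the very end
            continue
        if pattern.startswith("*."):
            matched = host.endswith(pattern[1:])   # suffix ".x" match
        else:
            matched = host == pattern
        if matched:
            return None if proxy.get("address") == "NONE" else proxy
    if catch_all and catch_all.get("address") != "NONE":
        return catch_all
    return None
```

With the negative-selection rule above, a host matching a "NONE" entry gets no proxy even when a catch-all exists, while unmatched hosts fall through to the “*” entry.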

Examples

{
  "sub.domain.com": {"address":"proxy1.com","port":2255}
}

Only sub.domain.com will go through proxy1.com. Other requests will go through no proxy.

{
  "sub1.domain.com": {"address":"proxy1.com","port":2255},
  "*": {"address":"proxy2.com","port":2255}
}

Only sub1.domain.com will go through proxy1.com. Other requests will go through proxy2.com.

{
  "sub1.domain.com": {"address":"proxy1.com","port":2255},
  "*.domain.com": {"address":"proxy2.com","port":2255}
}

Only sub1.domain.com will go through proxy1.com. Requests to any domain.com subdomain will go through proxy2.com. Other requests will not go through a proxy.

{
  "*.sub1.domain.com": {"address":"proxy1.com","port":2255},
  "*.domain.com": {"address":"proxy2.com","port":2255}
}

All subdomains of sub1.domain.com will go through proxy1.com. Other subdomains of domain.com will go through proxy2.com. Any other domain will not go through a proxy.

{
  "*.sub1.domain.com": {"address":"NONE"},
  "*.domain.com": {"address":"proxy2.com","port":2255},
  "*": {"address":"proxy3.com","port":2255}
}

All subdomains of sub1.domain.com will NOT go through a proxy. All subdomains of domain.com will go through proxy2.com. All other domains will go through proxy3.com.

Using RDS and DocumentDB

API Fortress supports the use of one or both RDS and DocumentDB instead of the default Postgres and MongoDB. Below we detail the steps needed to make the switch.

RDS

Once a PostgreSQL 9.5-series RDS instance has been deployed, the configuration is straightforward. All that is required is to change the PostgreSQL settings in the configuration. Postgres is used solely by the dashboard service.

The involved configuration keys are:
psqlhost
psqlUsername
psqlPassword
The database name MUST BE “apipulse”.
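For example, in a docker-compose deployment the dashboard environment might look like this sketch (the endpoint and credentials are placeholders for your RDS values):

```yaml
environment:
  psqlhost: my-instance.abcdefg.us-east-1.rds.amazonaws.com
  psqlUsername: apifortress
  psqlPassword: secret
```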

DocumentDB

Premise:
We don’t use DocumentDB internally. Keep in mind that DocumentDB is not MongoDB-as-a-service; it is a clone developed by Amazon, and there are differences. Some incompatibilities have been addressed by us in the software, but we have no data concerning the long-term success of Amazon DocumentDB.

Steps on AWS:
1. Create a DocumentDB cluster. A single instance is perfectly fine as long as backups are configured.

2. Change the parameter group to disable TLS. Currently, API Fortress does not support TLS transport to DocumentDB (and it would be superfluous, as all internal communications in AWS are peer-encrypted).
2a. Make sure you restart your instance once that is done (choose to restart immediately rather than during a maintenance window, otherwise you’ll wait quite a long time).

3. Make sure to assign a security group that allows communication between your EKS cluster and the DocumentDB cluster. We discovered that, by default, this is not the case.

Changes in the APIF configuration:
1. Apply the following changes in the apifortress deployment section:
– set the dbHost key to reflect the DocumentDB endpoint
– add the dbUsername key to reflect the DocumentDB username
– add the dbPassword key to reflect the DocumentDB password

2. Apply the following changes to the afscheduler section:
– set the mongoHost key to reflect the DocumentDB endpoint
– add the mongoUsername key to reflect the DocumentDB username
– add the mongoPassword key to reflect the DocumentDB password

3. Apply the following changes to the afconnector section:
– set the mongoHost key to reflect the DocumentDB endpoint
– add the mongoUsername key to reflect the DocumentDB username
– add the mongoPassword key to reflect the DocumentDB password
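As a sketch, the env fragment for the apifortress deployment section might look like this (the endpoint and credentials are placeholders; the afscheduler and afconnector sections take the same values under the mongoHost, mongoUsername and mongoPassword keys):

```yaml
- name: dbHost
  value: my-cluster.cluster-abcdefg.us-east-1.docdb.amazonaws.com
- name: dbUsername
  value: apifortress
- name: dbPassword
  value: secret
```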

Before you deploy:
Make sure the Kubernetes or EC2 cluster and the DocumentDB instance live in the same VPC.

Backing Up Your Data – Kubernetes (Self-Hosted)

When running a self-hosted/on-premises installation, you will certainly want to run periodic backups of all your data. In this article, we provide the scripts to perform a data dump of API Fortress; you will then need to wire them up in your scheduled-operations system, such as cron. We assume you have a running API Fortress installation, the ability to sudo to root privileges, and a general idea of how Kubernetes works.

If you are using EKS, you can simply take snapshots of the Postgres and Mongo disks directly through EKS. If you would like to take data dumps via Kubernetes, see the instructions below.

On the machine where you have Kubernetes installed and running, first list the pods:
kubectl get pods

Start with backing up the Postgres disk by running the following two commands in order:
kubectl exec -ti postgres-0 -- bash -c "pg_dump -U apipulse > apifortress_postgres.sql"
kubectl cp postgres-0:apifortress_postgres.sql apifortress_postgres.sql

Next, back up the MongoDB disk by running the following two commands in order:
kubectl exec -ti mongodb-0 -- mongodump
kubectl cp mongodb-0:dump dump

Note that the MongoDB dumps can become quite large, so we recommend performing these dumps on a volume disk, or using a separate disk mounted specifically for backups.

Updating the API Fortress License Key

If you need an updated API Fortress license, please reach out to your account manager or sales@apifortress.com. The instructions below show where to replace the license key in the configuration file.

For Docker users:
      1. Find the “docker-compose.yml” file located in the “core” directory.
      2. Locate the section labeled “APIFORTRESS DASHBOARD”.
      3. Towards the bottom of the section you will find the key “license:”.
      4. Replace the string to the right of the “:”, being mindful to keep the single quotes around the license key.

For Kubernetes users:

      1. Find the “apifortress.yml” file located in the “root” directory.
      2. Locate the section labeled “API Fortress Dashboard”.
      3. Towards the bottom of the section you will find “- name: license”.
      4. Below that you will see “value:”; replace the string to the right of the “:”, being mindful to keep the single quotes around the license key.

Deployment – Configure the DNS for the Mocking service

Regardless of the deployment method used, to use the Mocking service you will need to apply one change in your DNS.

Assuming your API Fortress dashboard is mapped to the domain:

apif.yourcompany.com

A new CNAME entry needs to be created, as in:

CNAME *.apif.yourcompany.com > apif.yourcompany.com

This is necessary because mocked services will be accessible via subdomains of the dashboard.

Deployment – Kubernetes (Self-Hosted)

Before we start:

  • This tutorial assumes that the reader is familiar with some standard procedures in Kubernetes (creating secrets, creating config-maps etc.) If you are not familiar with these processes, please refer to the Kubernetes documentation.
  • The memory settings configured for each container are to be intended as the minimum for a production environment. Wherever applicable, this document will provide settings for a minimum test-drive environment and optimal settings for a larger-scale production environment.
  • If your cluster is not allowed to communicate with DockerHub or is incapable of logging in, you will need to manually pull (from DockerHub) and push (to your private repository) images.
  • This guide, and the provided starter configuration files will assume the deployment will occur in the apifortress project/namespace. If this is not the case for your setup, please update all current hostname references to apifortress, as in  postgres.apifortress.svc or tools.apifortress.svc
  • The whole guide and annexed configuration files have been built upon hands-on experience with the Google GCloud Kubernetes service. Some tweaking may be required if using a different provider.

Starting the Main Services

Step 1 – Accessing a private Repository:

Create a secret in Kubernetes that contains the DockerHub user credentials for the account shared with API Fortress. As the repositories on the APIF-side are private, you must submit the same account that was submitted with the configuration survey. You can find further information here https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Step 2 – Configure apifortress.yml:

    1. Ensure that the cluster is capable of supporting the default image memory limits. The apifortress container is set for 8GB of memory. The optimal memory setting is 16GB, the minimum memory setting is 4GB;
    2. memorySettings (optional parameter) describes the minimum and maximum heap memory the process can use. Xmx should be set to 1/2 of the available memory of the process. You don’t need to tweak these values if you don’t change the overall available memory. This is an example of the setting to be placed among the environment variables:
        #- name: memorySettings
        #  value: '-Xms1024m -Xmx4098m'
3. Ensure that any critical key/value pairs have been defined. The configuration files should be populated with the values submitted with the pre-configuration survey but, for safety’s sake, ensure that grailsServerUrl has been set to the URL the instance will be reached through, that license has been set to a license key, and that adminEmail, adminFullName and companyName have been defined. These values are all found in the env section of the apifortress.yml file. While it is not critical to deployment, it is strongly recommended that you configure the mailer service as well. This section in env:
        - name: apifortressMailEnabled
          value: 'true'
        - name: apifortressMailFrom
          value: info@example.com
        - name: apifortressMailSmtpHost
          value: ''
        - name: apifortressMailSmtpPassword
          value: ''
        - name: apifortressMailSmtpPort
          value: '25'
        - name: apifortressMailStartTLS
          value: 'true'
        - name: apifortressMailSmtpUsername
          value: info@example.com
        - name: apifortressMailUseSES
          value: 'false'
as well as the settings in the AFMAILER microservice should be completed to allow the platform to generate emails.
4. The Load Balancer is the mechanism for communicating with the platform. It can be replaced with a NodePort or Ingress if required, according to the configuration of your system.
# >>> APIFORTRESS loadBalancer service >>>
apiVersion: v1
kind: Service
metadata:
  name: apifortress
spec:
  type: LoadBalancer
  selector:
    app: apifortress
  ports:
  - port: 8080
  loadBalancerIP: '[cluster-ip-change-it]'
  sessionAffinity: ClientIP
---
5. Ensure that all the ports exposed in the descriptor match your expectations. As a default, the dashboard will run on port 8080 and the liveness probe will test that to determine the service availability.

Step 3 – Configure dependencies.yml

Each of the database services in dependencies.yml has a preconfigured definition for the amount of disk space allocated to the service. These values can be edited to match the disk space you wish to provide for said services.

For MongoDB, the proposed memory setting is 8Gi; the minimum is 1Gi and the optimal is 16Gi. Given the inner workings of MongoDB, however, any increase in memory will result in better performance.

For PostgreSQL, the proposed memory setting is 1Gi, which is also considered optimal. The minimum is 512Mi.

NOTE: volume claims may need to be tweaked based on your service provider.
NOTE: MongoDB will store most of the data produced by the platform, so make sure the disk size is reasonable for your use case.
  volumeClaimTemplates:
  - metadata:
      name: mongovol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
  volumeClaimTemplates:
  - metadata:
      name: psqlvol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Step 4 – Start the main services

Start the dependency services by typing:
kubectl create -f dependencies.yml
Once these services have spun up, you can start the main API Fortress platform with:
kubectl create -f apifortress.yml

Step 5 : verify

Access the platform at the URL provided in the apifortress.yml file. Log in using the default admin username and the default password (“foobar” – change it ASAP). You should now be able to access the API Fortress Dashboard.

Configure the Downloader

The API Fortress downloader is the agent that retrieves the resources to be tested. Downloaders can be installed in various locations, so factors such as latency and download time can be measured by remote consumers. Click here to learn more about the Downloaders.

Step 1 – Create a Downloader in API Fortress

Log in to API Fortress with the admin user, access the API Fortress admin panel by clicking the “user” icon in the top right, then click Admin Panel. Choose “Downloaders” from the list of actions and click the “Add Downloader” button.

Step 2 – Configure the Downloader

Fill in the following fields:
Name: a recognizable name.
Location: a representation of where the downloader is, e.g. Chicago.
Latitude / Longitude: the geographical position of the downloader.
Last Resort: check this to make it the default downloader used.
URL: the address of the downloader, followed by the port (default 8819) and the path /api. In our Kubernetes deployment, the downloader address would be https://downloader.apifortress.svc:8819/api
API Key, API Secret: write these two values down for later use.

Step 3 – Move the key and secret values to downloader.yml

Edit the  downloader.yml file and enter the API Key and API Secret provided by the platform in the previous step.

Step 4 – Start the Downloader

Start the downloader with:
kubectl create -f downloader.yml
Open the HTTP client from the Tools drop-down menu in API Fortress and attempt to contact a site that is accessible from this server environment. API Fortress should now be able to successfully communicate with other websites.

Configure the Load Agent

Step 1 – Define the maximum users per Load Agent

Users per agent is the maximum number of virtual users that each Load Agent can provide. It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation.
  • Locate and open the file named application.conf. It is located in the core-server-etc directory.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired number of maximum users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to match the desired number of maximum users per agent. These two values should match.
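After editing, the two settings might look like this (an illustrative fragment for 50 virtual users per agent; the shipped application.conf contains many more settings):

```
fixed-pool-size = 50    # line 14: max virtual users per agent
nr-of-instances = 50    # line 48: must match fixed-pool-size
```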

Step 2 – Configure config.yml

  • Locate and open config.yml. It is located in core-server-etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same cluster, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different clusters, you can replace the baseURL with the actual URL of the Dashboard. That is to say, the URL you would use to access it via web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key” 
  • Copy the API Key to line 5 of config.yml.
  • Copy the Secret to line 6 of config.yml.

Step 3 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. It is strongly recommended that you change it to something human-readable, yet unique in the list. This is the internal name of the engine.
  • After modifying the CRN, copy the value to line 11 of config.yml
  • Copy the secret to line 12 of config.yml
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.

Step 4 – Creating the Config-Map

Create a config-map named core-0 from the core-server-etc directory, for example with kubectl create configmap core-0 --from-file=core-server-etc.

Step 5 – Tweak the memory settings if necessary

The memory settings may vary a lot based on the number of virtual users the load agent is meant to support. The default 2Gi is generally fine for up to 50 virtual users. Note that as the process is memory, CPU and network intensive, better results are achieved by introducing more load agents rather than increasing the size of each one. For the very same reason, it is generally pointless to run multiple load agents on the same server.

Step 6 – Start the Load Agent service

Start the load agent service with:
kubectl create -f core-server.yml

Step 7 – Verify the deployment

Access the Load Testing tool by clicking on the Tools dropdown at the top of the view in API Fortress. The Load Agent that you just deployed should be visible on the right side of the screen.

General tweaks

HTTPS to HTTP

If the dashboard sits behind a gateway, it is likely that you will want to run the container over HTTP and the gateway over HTTPS. The grailsServerUrl in the configuration will therefore need to use HTTPS, but the API Fortress dashboard performs a hard check on the protocol at each request, which will always appear to be HTTP, causing an illegal redirect. This is done for security reasons.

To overcome this issue, you will need to override one configuration file in the Tomcat configuration via a configMap. This is deliberately not the default in the API Fortress Dashboard image, again for security reasons. We will assume that the gateway forwards the x-forwarded-proto header.

The file to be added is located in the deployment files you have been provided: /tomcat_conf/context.xml
  1. Tweak the file according to your needs
  2. Create a config map for the single file named tomcat-context
  3. Change the apifortress service in the apifortress.yml file as follows: add this fragment within the containers element:
    volumeMounts:
    - name: tomcat-context
      mountPath: /usr/local/tomcat/conf/context.xml
      subPath: context.xml
  4. Add this fragment in the spec element:
    volumes:
    - name: tomcat-context
      configMap:
        name: tomcat-context
By doing so, API Fortress will accept the original protocol as the actual protocol being used.

Deployment – Red Hat OpenShift (Self-Hosted)

Before We Start:

  • This tutorial assumes that the reader is familiar with some standard procedures in OpenShift (creating secrets, creating config-maps.) If you are not familiar with these processes, please refer to the OpenShift documentation.
  • The memory settings configured for each container are the minimum for a production environment. Wherever applicable, this document also provides a minimum setting for a test-drive environment and an optimal setting for a larger-scale production environment.
  • If your cluster is not allowed to communicate with a server on the internet, the “Create ImageStream” process will need to be performed by manually pulling (from DockerHub) and pushing (to your image streams) images.
  • This guide and the provided starter configuration files assume the deployment will occur in the apifortress project/namespace. If this is not the case for your setup, please update all host name references to apifortress, as in postgres.apifortress.svc or tools.apifortress.svc.

Starting the Main Services

Step 1 – Creating the ImageStream:

  1. Create a secret in OpenShift that contains the DockerHub user credentials for the account shared with API Fortress. As the repositories on the APIF side are private, you must use the same account that was submitted with the configuration survey.
  2. Create the API Fortress OpenShift image streams with the provided apifortress-imagestream.yml with:
oc create -f apifortress-imagestream.yml
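The pull secret from step 1 can be created with oc, for example. This is a sketch: the secret name apif-dockerhub and the placeholder credentials are examples, not fixed values.

```shell
# Create a docker-registry secret with the DockerHub account shared with API Fortress
oc create secret docker-registry apif-dockerhub \
  --docker-server=docker.io \
  --docker-username=<dockerhub-user> \
  --docker-password=<dockerhub-password>

# Link the secret to the default service account for image pulls
oc secrets link default apif-dockerhub --for=pull
```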
  3. Configure apifortress.yml, downloader.yml and core-server.yml to point at the established image stream. Changing the bracketed value in the example below changes the selected imagestream.
spec:
      containers:
      - name: apifortress
        image: '[imagestream-changeit]/apifortress/apifortress:16.5.3'
        resources:
          limits:
            memory: 8Gi

Step 2 – Configure apifortress.yml:

    1. Ensure that the cluster is capable of supporting the default image memory limits. The apifortress container is set for 8GB of memory. The optimal memory setting is 16GB; the minimum is 4GB.
    2. memorySettings (optional parameter) describes the minimum and maximum heap memory the process can use. Xmx should be set to 1/2 of the memory available to the process. You don’t need to tweak these values if you don’t change the overall available memory. This is an example of the setting to be placed among the environment variables:
        #- name: memorySettings
        #  value: '-Xms1024m -Xmx4098m'
    3. Ensure that any critical key/value pairs have been defined. The configuration files should be populated with the values submitted with the pre-configuration survey, but for safety’s sake ensure that grailsServerUrl has been set to the URL the instance will be reached through, that license has been given a license key, and that adminEmail, adminFullName and companyName have been defined. These values are all found in the env section of the apifortress.yml file. While it is not critical to deployment, it is strongly recommended that you configure the mailer service as well. This section in env:
        - name: apifortressMailEnabled
          value: 'true'
        - name: apifortressMailFrom
          value: info@example.com
        - name: apifortressMailSmtpHost
          value: ''
        - name: apifortressMailSmtpPassword
          value: ''
        - name: apifortressMailSmtpPort
          value: '25'
        - name: apifortressMailStartTLS
          value: 'true'
        - name: apifortressMailSmtpUsername
          value: info@example.com
        - name: apifortressMailUseSES
          value: 'false'
as well as the settings in the AFMAILER microservice should be completed to allow the platform to generate emails.
    4. The NodePort is the mechanism for communicating with the platform. This can be replaced with a LoadBalancer if required. When creating an OpenShift Route, this is where the Route should point.
# >>> API Fortress NodePort >>>
apiVersion: v1
kind: Service
metadata:
  name: apifortress
  labels:
    app: apifortress
spec:
  type: NodePort
  selector:
    app: apifortress
  ports:
  - port: 8080
    name: http
  loadBalancerIP:
  sessionAffinity: ClientIP
---
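If you expose the platform with an OpenShift Route pointing at this service, a minimal sketch could look like the following. The host name is a placeholder, and the Route assumes the apifortress service and namespace defined above.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: apifortress
  namespace: apifortress
spec:
  host: apifortress.example.com   # placeholder – use your real host
  to:
    kind: Service
    name: apifortress
  port:
    targetPort: http
```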

Step 3 – Configure dependencies.yml

Each of the database services in dependencies.yml has a preconfigured definition for the amount of disk space allocated to the service. These values can be edited to match the disk space you wish to provide for said services. For MongoDB the proposed memory setting is 8Gi; the minimum is 1Gi and the optimal is 16Gi. Given the inner workings of MongoDB, however, any increase in memory will result in better performance. For PostgreSQL the proposed memory setting is 1Gi, which is also considered optimal; the minimum is 512Mi. NOTE: MongoDB will store most of the data produced by the platform, so make sure the disk size is reasonable for your use case.
  # MongoDB volume claim
  volumeClaimTemplates:
  - metadata:
      name: mongovol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
  # PostgreSQL volume claim
  volumeClaimTemplates:
  - metadata:
      name: psqlvol
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Step 4 – Start the main services

Start the dependency services by typing:
oc create -f dependencies.yml
Once these services have spun up, you can start the main API Fortress platform with:
oc create -f apifortress.yml

Step 5 – Verify

Access the platform at the URL provided in the apifortress.yml file. Log in using the default admin username and the default password (“foobar” – change it ASAP). You should now be able to access the API Fortress Dashboard.
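Before logging in, you can confirm from the cluster side that the pods came up. These commands assume the apifortress namespace from the provided configuration files:

```shell
# All dependency and platform pods should reach Running/Ready
oc get pods -n apifortress

# If the dashboard is unreachable, check the service as well
oc get svc apifortress -n apifortress
```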

Configure the Downloader

The API Fortress downloader is the agent that retrieves the resources to be tested. Downloaders can be installed in various locations, so factors such as latency and download time can be measured by remote consumers. Refer to the Downloaders documentation to learn more.

Step 1 – Create a Downloader in API Fortress

Log in to API Fortress with the admin user, access the API Fortress admin panel by clicking the “user” icon in the top right, then click Admin Panel. Choose “Downloaders” from the list of actions and click the “Add Downloader” button.

Step 2 – Configure the Downloader

Fill in the following fields:
  • Name: A recognizable name.
  • Location: A representation of where the downloader is, e.g. Chicago.
  • Latitude / Longitude: The geographical position of the downloader.
  • Last Resort: Check this to make it the default downloader used.
  • URL: The address of the downloader, followed by the port (default 8819) and the path /api. In our OpenShift deployment, the downloader address would be https://downloader.apifortress.svc:8819/api
  • API Key, API Secret: Write these two values down for use later.

Step 3 – Move the key and secret values to downloader.yml

Edit the downloader.yml file and enter the API Key and API Secret provided by the platform in the previous step.

Step 4 – Start the Downloader

Start the downloader with:
oc create -f downloader.yml
Once the downloader is running, open the HTTP client from the Tools drop-down menu in API Fortress and attempt to contact a site that is accessible from this server environment. API Fortress should now be able to successfully communicate with other websites.

Configure the Load Agent

Step 1 – Define the maximum users per Load Agent

Users per agent are the maximum number of virtual users that each Load Agent can provide. It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation. 
  • Locate and open the file named application.conf. It is located in the core-server-etc directory.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired number of maximum users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to match the desired number of maximum users per agent. These two values should match.
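As an illustration only, the two settings for a maximum of 50 virtual users would look like this. The key names come from the bullets above; the surrounding HOCON structure follows the provided application.conf, which remains the authoritative reference.

```
# core-server-etc/application.conf (HOCON) – illustrative fragment
fixed-pool-size = 50    # line 14 – maximum users per agent
nr-of-instances = 50    # line 48 – must match fixed-pool-size
```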

Step 2 – Configure config.yml

  • Locate and open config.yml. It is located in core-server-etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same cluster, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different clusters, you can replace the baseURL with the actual URL of the Dashboard. That is to say, the URL you would use to access it via web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key” 
  • Copy the API Key to line 5 of config.yml.
  • Copy the Secret to line 6 of config.yml.
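Putting the above together, the top of config.yml would look roughly like this. The key names below are illustrative placeholders; follow the actual keys in the provided config.yml, whose line positions match the bullets above.

```yaml
baseURL: http://apifortress.apifortress.svc:8080   # line 3 – dashboard address
apikey: <API Key from the dashboard>               # line 5
secret: <API Secret from the dashboard>            # line 6
```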

Step 3 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. It is strongly recommended that you change it to something human-readable but unique within the list. This is the internal name of the engine.
  • After modifying the CRN, copy the value to line 11 of config.yml
  • Copy the secret to line 12 of config.yml
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.

Step 4 – Creating the Config-Map

Create a config-map called ‘core-0’ from the core-server-etc directory.
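The config-map can be created from the whole directory in one command; the path assumes core-server-etc sits in your working directory:

```shell
# Create the core-0 config-map from every file in core-server-etc
oc create configmap core-0 --from-file=core-server-etc/
```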

Step 5 – Tweak the memory settings if necessary

The memory settings may vary a lot based on the number of virtual users the load agent is meant to support. The default 2Gi is generally OK for up to 50 virtual users. Note that as the process is memory, CPU and network intensive, better results are achieved by introducing more load agents rather than increasing the size of each one. For the same reason, it’s generally pointless to run multiple load agents on the same server.
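As a sketch, the corresponding resources block in core-server.yml (assuming the manifest follows the same layout as the apifortress.yml example earlier) would look like:

```yaml
resources:
  limits:
    memory: 2Gi   # default – raise together with the virtual-user count
```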

Step 6 – Start the Load Agent service

Start the load agent service with:
oc create -f core-server.yml

Step 7 – Verify the deployment

Access the Load Testing tool by clicking on the Tools dropdown at the top of the view in API Fortress. The Load Agent that you just deployed should be visible on the right side of the screen.

General tweaks

HTTPS to HTTP

If the dashboard sits behind a gateway, you will likely want to run the container over HTTP and the gateway over HTTPS, so the grailsURL in the configuration needs to use HTTPS. The API Fortress dashboard performs a hard check on the protocol of each request, which will always appear to be HTTP, causing an illegal redirect. This check exists for security reasons. To overcome the issue you need to override one file in the Tomcat configuration via a configMap. For the same security reasons, this is deliberately not the default in the API Fortress Dashboard image. We will assume that the gateway forwards the x-forwarded-proto header. The file to be added is located in the deployment files you have been provided (it works for both OpenShift and Kubernetes): /tomcat_conf/context.xml
  1. Tweak the file according to your needs
  2. Create a config map for the single file named tomcat-context
  3. Change the apifortress service in the apifortress.yml file as follows: add this fragment within the containers element:
    volumeMounts:
    - name: tomcat-context
      mountPath: /usr/local/tomcat/conf/context.xml
      subPath: context.xml
  4. Add this fragment in the spec element:
    volumes:
    - name: tomcat-context
      configMap:
        name: tomcat-context
By doing so, API Fortress will accept the original protocol as the actual protocol in use.
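On OpenShift, step 2 above can be performed with oc; the path assumes the /tomcat_conf/context.xml file from the provided deployment files, relative to your working directory:

```shell
# Create a config map named tomcat-context containing only context.xml
oc create configmap tomcat-context \
  --from-file=context.xml=tomcat_conf/context.xml
```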