API Fortress Command Line Tool

Welcome to the API Fortress Command Line Tool!

The tool itself: https://github.com/apifortress/afcmd/releases

The documentation for the API that the tool leverages: https://apifortressv3.docs.apiary.io/

The tool, or rather pair of tools, is designed to reduce the amount of legwork that goes into executing or uploading API Fortress tests. The following readme explains each part of the process.

APFCMD allows a user to easily integrate API Fortress testing into other workflows. Example use cases are:

  • Executing API Fortress tests from a CI/CD tool
  • Incorporating API Fortress tests in a Git version control plan.
  • Pushing test code from an IDE to the API Fortress platform.

All of these scenarios, and more, can be accomplished with the tool.

Let’s take a look at the two major components of the tool:

APIF-RUN

Run allows us to execute tests on the platform and work with the resulting data. We can run tests via the API in either an authenticated or unauthenticated state. By passing credentials, we receive a more verbose test result, which we can output to a file. We also have access to all of the standard options that API Fortress provides in its API (silent run, dry run, etc.)

RUN EXECUTION FLAGS

  • run-all – RUN ALL – This will execute all of the tests in a chosen project.
  • run-by-tag – RUN BY TAG – This will execute all tests with a selected tag (requires the -t flag to set tag)
  • run-by-id – RUN BY ID – This will execute a test with a specific ID (requires the -i flag to set id)
  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL, or the key from a configuration file (set the path to the config file with the -c tag)

ex: to run all of the tests in a specific project, we would use the following command string:

python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook

RUN OPTION FLAGS

  • -S – SYNC – This will provide a response body with the result of the test.
  • -f – FORMAT – This will determine the format of the test result output (JSON, JUnit, Bool). REQUIRES SYNC MODE (-S)
  • -d – DRY – This will cause the test run to be a dry run.
  • -s – SILENT – This will cause the test to run in silent mode.
  • -o – OUTPUT – This will write the result of the test to a local file. You must provide the path to the file to be created. Remember your filetype! (.json/.xml)
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as it is (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -t – TAG – This is how you pass a tag for RUN BY TAG mode.
  • -i – ID – This is how you pass an ID for RUN BY ID mode.
  • -e – ENVIRONMENT – This is how you pass environmental/override variables. The format is key:value. You can pass multiple sets of environmental variables like so: key:value key1:value1 key2:value2
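ex: combining several of the flags above, a run-by-tag execution that saves a JUnit report and overrides a variable might look like this (the webhook, tag, output path, and override value are placeholders):

python apif-run.py run-by-tag http://mastiff.apifortress.com/yourWebHook -t smoke -S -f junit -o results/smoke.xml -e domain:staging.example.com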

APIF-PUSH

Push allows us to push tests into API Fortress. When tests are downloaded from the platform, they come as 2 XML files (unit.xml & input.xml). We can use this tool to push those files back to an API Fortress project, either individually or in bulk.

PUSH EXECUTION FLAGS

  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL, or the key from a configuration file (set the path to the config file with the -c tag)

PUSH OPTION FLAGS

  • -p – PATH – This provides the path to the test file you wish to upload. You can pass multiple paths.
  • -r – RECURSIVE – This flag will make the call recursive; it will dive through the directory passed with -p and grab every test in all of its subdirectories.
  • -b – BRANCH – This allows you to specify a Git branch that these test files are attached to. Default is master.
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as it is (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -T – TAG – This allows you to pass tags to be appended to the test after it is pushed. This will OVERWRITE ANY EXISTING TAGS. Multiple tags can be passed.
  • -t – ADD TAG – This will allow you to add additional tags to a test that already has tags attached.
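ex: a hypothetical push of two test files to a develop branch, replacing any existing tags with a single "regression" tag (the webhook, credentials, and paths are placeholders):

python apif-push.py http://mastiff.apifortress.com/yourWebHook -C my@username.com:password1 -p some/directory/unit.xml -p some/directory/input.xml -b develop -T regression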

CONFIGURATION FILE

A configuration file is a YAML file that is formatted as follows:

hooks:
  - key: cool_proj1
    url: https://mastiff.apifortress.com/app/api/rest/v3/A_WEBHOOK
    credentials:
      username: (your username)
      password: (your password)
  - key: uncool_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/ANOTHER_WEBHOOK
    credentials:
      username: (another username)
      password: (another password)
  - key: unauth_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/JUST_A_WEBHOOK_WITHOUT_CREDENTIALS
test_directory: /tests

Once you create a configuration file, you can pass the path with -c and the key to the data in place of the normal hook URL. If you also pass credentials, they’ll override the credentials in the configuration file. If you don’t include credentials in the config file, you can pass them manually or leave them out entirely.
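For example, using the configuration file above, running all of the tests in the project identified by the cool_proj1 key would look like this (the config path is a placeholder):

python apif-run.py run-all cool_proj1 -c path/to/config.yml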

EXAMPLES

Execute all of the tests in a project and output the results to a JUnit/XML file via an authenticated route:

python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook -S -C my@username.com:password1 -f junit -o some/route/results.xml

Push all of the tests from a directory and all of its subdirectories to a project:

python apif-push.py http://mastiff.apifortress.com/yourWebHook -C my@username.com:password1 -r -p some/directory/with/tests

Execute one test in a project by ID, using a config file for credentials and webhook:

python apif-run.py run-by-id config_key -c path/to/config/file -i testidhash8924jsdfiwef891

NOTES

  • The order of the optional arguments passed does not matter.
  • Remember, in a bash environment, anything that has a space in it needs to be wrapped in quotes. This goes for paths, filenames, etc.

POST-RECEIVE SCRIPT FOR GIT

This Post-Receive script is meant to assist in incorporating API Fortress into your Git workflow. Dropping the file into the hooks directory of your .git directory will cause newly committed API Fortress test code to be pushed to the API Fortress platform. The ‘test_directory‘ key in the config.yml lets the script know which folder the tests themselves are located in. It will then watch for commits to this folder and push the appropriate code to the platform.
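As a sketch, installing the script in a typical bare repository could look like the following (the repository path is a placeholder, and the script is assumed to be named post-receive):

cp post-receive /path/to/your/repo.git/hooks/post-receive
chmod +x /path/to/your/repo.git/hooks/post-receive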

Execution Context in API Fortress

Preamble: the nature of fields

Among the pieces of information you introduce in an API Fortress test, some are:

  • Taken statically, as strings. These are ingested as they are in the system. Examples:
    • the var field in the SET component (the name of the variable itself)
    • The type field in an ASSERT-IS component
  • Evaluated as expressions. This means that whatever you put in there, it’ll be considered a “piece of code”, something to be evaluated as a logical expression. Examples:
    • The expression field in all assertions
    • The data or object field in the SET component

Most of the time, these are selectors, as in payload.person.age.

  • Evaluated as string templates. These are ingested as static strings unless variables are present in them. When variables are present, they get replaced with the values taken from the scope.
    Examples:

    • The content of the COMMENT component
    • The body of the postBody component

They are generally used to print a string with variable content as in:
{
  "person": {
    "age": ${age}
  }
}
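For example, if the variable age currently holds the value 25, the rendered body would be:

{
  "person": {
    "age": 25
  }
}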

 

Data manipulation in evaluated fields

Every evaluated field, such as expressions and variable references in templates, allows data manipulation operations. This means that you’re not limited to just selecting data; you can manipulate it to fit your needs.

To do that, you can leverage multiple functions.

Expression language extensions

These extensions are unique to the API Fortress engine, and allow you to perform various operations that come in handy in your daily work. The full reference is here: https://apifortress.com/doc/expression-language-extensions/

 

Here are a few examples:

  • I need to create a payload that contains a date in milliseconds, certainly in the future compared to the current moment. It also needs a unique ID for the request:
    {
      "futureDate": ${D.plusDays(D.nowMillis(),3)},
      "id": "${WSCrypto.genUUID()}"
    }
  • I need to pick one random item from an array, and store it in a variable for later use:
    <set var="my_item" object="payload.myarray.pick()"/>
  • I need to put my randomly picked item in a JSON payload, in JSON format:
    {
      "item": ${my_item.asJSON()}
    }

Language specific functions

While the extensions can be seen as useful functions for API-related tasks, at other times you may need to perform less specific operations, in a more programmer-like fashion.

Splitting, cutting, and searching strings are quite common tasks, as is accessing specific items in arrays, and so on.

For all these general purpose tasks, API Fortress allows you to use the Groovy programming language in all evaluated fields.

Note: in the cloud, only a subset of these commands is available, while on-prem you get full language coverage, unless configured otherwise.

The full semantics documentation is located here: http://groovy-lang.org/semantics.html

 

Here are a few typical use cases:

  • Take a certain integer from a payload, and store it multiplied by 10:
    <set var="item" object="payload.counter*10"/>
  • Append a suffix to a variable already set:
    <set var="item" value="${item+'-foobar'}"/>
    But this would also work:
    <set var="item" value="${item}-foobar"/>
  • Split a string on the comma, and iterate on it with an EACH component:
    <each expression="payload.the_string.split(',')">
  • Make sure that the prefix (before the dash) of a certain piece of data is an integer (as in: "123-foobar"):
    <assert-is expression="payload.id[0..payload.id.indexOf('-')-1]" type="integer"/>
    This reads: substring payload.id from index zero to the index before the first occurrence of '-'.
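As another sketch along the same lines, you could check that the suffix after the dash equals "foobar" with an equality assertion (this assumes an assert-equals component that takes expression and value attributes):

<assert-equals expression="payload.id.split('-')[1]" value="foobar"/>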

 

The SET lang component

The SET component also has a special mode that allows you to write a little Groovy snippet when things get complicated. It can be accessed by choosing the “Language” mode, and it allows you to write logic like the following:

def items = []
10.times { it ->
  items += it
}
return items

The assigned variable will contain an array of integers initialized with the numbers from 0 to 9.
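Assuming the Language-mode snippet above was assigned to a variable named items (the name is whatever you set in the component), that variable can then be used like any other piece of data; for instance, a SET component could pick its first element:

<set var="first_item" object="items[0]"/>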

Appendix: string vs number dichotomy

In API Fortress, most built-in data structures are strings, such as:

  • The variables from the vault
  • The variables from the input sets
  • The environments
  • The variables passed in an API Run call

But also everything generated by the evaluation of a template string, such as:

  • The comments (obviously)
  • The request payloads (obviously)
  • The value fields

This is why the SET operation has both a value field and an object (Data) field.

Assuming I’ve set 2 variables like this:

<set var="data1" value="5"/>

<set var="data2" object="5"/>

Now suppose I need to create a third variable by incrementing the previous variable by two:

WRONG: <set var="data3" object="data1+2"/>

data3 is "52", because data1 is a string and the + operator concatenates.

OK: <set var="data3" object="data2+2"/>

data3 is 7, because data2 is an integer.

 

For the very same reason:

<set var="data3" value="${data2+2}"/>

data3 would indeed store 7, but as a string, not a number. That may be fine in most cases, unless you need to manipulate the number further.

 

So what if I wanted to increment data1 by 2 then?

<set var="data3" object="data1.toInteger()+2"/>

The toInteger() method is always there to help you. If you are unsure whether a piece of data is already an integer, don’t worry: toInteger() won’t complain if the data is an integer already.

Best Practices for Disaster Recovery

Note: This document is referential only to the API Fortress-HA (High Availability) deployment.

Components:

Databases:

  • PostgreSQL
  • MongoDB

Message queues:

  • RabbitMQ

API Fortress:

  • API Fortress Dashboard
  • Microservices (mailer, scheduler, connector)
  • Remote workers (downloaders, core-server)

Resiliency / High availability

Databases can be replicated using their specific mechanism and the systems will connect to the clusters. Each replica will carry the full database in a streaming replication fashion.

Therefore, a failure (software, hardware, network) of any of the instances will not cause a service disruption.

When a replica is brought back to life, whether it’s the same server or another, its specific replication system will synchronize the new instance.

Databases are the only components in need of a persistent state; therefore, the machines running them need to provide persistent storage.

The message queue is stateless (therefore does not require persistent storage) and queues and exchanges are replicated using the high availability internal mechanism. Services can connect to both so that if one replica goes down, the other will take care of the work without service disruption.

The API Fortress dashboards are stateless (with the exclusion of in-memory web sessions) and can be scaled horizontally and load balanced.

The API Fortress microservices are stateless single-instance services that can be respawned on any server, without any specific concern.

The API Fortress remote workers are stateless multi-instance services that can be scaled horizontally and load balanced.

Backup and Restore

Backup

There are 2 primary types of backups:

  • Taking snapshots of the persisted database disks.
    The procedure is runtime dependent (AWS, GCloud, OpenShift etc.)
  • Dumping databases to files for classic restore.
    These procedures are described  here. The actual commands may vary based on the runtime.

Restoration

  • Given the snapshot of a disk, the runtime should provide the ability to create a new disk from it.
  • Given the dump files, you can follow the procedure described here. The actual commands may vary based on the runtime.

Note: No service except the two databases requires access to persistent storage.

Disaster recovery

Databases:

  • In case of a database being unreachable for connectivity issues, the system will continue working using a replica. When the issue is solved, the system will sync itself automatically. No service degradation is expected.
  • In case of a system failure, disk failure, or data corruption, spin a new server in the same cluster with the same hostname. This will trigger the database automatic replication. No service degradation is expected.
  • In case of a global failure of all replicas, API Fortress will stop working. Spin a new database cluster starting from a backup and restart all services. Service degradation is expected. Data loss may occur, depending on your backup strategy.

Message queues:

  • In case of a message queue being unreachable for connectivity issues, the system will continue working using a replica. A respawn of the failing message queue will bring it back to the cluster. No service degradation is expected.
  • In case of a system failure, spin a new server in the same cluster with the same hostname.  No service degradation is expected.
  • In case of a global failure of all replicas, API Fortress will stop executing scheduled tests and will not send notifications. Start a new message queue cluster. A restart of all services is not required but recommended. Service degradation is expected.

Connecting to TestRail


Test case managers are often critical to helping modern teams manage cases, plans, and runs. Communication of the test results is key, and that’s why API Fortress makes it easy to integrate with many leading platforms today. TestRail is one of them.

API Fortress makes it easy to automate the testing of your APIs, and to trigger those tests to run automatically on a schedule, and during a build process (eg: Jenkins). That test result data can be pushed to your TestRail instance automatically.

Here is a quick guide on how to set it up.

First, click the gear icon in the upper right corner of any view in API Fortress, highlighted in the below image.

[Screenshot]

Next, we’re going to click “Alerts Groups” on the left navigation bar, followed by “+ Alert Group” to create a new group, name it, and finally click the connector button. The GIF below demonstrates this procedure.


[GIF: AlertGroup]

Next, we need to add the TestRail connector to the alert group. Click “+ Connector” and select TestRail in the dropdown that appears.

[Screenshot]

Next, we need to define the parameters that we’re going to pass to the TestRail connector. Click the pencil icon to edit the parameters, and then fill out the fields in the modal.

 

[Screenshot]

Username: Your TestRail Username.
Password: The password for the TestRail account you’re using.
Project_Id: The ID (an integer) of the TestRail project you’d like the API Fortress results to populate.
Domain: The subdomain of your TestRail instance. It’s the part of the URL that comes between “https://” and “.testrail.io”

Next, we need to add the alert group to the project. Go to the projects view and click the “settings” icon on the project that you’d like to use the connector for.

[Screenshot]

In the dropdown that appears, if you begin typing the name of the alert group in the bottom field, it will populate automatically. Select it and click the green check to complete the connection process.

[Screenshot]

Your project in API Fortress is now connected with TestRail! It’s important to note that only test results that are generated automatically, either through the scheduler or an API call, will trigger the connector. Manually executed tests (Run Test button for example) will not trigger the connector.

As soon as a test is triggered automatically, we will see the pass/fail result appear in the project of our choice in TestRail, with a link to the test report in API Fortress. Everything you need to know about your API test results in your TestRail instance.

[Screenshot]

Key/Value Store

The Key/Value Store

The Key/Value store allows API Fortress users to create temporary key/value pairs that can be accessed across different tests. The Key/Value store is accessed via the Key/Value Store Component.

[Screenshot]

An extremely important point to note is that these key/value pairs are temporary. They expire after 24 hours have elapsed since the last update to the value itself.

The Key/Value Store Component has 4 methods available for use. They are:

Set

Set will create a new key/value pair in the Key/Value store. The value is entered in the “Object” field.

[Screenshot]

Load

Load will recall a value from the Key/Value store when provided with a key.

[Screenshot]

 

Push

Push will add a value to the end of an existing value of the datatype “Array” in the Key/Value store. If no such key exists, it will create a new array containing the passed-in value. The passed-in value is entered in the “Object” field.

[Screenshot]

 

Pop

Pop will remove a value from the end of an existing value of the datatype “Array” in the Key/Value store.

[Screenshot]

 

Basic Workflow

Let’s take a look at how this workflow works in a practical setting. The first example will be a simple set and retrieve of a value in the Key/Value Store.

First, we’ll make a GET request to an endpoint.

[Screenshot]

Next, we’ll add a K/V Store component.

[Screenshot]

This first K/V Store component (we’re going to incorporate several) is going to set the Key/Value pair in the Store, so we’re going to use “Set.”

[Screenshot]

In this case, we’re setting the key “prods” equal to “products[0].name”, which evaluates to “Baseball Cap.”

Next, we’re going to retrieve this Key/Value pair from the store with the “Load” method. In the K/V Store “Load” component, we’re going to assign the retrieved value to the variable “kvprods.”

[Screenshot]

Finally, we’ll add in a “Comment” component to ensure that the data was recovered successfully.

[Screenshot]

When we run the test, we’re presented with the following result:

[Screenshot]

Success!

Push/Pop Workflow

Next, we’re going to take a look at how “Push” and “Pop” work. “Push” and “Pop” are both array methods and behave as they normally do outside of this context. “Push” will append a value to the end of an array. “Pop” will remove the last value in an array.

First, we’re going to use “Push.” It should be noted that “Pop” works similarly but with the opposite result. “Pop” also assigns the removed value to a variable which can be used in the context of the test, but can no longer be accessed from the Key/Value Store. We’ll discuss this further when we take a look at “Pop.”

First, we’re going to send a GET request and assign a key in the Key/Value Store to a value from the response body. In this case, we’re going to use “Color,” which is an array.

[Screenshot]

Next, we’re going to “Load” and “Comment” this key. We’re doing that so we can actually see the change on the test report at the end of this workflow.

The next step is to “Push” the new data on to the end of the existing array.

[Screenshot]

In this case, we’re pushing the integer 999 onto the prods array.

Finally, we’re going to “Load” the modified data into the test from the K/V Store.

[Screenshot]

When we run the test, we’re presented with the following test report.

[Screenshot]

The comments show us clearly that we have pushed the number 999 onto the array stored in the key prods. 

Now, we’ve added something to the array. Let’s remove it with “Pop!”

The first step is to introduce a “Pop” K/V Store component.

[Screenshot]

We provide the “Pop” component with the name of the key from the Key/Value Store, and the name of the variable we’d like to assign the popped value to. Remember, “Pop” removes the last value in an array and returns the value itself. In this case, we’re going to assign it to a variable called “Popped.”

Next, we’re going to recall the modified key from the Key/Value Store. Then, we’re going to Comment both the recalled Key/Value Store value AND the previously popped value.

[Screenshot]

In the Test Report, we can clearly see the full workflow. First, we assigned an array to the Key/Value Store with “Set.” Then, we added to that array with “Push.” Finally, we removed the added value with “Pop.” Each time we made a change, we used “Load” to retrieve an updated value from the Key/Value Store.

[Screenshot]

The last two comments show the final state of the array in the Key/Value Store and the popped value itself. The popped value will only be available within the scope of this test run. The array in the Key/Value Store will remain retrievable until 24 hours after its most recent modification.

Note: “Load” does not reset the timer. Only “Set,” “Push,” and “Pop” reset the timer. 

Scheduler

The scheduler allows a user to schedule when a test should run.
The scheduled tests must be published.

They should retrieve resources on their own using GET/POST/PUT/DELETE I/O operations.

You can reach the scheduler page from the Test List page:

[Image: schedulerFromTestList]

or from the Test Control Panel page:

[Image: scheduleFromIntersitial]

In the Scheduler, select + Create New Run in the left panel.

[Image: schedulerTopPage]

Name: The name of the run. Makes it easy to recognize it from a list.
Downloader: Choose which datacenter the resources should be retrieved from. You can select one or more.
Paused: If checked, the run will be paused and won’t trigger executions.
Try a second execution…: If a test execution fails, another execution will be run after 2m 30s.
Dedicated engine (On Prem): If you are using the On Premises version, you can select a dedicated engine to run the test from.
Minutes: In which minutes of the hour the test is going to run. The minimum interval is 5 minutes, but the intervals you can choose from depend on the account type.
Hours: In which hours of the day the test is going to run.
Days: In which days of the month the test is going to run.
Months: In which months of the year the test is going to run.

Note: The scheduler works by intersecting the provided settings.
Example: set minutes: 5, 15 and days: 1, 5. The test will run every hour at minutes 5 and 15, but only if the day of the month is the 1st or the 5th.

[Image: schedulerOverrides]

Overrides: This section allows you to override any variable that is defined in the global section or in a data set. You can either write the key/value pairs or import values from presets. For example, if you wrote a test against your production environment and want to keep an eye on how the staging environment reacts to the same test, you can override the ‘domain’ variable with the staging domain for a specific scheduled item, which will then run the test against the altered target host.

In the top part of the page:

[Image: schedulerGlobal]

Test (drop down): The list of all tests available for scheduling (all the tests that are published). You can switch from one test schedule to another without leaving the schedule page. The first item in the list is the Global option; see below for more details.
Pause All/Run All: These buttons allow you to pause or run all the scheduled runs with a single click.
Delete Run: Deletes the selected run.
Save Run: Saves the run.

Global Scheduler

Selecting the Global option from the Test drop-down, you can schedule a single run for all or some of the tests available in the project.

Unlike the scheduler for a single test, this one has an extra section where you can select the tests you want to schedule together.

[Image: globalSection]

Note: The key/value pairs inserted in the overrides section at the bottom of the page will be used for all the selected tests. If you need to add values for only one specific test among the scheduled ones, do not insert them there; add them for that single test instead. To do so, first save the scheduled run. Once the schedule is saved, an icon will appear next to each test name for adding override values for that specific test.

[Image: overrideGlobal]

 

Use the Vault (Stored and Reusable Variables / Code Snippets)

The vault allows you to store variables and code snippets that can be used across an entire project.


The link to access the Vault is at the top of the window, as shown below.


The first column shows all of the projects of a company and the Global Vault. Code snippets and variables saved in a specific project are only available in that project; they are not available across projects. If a variable and/or code snippet needs to be available in more than one project within the company, it must be saved to the Global Vault. The Global Vault has been built to make variables and code snippets available across all of the projects in a company.



In the snippet section, you will find all of the snippets you have created using the composer (see here for more details). Once you have saved a snippet from the composer, you can choose whether to make it available only for the current project, or for all the projects within the company by saving it to the Global Vault. If you already have a snippet saved for the current project but need to make it available across all projects, you can easily export it from the current project to the Global Vault by using the import/export feature.

A good use case for the snippets feature is an authentication flow; you don’t need or want to rewrite all of the steps in every test. You just need to call the snippet that contains the authentication flow. Another good example is integration testing, where you can reuse various tests to create one larger flow.

In the variable section, you can define variables which will be part of the scope of the tests.

 

If a variable with the same name is defined within the test, it will override the one defined in the Vault. If identical variable names exist in both the Global Vault and the project Vault, the project Vault takes priority.

Defining a variable in the Vault is helpful when you need to use the same variable across multiple tests. This way, you don’t need to rewrite it every time. For example, a password could be saved as a variable and reused in multiple places.

Just like code snippets, if you need a variable available across multiple projects you can save it in the Global Vault or import it directly from another project.

When you open the Vault tab in the Composer, global snippets and variables are highlighted for ease of identification. 


Here is a quick example on how the Vault can be used in a test.

The Authentication Snippet

First, create a new test. Go to the test list, click +New Test, enter the test name, and click Compose. Once the composer appears, we need to enter the call. For this example, we will add a GET request that logs in using Basic authentication:

Consider a scenario where this login will be required for all the endpoints we have to test. It makes sense for this call to be stored in the Vault.

Select the GET, open the Vault panel and click the + button. Enter a name and description.

 

Now you can proceed with creating the test. Once done, we need to create the other tests for our API. Once again, click +New Test. Once you are in the composer, you can open the Vault panel and select the snippet we saved in the previous step.

 

To use the login call in the new test, we just need to click the down arrow button next to the snippet, and it will be added into the test.

 

Now we can call the endpoint we want to test. Let’s use the search endpoint. We pass the ‘id’ variable as a query parameter. The authorization token that we parameterized after the login call is passed in as well:

 

Now consider the case where we want to use the same ‘id’ in multiple tests. We don’t set the id as a global param or an input set. We add it to the vault instead. Save the test and exit from the composer. Click on Vault in the header and add the variable ‘id’ here:

 

Once done, go back to the test and check that the variable is available in the Vault panel:

 

Now if you launch the test you can see that the ‘id’ will be replaced with the value you have set in the Vault.

Update Input

The update input component allows you to persist a variable defined inside of the test so that the value will be accessible outside the current scope of the test.

Usually, the component is used in conjunction with the set variable component. First, we set a variable. Then, we make it available outside of the current test with the update input component.

We pass the update input component the name of the variable that we need to persist outside of the test. The component will first try to update a variable of the same name in the current input set. If that doesn’t exist, it will search for a global variable of the same name. If there is no global variable of the same name, it will check the vault. If the variable doesn’t exist there, it will create one with the same name.

Important note: the update input component works only outside of the composer. That is to say, it will only function when a test is executed from the Test List, the Scheduler, or via the API.

As an example, after calling the login endpoint, we create a variable called access_token with the set var component. Then, we update the value with the update input component. In doing so, the value of the variable will persist and can be used in follow-on tests.

 

JDBC

The JDBC component allows a test to query data from a database.
Typical use cases are:

  • to retrieve data items to use as input data
  • to perform data driven testing

The currently supported databases are: MySQL, PostgreSQL, and Microsoft SQL Server.

Configuration keys:

  • Url: the JDBC url to the database. Depending on the database type, URLs will look like the following:
    • jdbc:mysql://database.example.com/databaseName
    • jdbc:postgresql://database.example.com/databaseName
    • jdbc:sqlserver://database.example.com;databaseName=databaseName;
  • Driver: the type of driver; you can choose it from the options available in the drop down:
    • org.postgresql.Driver
    • com.microsoft.sqlserver.jdbc.SQLServerDriver
    • com.mysql.jdbc.Driver
  • Username: the username to access the database
  • Password: the password to access the database
  • Content: the SQL query
  • Variable: the name of the variable that will store the results

The result of the query will be represented as an array where each item is a row.
Every row is a key/value map, as in:

[
  {"id":123,"first_name":"John","last_name":"Doe"},
  {"id":456,"first_name":"Annie","last_name":"Doe"}
]

Therefore, you can then iterate over the results to use them according to your needs.
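For instance, here is a minimal sketch of iterating over the rows with the EACH component, assuming the Variable field was set to users and that the current row inside the loop is referenced as _1 (the iteration variable name is an assumption here):

<each expression="users">
  <set var="full_name" value="${_1.first_name} ${_1.last_name}"/>
</each>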

read-file (on-premises only)

In an on-premises deployment, the read-file command allows you to read a text file from the server’s local storage, in the /data directory.

Parameters:

Name    Type/Value                        Required
path    String                            Yes
var     String                            Yes
mode    "json", "xml2", "text", "csv"     Yes

path: the path of the file, relative to the /data/ directory

var: the name of the variable that will carry the read values

mode: how the file has to be parsed

If the file is successfully read, the variable declared in the “var” attribute will contain the structured (in case of json, xml2, csv) or unstructured (in case of text) information you can use as any other piece of data.
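A minimal sketch of how this might look in test markup, assuming the component serializes as a read-file element with the attributes listed above (the file name is a placeholder):

<read-file path="input-data.csv" var="file_data" mode="csv"/>

The file_data variable could then be inspected or iterated like any other structured piece of data.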