Testing GraphQL

GraphQL is a fantastic tool for creating versatile, eminently flexible servers. With API Fortress, testing GraphQL queries is as easy as testing regular REST endpoints.

API Fortress provides a demonstration GraphQL environment at:

GraphServerlet

The query and mutation below are formatted for this environment.

Testing a Query:

If the server has GraphiQL enabled, you can create your queries in that tool and then copy the generated URLs into an API Fortress GET component. However, this process does not lend itself to either readability or replicability. The preferred method is passing the query as a POST body.

If we’re sending a query request to our GraphQL server, we would format our POST body as follows:

{
 "query": "query (\$id: Int!) { course(id: \$id) { id, title, author, description, topic, url }}", 
 "variables": { "id": 1} 
}

The above object says the following: we are querying for a specific course by ID, and wish our response body to contain the ID, Title, Author, Description, Topic, and URL of that entry. If we send this body as a POST to our test GraphQL environment, our response will look like this:

{
  "data": {
    "course": {
      "id": 1,
      "title": "The Complete Node.js Developer Course",
      "author": "Andrew Mead, Rob Percival",
      "description": "Learn Node.js by building real-world applications with Node, Express, MongoDB, Mocha, and more!",
      "topic": "Node.js",
      "url": "https://codingthesmartway.com/courses/nodejs/"
    }
  }
}

Since our response is a JSON body, API Fortress can automatically generate a test to validate future responses from this query.
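Outside the platform, the same body can be assembled with any HTTP client before POSTing it to the GraphQL endpoint. A minimal Python sketch of just the body construction (the helper name is ours, not an API Fortress utility):

```python
import json

def build_graphql_body(query: str, variables: dict) -> str:
    """Serialize a GraphQL query and its variables into a POST body."""
    return json.dumps({"query": query, "variables": variables})

body = build_graphql_body(
    "query ($id: Int!) { course(id: $id) { id, title, author, description, topic, url }}",
    {"id": 1},
)

parsed = json.loads(body)
print(parsed["variables"]["id"])  # prints 1
```

Note that when the body is built programmatically like this, the `$` characters need no escaping; the backslashes in the example above exist only to keep API Fortress from interpolating `${...}` placeholders.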

Testing a Mutation:

A mutation is a GraphQL operation that makes a change to the data being provided. Whereas a GraphQL Query performs a READ, a Mutation performs a CREATE, UPDATE, or DELETE, rounding out our CRUD operations.

A Mutation is also passed as a POST body to the GraphQL endpoint in question:

{
 "query": "mutation (\$id: Int!, \$topic: String!) { updateCourseTopic(id: \$id, topic: \$topic) { title, topic }}", 
 "variables": { "id": 1, "topic" : "Ruby" } 
}

This Mutation is executing the ‘updateCourseTopic’ operation on the database entry with the ID of 1 and changing its ‘topic’ value to ‘Ruby’. Note the ‘mutation’ keyword in place of the ‘query’ keyword from the Query example. This Mutation would return the following response from our test GraphQL environment:

{
  "data": {
    "updateCourseTopic": {
      "title": "The Complete Node.js Developer Course",
      "topic": "Ruby"
    }
  }
}

Again, as this is a valid JSON response body, API Fortress is able to generate a test automatically to validate its schema in the future.
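The generated test essentially asserts the shape of this payload. As a rough illustration of the kind of checks involved, here is the same validation in plain Python (not API Fortress test code):

```python
import json

# The mutation response from the demo environment, as shown above.
response_body = """
{
  "data": {
    "updateCourseTopic": {
      "title": "The Complete Node.js Developer Course",
      "topic": "Ruby"
    }
  }
}
"""

payload = json.loads(response_body)
course = payload["data"]["updateCourseTopic"]

# Validate the fields the mutation is expected to return.
assert isinstance(course["title"], str)
assert course["topic"] == "Ruby"
print("schema checks passed")  # prints schema checks passed
```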

Import / Export Tests

API Fortress makes it very simple to import and export your tests. This is useful when moving tests to another project, migrating to another API Fortress deployment, or sharing them with our support team for help.

Export Tests

After you log in to the platform, the flow is as follows:

  1. Click Tests for a Project (folder) of your choice
  2. To download, click the circles to the left of the test names
  3. Click Export Selected Tests

This will download a zip of the tests you exported. See below for a GIF of the process.

Export Tests

Import Tests

Importing a test is as easy as clicking the +Import Tests button next to the +New Test button. See the image below.

Import Tests

API Fortress Command-Line Tool (APIF-Auto)

Welcome to the API Fortress Command-Line Tool!

The tool itself: https://github.com/apifortress/afcmd/releases

The documentation for the API that the tool leverages: https://apifortressv3.docs.apiary.io/

The tool, or rather pair of tools, is designed to reduce the legwork that goes into executing or uploading API Fortress tests. The following readme explains each part of the process.

APFCMD allows a user to easily integrate API Fortress testing into other workflows. Example use cases are:

  • Executing API Fortress tests from a CI/CD tool
  • Incorporating API Fortress tests in a Git version control plan.
  • Pushing test code from an IDE to the API Fortress platform.

All of these scenarios, and more, can be accomplished with the tool.

Let's take a look at the two major components of the tool:

APIF-RUN

Run allows us to execute tests on the platform and make use of the resulting data. We can run tests via the API in either an authenticated or unauthenticated state. By passing credentials, we receive a more verbose test result, and we can output this result to a file. We also have access to all of the standard options that API Fortress provides in its API (silent run, dry run, etc.)

RUN EXECUTION FLAGS

  • run-all – RUN ALL – This will execute all of the tests in a chosen project.
  • run-by-tag – RUN BY TAG – This will execute all tests with a selected tag (requires the -t flag to set tag)
  • run-by-id – RUN BY ID – This will execute a test with a specific ID (requires the -i flag to set id)
  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL, or the key from a configuration file (set the path to the config file with the -c tag)

For example, to run all of the tests in a specific project, we would use the following command string:

python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook

RUN OPTION FLAGS

  • -S – SYNC – This will provide a response body with the result of the test.
  • -f – FORMAT – This will determine the format of the test result output (JSON, JUnit, Bool). REQUIRES SYNC MODE (-S)
  • -d – DRY – This will cause the test run to be a dry run.
  • -s – SILENT – This will cause the test to run in silent mode.
  • -o – OUTPUT – This will write the result of the test to a local file. You must provide the path to the file to be created. Remember your filetype! (.json/.xml)
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as the script (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -t – TAG – This is how you pass a tag for RUN BY TAG mode.
  • -i – ID – This is how you pass an ID for RUN BY ID mode.
  • -e – ENVIRONMENT – This is how you pass environmental/override variables. The format is key:value. You can pass multiple sets of environmental variables like so: key:value key1:value1 key2:value2
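Putting several of these flags together in scripts gets verbose; a hypothetical helper like the one below can assemble the argument list for you (the function and its structure are our own illustration, not part of the tool):

```python
def build_apif_run_cmd(mode, hook, credentials=None, fmt=None,
                       output=None, env_vars=None):
    """Assemble an apif-run.py argument list (illustrative helper only)."""
    cmd = ["python", "apif-run.py", mode, hook]
    if credentials:
        cmd += ["-C", credentials]          # -C supersedes the config file
    if fmt:
        cmd += ["-S", "-f", fmt]            # -f requires sync mode (-S)
    if output:
        cmd += ["-o", output]
    if env_vars:
        # -e takes one or more key:value pairs
        cmd += ["-e"] + [f"{k}:{v}" for k, v in env_vars.items()]
    return cmd

cmd = build_apif_run_cmd(
    "run-all",
    "http://mastiff.apifortress.com/yourWebHook",
    credentials="my@username.com:password1",
    fmt="junit",
    output="results.xml",
    env_vars={"domain": "staging.example.com"},
)
print(" ".join(cmd))
```

The resulting list can be handed to `subprocess.run` or joined into a shell command string.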

APIF-PUSH

Push allows us to push tests into API Fortress. When tests are downloaded from the platform, they come as two XML files (unit.xml & input.xml). We can use this tool to push those files back to an API Fortress project, either individually or in bulk.

PUSH EXECUTION FLAGS

  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL, or the key from a configuration file (set the path to the config file with the -c tag)

PUSH OPTION FLAGS

  • -p – PATH – This provides the path to the test file you wish to upload. You can pass multiple paths.
  • -r – RECURSIVE – This flag makes the call recursive: it will dive through the directory passed with -p and grab every test in all of its subdirectories.
  • -b – BRANCH – This allows you to specify a Git branch that these test files are attached to. Default is master.
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as the script (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -T – TAG – This allows you to pass tags to be appended to the test after it is pushed. This will OVERWRITE ANY EXISTING TAGS. Multiple tags can be passed.
  • -t – ADD TAG – This will allow you to add additional tags to a test that already has tags attached.

CONFIGURATION FILE

A configuration file is a YAML file that is formatted as follows:

hooks:
  - key: cool_proj1
    url: https://mastiff.apifortress.com/app/api/rest/v3/A_WEBHOOK
    credentials:
      username: (your username)
      password: (your password)
  - key: uncool_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/ANOTHER_WEBHOOK
    credentials:
      username: (another username)
      password: (another password)
  - key: unauth_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/JUST_A_WEBHOOK_WITHOUT_CREDENTIALS
test_directory: /tests

Once you create a configuration file, you can pass the path with -c and the key to the data in place of the normal hook URL. If you also pass credentials, they’ll override the credentials in the configuration file. If you don’t include credentials in the config file, you can pass them manually or leave them out entirely.
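That lookup-and-precedence logic can be pictured in a few lines of Python (a sketch: `config` stands for the parsed YAML, and `resolve_hook` is our own name, not part of the tool):

```python
def resolve_hook(config, key, manual_credentials=None):
    """Resolve a config-file key to (url, credentials).

    Manually passed credentials (-C) supersede the config file."""
    for hook in config["hooks"]:
        if hook["key"] == key:
            creds = manual_credentials or hook.get("credentials")
            return hook["url"], creds
    raise KeyError(f"no hook named {key!r} in config file")

# A parsed version of the YAML example above.
config = {
    "hooks": [
        {"key": "cool_proj1",
         "url": "https://mastiff.apifortress.com/app/api/rest/v3/A_WEBHOOK",
         "credentials": {"username": "user", "password": "pass"}},
        {"key": "unauth_proj",
         "url": "https://mastiff.apifortress.com/app/api/rest/v3/JUST_A_WEBHOOK_WITHOUT_CREDENTIALS"},
    ]
}

url, creds = resolve_hook(config, "cool_proj1",
                          manual_credentials={"username": "me", "password": "x"})
print(creds["username"])  # prints me (the -C override wins)
```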

EXAMPLES

Execute all of the tests in a project and output the results to a JUnit/XML file via an authenticated route:

python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook -S -C my@username.com:password1 -f junit -o some/route/results.xml

Push all of the tests from a directory and all of its subdirectories to a project:

python apif-push.py http://mastiff.apifortress.com/yourWebHook -C my@username.com:password1 -r -p some/directory/with/tests

Execute one test in a project by ID, using a config file for credentials and webhook:

python apif-run.py run-by-id config_key -c path/to/config/file -i testidhash8924jsdfiwef891

NOTES

  • The order of the optional arguments passed does not matter.
  • Remember, in a bash environment, anything that has a space in it needs to be wrapped in quotes. This goes for paths, filenames, etc.

POST-RECEIVE SCRIPT FOR GIT

This Post-Receive script is meant to assist in incorporating API Fortress into your Git workflow. Dropping the file into the hooks directory of your .git directory will cause newly committed API Fortress test code to be pushed to the API Fortress platform. The ‘test_directory’ key in the config.yml lets the script know which folder the tests themselves are located in. It will then watch for commits touching this folder and push the appropriate code to the platform.

Updating an On Premises Instance

Updating an On Premises instance of API Fortress is done as follows:

  • Back up the databases. (Optional, but recommended) 
  • Stop the containers
    • From the ‘core’ directory, issue a docker-compose stop command and wait for the operation to complete. This command stops the currently-running Docker containers.
  • Pull the updated containers
    • From the ‘core’ directory, issue a docker-compose pull command and wait for the operation to complete. This command pulls updated images from the API Fortress Docker repository.
  • Restart the containers
    • From the ‘core’ directory, issue a ./start_all.sh command to restart the containers and wait for the operation to complete. This script restarts the containers in the proper order.

Once the preceding steps are completed, the On Premises version of API Fortress will be fully updated.


Different ways to compose a Request Body

In this post we will show you the different ways you can compose a Request Body: from the simplest to the most complex.

  1. Copy and paste the body from somewhere

    The first and easiest way is when we have a body from somewhere to copy and paste as is into the call. Let’s see how this is done:

    1. In the composer we add the POST component and type the url and all of the required fields.
      Url: https://mydomain.com/myPost (the url of the resource you want to test)
      Variable: payload (the name of the variable that contains the response)
      Mode: json (the type of the response)

      post_comp

    2. Now we add the Body component and after selecting the Content-Type we paste the body in Content field.
      Content-Type: application/json
      Content: {"method":"post","url":"http://www.testme.com/api/run/test"} (the body required in your call)

      paste_body

    3. Now we can execute the call and proceed with the test.
      final_call
  2. Using Variables in the Request Body

    Another way to compose a request is using variables in the body.

    1. In the composer we add the POST component typing the url and all the required fields.
      Url: https://mydomain.com/myPost (the url of the resource you want to test)
      Variable: payload (the name of the variable that contains the response)
      Mode: json (the type of the response)

      post_comp

    2. We add the Body component. In the Content-Type we choose the proper one, let’s say application/json. In this scenario we need to use a variable so in the Content field we enter the following:
       {
       "user": ${user},
       "password": ${password},
       "url": "http://www.testme.com/api/run/test"
       }

      In this scenario, “user” and “password” are not hard-coded in the body; they are variables defined as global parameters in the data set.
      var_body
      data_set

    3. The POST has been done and can be executed.
      final_call2
  3. Using a Variable from another call

    The next way to compose a Request Body is by using a variable from another call. Let’s see how this can be done.

    1. The first thing we need to do is add the call we will retrieve the variable from. Let’s consider, as example, the common scenario where we need to perform a login for authentication and retrieve the authentication token required for the following call.
      Url: http://www.mydomain.com/login (the url of the resource you want to test)
      Variable: payload (the name of the variable that contains the response)
      Mode: json (the type of the response)
      Header: 
         Name: Authorization
         Value: Basic YWRtaW46cGFzc3dvcmQ= (this value comes from encoding username:password in Base64)

      loginAuth

    2. Executing the login we will have as response the desired token. Let’s see it using our console.
      response_token
    3. Now we need to save the token as variable.
      Var: token (the name of the variable)
      Variable mode: String (the type of the variable)
      Value: ${payload.access_token} (we retrieve the value from the previous 'payload')

      var_token

    4. Once the token has been saved as a variable, we can proceed by adding the second call and using that token in the Request Body.
      Content-Type: application/json
      Content: {"token":"${token}"}

      bodyWToken

  4. Using an object from another call

    In the next example we will show you a more complex case. We will consider the scenario where we need to use an object retrieved from a previous call into the body of a subsequent call. Let’s take a look at an example:

    1. First, we perform the call we retrieve the object from.
      search
    2. Let’s execute the call in our console in order to see the response.
      {
        "id": 123,
        "items": [
          {
            "id": 11,
            "name": "stuff1"
          },
          {
            "id": 12,
            "name": "stuff2"
          },
          {
            "id": 13,
            "name": "stuff3"
          }
        ]
      }

      search_response

    3. Let’s say we need the object ‘items’ as the body in the subsequent call. So, as a second call, we will add a POST and we will type the following as body:
      ${searchPayload.items.asJSON()}

       

      objectInBody

    4. Now we can proceed with the test.

     

  5. Creating a new structure to add as a body

The last scenario is the most complex one: we need to create a new structure to use as a body, using data from a previous call. Let’s see how we can do this:

  1. The first thing we have to do is perform the call that retrieves the data we’re using. Let’s consider a GET that returns an array of items.
    firstCall
  2. Let’s see the response using our console.
    {
      "items": [
        {
          "id": 11,
          "price": 5.99
        },
        {
          "id": 12,
          "price": 6.99
        },
        {
          "id": 13,
          "price": 10.99
        },
        {
          "id": 14,
          "price": 15.99
        }
      ]
    }

    response_get

  3. Now we need to create the new data structure. To do so, we add a SET component as follows:
    payload.items.each { it -> it.currency='$'  }; return payload.asJSON(); (for each item in the array, we add the currency attribute with "$" as value)

    newData

  4. Now we can add the POST and add the new structure as the POST request body:
    postWithNewStructure
  5. That’s it. Now we can proceed with the test.
    allDone

Note: in this post we have used the POST method, but all of the examples shown can be applied to the other REST methods. Likewise, we have demonstrated scenarios with Request Bodies, but all of the examples can also be used for Header or Param cases.
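API Fortress resolves the `${...}` placeholders for you, but the mechanics of scenario 2 are easy to reproduce in plain Python with `string.Template`, which happens to use the same `${var}` syntax (a sketch with made-up values):

```python
import json
from string import Template

# Template uses the same ${var} placeholder syntax as API Fortress bodies.
body_template = Template("""{
  "user": "${user}",
  "password": "${password}",
  "url": "http://www.testme.com/api/run/test"
}""")

# In API Fortress these values would come from the data set's global parameters.
params = {"user": "alice", "password": "s3cret"}

body = body_template.substitute(params)
print(json.loads(body)["user"])  # prints alice
```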


Bamboo – Integrate API Tests & Results

Passing data from API Fortress to Atlassian Bamboo allows Bamboo users to include API Fortress test results in their CI/CD process.

Step 1: Generating a Webhook

The first step to integrating API Fortress into your CI/CD process is to grab the generated API hook for the project in question. To do so, head to the Settings panel in API Fortress. This view, seen below, can be accessed from anywhere in the application by clicking the Gear icon in the top right corner of the screen. Please note you need Manager access to generate a webhook. From Settings, click the API Hooks section and generate the hook for your project. The process can be seen in detail in the .gif below.

hook

Step 2: Select or Create a Bamboo Project

After we’ve created our webhook, calling it from within Bamboo is a fairly simple process. First, create a new project in Bamboo. You can also add to an existing project from this screen.

project

Step 3: Adding an HTTP Call

Next, we need to add an HTTP Call component and enter the webhook we generated. Depending on what you wish the call to API Fortress to trigger, you may append different routing on to the end of the webhook. The API Fortress API Documentation is located here.

httpcall

Step 4: Parsing Results

After the request is sent to the API Fortress API, we’ll need to save the JUnit data that’s returned. We do so by adding a JUnit Parser step.

junit

Once the above steps are completed and saved, the build sequence will make a call to API Fortress upon execution, receive the results of the tests, and parse the results.

summary

Integrate With Your CI/CD Platform (simple)

API Fortress was specifically built to make integrating with a CI/CD platform simple. Using our webhook, you can execute API tests as part of your build and deployment flow, and get the results back in JSON or JUnit format.

This is a general walkthrough of how that works, but we also have specific docs for:

Step 1 – Install an HTTP Plugin

Depending on your CI/CD platform, you may need to install a plugin that allows for HTTP requests during the build process. API Fortress will need that to execute the tests.

Step 2 – Generate an API Hook

The first step to integrating API Fortress into your CI/CD process is to grab the generated API hook for the project in question. To do so, head to the Settings panel in API Fortress. This view, seen below, can be accessed from anywhere in the application by clicking the Gear icon in the top right corner of the screen. Please note you need Manager access to generate a webhook. From Settings, click the API Hooks section and generate the hook for your project.

The next step depends on what you’re trying to test. The following steps are going to assume that you wish to run all of the tests in a project. You can also run a single test, or a series of tests with a certain tag. If you would like to learn more about that please contact API Fortress.

As it stands, our API hook is as follows:
https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861

The normal command to run all of the tests in the project, per the API Fortress docs, is /tests/run-all, so we append this to the end of the API call. You may also need to request JUnit output, which takes a couple of query parameters: first we set sync to true, then we set format to junit. In short, we append ?sync=true&format=junit to the webhook call. That gives us the final API call:
https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861/tests/run-all?sync=true&format=junit

Great! If we make this API call via a browser or a tool like Postman, we can see our results in JUnit. We’re almost there.
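Assembling that URL is simple string work; a small sketch (the helper name is ours, and the webhook value is the example one from above):

```python
def build_run_all_url(hook, sync=True, fmt="junit"):
    """Append the run-all route and output parameters to a project webhook."""
    url = hook.rstrip("/") + "/tests/run-all"
    if sync:
        url += f"?sync=true&format={fmt}"  # JUnit output requires sync mode
    return url

hook = "https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861"
print(build_run_all_url(hook))
```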

Step 3 – Execute HTTP Call From Your Platform

From your CI/CD platform’s dashboard, you’ll need to paste the webhook call into the flow. We have more specific docs available, such as for Jenkins.

The test results can then be passed along to platforms like qTest or Zephyr in your CI/CD pipeline.

Keywords: cicd, jenkins, bamboo, microsoft tfs, team foundation server, gitlab ci/cd, travisci,

Setup Connectors (DataDog)

Here is a quick guide to setting up a DataDog integration.

  1. First, we need to generate a new API key in DataDog.
    1. Log in to your DataDog account.
    2. Mouse-over Integrations and then click API
    3. Create a new API key at the top of the view (Note: You must have Admin DataDog account access.)

datadog

  2. In API Fortress, go to company settings (top right gear icon)
  3. Click on Alert Groups
  4. Create a new Alert Group (if necessary)
  5. Add recipients to the Alert Group (if necessary)
  6. Click on the Connectors icon
  7. Choose one of the DataDog connectors from the dropdown
  8. Add your DataDog API Key created previously and the DataDog host you wish the connector to pass data to.

connector

Once this process is complete, API Fortress will begin passing data to DataDog where it can be charted in any way you like!

Note: this connector shares events (i.e., outages) with DataDog. If you would like to include performance metrics, such as latency and fetch time, please let us know and we can help set that up. It requires a small script.

Load Agent Deployment

A Load Agent is a server instance that provides the simulated users in a load test. Load Testing cannot function without at least one Load Agent.

The provided files (contained in core-server.tgz) are all that you need in order to deploy a Load Agent. This tutorial explains what changes need to be made to the files within in order to properly deploy the Load Agent.

Before starting the process, there is a step that needs to be taken for clients who received their API Fortress containers before the introduction of Load Testing.

Step 0 (Not for all users) – Activate the Node Container

Open the docker-compose.yml in the main API Fortress directory. It can be located at /core/bin/docker-compose.yml

  • Paste the following code snippet in after the #RABBITMQ section and before the #APIFORTRESS DASHBOARD section:
#NODE
apifortress-node:
   image: theirish81/uitools
   hostname: node.apifortress
   networks:
      - apifortress
   domainname: node.apifortress
   labels:
      io.rancher.container.pull_image: always
  • In the links section of the #APIFORTRESS DASHBOARD configuration, add the following line:
- apifortress-node:node.apifortress
  • Save and close the docker-compose.yml.
  • Open the start_all.sh file in a code editor. It is also located in /core/bin.
  • Copy and paste the following and overwrite the entire contents of the file:
#!/bin/bash
sudo docker-compose up -d apifortress-postgres
sleep 5s
sudo docker-compose up -d apifortress-mongo
sleep 5s
sudo docker-compose up -d apifortress-rabbit
sudo docker-compose up -d apifortress-node
sleep 30s
sudo docker-compose up -d apifortress
sleep 1m
sudo docker-compose up -d apifortress-mailer
sudo docker-compose up -d apifortress-scheduler
sudo docker-compose up -d apifortress-connector
  • Your API Fortress instance can now utilize the API Fortress Node Container which powers Load Testing.

Step 1 – Unzip the provided file (core-server.tgz)

First, unzip the provided file.

Screen Shot 2018-06-05 at 11.44.28 AM

Step 2 – Define the maximum users per Load Agent

Users per agent is the maximum number of virtual users that each Load Agent can provide.

It’s important to remember that large numbers of simulated users will require large amounts of hardware resources. Contact your DevOps team to develop a strategy for resource allocation. 

  • Locate and open the file named application.conf. It is located in core-server/etc.
  • Line 14 of this file (fixed-pool-size) should have its value adjusted to match the desired maximum number of users per agent.
  • Line 48 of this file (nr-of-instances) should have its value adjusted to match the desired maximum number of users per agent. These two values should match.

Step 3 – Configure Config.yaml

  • Locate and open config.yaml. It is located at core-server/etc.
  • First, we have to configure the baseURL
    • baseURL is located on line 3.
    • If the Load Agent and the API Fortress Dashboard are located on the same server, then you can replace the baseURL with the internal address and port of the Dashboard on the server.
    • If the Load Agent and the API Fortress Dashboard are located on different servers, you can replace the baseURL with the actual URL of the Dashboard. That is to say, the URL you would use to access it via web browser.
  • Next, we need to provide the API Key and Secret.
    • Open the main API Fortress dashboard and click the gear icon in the upper right corner to access the settings menu.
    • Click the “API Keys” option in the left sidebar.
    • Click “+API Key”.

Create API Key

(Click image for GIF of procedure)

  • Copy the API Key to line 5 of config.yaml.
  • Copy the Secret to line 6 of config.yaml.

Step 4 – Adding the Engine

  • The next step is to add the new Engine to API Fortress itself.
  • Log into API Fortress as an administrator.
  • Click the user icon in the upper right corner, and then click “Admin Panel”
  • Click “Engines” on the left side of the screen.
  • Click “+Engine”
  • Enter the name and location of the Engine.
  • The CRN value defaults to a random string. You must change it to something human-readable. This is the internal name of the engine.
  • After modifying the CRN, copy the value to line 11 of config.yaml.
  • Copy the secret to line 12 of config.yaml.
  • Select the Owning Company of the Engine. An Engine must be owned by a single company. The default value (Public Engine) should not be chosen.
  • Select “Yes” for “Dedicated to Load Testing”.
  • Click the green check to save the Engine settings.

Add Engine

(Click image for GIF of procedure)

Step 5 – Deploy the Load Agent

At the desired server location, use the “docker-compose up -d” command to deploy the Load Agent container. After the operation is complete, the Load Agent will be visible to your API Fortress Load Tests. 

Jenkins – Zephyr Enterprise Integration

Step 1 – Install the Zephyr Enterprise Jenkins Plugin

The first step to exporting data to Zephyr Enterprise is to download and configure the Zephyr Enterprise plugin.

From the Jenkins main page, click “Manage Jenkins” and then “Manage Plugins.” From the “Manage Plugins” window, search for and install “Zephyr Enterprise.”

jenkinsAddons

Step 2 – Configure the Zephyr Enterprise Jenkins Plugin

Click the “Configure System” option in the “Manage Jenkins” menu.

JenkConfig

Scroll down to “Zephyr Server Configuration” and enter your domain and login credentials.

Screen Shot 2018-05-29 at 10.30.39 AM

Click “Test Configuration.” If the test is successful, your Jenkins is properly configured to communicate with your Zephyr instance.

Step 3 – Generate an API Hook

Next, we need to create an API Fortress Webhook to export the test data to Jenkins. To do so, head to the Settings panel in API Fortress. This view, seen below, can be accessed from anywhere in the application by clicking the Gear icon in the top right corner of the screen. Note: You need Manager access to generate a Webhook. From Settings, click the API Hooks section and generate the hook for your project.

The next step depends on what you’re trying to test. The following steps are going to assume that you wish to run all of the tests in a project. You can also run a single test, or a series of tests with a certain tag. If you would like to learn more about that please contact API Fortress.

To import our data into Jenkins as JUnit, we’ll export it in JUnit format using a query parameter. Since we already have our API hook, we just need to add the parameter to do so.

As it stands, our API hook is as follows:

https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861

The normal command to run all of the tests in the project, per the API Fortress docs, is /tests/run-all, so we append this to the end of the API call. We also need to request JUnit output via query parameters: first we set sync to true, then we set format to junit. In short, we append ?sync=true&format=junit. That gives us the final API call:

https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861/tests/run-all?sync=true&format=junit

Great! If we make this API call via a browser or a tool like Postman, we can see our results in JUnit.

Step 4 – Execute HTTP Call from Jenkins

From the Jenkins dashboard, let’s create a New Item. Next, we’re going to name and create a Freestyle Project. Click the OK button to proceed.

Scroll down the page until you see the “Add Build Step” pulldown menu and select “HTTP Request.” This option will only be available if the HTTP Request plugin is installed. We’re going to paste the API call we created above into the URL line. If we save this configuration, we can run the build and see Jenkins receive our JUnit test results in real time.

Next, we’re going to click the “Advanced” button. Scroll to the bottom of the newly opened view and enter a filename of your choice into the “Output Response to File” line.

Step 5 – Publish JUnit Test Results in Jenkins

Now that we’re receiving JUnit data from API Fortress in Jenkins, we need to publish the data so that we can use it further downstream. Click “Add Post-Build Action” and then “Publish JUnit Data.”

In the new window, enter the same filename that we wrote our JUnit data to in the previous step.

Now, we’ve enabled Jenkins to execute API Fortress tests and receive the test data in JUnit format. Next, we’re going to allow it to pass this data on to Zephyr.

Step 6 – Exporting Data to Zephyr

Click “Add Post-Build Action” and select “Publish Test Results to Zephyr Enterprise.” Since we configured the Zephyr plugin in step 2, Zephyr information should populate automatically from your Zephyr Enterprise instance. Select the project, release and cycle of your choice and save the build.

Screen Shot 2018-05-29 at 10.30.05 AM

Test data will now export to Zephyr every time this project is built.

Screen Shot 2018-05-29 at 10.31.14 AM