The Shape of Things to Come

A message from the API Fortress CTO

API Fortress is a small team of dedicated people trying to deliver the best experience possible for automated API testing and monitoring across entire organizations. It may sound like a marketing pitch, but it pretty much describes what the company has been trying to do since its inception.

At the core of our continuous improvement, we embrace an iterative approach that quickly integrates critical customer feedback.

At the same time, it is unavoidable that we sometimes must push back on certain requests and ideas that don’t quite match our platform’s overall design and philosophy. This is rarely because the feedback doesn’t make sense; more often it is because accepting it into our pipeline would require a major reengineering of the system, one that, given our capacity, could not take place.

It is, however, the duty of a CTO to know when the fundamental philosophy of a platform is rooted in assumptions that must evolve to maintain pace with or stay ahead of a rapidly evolving market. A CTO must be able to act ahead of time, rethink what’s been done and be ready to succeed in a future market.

I think it is time to visualize the shape of things to come in the API testing space. While this whole preamble may sound a bit dramatic (talk to my coworkers; even my watercooler chats sound dramatic), I can assure you this is possibly the most exciting moment in the history of this company, and hopefully for its customers.

Consider our latest API Fortress projects: Mastiff 2 and Forge 2. These two updates involve a major redesign and reengineering of the API Fortress platform that will bring huge benefits to all users.

The first API Fortress platform made an unparalleled improvement on the ability of QA engineers to write effective API tests, while allowing product owners to gain clear insight on what’s going on with the organization’s APIs. The upcoming API Fortress platform will extend its unique ease of effective test generation, enhanced visibility, and seamless collaboration to developers and project managers.

While developers are already using API Fortress by the hundreds, the upcoming API Fortress platform will offer enhanced UI and capabilities that specifically appeal to developers. Our plan is to extend what’s really working for QAs to developers, enabling a whole new level of control.

All of these updates will happen without losing one bit of backward compatibility.

I know, it’s a bit vague, but I want it to be this way in this first post on this topic. We will gradually release more details about where we’re going.

We are so energized by this work that we have decided to keep you posted on our projects’ evolution. We’ll continuously share engineering details and other news through the voices of the people making this possible.

A SVELTE NEW WORLD

By Lorenzo Fontana

Both Mastiff 2 and Forge 2 will undergo a major reengineering of the front end. While we are not rewriting the composer (at least for now), everything else will become more modern, easier to maintain, modular, and capable of being packaged both as a Web and Electron application.

I unavoidably needed to abandon the UI-framework-free approach that is characteristic of the API Fortress front end, and embrace the right UI framework. As you probably know, the UI framework marketplace is crazy – different philosophies, different approaches, and completely different strengths and weaknesses. Choosing the right UI framework can be a head-spinning decision, because once it’s done, there’s really no turning back. Fortunately, I knew exactly what key features I was looking for, so I could at least narrow the search.

Use React! That’s what many of my friends told me. Not so fast. Just because something is popular doesn’t mean you must go for it. You might miss out on something more suitable to your unique needs. Six years ago, I decided to dismiss the popular frameworks, Angular and React. Sure, these frameworks have built good track records, but my reasons for dismissing them back then still exist today.

I needed something that not only would allow me to write a good piece of complex software, but that could really help me in a moment of need. What I really wanted was something that offered an intuitive, effective, and repeatable workflow that would avoid piling up boilerplate by reducing the amount of code needed, instead of increasing it.

When you have a battalion of developers, all you need to care about is whether the framework reliably does what it’s supposed to do, but when your resources are limited, you need to make sure that the framework will drastically reduce the number of days you hate yourself.

Rich Harris’ Svelte surprised me immediately.

Svelte is a fully reactive (as in “reactive programming“) UI framework with a mission statement that could be simply summarized as – no bullshit. Actually, it is hard to categorize Svelte as a UI framework at all. Svelte is more like a powerful reactive UI development library. You “use” Svelte, you don’t “adopt” it. The main aim of Svelte is not to control your webapp with a ton of functionalities and philosophies you didn’t ask for; rather Svelte gives you the building blocks to create powerful UIs. As a free thinker, as you may have guessed by my previous statements, I don’t let anyone tell me what I should think, let alone a piece of JavaScript.

Svelte’s syntax is compact, extremely readable and remarkably effective (again, no bullshit) to the point that I can show a snippet to my CTO (a strictly back-end engineer) and not see him puke all over the office.

The boilerplate is simply non-existent, and by that, I mean you don’t need to write specific code to make it work. The only code you need is the code that effectively does something – your business logic, so to speak. If you’re a React developer, I invite you to have a look at these examples and then go cry in your safe space.
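
To give a flavor of what “no boilerplate” means in practice, here is a complete Svelte 3 component: local state, a derived value, and a DOM event binding, with no wiring code at all. (A generic counter for illustration, not a snippet from our product.)

```svelte
<script>
  let count = 0;
  // `$:` declares a reactive value: `doubled` recomputes whenever `count` changes
  $: doubled = count * 2;
</script>

<button on:click={() => count += 1}>
  Clicked {count} {count === 1 ? 'time' : 'times'} (doubled: {doubled})
</button>
```

Everything in the file is business logic; the compiler generates the DOM updates.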

Finally, Svelte’s performance is shockingly good. Now, this area wasn’t at the top of my list of needs. Pre-optimization = bad. However, I realized that seasoned API Fortress users are used to high performance, and a step back in this area in any context is not desirable. So this framework’s surprising performance is a very pleasant discovery. Check out this nice lecture to learn more.

Of course, I didn’t become a Svelte champion without proof that Svelte was what I wanted. I’m an evidence-based carbon unit, and I needed to validate my hypothesis. In my spare time, I created a whole standalone, fully functional project to make sure that not only could I use Svelte, but that I could also overcome any unexpected issues. Although this project had its gotcha moments for sure, I was able to complete the project with a fraction of the pain and effort that I had been prepared to endure. In fact, the largest problems I encountered were related to the abilities of other tools, such as UI kits that needed to be directed effectively by Svelte. This was not surprising given its young age and lack of explicit integrations.

In conclusion, Svelte is now a key component of my tooling and the API Fortress that is upcoming. I strongly recommend you to at least give it a look, and regardless of whether you’ll include it in your rack or not, I think there’s a lot to learn from the way it was engineered.

A BRAND NEW HTTP CLIENT

By Lorenzo Fontana

We introduced an HTTP Client in API Fortress almost as an add-on. After all, API Fortress doesn’t necessarily need one to operate. Little did we know how important that tiny piece of software would become for our users.

No matter how powerful a testing tool is, many people want to experience the immediacy of performing an API call right away before jumping head first into the act of writing a test. This is a testament to how Postman has become a key tool for many people all over the world.

But in acknowledging that the HTTP Client is a necessity, we must also acknowledge that it could also be more functional, intuitive, and effective than anything currently available. While the original implementation of the HTTP Client was meant to be part of our core product offerings, we are compelled to make several critical improvements to help ensure that it can fulfill an increasingly vital role for our customers.

This is why the implementation of Forge 2 started from the HTTP Client.

The core needs that will be satisfied by the new HTTP Client include:

  • Work well locally as well as on the web. This HTTP client will be part of Forge 2, sure, but it will be an essential item of our cloud platform as well.
  • Leverage local and remote download agents. One of the key features of API Fortress is that our remote download agents allow you to perform an HTTP call originating from different locations. This feature was not previously available locally from Forge.
  • Feature everything an HTTP client needs.
  • Be more intuitive and responsive.
  • Manage every piece of information, loaded and saved as directories and text files, so that versioning becomes the most natural thing in the world.
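
The last point, everything stored as directories and text files, can be sketched in a few lines of Python (the layout and field names are illustrative, not Forge 2’s actual on-disk format):

```python
import json
import pathlib

def save_request(root, name, method, url, headers, body=None):
    """Persist one HTTP request as a plain-text JSON file under a project
    directory, so the whole workspace can be versioned with git."""
    path = pathlib.Path(root) / f"{name}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    request = {"method": method, "url": url, "headers": headers, "body": body}
    # stable key order and indentation keep diffs small and readable
    path.write_text(json.dumps(request, indent=2, sort_keys=True))
    return path
```

Because every saved call is just a small, stably-formatted text file, `git diff` shows exactly what changed between two versions of a request.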

While we have a long road ahead, we would like to show you where we are right now with a couple of videos:

Grails – Somebody That I Used to Know

“I left Grails as an esoteric, and somewhat immature framework, only to find it again, years later, as a solid, predictable, and well put together platform.”

By Simone Pezzano

Back in the day, the adoption of an application framework that could provide tooling for an enterprise application was not in question. It wasn’t because developers, like me, had no alternative, but simply because the amount of work ahead was impossible to finish without the help of a fully featured platform. But I had to accept the compromises of adopting a beast to get the level of support I needed.

The most logical solution seemed to be Grails, a relatively young application framework for Java and Groovy. The stack was familiar, checked all the boxes, and introduced a number of shockingly advanced features, such as its unique ORM written on top of Hibernate.

There were, however, things I didn’t like about it – most of all, its esoteric approaches to a number of tasks. For the most part, it worked well, even though many of the things it was doing were mysterious. But when it didn’t work well, debugging could be a complete nightmare.

When Grails’ new major versions were released, I soon realized that the Grails team was doing more than just upgrading one of my favorite toys; they were rewriting it, and in doing so, they were unwittingly introducing more and more obstacles to migrating our code. I was not confident that my team and I would have enough time to deal with the changes.

So it shouldn’t come as a surprise that while getting ready for our new Mastiff 2 initiative, I looked at my old pal, Grails, with a little hesitation. Maybe Spring Boot with its plugins could serve in the role occupied by Grails well enough, and fill all the gaps that come with such a radical decision.

It would have been irresponsible, however, to simply dismiss Grails out of the gate. While some backward compatibility was certainly lost, a lot of it was definitely maintained and, after all, the new Grails 4 is not the Grails 2 that I used to know.

I left Grails as an esoteric and somewhat immature framework, only to find it again, years later, as a solid, predictable, and well put together platform. The funky build process has been replaced by a standardized Gradle build that pulls together all the components I could possibly need in a very elegant, tidy manner. In fact, you can immediately tell things have improved drastically based on the shorter build/bootstrap times.

The new Grails 4 is not the Grails 2 that I used to know. Years later, it is a solid, predictable and well put together platform.

Grails 4 is built on top of Spring Boot. As the standard for teams that want to start microservices small and then iterate quickly, Spring Boot is possibly the top choice for creating Java-based microservices. In Grails 4, GORM (our beloved ORM) is still there, now based on Hibernate 5, as is GSP (Groovy Server Pages). Most of the configuration that used to be programmatic can now be replaced with comfy YAML files, while still allowing programmatic configuration. Also extremely important for me is the full and systemic access to the dependency injection provided by Spring through its classic annotations. While this sounds pretty obvious, it wasn’t really the case in Grails 2.
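
To illustrate that YAML-based configuration, here is a small, hedged fragment of a Grails 4 application.yml; the datasource values below are placeholders, not our actual settings:

```yaml
# application.yml (illustrative fragment; placeholder values)
grails:
    profile: rest-api
dataSource:
    pooled: true
    driverClassName: org.postgresql.Driver
    username: dbuser        # placeholder credentials
    password: changeme
environments:
    development:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://localhost:5432/devdb
```

The file coexists with programmatic configuration for the cases where a value genuinely needs to be computed.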

Grails Views for JSON is really something that impressed me a lot as well. After all, the new Mastiff 2 platform will interact with the rest of the system solely via APIs, and this tool could definitely expedite the process of creating and maintaining them.

The new Mastiff 2 platform will interact with the rest of the system solely via APIs

But even more than the features breakdown, it’s how the framework feels. Solid, predictable, featureful, maintainable, and with a (finally) reasonable resource footprint.

Combine these factors with the huge amount of existing code that I want to port, and the choice to use the much improved Grails 4 is inevitable.

Bloodhound: A Better Breed of API Debugging (Open-Source Microgateway)

Capture, Transform, Track, and Debug Live API Conversations

Today, API Fortress announces the generally available release of Bloodhound, a lightweight API debugging gateway that is free to download and open source. Watch the Bloodhound Demo video or visit the Bloodhound page to see how easily you can start using the most powerful tool for API transaction debugging available today. Download Bloodhound at our GitHub.

Before Bloodhound, it was hard to send API calls to a logger for the right kind of analysis to quickly solve difficult bugs (see examples below). Also, you may have been limited in how you could test APIs. If you’re trying to test an API but can’t leverage the data stuck in a database, your functional and integration tests aren’t going to be as good as they need to be. It’s just one of the many reasons why so many API errors go live. 

A New Best Friend for Developers and Engineers

Here’s how it works:

With Bloodhound, you can route API calls to any logger for comprehensive analysis to uncover solutions to difficult bugs, or test an API in ways not possible before. Now, you can get the right insights to make sure that microservices and APIs are behaving like they should in the real world.

Eliminate More False-Positives and False-Negatives

Simone Pezzano, CTO at API Fortress says, “There are a lot of powerful API gateways, but many are difficult to work with, and not built with the goal to help test and debug APIs. So we were driven to build Bloodhound, a microgateway that you can deploy and use more easily. Deploy Bloodhound locally or in the cloud via a Docker or K8s container.”


Patrick Poulin, CEO at API Fortress adds: “It’s never been easier to build new APIs. But the mindset of how we test and monitor them hasn’t evolved. Writing a handful of functional tests using a small subset of fake data against a staging environment is not enough. With Bloodhound, you can do more. Get more accurate test results by reproducing real world scenarios, and find clarity while trying to debug any problem.”

A FALSE SENSE OF SECURITY

Did you know that the API Fortress platform can reuse your data-driven, functional, integration, and load tests as holistic Functional Uptime Monitors? Use Bloodhound to build the right tests with better capabilities, and then convert those tests into monitors that precisely inform you about functional uptime in any environment before and after going live. For more information, view the eBook: API Monitors: A False Sense of Security – Why API monitors give so many false-negatives and fail to catch human error

USE CASES

Before the generally available release of Bloodhound, the gateway was deployed to several API Fortress customers that are among the world’s largest retail, financial services, healthcare, and telecom companies. While the API Fortress platform is flexible and makes it easy to create addons, customers implemented several out-of-the-box use cases including: 

  • Transform Databases into APIs: Solve the problem of creating data-driven functional tests when test data is locked in a database (Postgres, MySQL, MS SQL Server, MongoDB, Redis, and more).
  • Test APIs Beyond a Normal Functional Test: Extend what can be tested by transforming the API into unique scenarios such as throttling, broken or unexpected headers, invalid payloads, and status code changes.
  • Detect Signals from Noise: Understand the interdependencies in complex API call arrays to help teams create or improve documentation for new API projects.
  • Implement an Echo Server: Understand what requests look like from an API server’s POV to capture issues not revealed during a send or receive.
  • Enforce Internal Policy: Add authorization layers to unsecured APIs – popular when exposing APIs to third parties or contractors.
  • Conduct Live Contract Validation: Compare Swagger/OpenAPI specs to live API transactions to detect potentially dangerous anomalies.
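
As a generic illustration of the echo-server idea (a standalone sketch using Python’s standard library, not Bloodhound itself), here is a handler that reflects the method, path, headers, and body of each request back to the caller:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Reflects each incoming request back to the caller, showing exactly
    what the server received: method, path, headers, and body."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"{self.command} {self.path}\n".encode())
        for name, value in self.headers.items():
            self.wfile.write(f"{name}: {value}\n".encode())
        self.wfile.write(b"\n" + body)

if __name__ == "__main__":
    # port 8080 is an arbitrary choice for local experimentation
    HTTPServer(("localhost", 8080), EchoHandler).serve_forever()
```

Pointing a client at this server immediately reveals headers or payload mutations that a gateway or proxy introduced along the way.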

Download Bloodhound for Free

Visit our GitHub for more documentation about Bloodhound, and to download it.

Bloodhound Press Release:

API Fortress Releases Open Source API Debugging Microgateway – Bloodhound
Capture, Transform, Track, and Debug Live API Conversations

New York, NY — June 9, 2020 — API Fortress, the leader in data-driven and functional API testing and monitoring, announces Bloodhound, a lightweight API debugging gateway that is free to download and open source. In less than 3 minutes, developers and engineers can begin using a powerful, purpose-built tool for API transaction debugging. Watch the Bloodhound Demo video

Bloodhound allows teams to route API calls to any logger for comprehensive analysis to uncover solutions to difficult bugs, or test an API in ways not possible before. This gives QA teams the insights to ensure that microservices and database-connected APIs behave as they should in real-world conditions.

Patrick Poulin, co-founder and CEO at API Fortress remarks: “It’s never been easier to build new APIs. But the mindset of how we test and monitor them hasn’t evolved. Writing a handful of functional tests using a small subset of fake data against a staging environment is not enough. With Bloodhound, you can do more. In capturing and transforming your APIs, you can reproduce real world scenarios, and find clarity while trying to debug any problem.”

Before the generally available release of Bloodhound, the gateway was deployed to several API Fortress customers that are among the world’s largest retail, financial services, healthcare, and telecom companies. While the platform is flexible and creating addons is simple, several out-of-the-box use cases included: 

  • Transforming Databases to APIs: Create data-driven functional tests when test data is locked in a database (Postgres, MySQL, MS SQL Server, MongoDB, Redis, and more).
  • Testing APIs Beyond a Normal Functional Test: Extend what can be tested by transforming the API into unique scenarios such as throttling, broken or unexpected headers, invalid payloads, and status code changes.
  • Detecting Signals from Noise: Understand the interdependencies in complex API call arrays to help teams create or improve documentation for new API projects.
  • Acting as an Echo Server: Understand what requests look like from an API server’s POV to capture issues not revealed during a send or receive.
  • Internal Policy Enforcement: Add authorization layers to unsecured APIs – popular when exposing APIs to third parties or contractors.
  • Live Contract Validation: Compare Swagger/OpenAPI specs to live API transactions to detect potentially dangerous anomalies.


For more information about Bloodhound from API Fortress, please visit APIFortress.com.
Download Press Release

Send API Test Results to Elastic and Visualize in Kibana

API Fortress’ API-first architecture means that we can seamlessly integrate with any tool in your toolchain. One of the most popular tooling companies in the world is Elastic. Not only do they provide logging and search, but they have a great visualization tool for all the data they collect called Kibana. With the API Fortress data API, all of your test results can be exported in real-time to the data analysis platform of your choice.

Due to its popularity with our customers, we’ve decided to spend some time discussing the Elastic integration. The official doc is here. The connector is freely available to all customers, and it’s already preloaded in our cloud instance.

Kibana Dashboard

API Fortress test results are incredibly detailed, and that level of detail allows for two major advantages:

  1. Rapid Diagnosis
  2. Pattern Recognition

Our reporting when an assertion fails does much more than simply tell the user there is a single failure. We accelerate diagnosis by telling users which assertion failed, how it failed, what we expected, and even include the header information and the entire payload as well. This is particularly important when creating data-driven tests with hundreds of payloads and results. Our reports can help find the needle in the haystack.

The second advantage is an even more interesting differentiation that we make possible by unifying functional, integration, and load tests into “functional uptime” monitors that can run in any environment. This allows API Fortress to aggregate far more usable real-time data for deeper, more accurate insights. To really explain the utility of data aggregation and analysis, let me describe what happened recently with one of our e-commerce/e-retail customers: 

A large book publisher created two partner APIs, one for listing active ISBNs, another to get product details for ISBNs. The publisher’s internal API monitoring platform was suspiciously reporting 100% uptime when the publisher approached API Fortress about getting a second opinion. The following Monday morning, API Fortress detected hundreds of errors and 404 soft errors from 6-8 a.m. Some limited manual testing was conducted by the publisher, but did nothing to help diagnose the root problem.

The issue happened again the next two weeks. The QA team reported to the chief architect that they couldn’t understand what caused it. Fortunately, the architect had set up API Fortress’ data API to send the data to Elastic, and he had set up a dashboard in Kibana.

When the architect reviewed the data in Kibana, a lightbulb went off. He noticed that the failures always happened on Monday mornings from 6-8 a.m. That’s all he needed to see to know the exact cause. They were using an API gateway, and for performance purposes, they had set up the gateway to cache their listing API. The problem was that they were updating their ISBNs database (which feeds the listing API) every Monday morning at 6 a.m. The gateway only refreshed the cache every two hours. That meant that every Monday, for months, the book publisher had exposed hundreds of bad ISBNs to its partners for two hours to start their week, and had no idea. A full analysis revealed that this single uncaught API flaw had caused thousands in lost sales, frustrated partners, and damaged the publisher’s reputation.
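
The failure mode generalizes: any time-based cache in front of a data source keeps serving the old answer after the source changes, until the TTL expires. A minimal sketch (illustrative, not the publisher’s actual gateway):

```python
import time

class TTLCache:
    """Minimal time-to-live cache, standing in for a gateway's response cache."""

    def __init__(self, ttl_seconds, loader):
        self.ttl = ttl_seconds
        self.loader = loader          # fetches the fresh value from the backing store
        self.value = None
        self.loaded_at = None

    def get(self, now=None):
        """Return the cached value, reloading only once the TTL has elapsed."""
        now = time.monotonic() if now is None else now
        if self.loaded_at is None or now - self.loaded_at >= self.ttl:
            self.value = self.loader()
            self.loaded_at = now
        return self.value
```

With a two-hour TTL, a database update at 6 a.m. remains invisible to callers until the next reload, which is exactly the two-hour window of bad ISBNs described above.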

The book publisher had no idea their partner API was failing their partners every Monday morning until they created proper integration tests and monitors. Even then, they had trouble diagnosing the issue until they had the holistic data to see that it happened at the same time every week. While this example was simple, it is indicative of the sort of benefit that integrating with a platform like Kibana can bring to your organization.

This sort of scenario happens far too often. We recommend that you don’t just test on release, but use those functional tests as monitors as well. Then use that rich test data in proper data analysis tools like Elastic + Kibana. Unified testing and monitoring allows you to transform your QA efficiency, effectiveness, and ROI overnight with both legacy and new services.

About Kibana:

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack so you can do anything from tracking query load to understanding the way requests flow through your apps.

About API Fortress:

API Fortress is a continuous testing and monitoring platform for APIs that was built from the ground up for shift-left automation and simplified collaboration across teams. By unifying data-driven functional tests with monitoring that can run in any environment, API Fortress detects a much wider range of API issues early in the lifecycle, while significantly accelerating diagnosis with detailed reporting. Now, achieve unlimited quality-at-speed as you integrate API Fortress into any CI/CD platform or DevOps toolchain. Use API Fortress on our hosted cloud at APIFortress.com, or your cloud with a self-managed (on-premises) container.

API Fortress for Cucumber BDD: Add APIs To Your BDD Testing in a Unified Workflow

BDD Testing vs. API Testing

In the API economy, the stories for Minimum Viable Products (MVPs) are becoming more complex as business owners seek faster, and sometimes, more exotic differentiation. For example, a product team at a global bank may aggressively push for open banking to create competitive new features for the bank’s personal finance app. However, developers and testers may not be familiar with, for instance, distributed ledgers and blockchain technologies, resulting in a bottleneck that can significantly delay go-to-market or raise the risk of falling short of the business case.

Behavior-driven Development (BDD) has emerged as a proven methodology to narrow the gap between business owners and developers by improving collaboration throughout the development lifecycle. Test-driven Development (TDD) is closely related to BDD in that both methodologies support continuous testing to reduce software and API defects. However, it is important to know how the two methodologies work in harmony while testing for very different capabilities. BDD frameworks such as the popular Cucumber were never designed to stray into certain core elements of Test-driven Development, particularly, API testing.

In “Why You Shouldn’t Use Cucumber for API Testing” from StickyMinds, developer Byron Katz, writes: 

It is not uncommon for people to use Cucumber (or other BDD framework) to test API endpoints…. [However], Cucumber is a BDD framework, and it should be used solely to support BDD. API testing focuses on coverage of the API endpoints and is more oriented to the technical solution, unlike BDD testing, which is oriented to business capabilities.

Essentially, BDD testing is focused on user acceptance tests that validate a user story. Cucumber offers Gherkin, an easy DSL (domain-specific language) that allows testers or business stakeholders with zero coding background to formulate tests in natural language (“prose”) that approximates the user story in terms of “given, when, then” scenarios. These scenarios change slowly. A user story about remotely starting a car via a smart device on a cold night, to warm it up before the driver gets into the vehicle, does not involve a high number of dynamic requirements.
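
To ground the example, the remote-start story above might read like this in Gherkin (a hypothetical scenario, not taken from a real feature file):

```gherkin
Feature: Remote vehicle start
  Scenario: Warm up the car on a cold night
    Given the driver has a paired smart device
    And the outside temperature is below freezing
    When the driver taps "Start" in the mobile app
    Then the vehicle engine starts
    And the cabin heater warms to the driver's saved temperature
```

Notice that nothing in the scenario constrains how long the API calls behind “the vehicle engine starts” may take, and that is exactly the gap API testing fills.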

But that all changes when reimagining the user story in terms of functional, integration, and performance requirements. These requirements involve a high rate of change and should be tested continuously. API testing “digs” behind the user story to validate API function with positive and negative coverage. Complex user stories almost always involve a complex array of API calls. Whereas Cucumber BDD may empower testers to build the remote vehicle app specifically to user stories, it cannot, the way API testing can, verify that the app satisfies the user story within an acceptable amount of time. If it takes several minutes for the app to start the vehicle, it may create a terrible customer experience while still satisfying the user story (a false positive).

In a perfect world, Cucumber BDD testing and API testing are deployed together, and not discretely. Recently, we at API Fortress created guides to show customers how to melt the barriers between BDD and modern API testing. The results are smarter, more effective testing processes that reduce risk throughout the lifecycle, and help to erase fears of “false positives.”

Run API Fortress from Gherkin DSL
Read our Executing from Cucumber doc to learn how you can easily integrate modern API tests powered by API Fortress into your Gherkin DSL (Cucumber). View sample DSL scripts on Github.

Seamless Integration: Cucumber BDD + API Fortress

API Fortress was built from the ground up for continuous testing and TDD, which requires the sharing of critical API testing resources, test results, and reports between stakeholders such as developers, testers, and product. With API Fortress for Cucumber BDD, it is now possible for all stakeholders to close any gaps in understanding of what should be tested and why something doesn’t work. By giving business owners a little insight into how their user stories may impact API functionality, resilience, and performance, API Fortress for Cucumber BDD can transform the efficiency and effectiveness of BDD and TDD in three key areas: 

 

  • Improve Collaboration: Simultaneously validate user acceptance while verifying API function. API Fortress integrates seamlessly with Cucumber, allowing developers, testers, and business owners to view BDD and API test results and reports throughout the lifecycle.
  • Expand Coverage: Start by ensuring the testability of both business and technical capabilities. Verify positive and negative coverage of API endpoints, and run data-driven testing for technical capabilities with a high rate of change.
  • Accelerate TDD: Shift API testing left alongside BDD testing as early as the design stage. API Fortress unifies functional, integration, and load tests to extract more reliable, accurate, and usable testing data. Measure functional uptime to detect API flaws early, and diagnose API flaws quickly.

Add API testing powered by API Fortress to your Cucumber BDD.

Sign up for a free trial of API Fortress for Cucumber BDD: optimize API functionality, resilience, and performance.


Connect API Fortress to Any Database

Easily Connect API Fortress to Any Database

Option 1: Leverage the JDBC component on the API Fortress platform to connect with any JDBC-compliant database, including Postgres, MySQL, and Microsoft SQL Server.

Option 2: Use an API Fortress helper app to convert most popular databases, CSVs, and other files into an API, which API Fortress can then use for data-driven testing.
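
To make option 1’s underlying idea concrete in a generic way (pull test inputs from a database instead of a static file), here is a self-contained Python sketch using the standard-library sqlite3 module; the products table and its columns are made up for the example:

```python
import sqlite3

def load_test_cases(db_path=":memory:"):
    """Seed a throwaway database, then read rows back as data-driven test inputs.
    In a real setup the table already exists and only the SELECT matters."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE products (sku TEXT, color TEXT, price REAL)")
    conn.executemany(
        "INSERT INTO products VALUES (?, ?, ?)",
        [("shoe-1", "red", 59.99), ("shoe-2", "blue", 64.99)],
    )
    rows = conn.execute("SELECT sku, color, price FROM products").fetchall()
    conn.close()
    # each row becomes one parameter set for an API call under test
    return [{"sku": s, "color": c, "price": p} for s, c, p in rows]
```

Every returned dictionary drives one iteration of a parameterized API test, so the test data stays as current as the database itself.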

Data-driven Testing (DDT) and Data-driven APIs

Data-driven testing is critical in validating that APIs meet your needs for reliability, resiliency, and performance.

Most mobile and web apps require APIs that connect to multiple databases that undergo constant, iterative changes. Variable data can be provided by the provider or consumer, and the only constant is how inconsistent those requests can be. For example, an ecommerce app may allow users to select different shoes with varying colors, sizes, and prices. 

What this means is that testing those APIs using static calls from something like a CSV won’t properly reproduce real-world conditions. Yet that’s the most common method of testing APIs.

Fortunately, there is a new push toward data-driven tests (DDT). The obvious utility of this method is that APIs can be exercised in numerous unpredictable ways, and therefore properly tested for both normal and edge cases.

We have years of experience observing the difference between using CSVs and proper data-driven tests, and we even wrote an ebook detailing some of those experiences: the API Fortress eBook. One of the stories involves a book publisher with a partner API covering tens of thousands of ISBNs. Resellers used that API to learn which items were in stock and still for sale. Before using API Fortress, the publisher reported 99%+ uptime.

When API Fortress was introduced, the publisher created data-driven tests that simply did the following:

  1. Call the partner API with all ISBNs
  2. Choose 500+ of those ISBNs at random and dive into the product information
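Those two steps can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (the fetch functions, ISBN format, and sample size), since the publisher's partner API is private; the stubbed lookup always succeeds, whereas the real test surfaced hundreds of failures:

```python
import random

# Stand-in for step 1: calling the partner API for the full ISBN list.
# In reality this would be an HTTP GET against the publisher's catalog.
def fetch_all_isbns():
    return [f"978-{n:010d}" for n in range(10_000)]

# Stand-in for the per-ISBN product lookup; imagine a broken record
# returning None instead of a product document.
def fetch_product(isbn):
    return {"isbn": isbn, "in_stock": True}

# Step 1: call the partner API with all ISBNs.
all_isbns = fetch_all_isbns()

# Step 2: choose 500+ ISBNs at random and dive into the product info.
sample = random.sample(all_isbns, 500)
invalid = [isbn for isbn in sample if fetch_product(isbn) is None]
print(f"{len(invalid)} of {len(sample)} sampled ISBNs failed validation")
```

Random sampling is what makes this work: a fixed CSV of "known good" ISBNs would have exercised the same cached records run after run.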

What they found was that hundreds of these allegedly “valid” ISBNs were in fact invalid. How could that be possible when the ISBNs were pulled from an API of good ISBNs?

Simply put, they were updating their product databases once every two weeks, but weren’t resetting the cache of their API gateway. This isn’t even an API failure, but a database and gateway failure that only proper API testing could have caught. There are more examples in the ebook, but we wanted to convey a truly unusual problem that only proper data-driven testing can capture.

Problems such as this are exacerbated by the complex constellation of APIs that each feed off independent databases. Most QA and test automation teams don’t have the time or resources to manually add proper data iteration into their testing suites due to the complexity and time involved. Platforms like API Fortress are built for this sort of testing from Day 1. 

No added complexity. Just smart testing.

$2.84 trillion was the estimated total cost of poor quality software in the U.S. alone during 2018

Source: Consortium for IT Software Quality (CISQ)

Data-driven API Testing Checklist

Data-driven API testing focuses on validating variable data from a database or file repository. Therefore, data-driven testing can be conducted early in the software development lifecycle. But to successfully enable continuous data-driven API testing throughout the lifecycle, QA and test automation teams need two key elements: centralization and simplicity.

With proper attention to centralization and simplicity, modern data-driven testing provides distributed QA teams with a central repository to unify testing across teams. It also makes it simple for QA teams to ensure that the test logic built from requests and assertions accurately captures real-world scenarios at the API level.

Here’s a Best Practices Checklist for modern data-driven API testing:

  • Centralize data-driven tests and queries for easy reuse

  • Simplify test execution in multiple environments

  • Connect databases with JDBC drivers or convert databases or files into APIs for connections

  • Store variable data sets for easy sharing and reuse

  • Design data-driven test logic, and designate data-driven tests at the Global or Project level

  • Enable any team members to run tests

Data-driven Testing with API Fortress

Sign up for a free trial of API Fortress and put your data-driven testing to the test. Schedule a demo today.

Automate a Jenkins CI/CD Pipeline with API Fortress

Automate a Jenkins CI/CD Pipeline with API Fortress

With a CI/CD pipeline, the work of distributed teams comes together in an automated flow to build, test, and deploy new code. That means rewriting the rules of how releases are built and tested. One of the first things that the Jenkins wiki (Jenkins Best Practices) tells newcomers to CI/CD is that “unit testing is often not enough to provide confidence [of desired quality].” The wiki then discusses the necessity of automating API testing throughout the lifecycle to ensure that all distributed teams are continually working with good services and data.

Let’s take a closer look at what that guidance means for a Jenkins (or any) CI/CD pipeline:

  • Run API Testing Continuously: CI/CD pipelines produce iterative releases so that services and mobile apps can evolve quickly without increasing the number of bugs or vulnerabilities released. Thousands of enterprises are trying to move from monoliths to microservices/modern APIs, but face challenges properly incorporating API testing into their Jenkins CI/CD pipelines.

API Fortress was built from the ground up to solve these issues for the digital enterprises of today. Our new breed of API testing automation includes unique capabilities to standardize and collaborate across teams thanks to a platform architecture that can be deployed on-premises or cloud. This is good news for any enterprise that needs to strike the right mix of speed and quality concerning their new IT investments and initiatives.

Download The Datasheet


How to Integrate API Fortress with Jenkins

1. APIF-Auto Command-Line Tool: API Fortress is an API-first platform with a robust set of APIs. To make life easier for our customers, we created a command-line tool named APIF-Auto that makes it easy to add a pipeline script and get results in JUnit format. To learn more about that setup, you can read here.

2. Connect by API: API Fortress allows anyone with manager access to create a webhook that can easily be called from within Jenkins. You can read the details on that here.
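As a rough illustration of the webhook approach (the URL and helper names below are hypothetical; in practice you generate the webhook in the API Fortress dashboard, store it as a Jenkins credential, and call it from a pipeline step), a CI job only needs to issue an authenticated POST and read back the result summary:

```python
import json
import urllib.request

# Hypothetical webhook URL: a manager generates the real one in the
# API Fortress dashboard and stores it as a CI secret.
WEBHOOK_URL = "https://example.apifortress.com/app/api/rest/v3/EXAMPLE-KEY/tests/run-all"

def build_trigger_request(url):
    """Build the POST request a Jenkins step would send to start the suite."""
    return urllib.request.Request(url, method="POST")

def trigger_tests(url):
    """Send the request and return the parsed JSON result summary.
    Not invoked here, since the example URL is not a live endpoint."""
    with urllib.request.urlopen(build_trigger_request(url)) as resp:
        return json.loads(resp.read().decode())

req = build_trigger_request(WEBHOOK_URL)
print(req.get_method())  # POST
```

The same call works from any CI system that can make an HTTP request, which is what makes the webhook route platform-agnostic.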

Advantages of Using API Fortress with Jenkins

    • Total Automation: Fire your entire test suite, or just some tests with specific tags, as part of your CI pipeline. Test in seconds what used to take days or weeks.
    • Minimal Setup Required: Use the command-line tool or webhook to automate test execution in minutes.
    • Flexibility in Workflows: API Fortress can execute tests stored in the platform, or from your chosen VCS. See those docs here, or speak to your API Fortress rep to learn more.
    • Works with Any CI/CD: Thanks to its APIs, the platform is completely CI-platform agnostic. If your organization moves to another platform such as Azure DevOps, TravisCI, or Bamboo, the integrations are nearly identical and just as simple.

SECURE YOUR API TESTING RESULTS AND DATA

Deploy on-premises or in a hybrid cloud using our Kubernetes or Docker deployments, or use our SaaS platform at apifortress.com. Get complete control over your data and tests, with minimal setup headaches.

We invite you to sign up for a free trial and demo of API Fortress and put your testing to the test.

95% of API Breaches are Caused By This – Yet Few Test for It

Nordic APIs wrote a great article, with input from industry experts, about the security threats to watch for in 2020. In the story, they mention the usual suspects such as stolen credentials and mass overload (i.e., DDoS) attacks. It’s a great read, and the seriousness of those threats cannot be overstated. What is understated, however, is the much larger threat that we all ignore: the vast majority of security failures are caused by human error.

“Through 2022, at least 95% of cloud security failures will be the customer’s fault.”

Jay Heiser, VP at Gartner

This rarely acknowledged vulnerability is of particular interest to us. We have previously written about human-error breaches at the USPS and in India. Most of these errors could have been caught with a proper API testing methodology. In SmartBear’s 2019 State of APIs report, they found that roughly 50% of organizations of all sizes don’t have a standardized API testing methodology. From our own experience, we believe the share that does is even lower than that.

Why is this? What causes this fear of the smaller threat, and indifference toward the much more prevalent threat of human error? It might be the fact that the story of a hacker breaking in is more interesting, and therefore reported on more often. If true, there is an echo chamber in the media that allows people to focus on the micro and not the macro. In the US, the #1 cause of death is heart disease, yet the nightly news rarely has a segment on it. It’s okay to be more interested in sensationalism: it can be more intriguing. But we’re not talking about how to spend our free time. APIs are our jobs, and protecting the information those APIs have access to is paramount.

In Akamai’s State of the Internet report they found that 83% of all web traffic in 2018 was by API. This number isn’t getting smaller, and it’s important to remember the power these APIs have. Now with PSD2 and Open Banking, European financial institutions have publicly available APIs with all our banking information. Are you convinced they are doing everything possible to not fall victim to human error?

It is human to make mistakes. It always will be. But that shouldn’t be a reason to just shrug; it should serve as a rallying cry. If human error were something more sensational, like a giant killer moth attacking a city, then every major city would instantly build moth-fighting capabilities with varying levels of success. The difference is they would feel obligated to try. We need to treat human error like Mothra: not accept our fate, but fight back.

One data breach can lead to thousands of lives affected, and jobs lost, all because of a simple human error. Banks, financial services, healthcare, and other enterprises must do more to correct human error. Start by setting up comprehensive automated regression tests and schedule functional uptime monitors. Do what it takes to become an organization that takes the 95% probability more seriously, and isn’t entirely distracted by the 5%.