At API Fortress, we often find ourselves in a familiar discussion. The premise is typically something along the lines of “We plan on building our own automation platform for API testing.” While this approach is sound in theory, there are a few reasons it may not work out in practice.
1. Upfront Cost
A big part of this question is expense. We’re not talking about monetary expense just yet (at least, not directly). We’re talking about the labor that must be invested in building out a platform. Before the automation platform can be developed, resources first have to be dedicated to choosing the tools to be used. The world of open-source testing is huge, and there are many tools for every need. Time and talent must be spent finding the right tools and testing them against current code to assure the team that they are up to the task. (View API Fortress’s Low Cost Pricing Options)
2. Learn as You Go
A further expense in this process is time. After time has been spent collecting and appraising the necessary tools, more time must be spent learning how they work and integrating their workflows. Edge cases must be accounted for. A major part of this process is planning for scale: the platform may perform admirably in a test environment, but what happens when it’s exposed to large-scale testing of production-level code? If one or more components fail at this late stage of the development cycle, we’re back to the drawing board.
3. Maintenance Cost
Once a platform is developed, an organization needs to consider the cost of maintaining it. The individual tools that constitute the platform will be updated over time. Occasionally, an update to one component can dramatically change how it communicates with the other components in the framework. When that happens, development time must be dedicated to refactoring those connections. The time developers (who are among the most expensive resources a company can leverage) spend fixing these problems has a price tag.
Since we’re leveraging open-source software to build our framework, we also need to consider the consequences of one of these components being discontinued. This problem is similar to that of updated software, but its consequences are more dramatic. Software that loses its support will, over time, become obsolete and have to be replaced, bringing the organization back to the drawing board with regard to its QA process. Once the replacement is chosen, we again face learning the ins and outs of the new software and integrating it with our existing process.
At this point, perhaps we’ve solved all of the problems outlined above. We’ve selected appropriate tools, integrated their workflows, and trained our team on how to use them. We’ve ensured that the platform is capable of scaling, and that it can test both small amounts of code and full production-scale codebases. We’re successfully testing.
But what are we doing with all of this test data? How are we viewing the analytics? Where is our dashboard? How do we export data to other platforms? We’ve established a CI/CD pipeline, for example, but have no means of engaging our test platform from it. Our testing platform exists in a black box until we put in the work to integrate it with our other tools. Building cross-platform connectivity on top of our testing architecture will be yet another expenditure of time and talent.
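Even that last mile of connectivity is real engineering work. As a rough illustration (the result fields and suite name below are hypothetical, not any particular platform’s format), a homegrown platform would need glue code like the following just to convert its internal test results into the JUnit-style XML that most CI servers consume:

```python
import xml.etree.ElementTree as ET

def results_to_junit_xml(suite_name, results):
    """Convert a list of {'name': str, 'passed': bool, 'error': str}
    result dicts (a hypothetical internal format) into JUnit-style XML
    that CI servers such as Jenkins can ingest."""
    failures = sum(1 for r in results if not r["passed"])
    # Top-level <testsuite> element with summary counts as attributes
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for r in results:
        case = ET.SubElement(suite, "testcase", name=r["name"])
        if not r["passed"]:
            # Failed cases get a nested <failure> element with the message
            ET.SubElement(case, "failure", message=r.get("error", ""))
    return ET.tostring(suite, encoding="unicode")

# Example: two hypothetical API test results
xml = results_to_junit_xml("payments-api", [
    {"name": "GET /invoices returns 200", "passed": True},
    {"name": "POST /invoices validates payload", "passed": False,
     "error": "expected 400, got 500"},
])
print(xml)
```

And this sketch covers only one export target; dashboards, alerting tools, and analytics platforms each demand their own adapter, written and maintained in-house.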
While the notion of building a bespoke testing platform is alluring, the cost of such an undertaking can easily outweigh the benefits. Tying developers up in building the tool, pushing it through QA, and then deploying it to the development environment at large can be a costly process in time, talent, and money.