Datadog & API Fortress: A Guide to Proper API Monitoring

API Fortress has been around since 2014, and our first main focus was to provide best-in-class API monitoring. Lately the industry has been more intrigued by the story of automating testing as part of the CI/CD pipeline, which means we don't get to talk about monitoring enough. That's why we were so excited to be at Datadog's Dash conference last week: over 700 people excited about the Datadog platform, and about monitoring in general.

It gave us a chance to show off our functional uptime monitoring muscle, as well as our new Datadog connector! A 200 status code doesn't guarantee that things are OK. You should also analyze the entire payload before considering an API up and functional. That's where a tool like API Fortress comes in. Our plugin gives you insight into true API uptime numbers and sends that information to your Datadog instance, so all your metrics live in one place.
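To make the idea concrete, here is a minimal sketch of what functional uptime checking looks like. This is an illustration, not the API Fortress plugin itself: fetch an endpoint, validate the payload rather than just the status code, and report the result as a gauge through Datadog's metrics API. The endpoint URL, the payload fields, and the metric name are hypothetical placeholders; adapt them to your own contract.

```python
import time
import requests

DD_API_KEY = "<your-datadog-api-key>"              # assumption: supplied by you
ENDPOINT = "https://api.example.com/v1/products/42"  # hypothetical endpoint


def check_functional_uptime():
    """Return 1 if the API is functionally up, 0 otherwise."""
    try:
        resp = requests.get(ENDPOINT, timeout=10)
    except requests.RequestException:
        return 0
    if resp.status_code != 200:
        return 0
    # A 200 alone is not enough: validate the payload itself.
    try:
        body = resp.json()
    except ValueError:
        return 0
    # Hypothetical schema checks; replace with your own assertions.
    if not isinstance(body.get("price"), (int, float)):
        return 0
    if not body.get("title"):
        return 0
    return 1


def report_to_datadog(value):
    """Submit the result as a gauge via Datadog's v1 metrics endpoint."""
    payload = {
        "series": [{
            "metric": "api.functional_uptime",       # hypothetical metric name
            "points": [[int(time.time()), value]],
            "type": "gauge",
            "tags": ["service:products-api"],
        }]
    }
    requests.post(
        "https://api.datadoghq.com/api/v1/series",
        headers={"DD-API-KEY": DD_API_KEY},
        json=payload,
        timeout=10,
    )


if __name__ == "__main__":
    report_to_datadog(check_functional_uptime())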

An Example:
We recently had a customer, a major media publisher, that brought us in solely for monitoring. This customer's API showed uptime of over 99.4% based on internal metrics. We showed them how to create a test that not only exercised the endpoints but also validated the responses. What we discovered was that while the API returned a 200 status code 99.4% of the time (which was all they were originally testing for), functionally speaking it was only up around 95%. The problem wasn't the API itself but a database that feeds one of the objects within the response. That database had sporadic outages, which only became obvious upon a full analysis of each response, as the sketch below illustrates.
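The deep assertion on the embedded object is exactly the piece that status-code monitoring misses. A rough sketch of that kind of check, assuming a hypothetical `related_content` object fed by the flaky database:

```python
def validate_article(body):
    """Status-code checks pass even when an embedded object is broken;
    assertions like these catch that case. 'related_content' is a
    hypothetical field standing in for the database-backed object."""
    assert body.get("id"), "article id missing"
    related = body.get("related_content")
    assert isinstance(related, list) and len(related) > 0, \
        "related_content missing or empty: the backing database may be down"
    for item in related:
        assert item.get("url"), "related item is missing its url"
```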

We also have a case study with a Fortune 100 insurance company.

Our CTO often likes to use the analogy that microservices are a house, and the API is the door in and out. Think of API Fortress as a home inspector: it can go through that door and make sure everything from the roof to the basement is in working order. We may talk a lot about testing as part of development and deployment, but our origins are in monitoring.