This article originally appeared on the Treblle blog and has been republished here with permission.
Let’s talk for a minute about the most boring, intentionally overlooked, deeply maligned, oft-renamed, and otherwise neglected part of API development: machine-readable documentation.
Also known as API specifications, API design docs, Swagger, OpenAPI, OAS, or just API specs, this engineering document describes the technical implementation of an API. It defines specific elements such as the base URL, endpoints, methods, authorization, descriptions, response codes, examples, and so on, as well as configuration requirements related to security and performance. Basically, it is a blueprint for all the important details that developers need to know in order to develop, version, or integrate your API.
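For readers who have never opened one, here is a minimal, hypothetical sketch of the kind of information such a spec captures, written in OpenAPI 3.x style as a Python dictionary. The service, endpoint, and security scheme are made-up examples, not anything from Treblle:

```python
# A minimal, hypothetical OpenAPI-3.x-style spec expressed as a Python dict.
# The service, endpoint, and security scheme are illustrative only.
minimal_spec = {
    "openapi": "3.0.3",
    "info": {
        "title": "Orders API",
        "version": "1.0.0",
        "description": "Example spec showing the usual building blocks.",
    },
    "servers": [{"url": "https://api.example.com/v1"}],  # base URL
    "paths": {
        "/orders/{id}": {
            "get": {
                "summary": "Fetch a single order",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "string"}}
                ],
                "responses": {
                    "200": {"description": "The order was found."},
                    "404": {"description": "No order with that id."},
                },
                "security": [{"bearerAuth": []}],  # authorization requirement
            }
        }
    },
    "components": {
        "securitySchemes": {"bearerAuth": {"type": "http", "scheme": "bearer"}}
    },
}
```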
Like architectural blueprints, there are a couple of ways to approach the creation of machine-readable documentation. In an ideal world, your API specs serve as a true design doc created prior to writing any code or creating the API itself; this is also known as API-design-first or API-spec-first. More commonly, API specs are auto-generated from existing API code on an as-needed basis.

At this point, we could dive down a rabbit hole of questions about specification types, formats, where these documents live, processes for keeping our API and specs in sync, and so forth.
But we’ll avoid this rabbit hole for the moment. At Treblle, we don’t really care how you produce your API specifications, but we do believe that they are highly important. Like the schematics for a Falcon 9 rocket, API specs serve as a contract between how the API actually functions and what engineers need to know about how it functions. The spec needs to describe the API with the highest level of precision, both to ensure the functionality is “nominal” (to borrow a word from mission control) and to ensure fellow developers can align their work with what the schematics allow.
This is why API specifications need to be high quality. It’s also the reason why we created API Insights. We want to give users a simple way to take their API specifications and find out if they are good or not. We do this by examining the surface area of your API specification and grading it under three broad categories. But we’ll return to that momentarily. First, we want to talk about the product and engineering process behind API Insights.
Our Solution
We talk to API developers every day. One of the reasons developers use Treblle is that the platform gives them a comprehensive and actionable view of their APIs, but at a later stage of the API lifecycle: when the code has been written and the API is running within an application.
Not surprisingly, we are often asked how we can help improve APIs at an earlier stage of development. The team thought deeply about this question, about what a solution might look like, and how it would fit with our company strategy.
We already had a firm belief in the importance of high-quality API specifications and felt that this would be one of the most valuable ways to help developers improve their APIs. This, of course, brought us to the rabbit hole I mentioned above.
However, instead of allowing the rabbit hole to overwhelm and overcomplicate the solution (or paralyze us entirely), we turned to our users and let them guide us: we would create a tool that ingests OAS 3.x API specs and provides a score and actionable results based on a robust set of design, performance, and security best-practice and standards checks.

Subsequently, we spent a lot of time looking at API specifications, examining linting tools, and reading through Spectral rules, the OWASP Top 10, and other open-source rulesets.
While we were mostly frustrated that the existing solutions were so cumbersome or required a lot of manual configuration and reading, the exercise did help us zero in on a set of tests and an approach that we felt would give a comprehensive but not overwhelming view of an API’s quality. Beyond the technical challenges of implementing the tests, we also needed to work out the grading, not just for the individual tests but for the overall score. For the tests, we use a mix of pass/fail and weighted criteria to determine the results.
Then, for the overall score, we decided to use both the well-known A–F letter grade and the more universal 0–100 numeric scale: the individual test scores are summed, converted to a percentage out of 100, and mapped to a letter grade (a rough sketch of this idea follows below). While one part of the team focused on tests and scoring, the other worked on designing the interface. Our goal was to create a design that is minimalist and elegant, easy to use, and easy to understand. More strategically, we wanted to create reusable components and templates that users will see across other Treblle products.
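To make the scoring idea concrete, here is a rough, hypothetical sketch of how pass/fail and weighted checks could roll up into a 0–100 score and an A–F grade. The check names, weights, and grade boundaries are illustrative assumptions, not Treblle’s actual rules:

```python
# Hypothetical scoring sketch: weighted checks roll up into a 0-100 score
# and an A-F letter grade. Check names, weights, and boundaries are made up.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    weight: float   # how much this check contributes to the total
    score: float    # 0.0 (fail) .. 1.0 (pass), or partial credit in between

def overall_score(results: list[CheckResult]) -> tuple[float, str]:
    total_weight = sum(r.weight for r in results)
    earned = sum(r.weight * r.score for r in results)
    percent = 100.0 * earned / total_weight if total_weight else 0.0
    # Map the numeric score onto the familiar A-F scale.
    for boundary, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= boundary:
            return percent, letter
    return percent, "F"

# Example: two pass/fail security checks and one partial-credit design check.
results = [
    CheckResult("defines-security-scheme", weight=3.0, score=1.0),
    CheckResult("uses-https-servers", weight=2.0, score=0.0),
    CheckResult("operations-have-examples", weight=1.0, score=0.5),
]
print(overall_score(results))  # -> (58.33..., 'F')
```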
Some Problems
Once we had written the tests and the design was done, we started building our APIs. But we not-so-quickly found that the first version was overengineered, leading us to a growing series of questions about performance, capabilities, and scoring decisions. We were using queues, and the user experience was growing far more complicated than we wanted for a free app.
Once the user uploaded their OpenAPI spec, they had to wait quite a while because we had to call the same endpoint multiple times to get the full report. In the end, we spent months working on a lot of things that we needed to get rid of. That wasn’t an easy pill to swallow for the team (think Matrix and rabbit hole).

The holy grail would be getting the report instantly, with just one API call. That was a challenge because of the number of tests we needed to process.
So we came up with a new idea that would return the report much more quickly: rather than loading the complete OpenAPI spec into memory, we treated it as simple text. Our process then involves doing lookups within this text, allowing us to consume less memory and run simple text queries. And for tests that actually hit the endpoints, we simply skip them if the endpoints do not exist or are not in production yet.
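As an illustration of that text-based approach, a check like “does this spec declare any security scheme?” can be answered with a plain substring or regex lookup on the raw file, without parsing the whole document. This is a simplified sketch under our own assumptions, not Treblle’s actual implementation:

```python
# Simplified sketch of text-based checks on a raw OpenAPI spec.
# The file is read once as plain text; questions are answered with cheap
# string/regex lookups instead of fully parsing the document into memory.
import re
from pathlib import Path

# Patterns that tolerate both YAML ("openapi: 3.0.3") and JSON ('"openapi": "3.0.3"').
OPENAPI_3 = re.compile(r"""openapi["']?\s*:\s*["']?3\.""")
PLAIN_HTTP_SERVER = re.compile(r"""url["']?\s*:\s*["']?http://""")

def load_spec_text(path: str) -> str:
    return Path(path).read_text(encoding="utf-8")

def is_openapi_3(spec_text: str) -> bool:
    return OPENAPI_3.search(spec_text) is not None

def declares_security_schemes(spec_text: str) -> bool:
    # The key name appears whether the spec is YAML or JSON.
    return "securitySchemes" in spec_text

def lists_plain_http_server(spec_text: str) -> bool:
    # Flag any server URL that is not HTTPS.
    return PLAIN_HTTP_SERVER.search(spec_text) is not None

# Usage (hypothetical file name):
# text = load_spec_text("openapi.yaml")
# print(is_openapi_3(text), declares_security_schemes(text), lists_plain_http_server(text))
```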
After finalizing the designs and APIs, we started working on the web and Mac apps. So much work before we even built the thing! We developed the Mac app to enable real-time tracking of changes made to your OpenAPI spec file. This functionality allows us to send you push notifications, prompting you to rerun tests whenever modifications occur. The web app, on the other hand, had more challenges when dealing with updates to local files, so we decided to solve that problem later. Chalk one up for not having 100% parity.
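The Mac app itself is native, but the idea behind the change tracking is simple enough to sketch. Here is a minimal, hypothetical Python version that polls a local spec file’s content hash and fires a callback whenever it changes; the real app relies on platform file-system events and push notifications, so treat this only as an illustration of the concept:

```python
# Minimal, hypothetical sketch of "watch the spec file and prompt a re-run".
# The real Mac app uses native file-system events; this just polls a hash.
import hashlib
import time
from pathlib import Path

def watch_spec(path: str, on_change, interval_seconds: float = 2.0) -> None:
    spec = Path(path)
    last_digest = None
    while True:
        digest = hashlib.sha256(spec.read_bytes()).hexdigest()
        if last_digest is not None and digest != last_digest:
            on_change(path)  # e.g. notify the user to rerun the tests
        last_digest = digest
        time.sleep(interval_seconds)

# Usage (hypothetical file name):
# watch_spec("openapi.yaml", lambda p: print(f"{p} changed, rerun API Insights"))
```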
Following the development and pre-production phase of the apps, we dedicated several days to thorough manual testing. Automated testing is a luxury, particularly for small teams like ours. And so our all-hands-on-deck “try & break it” approach was just as quick as having someone write automated tests to perform those same actions. But testing will certainly have to change if we want to scale. The last piece for us was the launch plan.
We thought we had a solid checklist to help with this, but marketing takes a lot more time than you expect, especially if you want to have the largest impact across a broad target market and sustain that interest over time. We had a lot of learnings here that we can talk about another time, but this was harder, both conceptually and in execution, than building the app, since you are working with concurrent timelines across many different channels, platforms, and people.
Where to now?
To date, we think the app has been a success. One month after its launch, API Insights has scanned over 55,000 endpoints across 1,100 APIs, with an overall average grade of D (66 out of 100). We’ve heard from numerous developers that they appreciate how the app is just “ready to use,” that they don’t have to create an account, spend time filling out forms, or wait for an email to get instant value.
Many expressed shock at their spec’s low score (low-quality APIs are, on the whole, a real thing!). Others wanted to see how their specs scored against well-known companies and/or competitors. Still others gave us feedback on what we need to change and how we can improve the app going forward. Most importantly, everyone we’ve talked to says that this app delivers on our goal: to help developers make higher-quality APIs quickly.
And for those who have given us ideas for enhancements, we’ve already started working on several of those! We already added a way for users to manually delete their data (instead of waiting for our 30-day time period to remove it), and we added a technology section to provide even more context for your APIs.
We have another handful of updates that will be rolling out to our first version of the app in the next few weeks, including a helpful onboarding modal, additional tests, a comparison feature, and more detailed information to help developers make changes to items that score low. Stay tuned!