APIs are incredible tools! We harness their power and reap the rewards, yet unlocking their full potential and truly understanding how users interact with them often eludes us.
It's quite humorous, actually: we've taken a very different route with our web applications, armed with analytical tools that dissect every moment of the user journey as they dance through our meticulously crafted front-end experiences. But the API? That's more like an airplane's mysterious black box, opened only when disaster strikes so we can run a post-mortem to pinpoint why our API took a nosedive.
This is where API Observability and Monitoring become useful, not just as buzzwords but as actual workflows. Let me tell you a story.
I was working at a company we'll call Acme. We had three APIs: one internal, one customer-facing, and one partner-facing. Each was connected to its own front-end application, all separate, with lots of other stuff going on behind the scenes. We wanted to improve performance across the board, reducing latency and error rates. Nothing we aren't all used to doing, right? But how could I improve API performance if I didn't really know how well it was performing? Yes, I could send a few HTTP requests using cURL, Postman, or similar, and average the response times for certain endpoints. But at that point, I'd only be seeing a tiny, extremely skewed snapshot.
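For context, that kind of ad-hoc spot-check looks roughly like the sketch below: fire a handful of requests at an endpoint and average the round-trip times. The base URL and endpoint paths here are placeholders, not Acme's real API, and the numbers it produces reflect one client, one network, and one moment in time, which is exactly why they're skewed.

```python
# Naive latency spot-check: a few requests per endpoint, averaged.
# BASE_URL and ENDPOINTS are hypothetical placeholders for illustration only.
import statistics
import time
import urllib.request

BASE_URL = "https://api.example.com"      # placeholder host
ENDPOINTS = ["/v1/users", "/v1/orders"]   # placeholder endpoints
SAMPLES = 10                              # a handful of requests each

for endpoint in ENDPOINTS:
    timings_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with urllib.request.urlopen(BASE_URL + endpoint) as response:
            response.read()               # read the full body so payload size is included
        timings_ms.append((time.perf_counter() - start) * 1000)
    print(f"{endpoint}: mean {statistics.mean(timings_ms):.1f} ms "
          f"over {SAMPLES} requests from a single location")
```

A script like this tells you something, but only about your own connection at that moment; it says nothing about real users, real payloads, or real error rates.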
I wanted to be able to improve our APIs, but to do so I needed to know what was actually going on. I had no reference data, nothing to compare anything to. I knew that, on "average", a request to a few endpoints came in at around 1.4 milliseconds, not too bad, but that was from my reasonably stable internet connection in the UK. What about our users in other locations? What about response payload size: were we sending too much data? Were there endpoints we could deprecate or bump to a new version?
To put it simply, I knew nothing about our API. Yet I was the person in charge of it. I was the person setting the roadmap, planning sprints, organizing work, and accountable for all of it. But I still knew nothing... Not the best situation to be in, right?
I came across Treblle at this point and thought I might as well try it out. What did I have to lose? It was a simple command-line install and a little configuration, and that was it. In under two minutes I was suddenly getting floods of data into a dashboard telling me exactly how these APIs were being used.
I could see average response size and response time on my dashboard, as well as the locations requests were coming from. I could watch requests arrive in real time and see when users were hitting issues. I could then take this data and use it to guide the roadmap and sprint planning. I could show evidence for why I wanted to spend two weeks tweaking the performance of our write operations. At that point, I was able to be accountable.
All of this is to say: if you want to understand your APIs better, get some good tooling like Treblle in place and start being more proactive in your approach to APIs.