Corentin for Bearer


How to handle API downtime with 2 lines of code

In recent years, calling a third-party API has become very straightforward. As an example, here's all it takes to show a list of planets from the famous Star Wars movies in your app:

```js
const axios = require("axios")

axios
  .get("https://swapi.co/api/planets")
  .then(console.log)
```



There are now thousands of APIs for almost anything you can think of. But APIs are unpredictable. They work most of the time, but [it happens](https://status.box.com/incidents/45rmsv4glz45), [one day or another](https://status.sendgrid.com/incidents/0s9gnnbgv9z2), [for an unanticipated reason](https://developers.facebook.com/status/issues/2287293591515186/), that a request fails.

Debugging these errors in production is pretty tricky. You need good logging habits or a third-party service (like Bugsnag or Sentry). These are great, but they don't really focus on API traffic.

**What if you could make your app API-resilient?** No matter what happens on Stripe, Twilio, or any other service, your app (and your business) will remain above the fray.

At Bearer, this is what we're working on. The first version of our Agent monitors your external API calls without impacting your network or your app's performance. It does so with 2 lines of code (in Node.js).

Let's have a look:



```js
// That's all it takes to monitor external API calls
const Bearer = require('@bearer/node-agent')
Bearer.init({ secretKey: '...' })
```



Adding these two lines of code to your app gives you a full overview of the outbound API requests your application performs.

This helps you to debug all requests made from your app, in real-time:

<figure>
[![Bearer Dashboard showing API monitoring view](https://thepracticaldev.s3.amazonaws.com/i/ypd9v8771uwhhyj1aa0k.png)](https://app.bearer.sh)
<figcaption>Screenshot of my Dashboard with an overview of third-party APIs usage</figcaption>
</figure>

But the Bearer Agent does more: it also actively protects your app.

Let's say that requests to [the Star Wars API](https://swapi.co/) are frequently failing. This makes your app buggy, but you know it's just some network issue with that API. The first step to fix that issue is to add retry logic to your app.

Here's how you could do that with [Axios](https://github.com/axios/axios):



```js
const axios = require('axios')

// Max number of retries
const MAX_RETRY = 2

function getPlanets(retryCount = 0) {
  // If we have already retried too many times, give up and throw.
  if (retryCount > MAX_RETRY) {
    throw new Error(`Unable to make the request (total retries: ${MAX_RETRY})`)
  }

  // Make the request and, if it fails, retry it.
  return axios.get('https://swapi.co/api/planets').catch(() => {
    return getPlanets(retryCount + 1)
  })
}

// Make the request (and handle the case where every attempt failed)
getPlanets()
  .then(console.log)
  .catch(console.error)
```



It's starting to look a little bit more complex...

The Bearer Agent has built-in mechanisms to retry requests automatically. So, let's see what retry looks like with Bearer enabled in your app:



```js
const Bearer = require('@bearer/node-agent')
Bearer.init({ secretKey: '...' })

const axios = require('axios')
axios.get('https://swapi.co/api/planets').then(console.log)
```



Looks better, no? That's because the retry logic is handled right inside the Bearer Agent. But retry is just one example of the features the Agent brings. Our team is also adding fallback, caching, a circuit breaker, and more.
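
To make those terms concrete, "fallback" here just means serving a sensible default when an API call ultimately fails. A minimal hand-rolled sketch of that idea with Axios (not the Bearer Agent's implementation; the fallback payload is made up) could look like this:

```js
const axios = require('axios')

// A hard-coded fallback payload, returned when the API is unreachable.
const FALLBACK_PLANETS = { count: 0, results: [] }

function getPlanetsWithFallback() {
  return axios
    .get('https://swapi.co/api/planets')
    .then(response => response.data)
    // If the request fails for any reason, serve the fallback instead of crashing.
    .catch(() => FALLBACK_PLANETS)
}

getPlanetsWithFallback().then(console.log)
```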

If I piqued your curiosity, learn more about [API resilience & monitoring](https://www.bearer.sh) on our website.

_PS: Credit to [Annie Spratt](https://unsplash.com/@anniespratt) & [Unsplash](https://unsplash.com/photos/XMpXzzWrJ6g) for the featured image._

[![Learn more about Bearer.sh (hero image)](https://dev-to-uploads.s3.amazonaws.com/i/5vn0mz3l200z437wubax.jpg)](https://www.bearer.sh/blog?utm_source=dev.to&utm_medium=hero&utm_campaign=handle-api-downtime)

Top comments (16)

**Michael Monerau**

I'm looking for solutions to globally handle 429 errors when doing external API calls (rate-limit reached). I would like failing calls to be retried a few seconds later.

Is your agent a good way to achieve that?

**Corentin**

Yeah! You would need to add the agent to your app, then set up rules & incidents to receive alerts whenever API calls fail due to 429 errors.

Feel free to sign up (it's free) and get in touch with us using the chat whenever you need.

**Michael Monerau**

Alerts, I already have them :)

I would need my backend to automatically retry the requests after a while.

I will take a deeper look and see if it suits my use case, thanks.

**Corentin**

Auto-retry is under active development at Bearer. The documentation will be updated as soon as it's released ;)
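
In the meantime, a hand-rolled version of what Michael describes — retrying a 429 response after a short delay (a hypothetical 5 seconds here) — could look like this with plain Axios:

```js
const axios = require('axios')

// Retry a GET once after `delayMs` if it fails with a 429 (rate limit reached).
function getWithRetryOn429(url, delayMs = 5000) {
  return axios.get(url).catch(error => {
    if (error.response && error.response.status === 429) {
      // Wait a few seconds, then try one more time.
      return new Promise(resolve => setTimeout(resolve, delayMs)).then(() =>
        axios.get(url)
      )
    }
    // Any other error is re-thrown untouched.
    throw error
  })
}

getWithRetryOn429('https://swapi.co/api/planets').then(console.log, console.error)
```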

**Matthew Bramer**

What happens when the service never comes back up or the URI changes for the endpoint?

**Corentin**

Your API calls remain the same when you enable the Agent on your app. So if an API changes its endpoints or releases a new version, you will have to update your code accordingly.

Changing your requests on the fly is something we're thinking about, but it has to be very carefully built (and tested) before being production-ready πŸ˜‡

**Matthew Bramer**

So the request would continue to reach out over the wire, making the same API call over and over again? How could I stop the retry cycle?

**Corentin**

A retry mechanism indeed needs a limit, otherwise you enter an infinite loop. One retry seems good enough in most cases.

But you also need to take a time window into account. If you are making 100 API calls per minute to the same API and a certain share of them is failing, you can safely assume the others will fail too. What you would need here is a circuit breaker (also a feature we are actively working on at Bearer).
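
For illustration, a very small hand-rolled circuit breaker (not the Bearer Agent's implementation; the thresholds are arbitrary) could look like this: after a few consecutive failures it "opens" and rejects calls immediately for a cool-down period instead of hammering the API.

```js
const axios = require('axios')

// Minimal circuit breaker: after `threshold` consecutive failures, reject
// immediately for `cooldownMs` before letting requests through again.
function createCircuitBreaker({ threshold = 5, cooldownMs = 30000 } = {}) {
  let failures = 0
  let openedAt = null

  return function call(url) {
    if (openedAt && Date.now() - openedAt < cooldownMs) {
      return Promise.reject(new Error('Circuit open, request skipped'))
    }
    return axios.get(url).then(
      response => {
        // A success closes the circuit again.
        failures = 0
        openedAt = null
        return response
      },
      error => {
        failures += 1
        if (failures >= threshold) openedAt = Date.now()
        throw error
      }
    )
  }
}

const guardedGet = createCircuitBreaker()
guardedGet('https://swapi.co/api/planets').then(console.log, console.error)
```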

**alistair smith**

This could even be one line:

```js
require('@bearer/node-agent').init({ secretKey: '...' })
```

**Tony Metzidis**

So when the target server approaches capacity, you've just tripled the demand on it, so it will crash faster.

Can you help explain the problem being solved with this?

**Orlando Brown**

Is that really a monitoring problem?

Sounds like an infrastructure problem, which can be solved with the help of monitoring that raises the right alerts to trigger adding more resources during peak loads.

**Tony Metzidis**

It's more of a supply/demand problem, and when there's an issue on the supply side (target server), the demand will go up 3x.

A better solution is to cache the call to the target, and revalidate it in a separate thread.

One easy way to do this is to use a reverse proxy with stale-while-revalidate.
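
For illustration, here is a rough in-process sketch of that stale-while-revalidate idea in Node (a reverse proxy or CDN does this far more robustly; the cache here is just an in-memory Map):

```js
const axios = require('axios')

// Naive in-memory stale-while-revalidate cache for GET requests.
const cache = new Map()

function getCached(url) {
  // Always kick off a fresh request so the cache gets revalidated.
  const fresh = axios.get(url).then(response => {
    cache.set(url, response.data)
    return response.data
  })

  const stale = cache.get(url)
  if (stale !== undefined) {
    // Serve the stale copy right away; the request above refreshes it in the background.
    fresh.catch(() => {}) // a failed refresh just keeps serving stale data
    return Promise.resolve(stale)
  }

  // Nothing cached yet: wait for the network.
  return fresh
}

getCached('https://swapi.co/api/planets').then(console.log, console.error)
```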

**Corentin**

Retry is one simple way to handle API downtime, but it's surely not the only thing that will help your app stay up.

- Caching is a very good solution as well, though you might need some data validation at some point.
- Using a circuit breaker is also great. It adds some more logic to a retry mechanism and avoids tripling the bandwidth as you mentioned.

There's a great article on Dev.to on the main mechanisms to improve resilience in your app. At Bearer, we're starting with retry, but we are planning to add them all to our Agent.

**Matthew Bramer**

It's not clear in this post what the back-off strategy is, but hopefully there's a story or configuration for that.

**Corentin**

Definitely! Configuration will let you control how many times to retry (which depends mainly on your usage of an API). Once retry is released for our Node.js and Ruby agents, we will update our documentation with all the retry options.
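
For reference (this is a generic pattern, not the Bearer Agent's actual strategy), a typical exponential back-off waits longer between each attempt — 1s, then 2s, then 4s — before giving up:

```js
const axios = require('axios')

// Retry a GET with exponential back-off: 1s, 2s, 4s, ... up to `maxRetries` retries.
async function getWithBackoff(url, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await axios.get(url)
    } catch (error) {
      if (attempt >= maxRetries) throw error
      const delayMs = 1000 * 2 ** attempt
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
}

getWithBackoff('https://swapi.co/api/planets').then(console.log, console.error)
```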

**Corentin**

Happy to discuss here how you handle API downtime in your app πŸ’ͺ