In recent years, calling a third-party API has become very straightforward. As an example, here's all it takes to show a list in your app...
I'm looking for solutions to globally handle 429 errors (rate limit reached) when making external API calls. I would like failing calls to be retried a few seconds later.
Is your agent a good way to achieve that?
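For reference, here's a minimal sketch of the pattern I have in mind, using an axios response interceptor that retries once on a 429 after the `Retry-After` delay. The base URL and the 5-second fallback are just placeholders:

```typescript
import axios, { AxiosError, AxiosRequestConfig } from "axios";

const client = axios.create({ baseURL: "https://api.example.com" }); // hypothetical API

client.interceptors.response.use(undefined, async (error: AxiosError) => {
  const config = error.config as (AxiosRequestConfig & { _retried?: boolean }) | undefined;
  if (config && !config._retried && error.response?.status === 429) {
    config._retried = true; // retry each request at most once
    // Honour the Retry-After header when the API sends one; fall back to 5 seconds.
    const waitSeconds = Number(error.response.headers["retry-after"]) || 5;
    await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
    return client.request(config);
  }
  return Promise.reject(error);
});
```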
Yeah! You would need to add the agent to your app, then set up rules & incidents to receive alerts whenever API calls fail due to 429 errors.
Feel free to sign up (it's free) and get in touch with us via the chat if you ever need to.
Alerts I already have :)
What I need is for my backend to automatically retry the requests after a while.
I will take a deeper look and see if it suits my use case, thanks.
Auto-retry is under active development at Bearer. The documentation will be updated as soon as it's released ;)
What happens when the service never comes back up or the URI changes for the endpoint?
Your API calls remain the same when you enable the Agent on your app. So if an API changes its endpoints or releases a new version, you will have to update your code accordingly.
Changing your requests on the fly is something we're thinking about, but it has to be very carefully built (and tested) before being production-ready.
So the request would continue to reach out over the wire, making the same API call over and over again? How could I stop the retry cycle?
A retry mechanism indeed needs a limit, otherwise you enter an infinite loop. One retry seems good enough in most cases.
But you also need to take the time window into account. If you are making 100 API calls per minute to the same API and a certain share of them is failing, you can safely assume the others will fail too. What you need here is a circuit breaker (also a feature we are actively working on at Bearer).
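As a rough illustration of the circuit-breaker idea (not Bearer's implementation), here is a sketch using the opossum library for Node.js; the endpoint, thresholds, and fallback value are all made up:

```typescript
import CircuitBreaker from "opossum";

// The call we want to protect. URL is made up; assumes Node.js 18+ for global fetch.
async function fetchUsers(): Promise<unknown> {
  const res = await fetch("https://api.example.com/users");
  if (!res.ok) throw new Error(`Upstream responded with ${res.status}`);
  return res.json();
}

const breaker = new CircuitBreaker(fetchUsers, {
  timeout: 3000,                // fail calls that take longer than 3 s
  errorThresholdPercentage: 50, // open the circuit once half of recent calls fail
  resetTimeout: 30000,          // after 30 s, let one call through to test recovery
});

// While the circuit is open, callers get this immediately instead of hitting the API.
breaker.fallback(() => ({ users: [], stale: true }));

breaker.fire().then(console.log).catch(console.error);
```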
This could even be one line:
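For instance, with the axios-retry helper (one option among many; the library choice here is an assumption), a policy of two retries, i.e. up to three attempts per request, fits on a single line:

```typescript
import axios from "axios";
import axiosRetry from "axios-retry";

// Two retries per failed request (so up to three attempts), with exponential delay.
axiosRetry(axios, { retries: 2, retryDelay: axiosRetry.exponentialDelay });
```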
So when the target server approaches capacity, you've just tripled the demand on it, and it will crash even faster.
Can you help explain the problem being solved with this?
Is that really a monitoring problem?
Sounds like an infrastructure problem, which can be solved with the help of monitoring: appropriate alerts trigger adding more resources to handle peak loads.
It's more of a supply/demand problem, and when there's an issue on the supply side (target server), the demand will go up 3x.
A better solution is to cache the call to the target, and revalidate it in a separate thread.
One easy way to do this is to use a reverse proxy with stale-while-revalidate.
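When a reverse proxy isn't an option, the same idea can be sketched in-process. Everything below (the cache shape, the 1-minute max age, the upstream URL) is illustrative, not a reference implementation:

```typescript
// A minimal in-process stale-while-revalidate cache.
type Entry = { value: unknown; fetchedAt: number };

const cache = new Map<string, Entry>();
const MAX_AGE_MS = 60_000; // serve without revalidating for 1 minute

async function fetchFresh(url: string): Promise<unknown> {
  const res = await fetch(url); // assumes Node.js 18+ for global fetch
  if (!res.ok) throw new Error(`Upstream responded with ${res.status}`);
  return res.json();
}

export async function cachedGet(url: string): Promise<unknown> {
  const entry = cache.get(url);
  if (!entry) {
    // Nothing cached yet: this is the only time a caller waits on the upstream.
    const value = await fetchFresh(url);
    cache.set(url, { value, fetchedAt: Date.now() });
    return value;
  }
  if (Date.now() - entry.fetchedAt > MAX_AGE_MS) {
    // Stale: answer immediately with the cached value and revalidate in the background.
    fetchFresh(url)
      .then((value) => cache.set(url, { value, fetchedAt: Date.now() }))
      .catch(() => { /* keep serving the stale value while the upstream is down */ });
  }
  return entry.value;
}
```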
Retry is one simple way to handle API downtime, but it's surely not the only thing that will help your app stay up.
There's a great article on Dev.to on the main mechanisms to improve resilience in your app. At Bearer, we're starting with retry, but we're planning to add them all to our Agent.
It's not clear in this post what the backoff strategy is, but hopefully there's a story or configuration for that.
Definitely! Configuration will let you control how many times to retry (which depends mainly on how you use the API). Once retry is released with our Node.js and Ruby agents, we will update our documentation with all the retry options.
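Bearer's actual retry options aren't spelled out here, but as a generic illustration, a backoff configuration usually boils down to a retry count plus a delay curve; the defaults below are illustrative:

```typescript
// Generic retry helper with exponential backoff and jitter.
// maxRetries and baseDelayMs are illustrative defaults, not Bearer's actual options.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // give up after the last allowed retry
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: const users = await withRetries(() => fetch("https://api.example.com/users").then((r) => r.json()));
```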
Happy to discuss here how you handle API downtime in your app!