Introduction
Microservices architecture has become the mainstream choice for modern IT systems. Its flexibility and scalability make it easier for enterprises to cope with rapidly changing business requirements. However, ensuring the high availability of microservices is a key challenge in architectural design. In this context, the three core strategies of API service governance (rate limiting, circuit breaking, and degradation) are particularly important.
As an essential foundational component in microservices architecture, the API gateway plays a crucial role in service governance. Apache APISIX, as a new generation cloud-native API gateway, not only boasts high performance and security capabilities but also provides rich traffic management functionalities. In the following discussion, we will delve into the "three-pronged approach" of API service governance and provide detailed insights into how to apply these strategies in APISIX to ensure the high availability of our services.
API Governance Strategies
Rate Limiting
Rate limiting, as the name suggests, is a restrictive mechanism implemented on traffic. Its core principle is to prevent system overload or even crashes caused by excessive traffic. The fundamental concept lies in regulating the volume of requests within specific time intervals, allowing only requests that meet certain constraints to access the system, thus ensuring the stable operation of microservices and the entire system. In real-life scenarios, the concept of rate limiting is also evident. For example, during peak hours at subway stations, multiple access gates are set up for security checks to guide orderly and smooth queuing.
Rate limiting can be implemented in various ways, including:
Based on request counts: Tracking the number of requests within each time period and limiting them within a certain threshold. For example, processing a maximum of 100 requests per second.
Based on request frequency: Restricting the request frequency per client or IP address to prevent an excessive number of requests. For instance, allowing a maximum of 10 requests per minute.
Based on connection counts: Limiting the number of simultaneous connections established to avoid consuming excessive system resources. For example, allowing a maximum of 100 simultaneous connections.
Different rate-limiting strategies let us address different scenarios. For valuable API resources, we can cap requests at 10 per minute; to improve availability, we can limit concurrent requests so that response times stay low under load. Properly applied, these strategies help keep services running normally through high concurrency and sudden traffic spikes.
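The request-count style of rate limiting can be sketched as a fixed-window counter. The following is a minimal illustration, not APISIX's implementation; the `FixedWindowLimiter` class and its `allow` method are hypothetical names chosen for this example:

```python
import time

class FixedWindowLimiter:
    """Minimal fixed-window counter: allow at most `count` requests
    per `time_window` seconds for each key (e.g. a client IP)."""

    def __init__(self, count, time_window):
        self.count = count
        self.time_window = time_window
        self.windows = {}  # key -> (window_start, requests_seen)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, seen = self.windows.get(key, (now, 0))
        if now - start >= self.time_window:
            start, seen = now, 0          # window expired: start a new one
        if seen >= self.count:
            self.windows[key] = (start, seen)
            return False                  # over the limit: reject (e.g. HTTP 429)
        self.windows[key] = (start, seen + 1)
        return True

# Allow 3 requests per 60 seconds from the same client
limiter = FixedWindowLimiter(count=3, time_window=60)
results = [limiter.allow("10.0.0.1", now=t) for t in (0, 1, 2, 3)]
```

Here the first three requests pass and the fourth is rejected. Production gateways typically refine this with sliding windows or token buckets to avoid bursts at window boundaries.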
Circuit Breaking
In a microservices architecture, services call one another, so a failure in one service can affect others and even bring down the entire system, a phenomenon vividly termed a "cascading failure" or "avalanche effect." The circuit breaking mechanism is a protective measure against such cascading failures. When a microservice becomes abnormal or slow, the circuit breaker trips quickly, temporarily blocking requests to that service and preventing the failure from spreading, thereby preserving the stability of the system as a whole.
The core principle of the circuit breaker mechanism lies in real-time monitoring of service response times or error rates. Once these metrics exceed preset thresholds, the circuit breaker automatically trips, swiftly halting requests to the faulty service until it returns to normal operation. After the service stabilizes, the circuit breaker closes again, resuming access to the service. This mechanism is akin to a circuit breaker in an electrical system: when the current exceeds its tolerance range, the breaker automatically disconnects the circuit to prevent excessive current from damaging other components. After the circuit is inspected and repaired, the breaker is closed again and the circuit resumes normal operation.
By introducing circuit-breaking mechanisms, microservices architecture can better cope with potential cascading failure issues arising from mutual service calls, ensuring system stability and reliability, especially under high-pressure scenarios.
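The closed/open/half-open behavior described above can be captured as a small state machine. This is a toy sketch for illustration, not APISIX's internal logic; the `CircuitBreaker` class, its parameters, and the use of an injectable `now` clock are assumptions made for testability:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: open after `failure_threshold` consecutive
    failures, allow a probe after `reset_timeout` seconds (half-open),
    and close again once a probe succeeds."""
    CLOSED, OPEN, HALF_OPEN = "closed", "open", "half_open"

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = self.CLOSED
        self.opened_at = 0.0

    def call(self, func, now=None):
        now = time.monotonic() if now is None else now
        if self.state == self.OPEN:
            if now - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering the unhealthy service
                raise RuntimeError("circuit open: request rejected")
            self.state = self.HALF_OPEN   # timeout elapsed: allow one probe
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state, self.opened_at = self.OPEN, now
            raise
        self.failures = 0
        self.state = self.CLOSED          # a success closes the breaker
        return result

breaker = CircuitBreaker(failure_threshold=3, reset_timeout=30.0)
```

After three consecutive failures the breaker opens and rejects calls immediately; once the reset timeout passes, a single successful probe closes it again.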
Degradation
Degradation, as an effective strategy to address high system loads, involves temporarily disabling some non-critical functions or moderately reducing the quality of certain services to ensure the overall system's stable operation. In the microservices architecture, the application of degradation mechanisms can intelligently shield some non-core or temporarily dispensable functions, thus ensuring the continuous and stable operation of core functions. For example, in a video conferencing application, when network bandwidth is limited, we can reduce the video transmission quality or temporarily disable video functionality to ensure clear and stable audio calls, thereby meeting the basic communication needs of the meeting.
Common strategies include:
Function degradation: Temporarily closing or restricting access to certain functions to ensure the normal operation of core services. For example, a social media application may temporarily disable "like" or "comment" functions during peak hours to ensure users can browse content normally.
Quality degradation: During high system loads, lowering the quality requirements of certain services or functions. For example, as mentioned earlier, reducing video clarity or frame rate to ensure smooth communication.
Rate Limiting, Circuit Breaking, and Degradation in APISIX
How can we apply these three strategies in APISIX to improve the high availability of microservices? Below are a few common examples for illustration.
Rate Limiting with the limit-count Plugin
APISIX provides various built-in traffic management plugins such as limit-count, limit-req, and limit-conn. Depending on actual needs, we can choose the appropriate method for traffic control. Taking the limit-count plugin as an example, it restricts the total number of requests within a specific time window and returns the remaining request count in the HTTP response headers.
curl -i "http://127.0.0.1:9180/apisix/admin/routes/1" \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "uri": "/get",
  "plugins": {
    "limit-count": {
      "count": 3,
      "time_window": 60,
      "rejected_code": 429,
      "key_type": "var",
      "key": "remote_addr"
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}'
Circuit Breaking with the api-breaker Plugin
The api-breaker plugin in APISIX automatically triggers circuit breaking based on preset thresholds to prevent cascading failures. For instance, it can open the circuit when the upstream service returns three consecutive 500 or 503 status codes, and resume access once a 200 status code is received.
curl "http://127.0.0.1:9180/apisix/admin/routes/1" \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "uri": "/get",
  "plugins": {
    "api-breaker": {
      "break_response_code": 502,
      "unhealthy": {
        "http_statuses": [500, 503],
        "failures": 3
      },
      "healthy": {
        "http_statuses": [200],
        "successes": 1
      }
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}'
Degradation with the fault-injection Plugin
APISIX's fault-injection and mocking plugins support degradation strategies: temporarily disabling certain functions or directly returning preset data during high system load, keeping the system stable. For example, the fault-injection plugin can return a specified HTTP status code and response body directly to clients.
curl "http://127.0.0.1:9180/apisix/admin/routes/1" \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "uri": "/get",
  "plugins": {
    "fault-injection": {
      "abort": {
        "http_status": 200,
        "body": "Fault Injection!"
      }
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}'
Conclusion
Rate limiting, circuit breaking, and degradation are crucial service governance measures in microservices architecture and play an irreplaceable role in enhancing the high availability of microservices. They act as solid shields, defending the architecture against various potential risks and challenges. Faced with diverse business scenarios, we should apply these measures flexibly and judiciously so that the stability and reliability of the microservices architecture are well protected.