What is a Cache?
A cache is a temporary storage area that keeps results from expensive operations or frequently accessed data in memory, so future requests are served much faster. It's a powerful tool to enhance application performance by minimizing direct database calls.
Why Cache? 🤔
Instead of making repeated calls to the database or performing expensive calculations, we can retrieve data from the cache. But what's the real benefit?
- Low Response Time
- Resource Savings
- Enhanced Application Performance
Then, why don’t we store everything in the cache for ultimate speed?
- Cache memory is volatile — it doesn’t persist data.
- Cache has limited capacity — far less than databases.
To maximize efficiency, we only store high-cost or frequently accessed data in cache, minimizing repetitive database calls or heavy calculations.
Cache Tier 🗂️
A cache tier is a super-fast, temporary data layer that sits between your app and the database. The benefits of having a separate cache tier include better system performance, reduced database workload, and the ability to scale the cache tier independently.
How Does It Work?
- Web server receives a request
- Checks if data exists in cache
- If yes → Returns data to client
- If no → Queries database → Stores in cache → Returns to client
This strategy is known as a read-through cache.
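Here's a minimal sketch of that flow in Python, using a plain dict as a stand-in for a real cache server (such as Redis). The `fetch_user_from_db` function and the `user:{id}` key format are hypothetical placeholders for your own data layer:

```python
# Stand-in for a cache server (e.g., Redis); a plain dict for illustration.
cache = {}

def fetch_user_from_db(user_id):
    # Hypothetical database query; replace with your real data access layer.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    # 1. Check if the data exists in the cache.
    if key in cache:
        return cache[key]              # Cache hit: return directly.
    # 2. Cache miss: query the database...
    user = fetch_user_from_db(user_id)
    # 3. ...store the result in the cache for next time...
    cache[key] = user
    # 4. ...and return it to the client.
    return user
```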
When to Use Cache?
Consider caching when:
- Data is frequently read but rarely modified.
Since cache data is stored in volatile memory, it’s unsuitable for persisting data. If the cache server restarts, all in-memory data is lost. Critical data should always reside in a persistent data store.
Important Cache Considerations 📝
Expiration Policy: It is good practice to implement an expiration policy. Once cached data expires, it is removed from the cache; without an expiration policy, cached data stays in memory permanently. Avoid making the expiration time too short, as this forces the system to reload data from the database too frequently. Likewise, avoid making it too long, as the data becomes stale.
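As a sketch, here is how a simple time-based expiration (TTL) might look; the 300-second value is an arbitrary example, not a recommendation:

```python
import time

cache = {}          # key -> (value, expires_at)
TTL_SECONDS = 300   # example expiration; tune per workload

def cache_set(key, value):
    # Store the value along with the timestamp at which it expires.
    cache[key] = (value, time.time() + TTL_SECONDS)

def cache_get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:
        del cache[key]   # expired data is removed from the cache
        return None
    return value
```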
Consistency: This involves keeping the data store and the cache in sync. Inconsistency can happen because data-modifying operations on the data store and cache are not in a single transaction.
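One common (though not transactional) mitigation is to invalidate the cached entry right after writing to the data store. A rough sketch, where `update_user_in_db` is hypothetical:

```python
cache = {}  # same stand-in cache as in the earlier sketch

def update_user_in_db(user_id, data):
    # Hypothetical database write; replace with your real data access layer.
    pass

def update_user(user_id, data):
    # 1. Write to the persistent store first.
    update_user_in_db(user_id, data)
    # 2. Invalidate the cached copy so the next read repopulates it.
    # These two steps are not one transaction: a failure between them
    # can still leave a briefly stale cache entry.
    cache.pop(f"user:{user_id}", None)
```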
Mitigating Failures: A single cache server represents a potential Single Point of Failure (SPOF): a part of a system that, if it fails, stops the entire system from working. To avoid this, multiple cache servers across different data centers are recommended. Another recommended approach is to overprovision the required memory by a certain percentage, which provides a buffer as memory usage increases.
Eviction Policy: Once the cache is full, adding new items requires evicting existing ones. Least Recently Used (LRU) is a popular eviction policy; alternatives like Least Frequently Used (LFU) or First In, First Out (FIFO) suit different scenarios.
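To make LRU concrete, here is a toy bounded cache built on Python's `collections.OrderedDict`; the capacity of 3 is just for demonstration:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=3):   # tiny capacity, for demonstration
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)    # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```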
Best Practices for a Scalable System
- Decouple components to minimize dependencies.
- Avoid single points of failure.
- Balance latency and consistency.
- Enable independent scaling.
Stay tuned for more insights! 🚀