We have all heard of cache memory speeding up a system. It does this by reducing latency, which we discussed earlier in the section on latency and throughput. Let’s take a real-life example of caching: imagine a supermarket in the basement of your house. You can visit the shop anytime you want, yet you still buy a week’s worth of groceries to save time. That is caching.
Common scenario
If a particular part of a system is visited again and again, a cache can make that data load faster. The required data is kept in memory rather than on disk, so responses to network requests return much faster. Most websites these days are cached in CDNs (Content Delivery Networks), which reduces the load on their back-end servers.
As you know, every network request can force your back-end server to do computationally intensive, time-consuming work. With caching, a look-up that previously took linear O(N) time can be served in constant O(1) time. The benefit does not stop there: when your server must make multiple network requests and API calls to compose the data the requester receives, cached data leads to lower latency because of the reduced number of network calls.
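To make the O(N)-to-O(1) point concrete, here is a minimal sketch of look-up caching in Python. The `fetch_user_profile` function and its half-second delay are hypothetical stand-ins for an expensive database query or downstream API call, not part of any specific system.

```python
import time

_cache: dict[int, dict] = {}

def fetch_user_profile(user_id: int) -> dict:
    # Hypothetical stand-in for the slow O(N) scan or network round trip.
    time.sleep(0.5)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user_profile(user_id: int) -> dict:
    # Cache hit: a single dict lookup, O(1) on average.
    if user_id in _cache:
        return _cache[user_id]
    # Cache miss: do the expensive work once, then remember the result.
    profile = fetch_user_profile(user_id)
    _cache[user_id] = profile
    return profile

if __name__ == "__main__":
    get_user_profile(42)   # slow: misses the cache
    get_user_profile(42)   # fast: served from memory
```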
Caching is usually inserted on the client (browser storage), between the client and the server, or on the server itself. It can occur at many levels or points, including the hardware level.
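As a small illustration of caching between the client and the server, here is a sketch using only Python’s standard library: the server attaches a `Cache-Control` header so a browser cache or CDN may reuse the response without contacting the server again. The port and the 60-second lifetime are arbitrary choices for the example.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"server time: {time.time()}".encode()
        self.send_response(200)
        # Tell any cache in front of us (browser or CDN) that it may
        # reuse this response for up to 60 seconds.
        self.send_header("Cache-Control", "public, max-age=60")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```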
Stale Data Handling
Caching is not limited to read operations; it applies to write operations too, with some extra considerations.
I have listed some of those considerations below:
- For write operations, you need to keep your cache and database in sync (a write-through sketch follows this list).
- Complexity might increase, since there are more operations to perform and the handling of un-synced, stale data needs to be analyzed continuously.
- Sometimes a new design is needed to handle the syncing, with even more considerations such as the sync method (synchronous or asynchronous), intervals, and so on.
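One common way to address the first consideration is a write-through cache: every write goes to the database first and then to the cache, so reads never observe a value the database lacks. Below is a minimal sketch, with plain dicts standing in for a real datastore and cache; a production system would also need expiry and failure handling.

```python
db: dict[str, str] = {}      # stand-in for the source of truth
cache: dict[str, str] = {}   # stand-in for the in-memory cache

def write(key: str, value: str) -> None:
    db[key] = value      # write to the database first
    cache[key] = value   # then keep the cache in sync

def read(key: str) -> str | None:
    if key in cache:
        return cache[key]        # fast path: cache hit
    value = db.get(key)          # cache miss: fall back to the DB
    if value is not None:
        cache[key] = value       # populate the cache for next time
    return value
```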
To keep the cached data up to date, data ‘eviction’ or turnover is done with the help of policies such as LIFO (Last In, First Out), FIFO (First In, First Out), LRU (Least Recently Used), and LFU (Least Frequently Used).
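Of the policies above, LRU is probably the most widely used, so here is a minimal sketch of LRU eviction built on `collections.OrderedDict`. The capacity of 3 is an arbitrary illustrative choice; for pure functions, Python’s built-in `functools.lru_cache` decorator provides the same policy out of the box.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self._data: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str) -> str | None:
        if key not in self._data:
            return None
        self._data.move_to_end(key)   # mark as most recently used
        return self._data[key]

    def put(self, key: str, value: str) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used
```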
Conclusion
Caching works best when the data is static or rarely changing, and when the sources of change are single operations rather than many user-generated ones. However, caching may not be a good fit if refreshing the cache at intervals works against the purpose and user experience of the application.
pragyaasapkota / System-Design-Concepts
A repo with some system design concepts.
System Design
Systems design is the process of defining the elements of a system, such as its modules, architecture, components and their interfaces, and its data, based on the specified requirements.
This is an index of system design concepts.
If you wish to open these in a new tab, press Ctrl+click.
S.N. | Table of Contents |
---|---|
1. | Caching |
2. | Network Protocols |
3. | Storage: The Underrated Topic |
4. | Latency and Throughput |
5. | System Availability |
6. | Leader Election |
7. | Proxies |
8. | Load Balancing |
9. | Endpoint Protection |
10. | HTTPS: Is it better than HTTP? |
11. | Polling and Streaming |
12. | Long Polling |
13. | Hashing |
14. | CAP Theorem |
15. | PACELC Theorem |
16. | Messaging and Pub-Sub |
17. | Database |
18. | Logging, Monitoring, and Alerting |
19. | Distributed System |
20. | Scaling |
21. | Event Driven Architecture (EDA) |
Thank you!!!
I hope this article was helpful to you.
Please don’t forget to follow me!!!
Any kind of feedback or comment is welcome!!!
Thank you for your time and support!!!!
Keep Reading!! Keep Learning!!!