Ukagha Nzubechukwu

What is a Cache?

A cache (pronounced "cash") is a temporary, high-speed storage layer that keeps data in a location that is quicker to access, allowing previously computed or fetched data to be reused. Caches are essential for high-performance computers and applications because they provide faster access to frequently used data.

If frequently accessed data is stored in a cache, the average memory access time can be reduced, thus reducing the total execution time of an application. Caches can be found in computers, web browsers, and elsewhere. Here are some of the most common ones:

  • CPU (Central Processing Unit) cache
  • Disk cache
  • Web cache
  • Application cache

Memory access time refers to the time it takes to retrieve data from memory.

Caching in hardware and software

Caching is a process of storing frequently accessed data in temporary storage (cache) to speed up future requests for that data.

Most computers now have another level of memory referred to as the cache. The CPU cache is a compact memory layer, positioned logically between the CPU registers and the main memory, that stores frequently accessed instructions and data. A cache has far less storage capacity than the main memory but much higher data retrieval performance.

When the CPU needs to access data, it first checks the cache. If the data is found in the cache, it can be accessed much more quickly than if it had to be retrieved from the main memory. This results in faster processing times, which is important in high-performance computing environments where speed is crucial.

Additionally, caches can improve the overall system performance by reducing how often the CPU has to access main memory, which is slower and more resource-intensive.
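
To make that concrete, the average memory access time (AMAT) can be estimated as the hit time plus the miss rate multiplied by the miss penalty. Here is a quick back-of-the-envelope calculation in Python; the latency numbers are illustrative round figures, not real hardware measurements:

```python
# Illustrative average memory access time (AMAT) calculation.
# The latencies are made-up round numbers, not real hardware specs.
cache_hit_time_ns = 1      # time to read from the cache
main_memory_time_ns = 100  # penalty for going to main memory on a miss
hit_ratio = 0.95           # 95% of accesses are served by the cache

amat_ns = cache_hit_time_ns + (1 - hit_ratio) * main_memory_time_ns
print(f"Average access time: {amat_ns:.1f} ns")  # 6.0 ns vs. 100 ns uncached
```

Even with a modest cache, the average access time drops from 100 ns to about 6 ns, which is why a high hit ratio matters so much.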

Your favorite web browser has a cache, too. Whenever you attempt to clear your browsing history, the browser gives you the option to remove cached files and images.

The browser's cache improves your web experience by enhancing the performance of frequently visited websites.

How do caches work?

When the CPU attempts to fetch data and finds it in the cache, this is called a cache hit. The percentage of accesses that result in hits is termed the cache hit ratio (or hit rate).

A cache's data is typically stored in fast-access memory.

A cache miss occurs when the data being fetched is not present in the cache. When this happens, the data is taken from the main memory and copied into the cache. How new data is copied into a cache depends on the caching algorithm, cache protocol, and system policies implemented.
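
Here is a minimal sketch of that hit/miss flow in Python. The dict-based cache and the `fetch_from_main_store` function are stand-ins for illustration, not a real memory hierarchy:

```python
# Minimal read-through cache: check the cache first, fall back to the
# slower store on a miss, and copy the result in for next time.
cache = {}

def fetch_from_main_store(key):
    # Stand-in for a slow lookup (main memory, disk, a database, ...).
    return f"value-for-{key}"

def read(key):
    if key in cache:                        # cache hit
        return cache[key]
    value = fetch_from_main_store(key)      # cache miss: go to the slow store
    cache[key] = value                      # copy into the cache
    return value
```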

Caching algorithms

Caching algorithms are implemented by giving each item in the cache a strength rating and evicting the items with the lowest ratings when new data needs space. The following are some examples of caching algorithms (a sketch of the first one follows the list):

  1. Least recently used (LRU): When there is not enough room in the cache, it evicts the item that was used least recently.

  2. Least frequently used (LFU): This algorithm evicts the items with the lowest usage counts.

  3. Most recently used (MRU): This evicts the most recently used items first.

  4. First in, first out (FIFO): In a FIFO caching algorithm, any item added first to the cache will be the first one removed when there is no space left.

  5. Last in, first out (LIFO): This caching algorithm evicts the item that was added most recently.
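
As a sketch of the LRU algorithm above, here is a tiny least-recently-used cache built on Python's `OrderedDict`. Real systems implement this in hardware or in tuned data structures, but the eviction logic is the same idea:

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used item when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                     # cache miss
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" is now the most recently used
cache.put("c", 3)  # evicts "b", the least recently used
```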

Cache policies

  1. Cache read policies: This policy describes how a cache responds to read requests. A read request either succeeds (a hit) or fails (a miss) to fetch data from the cache.
    On a cache miss, the main memory has to serve the request. We want to minimize how often this happens, especially in a read-intensive application.

  2. Cache write policies: This policy describes how data changes propagate through the cache. They are (a combined sketch follows the illustrations below):

  • Write-through cache: This policy writes data to both the cache and the database. It ensures that the data is consistent, and the cache can serve as a backup since its contents always match the database.

    This policy is appropriate for applications that do not have a high volume of write requests. A high write volume slows the application's performance because every write is stored in two locations.

Write-through cache illustration

  • Write-back cache: In this policy, data is written only to the cache; it is copied to the underlying database later, asynchronously. This keeps write latency low.

    The write-back cache policy is used in write-intensive applications since writing data to the cache is much quicker than writing to the database. The trade-off is a risk of data loss if the cache fails before its contents are persisted.

Write-back cache illustration

  • Write-around cache: This policy writes data directly to the database, bypassing the cache; the cache is only populated later, when the data is actually read. When there are a lot of write I/O operations, the write-around policy protects the cache from being flooded with data that may never be read.

    The drawback is that data is not cached until it is first read from storage, so the first read of newly written data is always a miss. This makes the policy a poor fit for applications that frequently re-read recently written data.

Write-around cache illustration
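
Here is the combined sketch promised above, contrasting the three write policies. The `cache` and `database` dicts are stand-ins for real storage layers:

```python
# Contrasting the three cache write policies with plain dicts
# standing in for the cache and the database.
cache, database = {}, {}
dirty = set()  # keys written to the cache but not yet persisted

def write_through(key, value):
    # Write to both layers synchronously: always consistent,
    # but every write pays for two stores.
    cache[key] = value
    database[key] = value

def write_back(key, value):
    # Write only to the cache; the database is updated later.
    cache[key] = value
    dirty.add(key)

def flush_dirty():
    # Deferred persistence step for the write-back policy.
    for key in dirty:
        database[key] = cache[key]
    dirty.clear()

def write_around(key, value):
    # Bypass the cache entirely; the key is cached only when read later.
    database[key] = value
```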

How to implement a cache network.

The basic idea of a cache network is to keep the most frequently used data in fast cache memory, thus reducing the average memory access time needed to retrieve data.

When implementing a cache layer, consider the validity of the data and how important it is relative to other data. A well-implemented cache network leads to a high cache hit rate.

Mechanisms such as a TTL (time to live) should be used to expire data in the cache once it goes stale.
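
A minimal sketch of TTL-based expiry, hand-rolled for illustration (libraries such as cachetools offer ready-made TTL caches):

```python
import time

cache = {}        # key -> (value, expiry timestamp)
TTL_SECONDS = 60  # illustrative expiry window

def put(key, value):
    cache[key] = (value, time.monotonic() + TTL_SECONDS)

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None            # never cached
    value, expires_at = entry
    if time.monotonic() > expires_at:
        del cache[key]         # expired: treat as a miss
        return None
    return value
```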

Another consideration is whether the cache network requires high availability. In-memory engines such as Redis, and caching layers such as NGINX, can provide this.

A well-implemented cache network can function as an independent storage layer.
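
For example, with Redis, storing a cache entry with a TTL takes only a couple of lines. This sketch assumes a Redis server running on localhost:6379 and the `redis` Python client installed:

```python
import redis

# Assumes a local Redis server and `pip install redis`.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("user:42:profile", '{"name": "Ada"}', ex=300)  # expires after 5 minutes
profile = r.get("user:42:profile")  # returns None once the TTL elapses
```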

Types of cache.

There are five common types of cache:

  1. CPU cache: This cache is located on the processor chip and stores frequently accessed data and instructions. The CPU cache is made up of three levels:

    • L1 cache memory - It is the smallest and fastest level and holds only a limited amount of data.
    • L2 cache memory - The secondary cache is an intermediary between the processor and the main memory. When an L1 cache miss occurs, the CPU immediately checks the L2 for the missing data.
    • L3 cache memory - The L3 is the largest and slowest of the CPU cache levels. It is a specialized memory designed to boost L1 and L2 performance.
  2. Web cache: Web caches keep copies of data from servers, browsers, and websites so it can be retrieved quickly on later requests, shortening loading times. There are four types of web cache:

    • Site cache: The first time you visit a website, the site cache stores its static content. On later visits, the previously saved data is served from the site cache instead of being downloaded again.
    • Browser cache: It works much like the site cache, but lives in your browser. It stores the details of frequently visited websites to serve them up faster whenever requested.
    • Micro cache: It stores the static components of a dynamic website.
    • Server cache: It stores content on the server side, which helps reduce server load.
  3. Application/Software cache: This cache is used by software applications to store frequently accessed data, such as configuration files, templates, and other resources. It helps improve data retrieval performance (see the snippet after this list).

  4. Distributed cache: Large-scale systems pool memory across multiple servers into a single logical cache, known as a distributed cache.

  5. Disk cache: This cache is used by operating systems to speed up disk operations. It stores frequently accessed data in memory so it can be read or written to disk more quickly.
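
For the application-level caching in item 3, Python ships a ready-made memoizing decorator. The `render_template` function here is a hypothetical stand-in for any expensive operation:

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def render_template(name: str) -> str:
    # Stand-in for expensive work (disk I/O, parsing, computation, ...).
    print(f"rendering {name} from scratch")
    return f"<html>{name}</html>"

render_template("home")  # computed and cached ("rendering" is printed)
render_template("home")  # served from the cache, nothing printed
```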

Importance of a cache.

  1. Offline data access: Internet browsers use a cache to store data temporarily. By caching regularly accessed data, browsers can serve previously visited web pages from disk instead of fetching them from the internet.

  2. Reduced database cost: A highly available cache, such as Redis or an NGINX caching layer, can serve as a standalone storage layer. A cache reduces the load on the database and therefore the number of database servers needed, cutting the cost of operation.

  3. Data integrity and consistency: The durable, authoritative data can remain in the database, while the cache holds a copy or snapshot of it. A well-maintained cache helps ensure that the data served is consistent across all users.

  4. Increased application efficiency: Redirecting a chunk of the read requests to a cache reduces the load on the application's backing store and improves the application's performance.

Drawbacks of a cache.

  1. Data validity: The main drawback of a cache is the possibility of serving outdated data to a user due to improper cache maintenance. Poorly tuned invalidation and eviction can also lead to low cache hit rates.

  2. Bottleneck: Caches can become a bottleneck if they are not properly managed. When a cache is too small, it can quickly become overwhelmed, leading to cache misses and slower processing times. In some cases, caches can also cause contention for shared resources, again slowing things down.

  3. High cost: Caches are built using high-speed memory chips, which are more expensive. Additionally, caches require complex hardware and software design to ensure that they are effective in improving performance.

  4. Volatility: Cache memory is volatile since it stores data only temporarily. Caches are not meant to store data permanently, but rather to provide quick access to frequently needed data, so they require constant refreshing to keep that data up to date and accurate.

Conclusion

So far, we have learned what a cache and caching are, how caches work, how to implement a cache network, the types of cache, and their merits and demerits. Without a cache, we would have to start afresh every time we closed a website or shut down a computer.
I know this article is a long read. I am happy you made it to the end. 😊
