In many projects, I’ve noticed that while a cache could be handy — especially on the client side — it’s often overlooked. Client-side caching is critical to enhancing user experience by reducing latency and offloading repeated server requests. For example, in applications with infinite scrolling or frequently updated dashboards, caching previously fetched data prevents unnecessary API calls, ensuring smoother interactions and faster rendering times.
In one of my recent projects, implementing a cache reduced API call volume by over 40%, leading to noticeable performance improvements and cost savings. This underlines why client-side caching should be considered a foundational optimization strategy. Even so, whether due to development time constraints or other priorities, a cache tends to be one of the last features considered, despite its significant impact on performance and its relatively simple implementation.
A cache can be implemented at various levels of an architecture: backend caching with Redis, a CDN for static content, an in-memory cache on the client, or even localStorage or IndexedDB for persistence. Ideally, these strategies are combined to reduce the load on (and cost of) databases and APIs, as well as the latency of client-server round trips, especially for data that has already been fetched before.
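As a quick illustration of the client-side idea, here is a minimal sketch that memoizes API responses with a plain Map. The `fetchUser` function and the `/api/users/` endpoint are placeholders for this example, and a plain Map has no eviction or expiration, which is exactly what the LRU + TTL cache built below adds:

```typescript
// Hypothetical example: memoize API responses on the client with a plain Map.
const responseCache = new Map<string, unknown>();

async function fetchUser(id: string): Promise<unknown> {
  const url = `/api/users/${id}`;
  if (responseCache.has(url)) {
    return responseCache.get(url); // served from cache, no network round trip
  }
  const data = await (await fetch(url)).json();
  responseCache.set(url, data);
  return data;
}
```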
In this article, we’ll explore how to design and implement an LRU (Least Recently Used) cache with TTL (Time-to-Live) support in JavaScript, creating a package similar to my adev-lru. By the end, you’ll have a working example that showcases the core principles and functionality of an effective caching solution.
What is an LRU Cache?
An LRU (Least Recently Used) cache ensures that the most recently accessed items remain in memory while evicting the least recently accessed ones when its capacity is exceeded. This strategy works by maintaining an order of usage: each access updates the item’s position in the cache, and the least recently accessed items are removed first.
Compared to other caching strategies, LRU balances simplicity and efficiency, making it well-suited for scenarios where recent usage is a reliable indicator of future access. For example, applications that cache API responses, thumbnails, or frequently accessed user preferences can leverage LRU to reduce redundant fetch operations without over-complicating the eviction process.
Unlike LFU (Least Frequently Used), which tracks access frequency and requires additional bookkeeping, LRU avoids this complexity while still achieving excellent performance in many real-world use cases. Similarly, FIFO (First In, First Out) and MRU (Most Recently Used) offer alternative eviction policies but may not align as well with usage patterns where recent activity is critical. My implementation combines LRU with TTL (Time-to-Live) support, so it also handles data that needs automatic expiration, which makes it a good fit for dynamic environments like live dashboards or streaming services, where access to the most recent data is critical.
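Before diving into the implementation, here is a minimal walk-through of that behaviour, assuming the LRUCache API built in the rest of this article (a capacity of 2 and number values):

```typescript
const cache = LRUCache.getInstance<number>(2);

cache.put('a', 1);         // usage order: a
cache.put('b', 2);         // usage order: b, a
cache.get('a');            // usage order: a, b  (reading 'a' marks it most recently used)
cache.put('c', 3);         // capacity exceeded -> 'b', the least recently used entry, is evicted

cache.put('a', 1, 1_000);  // TTL of 1 second: one second later, get('a') returns undefined
```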
Implementation
The LRUCache class is built to be efficient, support flexible configurations, and handle automatic evictions. Below are some key methods:
Creating the Cache
```typescript
public static getInstance<T>(capacity: number = 10): LRUCache<T> {
  if (LRUCache.instance == null) {
    LRUCache.instance = new LRUCache<T>(capacity);
  }
  return LRUCache.instance;
}
```
This method ensures there is only one instance of the cache in the application, a design choice that simplifies resource management. By implementing the cache as a singleton, we avoid redundant memory usage and ensure consistent data across the application. This is particularly valuable in scenarios where multiple components or modules need access to the same cached data, as it prevents conflicts and ensures synchronization without requiring additional coordination logic. If no capacity is specified, it defaults to 10.
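As a quick sanity check of the singleton behaviour, here is a short sketch using the method above:

```typescript
// Both calls resolve to the same underlying cache instance.
const cacheA = LRUCache.getInstance<string>(100);
const cacheB = LRUCache.getInstance<string>(); // capacity is ignored here: the instance already exists

console.log(cacheA === cacheB); // true
```

One consequence of this design is that the capacity passed on later calls is silently ignored, so it is worth configuring the cache once, early in the application’s lifecycle.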
Adding an Item to the Cache
```typescript
public put(key: string, value: T, ttl: number = 60_000): LRUCache<T> {
  const now = Date.now();
  let node = this.hash.get(key);
  if (node != null) {
    // The key already exists: unlink the old node so it can be re-added at the front.
    this.evict(node);
  }
  node = this.prepend(key, value, now + ttl);
  this.hash.set(key, node);
  if (this.hash.size > this.capacity) {
    // Capacity exceeded: drop the least recently used node (the tail).
    const tailNode = this.pop();
    if (tailNode != null) {
      this.hash.delete(tailNode.key);
    }
  }
  return this;
}
```
This method adds or updates an item in the cache. When a key already exists, its node is evicted and re-added at the front of the cache, marking it as most recently used. Internally, the cache stores data as nodes in a doubly linked list, which makes it cheap to remove a node from the end of the list (the tail) and move it to the beginning (the head). To guarantee constant O(1) access to every node’s data, a hash table stores a pointer to each node of the list. This aligns with the LRU principle: recently accessed items are always prioritized, so the cache maintains an accurate order of usage, which is critical for making eviction decisions once the capacity is exceeded. This behavior keeps resources optimally managed and minimizes retrieval time for frequently accessed data.
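For context, here is a minimal sketch of the node structure and the prepend, evict, and pop helpers that put relies on. The names match the snippets in this article where they appear, but the bodies (and the CacheNode type) are my own reconstruction, not the package’s verbatim source:

```typescript
interface CacheNode<T> {
  key: string;
  value: T;
  ttl: number;            // absolute expiration timestamp (ms since epoch)
  prev?: CacheNode<T>;
  next?: CacheNode<T>;
}

// Fields assumed on the class, based on how the article's snippets use them:
//   private hash = new Map<string, CacheNode<T>>();
//   private head?: CacheNode<T>;
//   private tail?: CacheNode<T>;
//   private capacity: number;

// Unlink a node from the doubly linked list in O(1).
private evict(node: CacheNode<T>): void {
  if (node.prev != null) node.prev.next = node.next;
  if (node.next != null) node.next.prev = node.prev;
  if (this.head === node) this.head = node.next;
  if (this.tail === node) this.tail = node.prev;
  node.prev = node.next = undefined;
}

// Create a node at the head of the list (the most recently used position).
private prepend(key: string, value: T, ttl: number): CacheNode<T> {
  const node: CacheNode<T> = { key, value, ttl, next: this.head };
  if (this.head != null) this.head.prev = node;
  this.head = node;
  if (this.tail == null) this.tail = node;
  return node;
}

// Detach and return the tail (the least recently used node).
private pop(): CacheNode<T> | undefined {
  const tailNode = this.tail;
  if (tailNode != null) this.evict(tailNode);
  return tailNode;
}
```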
Retrieving an Item from the Cache
```typescript
public get(key: string): T | undefined {
  const node = this.hash.get(key);
  const now = Date.now();
  if (node == null) {
    return undefined;
  }
  if (node.ttl < now) {
    // Expired: remove the stale entry and treat the lookup as a miss.
    this.evict(node);
    this.hash.delete(key);
    return undefined;
  }
  // Fresh hit: move the node to the front and keep the hash pointing at the new node.
  this.evict(node);
  this.hash.set(key, this.prepend(node.key, node.value, node.ttl));
  return node.value;
}
```
This method retrieves stored items. A fresh hit moves the item to the front of the list; if the item has expired, it is removed from the cache and the lookup is treated as a miss.
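A short usage sketch of the TTL behaviour, assuming the class above:

```typescript
const cache = LRUCache.getInstance<string>();

cache.put('session', 'abc123', 5_000); // entry expires 5 seconds from now

console.log(cache.get('session'));     // 'abc123' while the TTL is still valid

setTimeout(() => {
  console.log(cache.get('session'));   // undefined: the entry has expired and counts as a miss
}, 6_000);
```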
Performance Metrics
To evaluate the cache’s efficiency, I implemented performance metrics like hit rate, misses, and evictions:
```typescript
private getHitRate(): number {
  const totalRequests = this.hitCount + this.missCount;
  return totalRequests === 0 ? 0 : this.hitCount / totalRequests;
}
```
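The counters themselves are plain numeric fields. The sketch below shows one way they could be maintained; hitCount and missCount appear in the snippet above, while evictionCount and the exact increment points are my assumptions, not necessarily how adev-lru does it:

```typescript
private hitCount = 0;
private missCount = 0;
private evictionCount = 0; // assumed counter backing the evictions metric

// Inside get(): a missing or expired key counts as a miss, anything else as a hit.
//   if (node == null || node.ttl < now) { this.missCount++; return undefined; }
//   this.hitCount++;

// Inside put(): every node removed to make room counts as an eviction.
//   if (tailNode != null) { this.evictionCount++; this.hash.delete(tailNode.key); }
```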
Clearing the Cache
```typescript
public clear(): void {
  this.hash.clear();
  this.head = this.tail = undefined;
}
```
This method clears all items and resets the cache state.
In my implementation, I have also added other methods, like getOption, which instead of returning T | undefined returns an instance of the Option monad, for those who prefer a more functional approach. I also added a Writer monad to track every operation on the cache for logging purposes.
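As an illustration of the functional variant, here is a hypothetical usage of getOption. It assumes a conventional Option API with map and getOrElse; the actual method names in adev-lru may differ, so check the repository below for the real interface:

```typescript
// Hypothetical sketch: the Option method names are assumptions, not adev-lru's documented API.
const displayPrice = cache
  .getOption('price:BTC')            // Option<number> instead of number | undefined
  .map((value) => value.toFixed(2))  // runs only when a fresh value is present
  .getOrElse('N/A');                 // fallback for a miss or an expired entry
```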
You can see all the other methods involved in this algorithm, thoroughly commented, in this repository: https://github.com/Armando284/adev-lru
Comparing Cache Algorithms
An LRU cache is not the only option. Choosing the right caching algorithm depends heavily on the application’s specific requirements and access patterns. Below is a comparison of LRU with other commonly used caching strategies and guidance on when to use each:
- LFU (Least Frequently Used): This algorithm evicts items that are accessed the least number of times. It’s ideal for scenarios where usage frequency over time is a better predictor of future access than recency. However, it typically requires additional bookkeeping to track access counts, making it more complex and computationally expensive than LRU. Use LFU for applications like recommendation engines or machine learning pipelines where historical usage patterns are critical.
- FIFO (First In, First Out): This approach removes the oldest items, regardless of how often they are accessed. While it is simple to implement, FIFO might not be suitable for most use cases as it doesn’t consider usage patterns. It can work for applications with fixed, predictable workflows, such as caching static configurations or preloaded assets.
- MRU (Most Recently Used): MRU evicts the most recently accessed items, the opposite of LRU. This strategy is best suited for situations where older data is more likely to be reused, such as rollback systems or certain types of undo operations.
When to Use LRU
LRU strikes a balance between simplicity and effectiveness, making it ideal for applications where recent activity correlates strongly with future use. For instance:
- Web and API Caching: LRU is well-suited for reducing redundant API calls, especially in scenarios like paginated views, infinite scrolling, or frequent polling of live data.
- Multimedia Applications: Cache recently played or viewed items like videos or images.
- UI State Management: Store recently accessed component states to improve rendering performance.
In contrast, if access patterns show that frequency or insertion order is more relevant, algorithms like LFU or FIFO might be a better choice. Evaluating these trade-offs ensures that the caching strategy aligns with your application’s goals and resource constraints.
To recap the alternatives at a glance:
- LFU (Least Frequently Used): evicts the least frequently accessed items.
- FIFO (First In, First Out): evicts the oldest item, regardless of recent usage.
- MRU (Most Recently Used): evicts the most recently accessed items, the opposite of LRU.
Conclusion
Implementing an in-memory cache can significantly enhance an application’s performance, reducing response times and improving user experience.
If you want to see a full LRU cache in action, you can use my npm package: https://www.npmjs.com/package/adev-lru. I would also love to get your feedback to keep improving it.
Try the package and share your thoughts or contribute if you feel like helping more 😁!