LRU Cache

Shivani Mehta
4 min read · Aug 19, 2020

Hola Guys! In this article, we're going to discuss a very famous interview problem that is commonly asked at various top tech companies.

The problem states:

Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations: get and put.

get(key) - Get the value (which will always be positive) of the key if the key exists in the cache; otherwise return -1.
put(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.

The cache is initialized with a positive capacity.

Expected time complexity: O(1) for both operations.
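To make the contract concrete, here is a bare skeleton of the structure we'll be designing. This is just an illustrative sketch in Java; the class and method names simply mirror the problem statement:

    class LRUCache {
        LRUCache(int capacity) {
            // initialize the cache with a positive capacity
        }

        int get(int key) {
            // return the value if the key exists, otherwise -1
            return -1;
        }

        void put(int key, int value) {
            // set or insert the value; when the cache is full,
            // evict the least recently used item first
        }
    }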

Before jumping into various approaches to solving this problem, let's first understand the theory behind caching and why it is used.

Caching is a technique of keeping frequently used data in local storage so that it can be retrieved without asking the original source for it again (as long as the data doesn't change frequently). This technique is useful for speeding up access to websites one visits often, by reducing the latency (waiting time) of each request.

In a typical scenario, around 4 network calls take place to generate a response to a request made by a web browser. Querying a large database every time for a particular request can take a lot of time. To optimize these network calls, we can store the responses of frequently generated requests in local cache storage.

A cache is just storage where the latency of reading data is lower compared to reading it from the original source. Note that a cache can store only a comparatively small amount of data at any point in time.
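As a tiny illustration of the idea, here is a toy sketch in Java. Note that fetchFromServer is a made-up stand-in for the slow call to the original source:

    import java.util.HashMap;
    import java.util.Map;

    class ResponseCache {
        // local storage: reading from here is much faster than a network call
        private final Map<String, String> cache = new HashMap<>();

        String get(String url) {
            // serve frequently requested data straight from the cache
            if (cache.containsKey(url)) {
                return cache.get(url);
            }
            // otherwise pay the latency cost once and remember the response
            String response = fetchFromServer(url);
            cache.put(url, response);
            return response;
        }

        // hypothetical stand-in for the actual request to the original source
        private String fetchFromServer(String url) {
            return "response for " + url;
        }
    }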

LRU Cache

LRU caching is a type of caching which organises data in order of its usage, allowing us to quickly find the item that hasn't been used for the longest amount of time.

It follows the LRU eviction policy, which means that when the cache reaches its maximum capacity, the least recently used item is kicked out of the cache to make room for new entries.

Some requirements of an LRU cache are:

  1. Fixed Size:- It should have an upper bound on memory usage.
  2. Fast Access:- It should support fast lookups and fast updates, preferably in O(1) time.
  3. Eviction Policy:- The design should allow removal of the least recently used item whenever the cache reaches its maximum capacity.

(Figure: the flow of get and put operations on the cache.)

Enough of theory! Now let's think about the possible data structures that can be used to design this system, and the pros and cons associated with each.

Array/Stack/Queue/Deque/Singly Linked List:- All of these, being linear data structures, will take O(n) time in the worst case.

Searching: O(n) | Updation: O(n) | Deletion: O(n)

HashMaps/Maps:-

Searching: O(1) | Updation (of usage order): O(n) | Deletion (of the least recently used key): O(n)

Doubly LinkedList:-

Searching: O(n) | Updation: O(1) | Deletion: O(1) (if the node is known to us)

Based on the above observations, some data structures have low search time but are less efficient when updating or deleting a key, and vice versa.

So how can we optimize this entire process? Why not combine two data structures to get the most efficient design? What do you think?

When we think of O(1) lookup, the obvious data structure that comes to mind is a Map. A Map provides O(1) insertion and lookup, but it does not have any way of tracking the most recently used item.

To track this, we require another data structure that provides fast insertion, deletion and updation. In the case of LRU we use a Doubly LinkedList, because it supports O(1) deletion, updation and insertion if we have the address of the node on which these operations have to be performed.

So the implementation of an LRU cache will have a Map and a Doubly LinkedList. The Map will hold the keys and the addresses of the nodes of the Doubly LinkedList, and the nodes of the Doubly LinkedList will hold the values of those keys.

To keep track of the least and most recently used items in the cache, we'll use the two opposite ends of the LinkedList. We'll remove items from the bottom and add items at the top, and whenever an entry is accessed, it will be moved to the top. This way, the most recently used entries stay at the top and the least recently used ones stay at the bottom of the LinkedList.
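The next article will deep dive into the implementation, but to see how the two structures cooperate, here is a rough sketch in Java. All the names here (Node, moveToFront and so on) are mine, chosen for illustration; head marks the most recently used end of the list and tail the least recently used end:

    import java.util.HashMap;
    import java.util.Map;

    class LRUCache {
        private static class Node {
            int key, value;
            Node prev, next;
            Node(int key, int value) { this.key = key; this.value = value; }
        }

        private final int capacity;
        private final Map<Integer, Node> map = new HashMap<>(); // key -> node address
        private final Node head = new Node(0, 0); // sentinel: most recently used side
        private final Node tail = new Node(0, 0); // sentinel: least recently used side

        LRUCache(int capacity) {
            this.capacity = capacity;
            head.next = tail;
            tail.prev = head;
        }

        int get(int key) {
            Node node = map.get(key);
            if (node == null) return -1;
            moveToFront(node); // accessed, so it becomes the most recently used
            return node.value;
        }

        void put(int key, int value) {
            Node node = map.get(key);
            if (node != null) {            // key already present: update and refresh
                node.value = value;
                moveToFront(node);
                return;
            }
            if (map.size() == capacity) {  // full: evict from the LRU end
                Node lru = tail.prev;
                unlink(lru);
                map.remove(lru.key);
            }
            Node fresh = new Node(key, value);
            map.put(key, fresh);
            addToFront(fresh);             // new entries start as most recently used
        }

        private void moveToFront(Node node) { unlink(node); addToFront(node); }

        private void unlink(Node node) {
            node.prev.next = node.next;
            node.next.prev = node.prev;
        }

        private void addToFront(Node node) {
            node.prev = head;
            node.next = head.next;
            head.next.prev = node;
            head.next = node;
        }
    }

Used on a cache of capacity 2, this behaves as described above:

    LRUCache cache = new LRUCache(2);
    cache.put(1, 10);
    cache.put(2, 20);
    cache.get(1);     // returns 10; key 1 becomes the most recently used
    cache.put(3, 30); // cache is full, so key 2 (least recently used) is evicted
    cache.get(2);     // returns -1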

That’s all for this article. In the next article, we will deep dive into the implementation of the above design.

Hope you found this article useful. If you have any doubts or suggestions, drop me a mail or connect with me on LinkedIn. Also, if you liked this article, I won't mind you pressing the clap icon. LoL!

Till then, Happy Coding :)
