Backend Development

Understanding Redis Eviction Policies and Implementing LRU Cache in Java

This article explains Redis memory eviction strategies, including volatile and allkeys policies, details the approximate LRU algorithm Redis uses, and provides Java implementations of an LRU cache using LinkedHashMap and a custom doubly‑linked list, complete with code examples and configuration settings.


Redis provides several memory-eviction policies to free space when the cache is full. They fall into volatile policies, which evict only keys that have an expiration time set (volatile-ttl, volatile-random, volatile-lru, volatile-lfu), and all-keys policies, which consider every key (allkeys-lru, allkeys-random, allkeys-lfu); the default noeviction policy evicts nothing and instead rejects writes once the memory limit is reached.
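A policy is selected in redis.conf (or at runtime via CONFIG SET). An illustrative fragment, where the 100mb cap is a placeholder value rather than a recommendation:

```conf
# Cap Redis memory usage; once reached, the eviction policy kicks in.
maxmemory 100mb
# Evict across all keys using the approximate LRU algorithm.
maxmemory-policy allkeys-lru
```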

The volatile-lru/allkeys-lru and volatile-lfu/allkeys-lfu policies rely on the LRU and LFU algorithms, respectively. Redis implements an approximate LRU algorithm: rather than maintaining a linked list over every key, it samples a configurable number of keys (5 by default, set via maxmemory-samples) and evicts the sampled key with the oldest access timestamp.
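The sampling step can be sketched in Java. This is a minimal illustration, not Redis source: SampledLruEviction and pickVictim are hypothetical names, and the timestamp map stands in for the lru field carried by each redisObject.

```java
import java.util.*;

// Sketch of Redis-style sampled eviction: draw `samples` random keys and
// evict the one whose last-access timestamp is oldest.
class SampledLruEviction {
    static <K> K pickVictim(Map<K, Long> lastAccess, int samples, Random rnd) {
        List<K> keys = new ArrayList<>(lastAccess.keySet());
        K victim = null;
        for (int i = 0; i < samples; i++) {
            K candidate = keys.get(rnd.nextInt(keys.size()));
            // Keep the candidate with the smallest (oldest) timestamp seen so far.
            if (victim == null || lastAccess.get(candidate) < lastAccess.get(victim)) {
                victim = candidate;
            }
        }
        return victim;
    }
}
```

With a small sample size this only approximates true LRU, which is exactly the trade-off Redis makes: larger maxmemory-samples values get closer to exact LRU at higher CPU cost.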

In a textbook LRU implementation, entries are organized as a doubly-linked list with a most-recently-used (MRU) head and a least-recently-used (LRU) tail. Redis avoids this per-key bookkeeping: each redisObject carries only a small access timestamp (the 24-bit lru field), which is compared across the sampled candidates at eviction time.

To illustrate the algorithm, the sections below give a Java example that uses LinkedHashMap with access order set to true and overrides removeEldestEntry to achieve LRU eviction, followed by a custom implementation built from a doubly-linked list and a hash map.

# redis.conf: sampling more keys per eviction approximates true LRU more closely, at extra CPU cost
maxmemory-samples 50
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> {
    private final Map<K, V> map;
    private final int cacheSize;

    public LRUCache(int cacheSize) {
        this.cacheSize = cacheSize;
        // Access-order mode (third constructor argument = true) reorders
        // entries on every access, so the eldest entry is always the LRU one.
        this.map = new LinkedHashMap<K, V>(cacheSize, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the least-recently-used entry once the cap is exceeded.
                return size() > LRUCache.this.cacheSize;
            }
        };
    }

    public V get(K key) { return map.get(key); }
    public void put(K key, V value) { map.put(key, value); }
}
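To see the access-order behavior in isolation, a standalone LinkedHashMap demonstrates the eviction (a minimal self-contained sketch; LruDemo is an illustrative name):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Build a tiny access-ordered LinkedHashMap capped at `cacheSize` entries.
    static Map<String, Integer> demo(int cacheSize) {
        Map<String, Integer> cache = new LinkedHashMap<String, Integer>(cacheSize, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > cacheSize;  // evict once capacity is exceeded
            }
        };
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");        // touching "a" moves it to the MRU end
        cache.put("d", 4);     // over capacity: "b", the LRU entry, is evicted
        return cache;
    }

    public static void main(String[] args) {
        System.out.println(demo(3).keySet());  // prints [c, a, d]
    }
}
```

Iteration order runs from least to most recently used, so after touching "a" the eldest entry is "b", and that is what gets evicted.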
import java.util.HashMap;
import java.util.Map;

class LRUCache {
    class Node {
        int key, value;
        Node pre, next;
        Node(int key, int value) { this.key = key; this.value = value; pre = this; next = this; }
    }
    private final int capacity;
    private final Node dummy = new Node(0, 0);   // sentinel: dummy.next = MRU head, dummy.pre = LRU tail
    private final Map<Integer, Node> cache = new HashMap<>();
    public LRUCache(int capacity) { this.capacity = capacity; }
    public int get(int key) {
        Node node = cache.get(key);
        if (node == null) return -1;
        remove(node); add(node);                 // refresh: move to the MRU position
        return node.value;
    }
    public void put(int key, int value) {
        Node node = cache.get(key);
        if (node != null) { node.value = value; remove(node); add(node); return; }
        if (cache.size() == capacity) {          // full: evict the LRU tail
            Node lru = dummy.pre;
            remove(lru); cache.remove(lru.key);
        }
        node = new Node(key, value);
        add(node); cache.put(key, node);
    }
    private void add(Node node) {                // link right after the sentinel
        node.next = dummy.next; node.pre = dummy;
        dummy.next.pre = node; dummy.next = node;
    }
    private void remove(Node node) {             // unlink from the list
        node.pre.next = node.next; node.next.pre = node.pre;
    }
}
Tags: Java, backend development, Redis, LRU, cache eviction
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
