
Redis Expiration, Eviction Policies, and LRU/LFU Algorithms

This article explains how Redis handles key expiration, the commands for setting TTL, the three expiration strategies, the eight eviction policies, and the internal LRU and LFU algorithms, including their implementation details, sampling techniques, and configuration parameters for memory management.


Introduction

Server memory is finite. When Redis reaches its memory limit and continues to receive commands, it falls back on specific expiration and eviction mechanisms rather than crashing.

Key Expiration

Redis provides four independent commands to set a key's expiration time: expire key ttl (seconds), pexpire key ttl (milliseconds), expireat key timestamp (Unix timestamp in seconds), and pexpireat key timestamp (Unix timestamp in milliseconds). The set command's EX/PX options can also set a value and its expiration atomically. Remaining time can be queried with ttl key (seconds) or pttl key (milliseconds); they return -1 if the key exists but has no expiration, and -2 if the key does not exist.
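An illustrative redis-cli session showing these return values (replies assume a fresh server with no existing keys; the key names are made up):

```
127.0.0.1:6379> SET page:1 "hello"
OK
127.0.0.1:6379> EXPIRE page:1 60
(integer) 1
127.0.0.1:6379> TTL page:1
(integer) 60
127.0.0.1:6379> SET page:2 "world"
OK
127.0.0.1:6379> TTL page:2
(integer) -1
127.0.0.1:6379> TTL missing
(integer) -2
```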

Expiration Strategies

Redis combines two of the three classic strategies for expired keys: lazy deletion (checking expiration only when a key is accessed) and periodic scanning (regularly scanning keys with an expiration). Timed deletion, which creates a timer per key, is not used because it would consume excessive CPU resources.

Eviction Policies

When memory usage reaches the limit defined by the maxmemory setting (adjustable at runtime with config set maxmemory <bytes>), Redis consults maxmemory-policy to choose one of eight eviction policies:

Policy            Description
volatile-lru      LRU eviction among keys that have an expiration set.
allkeys-lru       LRU eviction among all keys.
volatile-lfu      LFU eviction among keys that have an expiration set.
allkeys-lfu       LFU eviction among all keys.
volatile-random   Random eviction among keys that have an expiration set.
allkeys-random    Random eviction among all keys.
volatile-ttl      Evicts the keys with the nearest expiration time.
noeviction        Default policy: commands that would grow memory return an error.
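A minimal redis.conf fragment selecting one of these policies might look like the following (the 100mb limit is an arbitrary example value):

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```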

LRU Algorithm

Redis does not use a classic LRU list; instead it approximates LRU by sampling a configurable number of keys (maxmemory-samples, default 5) and evicting the least recently used key within the sample. This avoids the per-key bookkeeping of a true LRU list and improves performance.

typedef struct redisDb {
    dict *dict;          // all key‑value pairs
    dict *expires;       // keys with an expiration time
    dict *blocking_keys; // keys blocked by commands like BLPOP
    dict *watched_keys;  // WATCHed keys
    int id;              // database ID
    // ... other fields omitted
} redisDb;

LFU Algorithm

Redis also supports an LFU (Least Frequently Used) eviction algorithm. The 24-bit lru field of a redisObject is split into a 16-bit last-decrement time (in minutes) and an 8-bit logarithmic counter. The counter increments probabilistically, scaled by the configurable lfu-log-factor (default 10), and decays over time according to lfu-decay-time (default 1 minute).

lfu-log-factor 10
lfu-decay-time 1

The decay algorithm calculates idle time in minutes, divides it by lfu-decay-time, and subtracts that number of periods from the counter, ensuring that rarely accessed keys gradually lose their frequency weight.

Conclusion

This article covered how Redis manages key expiration, the eight eviction policies available when memory is exhausted, and the inner workings of the approximated LRU and LFU algorithms that power those policies.

Tags: Memory Management, Redis, TTL, LRU, LFU, eviction policies
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
