
Redis Interview Questions: Solving Cache Penetration, Cache Breakdown, and Cache Avalanche

This article explains common Redis caching issues—cache penetration, cache breakdown, and cache avalanche—detailing their causes and practical mitigation techniques such as parameter validation, null caching, Bloom filters, mutex locks, hot‑data strategies, and staggered expiration to ensure system stability.

Architect's Guide

The article presents three popular Redis interview questions and provides detailed solutions for each caching problem.

Question 1: How to solve cache penetration?

Cache penetration occurs when requests query keys that exist in neither the cache nor the database, so every such request falls through to the database and can overload it. Some misses of this kind are normal, since the cache cannot hold all data, especially non-hot data.

Solutions include:

Parameter validation at the API layer to reject malformed requests early.

Storing a short‑TTL null placeholder for missing keys, while monitoring to avoid cache space waste.

Using a Bloom filter to pre‑filter non‑existent keys before hitting the cache or database, noting its probabilistic nature and potential false positives.

The Bloom filter consists of a bitset and multiple hash functions; inserting a key sets the corresponding bits, and querying checks those bits to infer existence.

Advantages: low memory usage, fast O(K) insert/query, and no raw data storage. Disadvantages: false positives, difficulty deleting entries, and possible hash collisions.
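The bitset-plus-hashes structure described above can be sketched in a few lines of Java. This is a minimal illustration, not any particular library's implementation; the class and method names (`SimpleBloomFilter`, `mightContain`) are made up for the example, and the double-hashing scheme is one common way to derive K indexes.

```java
import java.util.BitSet;

// Minimal Bloom filter sketch: a bitset plus K index functions derived
// from the key's hash code via double hashing. Illustrative only.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public SimpleBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive the i-th bit index from two base hashes; (h2 | 1) keeps the
    // step odd so the K probes spread across the bitset.
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = h1 >>> 16;
        return Math.floorMod(h1 + i * (h2 | 1), size);
    }

    // Inserting a key sets its K corresponding bits.
    public void add(String key) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(index(key, i));
        }
    }

    // false means definitely absent; true means possibly present
    // (false positives are inherent to the structure).
    public boolean mightContain(String key) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(index(key, i))) {
                return false;
            }
        }
        return true;
    }
}
```

A request whose key fails `mightContain` can be rejected before touching Redis or the database; a key that passes still needs a real lookup, because of false positives.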

Question 2: How to solve cache breakdown (stampede)?

Cache breakdown happens when a hot key expires and many concurrent requests miss the cache, overwhelming the database.

Solutions include:

Adding a mutex lock (e.g., a ReentrantLock) so only the first request queries the database and populates the cache while others wait.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

static Lock reenLock = new ReentrantLock();

public List<String> getData04() throws InterruptedException {
    List<String> result = new ArrayList<>();
    // First, try the cache.
    result = getDataFromCache();
    if (result.isEmpty()) {
        if (reenLock.tryLock()) {
            try {
                System.out.println("Got lock, fetch from DB and write to cache");
                result = getDataFromDB();
                setDataToCache(result);
            } finally {
                reenLock.unlock(); // always release the lock
            }
        } else {
            // Another thread is rebuilding the cache: re-check, then back off.
            result = getDataFromCache();
            if (result.isEmpty()) {
                System.out.println("Did not get lock, cache empty, sleep briefly");
                Thread.sleep(100);
                return getData04(); // retry
            }
        }
    }
    return result;
}

Another approach is to keep hot data from expiring at all and refresh it asynchronously in the background; this suits extreme-traffic scenarios.
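The asynchronous-refresh idea can be sketched as follows. This is a simplified local illustration, with a `ConcurrentHashMap` standing in for Redis and a `Supplier` standing in for the database query; the class and method names (`HotKeyRefresher`, `keepFresh`) are invented for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hot-key refresh sketch: the value never expires; a background task
// rebuilds it periodically, so readers never see a miss for this key.
public class HotKeyRefresher {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(task -> {
                Thread t = new Thread(task);
                t.setDaemon(true); // don't keep the JVM alive for refreshes
                return t;
            });

    // Warm the cache once synchronously, then rebuild on a fixed schedule.
    public void keepFresh(String key, Supplier<String> loader, long periodSeconds) {
        cache.put(key, loader.get());
        scheduler.scheduleAtFixedRate(
                () -> cache.put(key, loader.get()),
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public String get(String key) {
        return cache.get(key); // always a hit for keys kept fresh
    }
}
```

The trade-off is the temporary inconsistency the article mentions: between refreshes, readers may see a value that is up to one period stale.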

Question 3: How to solve cache avalanche?

Cache avalanche occurs when many hot keys share the same expiration time, causing a massive simultaneous cache miss and database surge.

Mitigation strategies:

Stagger expiration times by adding random offsets so keys expire at different moments.

Mark hot data as never expiring and update it asynchronously, accepting possible temporary inconsistency.

Apply mutex locks (local or distributed) per key, similar to the cache breakdown solution.
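The staggered-expiration strategy above amounts to adding random jitter to a base TTL before writing the key. A minimal sketch of the TTL computation (the Redis write itself, e.g. SETEX, is omitted; the class name is illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;

// Staggered expiration sketch: keys warmed at the same moment get
// slightly different TTLs, so they do not all expire together.
public class StaggeredTtl {
    // Returns baseSeconds plus a uniform random jitter in [0, maxJitterSeconds).
    public static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds);
    }
}
```

For example, caching a batch of keys with `ttlWithJitter(3600, 300)` spreads their expirations across a five-minute window instead of a single instant.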

These techniques help distribute load and maintain system reliability.

Tags: Backend, Performance, Cache, Redis, Cache Avalanche, Cache Breakdown, Cache Penetration
Written by Architect's Guide

Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.
