Backend Development · 9 min read

Redis Interview Questions: Solving Cache Penetration, Cache Breakdown, and Cache Avalanche

This article explains the concepts of cache penetration, cache breakdown (stampede), and cache avalanche in Redis, and provides practical solutions such as parameter validation, null‑value caching, Bloom filters, mutex locks, hot‑data non‑expiration, and randomizing expiration times, accompanied by Java code examples.

Architect's Guide

Redis Interview Question 1: How to Solve Cache Penetration?

Cache penetration occurs when a request queries a key that exists in neither the cache nor the database, so every such request falls through to the database and can overload it. There are three main mitigation strategies:

Parameter validation: reject illegal request parameters early (e.g., verify ID length with a regular expression).
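A minimal sketch of parameter validation, assuming IDs are numeric strings of at most 16 digits (the exact format is application-specific; the pattern below is illustrative):

```java
import java.util.regex.Pattern;

public class ParamValidator {
    // Hypothetical rule: IDs are 1-16 digit numeric strings.
    private static final Pattern ID_PATTERN = Pattern.compile("^\\d{1,16}$");

    // Reject malformed IDs at the edge, before they reach the cache or DB.
    public static boolean isValidId(String id) {
        return id != null && ID_PATTERN.matcher(id).matches();
    }
}
```

Requests failing this check can be rejected with a 400-style error immediately, so obviously bogus keys never generate cache or database traffic.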

Cache null values: store a short‑lived placeholder for missing keys to prevent repeated DB hits, while monitoring the cache to avoid excessive null‑value occupation.
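A sketch of null-value caching. A `ConcurrentHashMap` stands in for Redis here to keep the example self-contained; a real setup would store the placeholder with a short TTL (e.g. `SETEX key 60 "##NULL##"`). The sentinel string and the stub DB lookup are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullValueCache {
    // Hypothetical sentinel marking a key known to be absent from the DB.
    private static final String NULL_PLACEHOLDER = "##NULL##";
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            // Placeholder means "known missing": skip the database entirely.
            return NULL_PLACEHOLDER.equals(cached) ? null : cached;
        }
        String dbValue = loadFromDb(key);
        // Cache the miss too, so repeated lookups for the same bad key stop here.
        cache.put(key, dbValue == null ? NULL_PLACEHOLDER : dbValue);
        return dbValue;
    }

    private String loadFromDb(String key) {
        return "user:1".equals(key) ? "Alice" : null; // stub DB read
    }
}
```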

Bloom filter: use a probabilistic data structure to pre‑filter nonexistent keys before they reach the cache or database.

The Bloom filter works by hashing each key with K hash functions, setting the corresponding bits in a bitset, and checking those bits on lookup; it guarantees no false negatives but may produce false positives.
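The mechanism above can be sketched with a `BitSet` and K simple polynomial hash functions; the bit-array size and seeds below are illustrative, not tuned for a target false-positive rate:

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private static final int SIZE = 1 << 20;              // ~1M bits, illustrative
    private static final int[] SEEDS = {7, 11, 13, 31, 37}; // K = 5 hash functions
    private final BitSet bits = new BitSet(SIZE);

    // Polynomial rolling hash, parameterized by seed.
    private int hash(String key, int seed) {
        int h = 0;
        for (int i = 0; i < key.length(); i++) {
            h = seed * h + key.charAt(i);
        }
        return (h & Integer.MAX_VALUE) % SIZE;
    }

    public void add(String key) {
        for (int seed : SEEDS) bits.set(hash(key, seed));
    }

    public boolean mightContain(String key) {
        for (int seed : SEEDS) {
            if (!bits.get(hash(key, seed))) return false; // definitely absent
        }
        return true; // possibly present (false positives are possible)
    }
}
```

In production you would more likely reach for a library implementation such as Guava's `BloomFilter` or the RedisBloom module rather than hand-rolling the hashing.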

Redis Interview Question 2: How to Solve Cache Breakdown (Stampede)?

Cache breakdown (also called a cache stampede) happens when a hot key expires and many concurrent requests miss the cache at once, all hitting the database simultaneously. There are two main solutions:

Mutex lock: only the first request acquires a lock to query the DB and repopulate the cache; other requests wait or retry after a short pause.

Never‑expire hot data: keep frequently accessed keys permanently in the cache and update them asynchronously.

A Java sketch of the mutex-lock approach, where `getDataFromCache`, `getDataFromDB`, and `setDataToCache` are assumed helper methods:

```java
static Lock reenLock = new ReentrantLock();

public List<String> getData04() throws InterruptedException {
    // Try the cache first.
    List<String> result = getDataFromCache();
    if (result.isEmpty()) {
        if (reenLock.tryLock()) {
            try {
                System.out.println("Got lock, fetch from DB and write to cache");
                result = getDataFromDB();
                setDataToCache(result);
            } finally {
                reenLock.unlock();
            }
        } else {
            // Another thread is repopulating the cache: re-check it,
            // then back off briefly and retry if it is still empty.
            result = getDataFromCache();
            if (result.isEmpty()) {
                System.out.println("Did not get lock, cache empty, sleeping briefly");
                Thread.sleep(100);
                return getData04(); // retry
            }
        }
    }
    return result;
}
```

Redis Interview Question 3: How to Solve Cache Avalanche?

A cache avalanche occurs when a large number of keys share the same expiration time and expire simultaneously, sending a sudden surge of traffic to the database. There are three mitigation techniques:

Randomize expiration times: add a random offset to each key's TTL so that expirations are spread over time.
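A minimal sketch of TTL randomization; the 30-minute base and 0-300-second jitter are illustrative values, not recommendations:

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    private static final long BASE_TTL_SECONDS = 30 * 60; // illustrative base TTL
    private static final long MAX_JITTER_SECONDS = 300;   // illustrative jitter window

    // Base TTL plus a uniform random offset, so keys written together
    // do not all expire in the same instant.
    public static long randomizedTtl() {
        return BASE_TTL_SECONDS
                + ThreadLocalRandom.current().nextLong(MAX_JITTER_SECONDS + 1);
    }
}
```

The result would then be passed to the cache write, e.g. something like `jedis.setex(key, randomizedTtl(), value)` with a Redis client.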

Never‑expire hot data: keep critical hot keys permanently cached and refresh them asynchronously, accepting possible temporary inconsistency.
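A sketch of the never-expire pattern, assuming a background scheduler refreshes hot keys periodically; the in-memory map stands in for Redis and all names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HotKeyRefresher {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive for the refresher
                return t;
            });

    // Load the key once with no expiration, then refresh it on a schedule,
    // so readers never observe a miss (at the cost of brief staleness).
    public void watch(String key, long periodSeconds) {
        cache.put(key, loadFromDb(key));
        scheduler.scheduleAtFixedRate(
                () -> cache.put(key, loadFromDb(key)),
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public String get(String key) {
        return cache.get(key); // always served from cache, never the DB
    }

    private String loadFromDb(String key) {
        return "value-of-" + key; // stub DB read
    }
}
```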

Mutex lock: apply per‑key locking (local JVM lock or distributed lock) so that only one thread recomputes the value while others wait for the refreshed cache.

These strategies together help maintain system stability under high concurrency and prevent service outages caused by cache‑related failures.

Tags: Backend, Java, Cache, Redis, Cache Avalanche, Cache Breakdown, Cache Penetration
Written by Architect's Guide

Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.