
Handling Redis Cache Pitfalls: Penetration, Avalanche, Breakdown, and Consistency with Bloom Filters and Distributed Locks

This article explains the four common Redis caching challenges—cache penetration, avalanche, breakdown, and data inconsistency—demonstrates their impact under high concurrency, and provides practical Java Spring Boot solutions including caching null objects, Bloom filters, distributed locks, random expiration, and delayed double‑delete strategies with full code examples.


When developing high‑traffic applications, Redis is often used as a cache layer, but four well‑known failure scenarios can cause serious problems: cache penetration, cache avalanche, cache breakdown, and data inconsistency.

Cache Penetration occurs when a request queries a key that exists in neither the cache nor the database, so every such request falls through to the database. The article shows a typical service method that first checks Redis and then falls back to MySQL.

@Slf4j
@Service
public class DocumentInfoServiceImpl extends ServiceImpl<DocumentInfoMapper, DocumentInfo>
        implements DocumentInfoService {
    @Resource
    private StringRedisTemplate stringRedisTemplate;

    @Override
    public DocumentInfo getDocumentDetail(int docId) {
        String redisKey = "doc::info::" + docId;
        String obj = stringRedisTemplate.opsForValue().get(redisKey);
        DocumentInfo documentInfo = null;
        if (StrUtil.isNotEmpty(obj)) {
            log.info("==== select from cache ====");
            documentInfo = JSONUtil.toBean(obj, DocumentInfo.class);
        } else {
            log.info("==== select from db ====");
            documentInfo = this.lambdaQuery().eq(DocumentInfo::getId, docId).one();
            if (ObjectUtil.isNotNull(documentInfo)) {
                stringRedisTemplate.opsForValue().set(redisKey, JSONUtil.toJsonStr(documentInfo), 5L, TimeUnit.SECONDS);
            }
        }
        return documentInfo;
    }
}

In high concurrency, many threads may simultaneously query a non‑existent ID, resulting in repeated database hits. The article proposes two solutions.

Solution 1: Cache Empty Objects – store a placeholder (e.g., an empty string) in Redis with a short TTL when the database returns null, preventing further DB queries.

if (StrUtil.equals(obj, "")) {
    log.info("==== select from cache, data not available ====");
    return null;
}
if (StrUtil.isNotEmpty(obj)) {
    log.info("==== select from cache ====");
    documentInfo = JSONUtil.toBean(obj, DocumentInfo.class);
} else {
    log.info("==== select from db ====");
    documentInfo = this.lambdaQuery().eq(DocumentInfo::getId, docId).one();
    stringRedisTemplate.opsForValue().set(redisKey,
        ObjectUtil.isNotNull(documentInfo) ? JSONUtil.toJsonStr(documentInfo) : "",
        5L, TimeUnit.SECONDS);
}

Solution 2: Bloom Filter – use a Bloom filter to quickly test whether a key might exist. If the filter says the key is definitely absent, the request is rejected without touching the database.

/** Bloom filter add pseudo‑code */
int[] bit = new int[10000];                  // bit array of size m
List<String> insertData = Arrays.asList("A", "B", "C");
for (String insertDatum : insertData) {
    for (int i = 1; i <= 3; i++) {
        int bitIdx = hash_i(insertDatum);
        bit[bitIdx] = 1;
    }
}

The article also shows how to create a Bloom filter with Guava:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
</dependency>

public static BloomFilter<Integer> localBloomFilter =
        BloomFilter.create(Funnels.integerFunnel(), 10000L, 0.01);
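Guava's implementation is the one to use in practice; for readers who want to see the mechanics without adding a dependency, here is a minimal hand‑rolled sketch of the same idea using java.util.BitSet. The class name SimpleBloomFilter and the double‑hashing scheme are illustrative, not from the article:

```java
import java.util.BitSet;

/** Minimal illustrative Bloom filter: k hash positions per key over an m-bit array. */
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m;   // number of bits
    private final int k;   // number of hash functions

    public SimpleBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Double hashing: derive the i-th position from two base hashes of the key.
    private int position(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | 1;            // force odd to spread positions
        return Math.floorMod(h1 + i * h2, m); // map into [0, m)
    }

    public void put(String key) {
        for (int i = 0; i < k; i++) {
            bits.set(position(key, i));
        }
    }

    /** false = definitely absent; true = possibly present (false positives allowed). */
    public boolean mightContain(String key) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(key, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(10_000, 3);
        for (String id : new String[] {"A", "B", "C"}) {
            filter.put(id);
        }
        System.out.println(filter.mightContain("A")); // true: Bloom filters never give false negatives
    }
}
```

A query that returns false can safely skip both Redis and MySQL; a true result still requires the normal cache/DB lookup, because false positives are possible.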

Cache Breakdown happens when a hot key expires, causing a sudden surge of DB queries. Two remedies are presented:

Do not set an expiration for hot data and update the cache synchronously when the DB changes.

Use a mutex (local synchronized or distributed lock) so that only one thread queries the DB while others wait for the cache to be populated.

@Component
public class RedisLockUtil {
    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /** SET NX EX acquires the lock and sets its expiry in one atomic command. */
    public boolean tryLock(String key, String value, long exp) {
        while (true) {
            Boolean absent = stringRedisTemplate.opsForValue().setIfAbsent(key, value, exp, TimeUnit.SECONDS);
            if (Boolean.TRUE.equals(absent)) {
                return true;
            }
            try {
                Thread.sleep(50); // back off and retry instead of unbounded recursion, which risks StackOverflowError
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }

    /** Compare-and-delete so only the lock holder (matching value) releases it.
        Note: this GET + DEL pair is two round trips, not a single atomic step. */
    public void unLock(String key, String value) {
        String s = stringRedisTemplate.opsForValue().get(key);
        if (StrUtil.equals(s, value)) {
            stringRedisTemplate.delete(key);
        }
    }
}
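One caveat worth spelling out: because unLock reads and deletes in two separate round trips, the lock can expire and be re-acquired by another client between the GET and the DEL, and the DEL would then release someone else's lock. The standard fix is a server-side Lua script that compares and deletes atomically. The standalone sketch below (class and key names are illustrative) demonstrates the same compare-and-delete primitive locally with ConcurrentHashMap.remove(key, value); in Spring you would instead wrap the Lua script in a DefaultRedisScript and run it via stringRedisTemplate.execute:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Demonstrates atomic compare-and-delete, the primitive a safe unlock needs. */
public class CompareAndDeleteDemo {
    // Redis equivalent, run server-side as one atomic script:
    // if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end

    public static void main(String[] args) {
        ConcurrentMap<String, String> store = new ConcurrentHashMap<>();
        store.put("doc::info::1::lock", "uuid-A");

        // Holder B must NOT be able to release A's lock: value does not match, nothing is removed.
        boolean removedByB = store.remove("doc::info::1::lock", "uuid-B");

        // Holder A releases its own lock; compare and delete happen as one atomic operation.
        boolean removedByA = store.remove("doc::info::1::lock", "uuid-A");

        System.out.println(removedByB + " " + removedByA); // false true
    }
}
```

The point of the exercise: safety comes from the comparison and the deletion happening as one indivisible step, which only the Redis server (via Lua) can guarantee, never the client.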

Integrating the lock into the service method prevents multiple threads from hammering the DB:

@Override
public DocumentInfo getDocumentDetail(int docId) {
    String redisKey = "doc::info::" + docId;
    boolean mightContain = bloomFilterUtil.existBloomFilterRedis(redisKey);
    if (!mightContain) {
        log.info("==== select from bloomFilter, data not available ====");
        return null;
    }
    String obj = stringRedisTemplate.opsForValue().get(redisKey);
    DocumentInfo documentInfo = null;
    if (StrUtil.isNotEmpty(obj)) {
        log.info("==== select from cache ====");
        documentInfo = JSONUtil.toBean(obj, DocumentInfo.class);
    } else {
        String lockKey = redisKey + "::lock";
        String uuid = UUID.randomUUID().toString();
        if (redisLockUtil.tryLock(lockKey, uuid, 60)) {
            try {
                obj = stringRedisTemplate.opsForValue().get(redisKey);
                if (StrUtil.isNotEmpty(obj)) {
                    documentInfo = JSONUtil.toBean(obj, DocumentInfo.class);
                } else {
                    log.info("==== select from db ====");
                    documentInfo = this.lambdaQuery().eq(DocumentInfo::getId, docId).one();
                    if (ObjectUtil.isNotNull(documentInfo)) {
                        stringRedisTemplate.opsForValue().set(redisKey, JSONUtil.toJsonStr(documentInfo), 5L, TimeUnit.SECONDS);
                    }
                }
            } finally {
                redisLockUtil.unLock(lockKey, uuid);
            }
        }
    }
    return documentInfo;
}

Cache Avalanche occurs when many keys share the same TTL and expire together. The article suggests:

Assign random extra seconds to each key’s TTL.

Avoid setting TTL for some hot data (accepting eventual consistency).

Deploy a high‑availability Redis cluster.

int randomInt = RandomUtil.randomInt(2, 10);
stringRedisTemplate.opsForValue().set(redisKey, JSONUtil.toJsonStr(documentInfo), 5L + randomInt, TimeUnit.SECONDS);

Data Consistency problems arise when cache and DB get out of sync. The article reviews four naive strategies (update cache first, update DB first, delete‑then‑update, update‑then‑delete) and explains why each can fail under failures or concurrency.

It then introduces the delayed double delete pattern: delete the key before writing to the DB, write the DB, and schedule a second delete after a short delay (implemented with a DelayQueue).

@Data
public class DoubleDeleteTask implements Delayed {
    private String key;
    private long time; // execution time (epoch ms)
    public DoubleDeleteTask(String key, long delay) {
        this.key = key;
        this.time = System.currentTimeMillis() + delay;
    }
    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(time - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }
    @Override
    public int compareTo(Delayed o) {
        return Long.compare(this.time, ((DoubleDeleteTask) o).time);
    }
}
@Component
public class DoubleDeleteTaskRunner implements CommandLineRunner {
    @Resource
    private DelayQueue<DoubleDeleteTask> doubleDeleteQueue;
    @Resource
    private StringRedisTemplate stringRedisTemplate;
    private static final int RETRY_COUNT = 3;

    @Override
    public void run(String... args) throws Exception {
        new Thread(() -> {
            while (true) {
                try {
                    DoubleDeleteTask task = doubleDeleteQueue.take();
                    stringRedisTemplate.delete(task.getKey());
                    // retry logic omitted for brevity
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, "double-delete-task").start();
    }
}
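The runner above injects a DelayQueue bean that the article never defines; presumably a configuration method such as `@Bean public DelayQueue<DoubleDeleteTask> doubleDeleteQueue() { return new DelayQueue<>(); }` is assumed. The self-contained sketch below mirrors the article's DoubleDeleteTask to show the queue semantics the pattern relies on: take() blocks until a task's delay has elapsed, which is what defers the second cache delete:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

/** Standalone mirror of the article's DoubleDeleteTask, showing DelayQueue semantics. */
public class DelayQueueDemo {
    static class DeleteTask implements Delayed {
        final String key;
        final long time; // absolute execution time (epoch ms)

        DeleteTask(String key, long delayMs) {
            this.key = key;
            this.time = System.currentTimeMillis() + delayMs;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            // Remaining delay; DelayQueue hands the element out once this reaches <= 0.
            return unit.convert(time - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed o) {
            return Long.compare(time, ((DeleteTask) o).time);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DeleteTask> queue = new DelayQueue<>();
        long start = System.currentTimeMillis();
        queue.add(new DeleteTask("doc::info::1", 200L));

        DeleteTask task = queue.take(); // blocks until at least 200 ms have elapsed
        long waited = System.currentTimeMillis() - start;
        System.out.println(task.key + " delivered after " + waited + " ms");
    }
}
```

Because take() blocks, the consumer thread in the runner sleeps cheaply between tasks instead of polling, which is why DelayQueue fits this pattern better than a timer loop.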

Finally, the service method uses the double‑delete queue after a successful DB update:

@Override
public boolean updateDocument(DocumentInfo documentInfo) {
    String redisKey = "doc::info::" + documentInfo.getId();
    stringRedisTemplate.delete(redisKey); // first delete, before the DB write
    boolean ok = this.updateById(documentInfo);
    doubleDeleteQueue.add(new DoubleDeleteTask(redisKey, 2000L));
    return ok;
}

Through these examples, the article demonstrates how to protect Redis‑backed caches from common pitfalls in high‑concurrency environments, ensuring system stability and data correctness.

Tags: Cache, Concurrency, Redis, Spring Boot, Bloom Filter, Distributed Lock, Cache Invalidation
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
