Backend Development

Understanding Cache Avalanche, Cache Breakdown, Cache Penetration and Common Caching Patterns

This article explains the concepts of cache avalanche, cache breakdown, and cache penetration, outlines their potential impact on database performance, and presents practical mitigation techniques such as mutex locking, data pre‑warming, multi‑level caches, and popular caching patterns like Cache‑Aside, Read/Write‑Through, and Write‑Behind.

Architect's Tech Stack

1. Cache Avalanche

What is a cache avalanche? When a large number of cached entries expire at the same time, a massive number of requests bypass the cache and hit the database directly, overwhelming its CPU and memory and possibly causing it to crash.

How to prevent it?

1) Mutex locking: use Redis's SETNX command to create a mutex key; only the request that successfully sets the key loads data from the database and repopulates the cache, while the others retry after a short wait.

2) Data pre‑warming: load essential data into the cache immediately after system startup to avoid initial database hits.

3) Dual‑layer cache: maintain a primary cache (C1) with short TTL and a secondary cache (C2) with longer TTL; when C1 expires, C2 can still serve requests.

4) Periodic cache refresh: for data with low freshness requirements, initialize the cache at startup and update it via scheduled jobs.

5) Staggered expiration: assign different TTLs to keys so that they expire at different times, smoothing load.
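Staggered expiration (point 5 above) can be as simple as adding random jitter to the base TTL when writing each key. A minimal sketch, assuming a hypothetical base TTL of 600 seconds and a generic `cache.set` call:

```python
import random

BASE_TTL = 600  # base expiry in seconds (hypothetical value)

def jittered_ttl(base_ttl: int = BASE_TTL, jitter_ratio: float = 0.2) -> int:
    """Return a TTL with random jitter so keys written together
    do not all expire at the same moment."""
    jitter = int(base_ttl * jitter_ratio)
    return base_ttl + random.randint(0, jitter)

# When repopulating the cache, use the jittered TTL, e.g.:
# cache.set(key, value, ex=jittered_ttl())
```

With a 20% jitter ratio, keys written in the same batch expire spread over a two-minute window instead of all at once, smoothing the reload pressure on the database.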

2. Cache Breakdown

What is a cache breakdown? When a hot key expires while many concurrent requests target it, all those requests go directly to the database, creating a sudden spike in load.

Consequences include excessive database traffic and potential service degradation.

The solution is to apply a mutex lock on the first request that detects a miss; subsequent requests wait for the lock, and once the cache is repopulated, they read from it instead of the database.
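The mutex-on-miss idea can be sketched as follows. This is a single-process illustration in which plain dictionaries stand in for Redis, `acquire_mutex` imitates `SET lock:<key> 1 NX`, and `load_from_db` is a hypothetical database read; a real deployment would use Redis commands with an expiry on the lock key:

```python
import threading
import time

cache = {}                # stands in for the Redis data cache
lock_keys = {}            # stands in for SETNX-created lock keys
_guard = threading.Lock() # protects the dicts in this single-process sketch

def acquire_mutex(key: str) -> bool:
    """Imitates Redis: SET lock:<key> 1 NX (succeeds only if absent)."""
    with _guard:
        if key not in lock_keys:
            lock_keys[key] = True
            return True
        return False

def release_mutex(key: str) -> None:
    with _guard:
        lock_keys.pop(key, None)

def load_from_db(key):
    """Hypothetical slow database read."""
    return f"value-for-{key}"

def get_with_mutex(key: str, retries: int = 50, wait: float = 0.01):
    for _ in range(retries):
        if key in cache:
            return cache[key]          # cache hit: normal fast path
        if acquire_mutex(key):         # only one caller rebuilds the entry
            try:
                cache[key] = load_from_db(key)
                return cache[key]
            finally:
                release_mutex(key)
        time.sleep(wait)               # the rest wait briefly and re-check
    return load_from_db(key)           # fallback if the lock holder stalls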

3. Cache Penetration

What is cache penetration? Queries for non‑existent data bypass the cache and repeatedly hit the database, wasting resources.

Mitigation strategies

1) Cache empty values: store a short‑lived placeholder for empty results (e.g., with a 5‑minute TTL) so that repeated lookups for the same missing key stop hitting the database.

2) Bloom filter: maintain a Bloom filter of existing keys; if a key is not present in the filter, return a miss immediately without querying the database.

4. Common Caching Patterns

Cache‑Aside – The application reads from the cache first; on a miss, it fetches from the database, returns the data, and writes it back to the cache. Writes go to the database first, then invalidate the cache.
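The Cache‑Aside read and write paths can be sketched in a few lines. In this illustration, plain dictionaries stand in for Redis and the database, and the key names are hypothetical:

```python
cache = {}                            # stands in for Redis
db = {"user:1": {"name": "Alice"}}    # hypothetical database table

def read(key):
    if key in cache:                  # 1. try the cache first
        return cache[key]
    value = db.get(key)               # 2. on a miss, fetch from the database
    if value is not None:
        cache[key] = value            # 3. write the result back to the cache
    return value

def write(key, value):
    db[key] = value                   # 1. write to the database first
    cache.pop(key, None)              # 2. then invalidate (not update) the cache
```

Invalidating rather than updating on write avoids a race where a concurrent stale read repopulates the cache after the update; the next read simply misses and reloads the fresh value.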

Read/Write‑Through – All reads and writes go through the cache, which synchronously updates the underlying database, reducing the chance of stale data but increasing reliance on cache availability.

Write‑Behind – Writes are performed on the cache and asynchronously persisted to the database, offering high performance at the cost of potential data loss and consistency challenges.
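The Write‑Behind pattern can be sketched with a queue and a background flusher. This is a simplified single-process illustration (dicts stand in for the cache and database; real systems batch writes and add durability guarantees):

```python
import queue
import threading

cache = {}
db = {}
write_queue = queue.Queue()

def write_behind(key, value):
    """Acknowledge the write from the cache; persist to the DB later."""
    cache[key] = value
    write_queue.put((key, value))

def flush_worker():
    """Background thread draining pending writes into the database."""
    while True:
        key, value = write_queue.get()
        if key is None:
            break                 # sentinel used to stop the worker
        db[key] = value           # data is at risk until this line runs
        write_queue.task_done()

worker = threading.Thread(target=flush_worker, daemon=True)
worker.start()
```

The window between `write_behind` returning and the worker persisting the entry is exactly where the pattern's "potential data loss" lives: a crash in that window drops acknowledged writes.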

Source: http://suo.im/5jdgcd

Tags: backend, performance, Cache, Redis, Caching Strategies
Written by Architect's Tech Stack

Java backend, microservices, distributed systems, containerized programming, and more.