
Common Pitfalls and Solutions for Distributed Caching with Redis and Memcached

This article examines the characteristics of Redis and Memcached, identifies typical problems such as cache inconsistency, cache avalanche, hot keys, cache penetration and cache breakdown, and presents practical mitigation strategies, including consistent hashing, binlog-driven cache invalidation, distributed locking, and multi-node replication, to help backend engineers build reliable high-concurrency cache layers.


Preface

In my current work I use two distributed cache technologies, Redis and Memcached, to reduce database pressure in high-concurrency systems. Improper cache design, however, can cause a variety of problems; this article lists the common pitfalls and their solutions for future reference.

1. Server characteristics of the two common cache technologies

1. Memcached server

Memcached (mc) has no built-in clustering; data distribution is handled entirely by the client (e.g., xmemcached), which by default uses simple modulo-based sharding. Modulo sharding invalidates most keys whenever a node is added or removed, which can trigger a cache avalanche, so the client is configured to use Ketama consistent hashing instead:

import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.impl.KetamaMemcachedSessionLocator;
import net.rubyeye.xmemcached.utils.AddrUtil;

XMemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(servers));
builder.setOpTimeout(opTimeout);
builder.setConnectTimeout(connectTimeout);
builder.setTranscoder(transcoder);
builder.setConnectionPoolSize(connectPoolSize);
builder.setKeyProvider(keyProvider);
builder.setSessionLocator(new KetamaMemcachedSessionLocator()); // enable Ketama consistent hashing for data sharding
MemcachedClient client = builder.build();

Consistent hashing limits the impact of node changes, but can distribute keys unevenly across few physical nodes, so each node is mapped to many virtual nodes to balance the ring. mc itself is multithreaded; each value can store at most 1 MB; expired entries are removed lazily on the next access, and LRU eviction reclaims memory when it runs out.
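To make the mechanism concrete, here is a minimal sketch of Ketama-style routing with virtual nodes. It is illustrative only, not xmemcached's actual implementation; the class name `ConsistentHashRouter` and the 160-points-per-node default are my own choices:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative Ketama-style consistent-hash router: each physical node is
// projected onto many virtual points on a hash ring to even out key placement.
public class ConsistentHashRouter {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRouter(List<String> nodes, int virtualNodes) {
        this.virtualNodes = virtualNodes;
        for (String node : nodes) addNode(node);
    }

    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.put(hash(node + "#" + i), node);
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.remove(hash(node + "#" + i));
    }

    // Walk clockwise to the first virtual point at or after the key's hash.
    public String route(String key) {
        Map.Entry<Long, String> e = ring.ceilingEntry(hash(key));
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF); // first 8 digest bytes
            return h;
        } catch (Exception ex) {
            throw new IllegalStateException(ex);
        }
    }
}
```

The key property is visible here: removing a node deletes only that node's virtual points, so only keys that landed on those points get remapped; everything else keeps routing to the same server.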

2. Redis server

Redis supports a cluster mode in which key routing is performed on the server side. Command execution is single-threaded, so heavy commands (e.g., keys) can block all other operations. Redis Cluster does not use consistent hashing; instead it partitions the key space into 16384 hash slots. When nodes are added or removed, slots are reassigned and the cluster migrates the affected data.
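The slot mapping itself is simple: HASH_SLOT = CRC16(key) mod 16384, where CRC16 is the CCITT/XMODEM variant described in the Redis Cluster specification. A sketch (class name `RedisSlot` is mine; the real implementation additionally honors {hash tags}, which is omitted here):

```java
import java.nio.charset.StandardCharsets;

// Key-to-slot mapping used by Redis Cluster: slot = CRC16(key) mod 16384.
public class RedisSlot {
    public static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000.
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }
}
```

Because the slot count is fixed, rebalancing a cluster means moving whole slots between nodes rather than rehashing individual keys.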

2. Cache structure selection

Memcached provides simple k‑v storage (max 1 MB per value) and is suitable for plain text data. Redis offers rich data structures and, despite being single‑threaded, excels at queries, sorting and pagination that have low complexity and short latency, making it ideal for structured cache data.

A typical pattern is to store lightweight identifier‑based data in Memcached and use Redis’s advanced structures (e.g., sorted sets) for indexes such as leaderboards, then combine the two when assembling the final response.
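The assembly step can be sketched in-process. Below, a score-ordered TreeMap stands in for a Redis sorted set and a plain HashMap stands in for Memcached; the class name `LeaderboardPage` and the assumption of unique scores are mine, kept only to keep the sketch short:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Split-storage pattern: a sorted index (Redis sorted set stand-in) holds
// (score, id) pairs; a k-v store (Memcached stand-in) holds per-id payloads.
public class LeaderboardPage {
    static final TreeMap<Long, String> index = new TreeMap<>(Comparator.reverseOrder());
    static final Map<String, String> kvStore = new HashMap<>();

    public static void put(String id, long score, String payload) {
        index.put(score, id);                 // like ZADD leaderboard score id
        kvStore.put("detail:" + id, payload); // like a Memcached set
    }

    // A page query reads ids from the index, then assembles payloads from the k-v store.
    public static List<String> page(int offset, int limit) {
        List<String> out = new ArrayList<>();
        int i = 0;
        for (String id : index.values()) {
            if (i++ < offset) continue;
            if (out.size() == limit) break;
            out.add(kvStore.get("detail:" + id));
        }
        return out;
    }
}
```

With real stores, the index read would be a single ZREVRANGE and the payload fetch a single multi-get, so a page costs two round trips regardless of page number.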

3. Redis large‑index back‑source problem

When a large batch of index data expires, rebuilding the Redis indexes forces heavy back-source traffic (queries that fall back to the database) and can be slow. Using a message queue to construct the indexes incrementally (first the few hot pages, then the rest) reduces the impact on the database.
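A minimal sketch of that staging, with an in-memory deque standing in for the message queue; the class name `IncrementalRebuild` and the page counts are assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;
import java.util.TreeSet;

// Rebuild the first (hot) pages synchronously so users see data right away,
// and queue the remaining pages for a background consumer to rebuild at a
// controlled pace.
public class IncrementalRebuild {
    static final Deque<Integer> pageQueue = new ArrayDeque<>();
    static final Set<Integer> rebuiltPages = new TreeSet<>();

    public static void onIndexExpired(int totalPages, int hotPages) {
        for (int p = 0; p < Math.min(hotPages, totalPages); p++) rebuildPage(p); // immediate
        for (int p = hotPages; p < totalPages; p++) pageQueue.add(p);            // deferred
    }

    // In production this loop would run in an MQ consumer, throttled to protect the DB.
    public static void drainQueue() {
        Integer p;
        while ((p = pageQueue.poll()) != null) rebuildPage(p);
    }

    static void rebuildPage(int page) {
        rebuiltPages.add(page); // placeholder for "query DB, write index entries to Redis"
    }
}
```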

4. Consistency problems

Cache data must stay consistent with the database. The typical flow is: service A updates the DB and deletes the related cache key; service B reads a cache miss, fetches fresh data from the DB, writes it back to the cache, and returns the result. In multi-threaded environments this flow can race, leading to stale reads and lost updates.
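The flow above is the classic cache-aside pattern. A sketch with in-memory maps standing in for the database and the cache (the class name `CacheAside` is mine; a real setup would use JDBC and a Redis/Memcached client):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside: writers update the source of truth then delete the cache key;
// readers fill the cache on a miss.
public class CacheAside {
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // Service A: update the DB first, then invalidate the cache.
    public static void update(String key, String value) {
        db.put(key, value);
        cache.remove(key);
    }

    // Service B: read through the cache, back-sourcing to the DB on a miss.
    public static String read(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = db.get(key);               // cache miss: go back to the database
            if (v != null) cache.put(key, v);
        }
        return v;
    }
}
```

Note that delete-then-fill is deliberately chosen over updating the cache in the write path; the race windows that remain are exactly the cases discussed next.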

4.1 Concurrent read‑write inconsistency

Multiple threads may interleave updates and cache misses, causing the cache to contain older data while the DB holds newer data.

4.2 Master‑slave sync delay inconsistency

If reads are directed to a replica that lags behind the master, a cache miss may fetch stale data even after the master has already cleared the cache.

4.3 Cache pollution inconsistency

Changing cache structures (e.g., adding a new field) without versioning keys can cause pre‑release and production caches to interfere with each other.

5. How to handle cache consistency

Common solutions include:

Binlog + message queue + consumer that deletes the corresponding cache key.

Using the replica’s binlog for change detection when reads are performed on replicas.

Key versioning (e.g., appending _v2 ) whenever the cache schema changes.
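The first and third points can be combined in one sketch: binlog row events (as an MQ consumer would receive them) become cache-key deletions, and keys carry a schema-version suffix so a structure change moves traffic to a fresh namespace. Class and key names here are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Binlog-driven invalidation with versioned cache keys.
public class BinlogInvalidator {
    static final Map<String, String> cache = new HashMap<>();
    static final String KEY_VERSION = "v2"; // bump whenever the cached structure changes

    static String cacheKey(String table, String id) {
        return table + ":" + id + "_" + KEY_VERSION;
    }

    // Called once per row-change event delivered by the MQ consumer.
    public static void onRowChanged(String table, String id) {
        cache.remove(cacheKey(table, id));
    }
}
```

Because invalidation is driven by the binlog rather than by application code, every write path (including ad-hoc SQL) is covered; the trade-off is the replication and MQ delivery lag.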

6. Hit‑rate issues

Frequent data changes generate many binlog messages, causing aggressive cache eviction and low hit rates. Updating the cache directly from the binlog consumer, while preserving message order (single‑threaded or task‑grouped by key/id), can mitigate this.
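Per-key ordering can be sketched as a set of single-threaded lanes: events for the same key always land in the same lane and are applied in order, while different keys still run in parallel. The class name `KeyedOrderedExecutor` and the lane count are assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Route tasks to one of N single-threaded executors by key hash, so updates
// for the same key are applied in submission order.
public class KeyedOrderedExecutor {
    private final ExecutorService[] lanes;

    public KeyedOrderedExecutor(int n) {
        lanes = new ExecutorService[n];
        for (int i = 0; i < n; i++) lanes[i] = Executors.newSingleThreadExecutor();
    }

    public Future<?> submit(String key, Runnable task) {
        int lane = Math.floorMod(key.hashCode(), lanes.length); // same key -> same lane
        return lanes[lane].submit(task);
    }

    public void shutdown() {
        for (ExecutorService e : lanes) e.shutdown();
    }
}
```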

7. Cache penetration

When a request queries a non‑existent key, it repeatedly hits the DB. The usual mitigation is to cache empty results with a short TTL and optionally filter obviously invalid IDs before querying.
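Null caching can be sketched with a sentinel value; in Redis the sentinel entry would get a short TTL (e.g., SETEX with 60 seconds), which the in-memory stand-in below omits. The class name `NullCaching` and the sentinel string are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Cache the miss too: a lookup for a non-existent id stores a sentinel so
// repeated lookups stop hammering the database.
public class NullCaching {
    static final String NULL_SENTINEL = "<EMPTY>";
    static final Map<String, String> cache = new HashMap<>();
    static final Map<String, String> db = new HashMap<>();
    static int dbQueries = 0;

    public static String read(String key) {
        String v = cache.get(key);
        if (v != null) return NULL_SENTINEL.equals(v) ? null : v;
        dbQueries++;                 // only a true cache miss reaches the DB
        v = db.get(key);
        cache.put(key, v == null ? NULL_SENTINEL : v);
        return v;
    }
}
```

The short TTL matters: it bounds how long a freshly created row can be masked by a cached miss.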

8. Cache breakdown (stampede)

When a hot key expires, many concurrent requests may flood the DB. Adding a mutex or a distributed lock around the back‑source method limits the number of simultaneous DB hits.
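A per-key mutex around the back-source path can be sketched as follows; only the lock holder queries the DB, and everyone else re-checks the cache after acquiring the lock. The class name `StampedeGuard` and the hard-coded "db-value" are placeholders; a cross-process setup would use a distributed lock (e.g., a Redis SET NX EX key) instead of a JVM monitor:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Per-key mutex with a double-check: at most one thread back-sources a key.
public class StampedeGuard {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();
    static final AtomicInteger dbHits = new AtomicInteger();

    public static String read(String key) {
        String v = cache.get(key);
        if (v != null) return v;
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            v = cache.get(key);           // double-check: another thread may have refilled
            if (v == null) {
                dbHits.incrementAndGet(); // stand-in for the real DB query
                v = "db-value";
                cache.put(key, v);
            }
        }
        return v;
    }
}
```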

9. Cache avalanche

A sudden massive expiration of keys overwhelms the DB. Preventive measures include high‑availability cache clusters, consistent‑hashing‑based sharding, and rate‑limiting back‑source traffic.
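The rate-limiting measure can be sketched with a bounded semaphore on the back-source path: requests that cannot get a permit fail fast (or could serve stale/default data) instead of piling onto the database. The class name, permit count, and degrade behavior are assumptions:

```java
import java.util.concurrent.Semaphore;

// Cap concurrent back-source queries so an avalanche degrades the cache layer,
// not the database.
public class BackSourceLimiter {
    static final Semaphore permits = new Semaphore(10); // at most 10 concurrent DB queries

    public static String read(String key) {
        if (!permits.tryAcquire()) {
            return null; // degrade: caller can retry, queue, or serve a default
        }
        try {
            return queryDb(key);
        } finally {
            permits.release();
        }
    }

    static String queryDb(String key) {
        return "value-of-" + key; // stand-in for the real DB query
    }
}
```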

10. Hot‑key problem

Hot keys concentrate traffic on a single cache node, risking node failure. Solutions are to create multiple cache replicas for the hot key and to add a short‑lived local cache layer.
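Replication of a hot key can be sketched by writing the same value under several suffixed keys (which hash to different cache nodes) and having each reader pick a replica at random. The class name `HotKeyReplicas` and the replica count are assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Spread a hot key's traffic: write N suffixed copies, read a random one.
public class HotKeyReplicas {
    static final int REPLICAS = 3;
    static final Map<String, String> cache = new HashMap<>();
    static final Random rnd = new Random();

    public static void write(String key, String value) {
        for (int i = 0; i < REPLICAS; i++) cache.put(key + "#" + i, value); // one copy per node
    }

    public static String read(String key) {
        return cache.get(key + "#" + rnd.nextInt(REPLICAS)); // pick a random replica
    }
}
```

The cost is write amplification and a wider invalidation fan-out, which is why this is usually reserved for a small, explicitly identified set of hot keys, often combined with a short-lived local cache in each application instance.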

Final note

If this article helped you, please like, share, and follow the "Code Ape Tech Column" public account for more PDFs and community discussions.

Tags: backend, distributed systems, Redis, cache consistency, Memcached
Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
