
Cache Update Strategies: Analyzing Consistency Issues and Practical Solutions

This article examines three common cache‑update strategies, explains why some lead to data inconsistency, and proposes practical solutions such as delayed double‑delete, asynchronous retries, and message‑queue based recovery to maintain cache‑database consistency in high‑concurrency systems.

Architecture Digest

Caching is widely used for the high concurrency and performance it enables, but keeping the cache consistent with the database is challenging. The article first outlines three typical update patterns: (1) update the database then the cache, (2) delete the cache before updating the database, and (3) update the database then delete the cache.

Strategy 1 – Update DB then Cache: This approach is usually rejected because concurrent writes can leave stale data in the cache (e.g., thread A updates the database first, but its cache write lands after thread B's, so B's newer value is overwritten with A's older one), and it wastes resources under write-heavy workloads, which require frequent cache writes that may never be read.
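To make the race concrete, the problematic interleaving can be replayed deterministically. This is a sketch only: the two HashMaps stand in for the real database and Redis, and the four statements reproduce one possible thread schedule, not actual concurrent execution.

```java
import java.util.HashMap;
import java.util.Map;

// Deterministic replay of the "update DB then cache" race:
// thread A's cache write is delayed until after thread B's.
public class UpdateDbThenCacheRace {

    static Map<String, String> db = new HashMap<>();
    static Map<String, String> cache = new HashMap<>();

    static void replayRace() {
        db.put("k", "v1");    // A: writes v1 to the DB, then stalls
        db.put("k", "v2");    // B: writes the newer v2 to the DB...
        cache.put("k", "v2"); // ...and updates the cache
        cache.put("k", "v1"); // A resumes: its stale v1 overwrites v2 in the cache
    }

    public static void main(String[] args) {
        replayRace();
        // The DB holds v2, but the cache now serves stale v1.
        System.out.println("db=" + db.get("k") + " cache=" + cache.get("k"));
    }
}
```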

Strategy 2 – Delete Cache then Update DB: Deleting the cache first opens a race in which a read request fetches the old value from the database and repopulates the cache before the write completes, leaving the cache inconsistent. The article recommends a delayed double-delete technique to mitigate this.

Example code for the delayed double-delete strategy:

    public void write(String key, Object data) throws InterruptedException {
        // First delete: evict any stale value before the database write
        redis.delKey(key);
        db.updateData(data);
        // Wait for in-flight reads that may have re-cached the old value
        Thread.sleep(1000);
        // Second delete: clear anything a concurrent read repopulated
        redis.delKey(key);
    }

The sleep duration should be based on the expected read‑operation latency plus a safety margin; for read‑write‑splitting architectures, add the master‑slave sync delay.
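That sizing rule can be written out as a small helper. The latency figures below are placeholder assumptions for illustration, not measurements; in practice they would come from your own read-path metrics.

```java
public class DoubleDeleteDelay {
    // Assumed upper bound on one read request: DB query + cache write.
    static long readLatencyMs = 600;
    // Safety margin for scheduling jitter (assumed).
    static long safetyMarginMs = 200;
    // Master-slave replication lag; 0 if reads always hit the master (assumed).
    static long replicationLagMs = 300;

    // Delay before the second delete: read latency + margin + sync delay.
    static long secondDeleteDelayMs() {
        return readLatencyMs + safetyMarginMs + replicationLagMs;
    }

    public static void main(String[] args) {
        System.out.println(secondDeleteDelayMs() + " ms"); // 1100 ms
    }
}
```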

If the second delete fails, inconsistency reappears. To handle failures, the article proposes two recovery schemes: (1) push the failed key to a message queue and retry until success, and (2) subscribe to the database binlog (e.g., using Canal), extract the key, and retry deletion via a separate worker.
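Recovery scheme (1) can be sketched as follows. A BlockingQueue stands in for the real message queue and a ConcurrentHashMap for Redis; onDeleteFailed and drainOnce are hypothetical names, and a production worker would loop continuously and re-enqueue keys whose deletes fail again.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class DeleteRetryWorker {
    // Stand-in for the message queue holding keys whose delete failed.
    static BlockingQueue<String> failedKeys = new LinkedBlockingQueue<>();
    // Stand-in for the Redis cache.
    static Map<String, Object> cache = new ConcurrentHashMap<>();

    // Called when the second delete fails: park the key for retry.
    static void onDeleteFailed(String key) {
        failedKeys.offer(key);
    }

    // Worker pass: pull each failed key and retry its delete.
    static void drainOnce() {
        String key;
        while ((key = failedKeys.poll()) != null) {
            cache.remove(key); // in production: redis.delKey(key), re-enqueue on failure
        }
    }

    public static void main(String[] args) {
        cache.put("user:42", "stale");
        onDeleteFailed("user:42");
        drainOnce();
        System.out.println(cache.containsKey("user:42")); // false
    }
}
```

Scheme (2) has the same shape, except the keys arrive from a binlog subscriber (e.g., Canal) instead of the application pushing to the queue.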

Additional safeguards include setting cache TTLs, performing the second delete asynchronously to improve throughput, and implementing retry mechanisms with exponential back‑off.
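An asynchronous second delete with exponential back-off might look like the sketch below. The ConcurrentHashMap again stands in for Redis, and scheduleSecondDelete is a hypothetical helper; doubling the delay on each failed attempt is one simple back-off policy among several.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AsyncDoubleDelete {
    // Stand-in for the Redis cache.
    static Map<String, Object> cache = new ConcurrentHashMap<>();

    // Daemon scheduler so the JVM can exit when main finishes.
    static ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    // Schedule the second delete off the write path;
    // on failure, retry with double the delay (exponential back-off).
    static void scheduleSecondDelete(String key, long delayMs, int attemptsLeft) {
        scheduler.schedule(() -> {
            try {
                cache.remove(key); // in production: redis.delKey(key)
            } catch (RuntimeException e) {
                if (attemptsLeft > 0) {
                    scheduleSecondDelete(key, delayMs * 2, attemptsLeft - 1);
                }
            }
        }, delayMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        cache.put("user:42", "stale");
        scheduleSecondDelete("user:42", 10, 3);
        Thread.sleep(200); // give the scheduled delete time to run
        System.out.println(cache.containsKey("user:42")); // false
    }
}
```

Because the write path only enqueues the task, its latency no longer includes the sleep, which is the throughput gain the article refers to.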

The article concludes by summarizing that the discussed strategies and recovery mechanisms provide a comprehensive view of cache‑DB consistency solutions, referencing the Cache‑Aside pattern, Facebook’s practice, and relevant literature.

Tags: Backend, distributed systems, Cache, Database, consistency, double-delete
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
