
Ensuring Data Consistency Between Cache and Database in Double‑Write Scenarios

The article analyzes the challenges of maintaining data consistency when using both cache (local memory or Redis) and a database, classifies data by real‑time requirements, evaluates four double‑write strategies, and proposes practical solutions such as delayed double deletion, message‑queue compensation, and binlog‑driven cache updates.

Top Architect

In system optimization, data can be tiered based on real‑time requirements: Level 1 (order and payment flow) requires immediate consistency and bypasses cache; Level 2 (user‑related data) is read‑heavy and cached in Redis; Level 3 (payment configuration) is small, rarely changed, and cached in local memory.
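The tiering above can be sketched as a simple routing table. This is a minimal illustration, not code from the article; the data kinds (`order`, `user_profile`, `payment_config`) and the `read_path` helper are hypothetical names:

```python
from enum import Enum

class DataTier(Enum):
    LEVEL_1 = 1  # order/payment flow: strong consistency, bypass cache
    LEVEL_2 = 2  # user-related data: read-heavy, cache in Redis
    LEVEL_3 = 3  # payment configuration: small, rarely changed, local memory

# Hypothetical mapping from data kind to tier.
TIER_BY_KIND = {
    "order": DataTier.LEVEL_1,
    "user_profile": DataTier.LEVEL_2,
    "payment_config": DataTier.LEVEL_3,
}

def read_path(kind: str) -> str:
    """Return which store a read for this data kind should hit first."""
    tier = TIER_BY_KIND[kind]
    if tier is DataTier.LEVEL_1:
        return "database"      # no cache at all for Level 1
    if tier is DataTier.LEVEL_2:
        return "redis"         # fall back to the database on a miss
    return "local_memory"      # refreshed only when configuration changes
```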

Using any cache introduces the risk of cache‑database inconsistency, especially during double‑write operations. The article lists four common strategies and discusses their pros and cons.

Solution Strategies

Update the database first, then update the cache.

Update the database first, then delete the cache.

Update the cache first, then update the database.

Delete the cache first, then update the database.

Update Database First, Then Update Cache

This approach is rarely used because many cached values are derived from complex calculations; updating them after every write incurs high overhead, especially when write traffic is heavy and read traffic is low.

Update Cache First, Then Update Database

This ordering shares the drawbacks of the previous approach, with an extra failure mode: if the subsequent database update fails, the cache holds a value the database never received. It is not recommended.

Delete Cache First, Then Update Database

When the cache is deleted first, a concurrent read can miss the cache, fetch the old value from the database before the update commits, and write that stale value back into the cache, where it then persists. A common mitigation is the delayed double‑delete strategy: delete the cache, update the database, then delete the cache again after a short delay.
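A minimal sketch of delayed double deletion, using plain dicts as stand-ins for Redis and MySQL and a `threading.Timer` for the second delete. `DELAY_SECONDS` is an illustrative value; in practice it should exceed a typical read round-trip so the second delete lands after any stale re-population:

```python
import threading

cache = {}  # stand-in for Redis
db = {}     # stand-in for MySQL

DELAY_SECONDS = 0.1  # illustrative; tune to exceed a read round-trip

def delayed_double_delete(key, value):
    cache.pop(key, None)   # first delete, before the write
    db[key] = value        # database update
    # Second delete after a delay, to evict any stale value that a
    # concurrent reader re-populated in the window above.
    timer = threading.Timer(DELAY_SECONDS, lambda: cache.pop(key, None))
    timer.start()
    return timer
```

A concurrent reader that writes a stale value into the cache between the two deletes is cleaned up by the timer's second delete.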

However, in a MySQL master‑slave setup, replication lag can still cause stale reads even with delayed double deletion: request A deletes the cache and writes to the master, while request B misses the cache, reads from a replica that has not yet applied the change, and writes the outdated value back into the cache.

Update Database First, Then Delete Cache

If the cache deletion fails after a successful database update, subsequent reads will return stale data. The proposed remedy is to use a message queue for retrying the deletion, sending the Redis key as the message payload.
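A sketch of the message‑queue compensation, with a `deque` standing in for a real broker such as Kafka or RocketMQ. `delete_with_fallback`, `drain_queue`, and `MAX_RETRIES` are hypothetical names for illustration:

```python
from collections import deque

cache = {"user:42": "stale"}
MAX_RETRIES = 3

# Stand-in for a message queue carrying Redis keys to delete.
delete_queue = deque()

def delete_with_fallback(key, delete_fn):
    """Try to delete the cache key; on failure, enqueue it for retry."""
    try:
        delete_fn(key)
    except ConnectionError:
        delete_queue.append((key, 0))

def drain_queue(delete_fn):
    """Consumer: retry queued deletions up to MAX_RETRIES attempts."""
    while delete_queue:
        key, attempts = delete_queue.popleft()
        try:
            delete_fn(key)
        except ConnectionError:
            if attempts + 1 < MAX_RETRIES:
                delete_queue.append((key, attempts + 1))
            # else: alert operators; the key stays stale until TTL expiry
```

In a real deployment the producer and consumer live in separate processes, and a cache TTL acts as the last line of defense if all retries fail.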

To avoid heavy code coupling, the article suggests subscribing to MySQL binlog events and updating the cache based on those logs, though this adds system complexity.
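A sketch of a binlog‑driven cache invalidator, assuming row‑change events have already been parsed into dicts (real deployments typically tail MySQL's binlog with a connector such as Canal or Debezium); the event shape and the `users` table are assumptions for illustration:

```python
cache = {"user:1": "old-name"}

def apply_binlog_event(event):
    """Invalidate the cache entry for the changed row.

    `event` is an assumed pre-parsed shape:
    {"table": ..., "type": "INSERT"|"UPDATE"|"DELETE", "row": {...}}
    """
    if event["table"] != "users":
        return
    key = f"user:{event['row']['id']}"
    # Delete on every change type; refreshing with the new row value
    # is an alternative when the cached form is cheap to rebuild.
    cache.pop(key, None)

apply_binlog_event({"table": "users", "type": "UPDATE", "row": {"id": 1}})
```

Because the invalidator consumes the database's own change stream, application code never has to remember to touch the cache, which is the decoupling the article describes.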

Summary

Each strategy has trade‑offs: deleting the cache before updating the database requires forcing reads to the master to avoid stale data; using message‑queue compensation introduces additional infrastructure; binlog‑driven cache updates provide decoupling at the cost of increased complexity. The optimal choice depends on specific business requirements, and no single technique fits all scenarios.

Tags: backend, cache, Redis, data synchronization, double write, database consistency
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as evolving architectures with internet technologies. Architects who enjoy thinking and sharing are welcome to exchange ideas and learn together.
