Backend Development · 8 min read

Ensuring Consistency Between Cache and Database During Dual Writes

The article examines how to maintain data consistency between caches and databases during dual-write operations, categorizes data into three levels, evaluates four update strategies, and proposes solutions such as delayed double deletion, message‑queue compensation, and binlog‑driven cache synchronization.


The author, a senior architect, discusses the problem of keeping cache and database data consistent when both are written simultaneously. Data is divided into three levels based on real‑time requirements: Level 1 (order and payment flow) is written directly to the database without caching; Level 2 (user‑related data) uses Redis caching; Level 3 (payment configuration) uses local‑memory caching.
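The three-tier scheme above can be expressed as a simple routing table. This is an illustrative sketch: the data-kind names and the `cache_backend` helper are assumptions, not from the article.

```python
# Illustrative mapping of data levels to caching strategies,
# following the three-tier scheme described in the article.
DATA_TIERS = {
    "order_payment_flow": {"level": 1, "cache": None},           # DB only, no cache
    "user_profile":       {"level": 2, "cache": "redis"},        # shared Redis cache
    "payment_config":     {"level": 3, "cache": "local_memory"}, # per-process cache
}

def cache_backend(data_kind: str):
    """Return the cache backend for a data kind, or None for DB-only data."""
    return DATA_TIERS[data_kind]["cache"]

print(cache_backend("user_profile"))  # redis
```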

Four common dual‑write strategies are listed:

1. Update the database first, then update the cache.

2. Update the database first, then delete the cache.

3. Update the cache first, then update the database.

4. Delete the cache first, then update the database.

The article analyses the drawbacks of each approach. Updating the cache after the database can cause heavy cache‑update overhead when writes are frequent and reads are rare. Deleting the cache after the database may leave stale data if the delete fails. Deleting the cache before the database can lead to a race condition where a read request repopulates the cache with stale data before the database transaction commits. The author illustrates this with request‑A (write) and request‑B (read) scenarios.
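The request-A/request-B race can be made concrete with a deterministic interleaving. The sketch below simulates "delete the cache first, then update the database": the reader slips in between A's two steps, and the cache ends up stale. The `balance` key and values are illustrative.

```python
# Simulation of the stale-read race under "delete cache first, then update DB".
cache = {}
db = {"balance": 100}

# Request A (write), step 1: delete the cache entry.
cache.pop("balance", None)

# Request B (read): cache miss -> load the OLD value from the DB
# and repopulate the cache before A's database write commits.
value = cache.get("balance")
if value is None:
    value = db["balance"]        # still the old value: 100
    cache["balance"] = value

# Request A, step 2: the database update finally commits.
db["balance"] = 200

print(cache["balance"], db["balance"])  # 100 200 -> cache is now stale
```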

To mitigate these issues, the author suggests several solutions:

Delayed double‑delete: delete the cache twice with a short delay to reduce the window of inconsistency.
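A minimal sketch of delayed double-delete, using a timer for the second deletion; the function name, the in-memory `dict` stand-ins for cache and DB, and the default delay are assumptions for illustration.

```python
import threading

def update_with_delayed_double_delete(cache, db, key, new_value, delay_s=0.5):
    """Delete the cache entry, update the DB, then delete the cache again
    after a short delay to evict any stale value a concurrent read
    repopulated in the meantime."""
    cache.pop(key, None)                # first delete
    db[key] = new_value                 # database write
    timer = threading.Timer(delay_s, cache.pop, args=(key, None))
    timer.start()                       # second, delayed delete
    return timer
```

The delay should be long enough to cover one read-and-repopulate round trip, which is why the method only narrows the inconsistency window rather than eliminating it.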

Message‑queue compensation: if cache deletion fails, send the cache key to a message queue and retry deletion asynchronously.
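A sketch of the compensation pattern, with a local `queue.Queue` standing in for a real message queue; the function names and the retry cap are assumptions.

```python
import queue

retry_queue = queue.Queue()  # stand-in for a real message queue

def delete_cache_with_compensation(cache, key):
    """Try to delete the cache entry; on failure, enqueue the key
    so a consumer can retry the deletion asynchronously."""
    try:
        cache.delete(key)
    except Exception:
        retry_queue.put(key)

def retry_worker(cache, max_attempts=3):
    """Consumer side: drain the queue and retry failed deletions,
    requeueing on failure up to a bounded number of attempts."""
    for _ in range(max_attempts):
        if retry_queue.empty():
            return
        key = retry_queue.get()
        try:
            cache.delete(key)
        except Exception:
            retry_queue.put(key)
```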

Binlog‑driven cache updates: subscribe to MySQL binlog events and update or invalidate the cache based on actual database changes, avoiding intrusive code changes.
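Conceptually, the binlog subscriber receives row-change events and evicts the matching cache keys. Real deployments typically use a middleware such as Canal or Debezium to parse the binlog; the event `dict` shape, table name, and `user:{id}` key format below are assumptions for illustration.

```python
# Conceptual binlog-driven invalidation: a subscriber receives row-change
# events and evicts the matching cache keys, so the cache tracks actual
# database changes without touching application write paths.
def handle_binlog_event(event, cache):
    """Invalidate cache entries for rows changed in the database."""
    if event["table"] == "users" and event["type"] in ("UPDATE", "DELETE"):
        cache.pop(f"user:{event['row']['id']}", None)

cache = {"user:42": {"name": "old"}}
handle_binlog_event({"table": "users", "type": "UPDATE", "row": {"id": 42}}, cache)
print(cache)  # {}
```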

Each solution has trade‑offs: the double‑delete method adds minimal complexity, the message‑queue approach introduces additional infrastructure, and the binlog subscription increases system complexity but provides a clean separation of concerns. The author concludes that the optimal strategy depends on the specific business scenario, and there is no one‑size‑fits‑all solution.

Tags: backend, cache, Redis, MySQL, Message Queue, database consistency
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
