
Cache Consistency Strategies for Database and Redis: Tiered Storage and Synchronization Techniques

The article examines tiered data storage and evaluates four cache‑synchronization strategies—updating the database before the cache, deleting the cache before updating the database, updating the cache before the database, and deleting the cache after a database update—highlighting their trade‑offs and practical solutions such as delayed double deletion, message‑queue retries, and binlog‑driven cache updates.

IT Architects Alliance

When optimizing a system, the author proposes tiered storage based on how fresh each kind of data must be, dividing data into three levels: level 1 (order and payment flow data, read directly from the database with no cache), level 2 (user‑related data, cached in Redis), and level 3 (payment configuration data, cached in local process memory).
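The three levels above can be sketched as a simple routing table. This is a minimal illustration, not the author's implementation: the category names and the mapping are assumptions chosen to mirror the examples in the text.

```python
from enum import Enum

class Tier(Enum):
    DB_ONLY = 1   # level 1: order/payment flow data, read straight from the DB
    REDIS = 2     # level 2: user-related data, cached in Redis
    LOCAL = 3     # level 3: payment configuration, cached in process memory

# Hypothetical mapping from data category to storage tier; the names
# are illustrative, taken from the article's three examples.
TIER_BY_CATEGORY = {
    "order": Tier.DB_ONLY,
    "payment_flow": Tier.DB_ONLY,
    "user_profile": Tier.REDIS,
    "payment_config": Tier.LOCAL,
}

def storage_for(category: str) -> Tier:
    """Route a read to the storage layer its freshness requirement allows."""
    return TIER_BY_CATEGORY[category]
```

In a real system the routing decision is usually baked into each data-access module rather than looked up in a table, but the table makes the tiering policy explicit in one place.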

Using any cache—whether local memory or Redis—introduces data synchronization problems, causing inconsistencies between the database and the cache.

The article lists four common consistency strategies: (1) update the database first, then update the cache; (2) update the database first, then delete the cache; (3) update the cache first, then update the database; (4) delete the cache first, then update the database. Each strategy is examined in detail.

Strategy 1 (update DB → update cache) is rarely used: in many business scenarios the cached value must be computed from several data sources, so every write forces an expensive cache recomputation. Under heavy write traffic this causes severe performance loss, as illustrated by a scenario where a value is incremented ten times without a single intervening read, wasting nine of the ten cache computations.
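The ten-increments example can be made concrete with a short sketch. The dicts below are in-memory stand-ins for the database and the cache (a real system would use MySQL and Redis), and `expensive_cache_value` is a hypothetical placeholder for a value that must be computed from several sources.

```python
# In-memory stand-ins for the database and cache.
db = {"counter": 0}
cache = {}
recompute_calls = 0

def expensive_cache_value(n: int) -> str:
    """Stand-in for a cache value computed from several data sources."""
    global recompute_calls
    recompute_calls += 1
    return f"rendered:{n}"

def write_update_db_then_update_cache(delta: int) -> None:
    db["counter"] += delta                                    # 1. update the DB
    cache["counter"] = expensive_cache_value(db["counter"])   # 2. recompute + update cache

for _ in range(10):   # ten writes, zero reads in between
    write_update_db_then_update_cache(1)

# The expensive value was computed ten times, but only the last result
# could ever be read: nine of the ten computations were wasted work.
```

Deleting the cache on write instead (strategy 2) defers the expensive computation to the next read, so it is paid at most once per read burst rather than once per write.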

Strategy 3 (update cache → update DB) shares the same drawbacks as strategy 1 and is likewise not recommended.

Strategy 4 (delete cache → update DB) can lead to a race condition: request A deletes the cache and begins updating the DB; before A's update commits, request B misses the cache, reads the old value from the DB, and repopulates the cache with stale data. A delayed double delete (delete, update, wait briefly, delete again) is suggested to mitigate this, though it can still be defeated by replication lag in a master‑slave MySQL setup, where a read served by a lagging replica refills the cache with the old value even after the second delete's window.
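The delayed double delete can be sketched as follows. This is a minimal single-process illustration with dicts standing in for MySQL and Redis; the delay constant is an assumed tuning knob that must exceed one read-plus-refill round trip, and the "racing reader" is simulated by hand.

```python
import threading
import time

# In-memory stand-ins for the database and the cache.
db = {"user:1": "old"}
cache = {"user:1": "old"}
REFILL_WINDOW = 0.05   # assumption: longer than a read + cache-refill round trip

def read(key: str) -> str:
    if key not in cache:          # cache miss: repopulate from the DB
        cache[key] = db[key]
    return cache[key]

def delayed_double_delete(key: str, new_value: str) -> None:
    cache.pop(key, None)          # 1. first delete, before the DB write
    db[key] = new_value           # 2. update the database
    def second_delete() -> None:
        time.sleep(REFILL_WINDOW)  # 3. wait out any in-flight stale refill
        cache.pop(key, None)       # 4. second delete evicts the stale repopulation
    threading.Thread(target=second_delete, daemon=True).start()

delayed_double_delete("user:1", "new")
cache["user:1"] = "old"           # simulate a racing reader refilling stale data
time.sleep(REFILL_WINDOW * 3)     # by now the second delete has fired
```

After the window elapses, the stale entry injected by the simulated reader has been evicted, and the next read refills the cache from the updated database. Note that if reads can be served by a lagging MySQL replica, even the second delete may be followed by a stale refill, which is the residual weakness the article points out.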

Strategy 2 (update DB → delete cache) may leave stale data in the cache if the deletion step fails. The author proposes retrying via a message queue: the Redis key is published as a message, and a consumer deletes it later until the operation succeeds. An alternative is to subscribe to MySQL binlog events and drive cache updates directly from the change log, which removes the need for explicit delete calls in application code.
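The message-queue retry can be sketched as below. A `queue.Queue` stands in for a real message broker, and the first delete is made to fail artificially to exercise the retry path; all names here are illustrative, not the author's code.

```python
import queue

cache = {"user:1": "stale"}
retry_queue = queue.Queue()   # stand-in for a real message queue

fail_next_delete = True       # artificially fail the first delete attempt

def delete_from_cache(key: str) -> None:
    global fail_next_delete
    if fail_next_delete:
        fail_next_delete = False
        raise ConnectionError("Redis unreachable")
    cache.pop(key, None)

def update_db_then_delete_cache(key: str) -> None:
    # ... database update committed here ...
    try:
        delete_from_cache(key)
    except ConnectionError:
        retry_queue.put(key)          # enqueue the key for a later retry

def retry_consumer() -> None:
    while not retry_queue.empty():
        key = retry_queue.get()
        try:
            delete_from_cache(key)    # retry the failed deletion
        except ConnectionError:
            retry_queue.put(key)      # requeue until it succeeds

update_db_then_delete_cache("user:1")  # first delete fails, key is queued
retry_consumer()                       # consumer retries and clears the stale entry
```

The binlog-driven alternative moves the same responsibility out of application code entirely: a subscriber (for example, a change-data-capture component) reads row-change events from the log and updates or evicts the corresponding cache keys.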

In summary, each consistency method has advantages and disadvantages; the choice depends on specific business requirements, system complexity, and tolerance for coupling. There is no universally optimal solution—only the most suitable one for a given context.

Tags: backend, Redis, caching, MySQL, message queue, database consistency
Written by IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
