
Optimizing Write‑Heavy High‑Concurrency Cache: Lock Granularity, Horizontal Sharding, and Lock‑Free Strategies

The article analyzes write‑mostly, read‑light cache scenarios such as driver location updates and counter increments, discusses lock bottlenecks, proposes horizontal sharding and per‑record locking, explores lock‑free approaches with data‑integrity signatures, and summarizes practical optimization techniques for high‑concurrency back‑end systems.

Qunar Tech Salon

Business Scenario – Many applications have a write‑mostly, read‑light pattern, e.g., frequent driver GPS updates (hundreds of thousands per second) and occasional reads, or high‑frequency counter increments with rare reads.

Underlying Implementation – Typically a Map<key, value> cache stores fixed‑length keys and values, such as Map<driver_id, DriverInfo> or Map<type, count>.

Critical Resource and Locking – Because the map is shared, concurrent access requires locks: WriteLock(m_lock) before writes and ReadLock(m_lock) before reads, followed by the corresponding unlock calls.
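In Go terms, the single‑lock pattern above looks like the following sketch (the DriverInfo fields and the cache API names are illustrative assumptions, not the original implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// DriverInfo is a hypothetical fixed-length record for illustration.
type DriverInfo struct {
	Lat, Lng float64
}

// Cache guards one shared map with one read-write lock, mirroring
// the WriteLock(m_lock)/ReadLock(m_lock) pattern described above.
type Cache struct {
	mu   sync.RWMutex
	data map[int64]DriverInfo
}

func NewCache() *Cache {
	return &Cache{data: make(map[int64]DriverInfo)}
}

// Set takes the exclusive write lock before mutating the shared map.
func (c *Cache) Set(id int64, info DriverInfo) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[id] = info
}

// Get takes the shared read lock; many readers may hold it at once.
func (c *Cache) Get(id int64) (DriverInfo, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	info, ok := c.data[id]
	return info, ok
}

func main() {
	c := NewCache()
	c.Set(42, DriverInfo{Lat: 39.9, Lng: 116.4})
	info, ok := c.Get(42)
	fmt.Println(ok, info.Lat)
}
```

Every writer serializes on the same `mu`, which is exactly the bottleneck the next sections attack.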

Lock Bottleneck – With millions of drivers updating every few seconds, the single lock m_lock can become a severe contention point, limiting scalability.

Horizontal Sharding (Lock Granularity Optimization) – Split the map into N partitions by i = driver_id % N, each with its own lock m_lock[i]. Both lock contention and per‑shard data volume drop to roughly 1/N of the original.
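A minimal sketch of the sharded layout, assuming 16 shards and float64 values purely for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

const numShards = 16 // N; with a power of two, % could become a bit mask

// ShardedCache splits the key space across numShards independent
// maps, each guarded by its own lock, so writers hitting different
// shards never contend with each other.
type ShardedCache struct {
	shards [numShards]struct {
		mu   sync.RWMutex
		data map[int64]float64
	}
}

func NewShardedCache() *ShardedCache {
	c := &ShardedCache{}
	for i := range c.shards {
		c.shards[i].data = make(map[int64]float64)
	}
	return c
}

// shardFor picks the partition: i = driver_id % N, as in the text.
func (c *ShardedCache) shardFor(id int64) int {
	return int(id % numShards)
}

func (c *ShardedCache) Set(id int64, v float64) {
	s := &c.shards[c.shardFor(id)]
	s.mu.Lock()
	s.data[id] = v
	s.mu.Unlock()
}

func (c *ShardedCache) Get(id int64) (float64, bool) {
	s := &c.shards[c.shardFor(id)]
	s.mu.RLock()
	v, ok := s.data[id]
	s.mu.RUnlock()
	return v, ok
}

func main() {
	c := NewShardedCache()
	c.Set(1001, 39.9)
	v, ok := c.Get(1001)
	fmt.Println(ok, v)
}
```

With N shards, two concurrent writers collide only when their IDs land in the same partition, so contention falls roughly in proportion to N.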

Fine‑Grained Lock (Map → Array) – If driver_id is sequential and memory is ample, replace the map with an array indexed by driver_id, with a separate lock per array element: WriteLock(m_lock[index]) and UnWriteLock(m_lock[index]). This achieves near‑row‑level locking, further minimizing conflicts at the cost of maintaining many locks.
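Assuming dense, sequential IDs, the array‑plus‑per‑element‑lock idea might look like this sketch (the capacity constant and float64 value type are placeholders):

```go
package main

import (
	"fmt"
	"sync"
)

const maxDrivers = 1 << 16 // assumes dense, sequential driver IDs

// ArrayCache trades memory for near-row-level locking: one slot and
// one lock per driver_id, so two distinct drivers never contend.
type ArrayCache struct {
	locks [maxDrivers]sync.Mutex
	data  [maxDrivers]float64
}

func (c *ArrayCache) Set(id int, v float64) {
	c.locks[id].Lock() // WriteLock(m_lock[index])
	c.data[id] = v
	c.locks[id].Unlock() // UnWriteLock(m_lock[index])
}

func (c *ArrayCache) Get(id int) float64 {
	c.locks[id].Lock()
	defer c.locks[id].Unlock()
	return c.data[id]
}

func main() {
	c := new(ArrayCache) // heap-allocate: the lock array is large
	c.Set(777, 12.5)
	fmt.Println(c.Get(777))
}
```

The cost is explicit here: maxDrivers mutexes exist whether or not their slots are ever touched, which is why this layout only pays off when IDs are dense and memory is cheap.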

Lock‑Free Cache – Remove locks entirely for counter updates: Array[type]++. This yields maximum throughput but can produce dirty data when multiple threads concurrently write the same fixed‑length region, violating data integrity.

Data Integrity and Signatures – To detect corruption, store a short signature (e.g., 16‑bit CRC) alongside each value. On read, verify the signature; if it mismatches, treat the entry as a cache miss and fall back to the database. This approach enables a lock‑free cache while preserving consistency.
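A sketch of the signature check, substituting Go's standard CRC‑32 for the 16‑bit CRC the text mentions (an illustration of the idea, not the original implementation):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// entry stores a fixed-length value plus a checksum of its bytes.
type entry struct {
	value uint64
	sig   uint32
}

func checksum(v uint64) uint32 {
	var buf [8]byte
	binary.LittleEndian.PutUint64(buf[:], v)
	return crc32.ChecksumIEEE(buf[:])
}

// write stores the value with its signature; no lock is taken.
func write(e *entry, v uint64) {
	e.value = v
	e.sig = checksum(v)
}

// read verifies the signature. A mismatch means a concurrent writer
// tore the entry: treat it as a cache miss and fall back to the DB.
func read(e *entry) (uint64, bool) {
	v, sig := e.value, e.sig
	return v, checksum(v) == sig
}

func main() {
	var e entry
	write(&e, 123456)
	v, ok := read(&e)
	fmt.Println(v, ok)

	e.value = 999 // simulate a torn, partially overwritten entry
	_, ok = read(&e)
	fmt.Println(ok) // corruption detected: treated as a miss
}
```

The signature does not prevent a torn write; it only makes one detectable, turning a silent corruption into an ordinary cache miss that the database can absorb.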

Summary – In ultra‑high‑concurrency, write‑mostly cache scenarios: (1) horizontal sharding lowers lock contention; (2) converting a map to an array provides per‑record locks; (3) removing locks maximizes concurrency but risks consistency; (4) adding signatures safeguards data integrity, achieving a practical lock‑free cache.

Backend · Cache · Concurrency · Sharding · lock-free · lock optimization
Written by

Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.
