Backend Development

How to Choose the Right Distributed Lock: DB, Redis, or ZooKeeper?

This article explains the concept of distributed locks and compares three common implementation approaches—using a database, Redis, and ZooKeeper—detailing their mechanisms, advantages, drawbacks, and suitable scenarios for ensuring consistent access to shared resources in distributed systems.


Distributed locks ensure that only one process or thread accesses a shared resource at any given time in a distributed system, preventing data inconsistency and conflicts. Common implementations rely on databases, Redis, or ZooKeeper.

1. Database‑Based Distributed Lock

The simplest approach creates a lock table with a unique constraint. When a thread wants the lock, it inserts a row; if the row already exists, the lock is held by another thread.

The lock acquisition flow is illustrated below:

Thread A inserts a row and obtains the lock. Thread B later attempts to insert the same key, finds the row already exists, and therefore cannot acquire the lock. After completing its work, Thread A deletes the row to release the lock.

<code>insert into lock_table(`lock_key`, `lock_time`, `lock_duration`, `lock_owner`) values('9875613', now(), 10, 'thread_001');</code>
<code>delete from lock_table where lock_key = '9875613' and lock_owner = 'thread_001';</code>
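The insert-or-fail flow above can be sketched with an in-memory SQLite table standing in for the shared database (table and function names here are illustrative, not from a real library):

```python
import sqlite3

# In-memory SQLite stand-in for the shared database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lock_table ("
    "  lock_key   TEXT PRIMARY KEY,"  # unique constraint enforces mutual exclusion
    "  lock_owner TEXT NOT NULL"
    ")"
)

def try_lock(lock_key: str, owner: str) -> bool:
    """Acquire the lock by inserting a row; a duplicate key means it is held."""
    try:
        conn.execute(
            "INSERT INTO lock_table (lock_key, lock_owner) VALUES (?, ?)",
            (lock_key, owner),
        )
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False

def unlock(lock_key: str, owner: str) -> None:
    """Release the lock by deleting the row, but only if we own it."""
    conn.execute(
        "DELETE FROM lock_table WHERE lock_key = ? AND lock_owner = ?",
        (lock_key, owner),
    )
    conn.commit()

print(try_lock("9875613", "thread_001"))  # True: Thread A gets the lock
print(try_lock("9875613", "thread_002"))  # False: Thread B is blocked
unlock("9875613", "thread_001")
print(try_lock("9875613", "thread_002"))  # True: the lock is free again
```

Deleting by `lock_key` and `lock_owner` together (rather than by row id) ensures a thread can only release a lock it actually holds.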

2. Redis‑Based Distributed Lock

Redis uses the SETNX command (or, more commonly, SET with the NX and EX options in a single atomic command) to set a key only if it does not already exist. The lock value is typically a randomly generated UUID, and the expiry ensures the lock is released automatically if the holder crashes.

When releasing the lock, the thread checks that the stored UUID matches its own before deleting the key; the check and delete must happen atomically (typically via a short Lua script) so that one client never removes another client's lock. While Redis can be used directly, frameworks like Redisson provide higher-level features such as a watchdog that automatically extends the lock's TTL.

Redisson’s watchdog mitigates the difficulty of choosing an appropriate timeout, and its RedLock algorithm reduces the risk of a lock being lost when a master fails before asynchronously replicating it, a known weakness of Redis’s master‑slave (AP) architecture.
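The SETNX-with-UUID protocol can be illustrated with a small in-memory simulation of the two Redis operations the lock needs (the `FakeRedis` class is a teaching stand-in, not a real client; in production you would use a Redis library and a Lua script for the release step):

```python
import time
import uuid

class FakeRedis:
    """In-memory stand-in for the two Redis commands the lock relies on."""

    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl):
        # Mirrors: SET key value NX EX ttl (atomic set-if-absent with expiry).
        entry = self.store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return False  # lock already held and not yet expired
        self.store[key] = (value, time.monotonic() + ttl)
        return True

    def release(self, key, value):
        # Mirrors the Lua check-and-delete script: only the owner may delete.
        entry = self.store.get(key)
        if entry is not None and entry[0] == value:
            del self.store[key]
            return True
        return False

r = FakeRedis()
token_a = str(uuid.uuid4())  # each client's random UUID lock value
token_b = str(uuid.uuid4())

print(r.set_nx_ex("lock:order", token_a, ttl=10))  # True: first client wins
print(r.set_nx_ex("lock:order", token_b, ttl=10))  # False: key already set
print(r.release("lock:order", token_b))            # False: UUID does not match
print(r.release("lock:order", token_a))            # True: owner releases
```

The UUID comparison in `release` is what prevents a client whose lock expired from deleting a lock that has since been acquired by someone else.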

3. ZooKeeper‑Based Distributed Lock

ZooKeeper provides high‑availability coordination via sequential znodes. A client creates an EPHEMERAL‑SEQUENTIAL node under a lock path; the client that holds the smallest sequence number obtains the lock. Others watch the predecessor node and retry when it is deleted.

After completing its work, the lock holder deletes its znode, triggering the next waiting client to acquire the lock. ZooKeeper’s CP model guarantees strong consistency without additional synchronization.
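The smallest-sequence-number protocol can be sketched with an in-memory model of the lock path (the `FakeSequentialLock` class is illustrative; a real client would use ZooKeeper via a library such as Apache Curator or kazoo, with watches on the predecessor node):

```python
import itertools

class FakeSequentialLock:
    """In-memory sketch of ZooKeeper's sequential-znode lock protocol."""

    def __init__(self):
        self._seq = itertools.count()
        self._nodes = {}  # sequence number -> client name

    def create_node(self, client):
        # Mirrors creating an EPHEMERAL_SEQUENTIAL znode under the lock path:
        # ZooKeeper assigns a monotonically increasing sequence number.
        n = next(self._seq)
        self._nodes[n] = client
        return n

    def holder(self):
        # The client whose znode has the smallest sequence number holds the lock.
        return self._nodes[min(self._nodes)] if self._nodes else None

    def delete_node(self, n):
        # Deleting the holder's znode lets the next-smallest client acquire it
        # (in real ZooKeeper, via the watch it set on its predecessor).
        del self._nodes[n]

lock = FakeSequentialLock()
a = lock.create_node("client_a")
b = lock.create_node("client_b")
print(lock.holder())  # client_a
lock.delete_node(a)
print(lock.holder())  # client_b
```

Because the znodes are ephemeral, a crashed client's node disappears when its session expires, so the lock is never orphaned.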

Summary

(1) Database locks are simple to implement but suffer from poor performance, and features such as reentrancy, fairness, and timeouts must be built by hand, making them unsuitable for high‑concurrency scenarios.

(2) Redis stores lock state in memory, offering high throughput but potential consistency issues in master‑slave setups; it is best for simple lock use cases.

(3) ZooKeeper persists lock state to disk and replicates it across an ensemble, providing strong consistency and reliability for complex coordination tasks, though at the cost of lower throughput and higher latency.

Choose the implementation that matches your performance requirements, consistency guarantees, and operational complexity.

Tags: backend, Database, Concurrency, Redis, ZooKeeper, Distributed Lock
Written by Lobster Programming
Sharing insights on technical analysis and exchange, making life better through technology.