Fundamental Distributed System Consistency Protocols: CAP, ACID, BASE, 2PC, 3PC, Paxos, Raft, Gossip, NWR, Quorum, Lease
This article explains the core principles and a range of consistency protocols—including CAP, ACID, BASE, 2PC, 3PC, Paxos, Raft, Gossip, NWR, Quorum, and Lease—detailing their roles, mechanisms, trade‑offs, and typical use cases in distributed systems.
Consistency in distributed systems falls on a spectrum from weak to strong; many modern protocols target eventual consistency, a weak-consistency model in which replicas converge once updates stop arriving.
01. Basic Principles and Theories
CAP (Consistency, Availability, Partition tolerance) states that a distributed system can guarantee at most two of the three properties at once. Because network partitions are unavoidable in practice, partition tolerance (P) is mandatory, so designers balance consistency (C) against availability (A).
ACID (Atomicity, Consistency, Isolation, Durability) describes transactional guarantees with strong consistency; it is typical of single-node relational databases and, in CAP terms, corresponds to CP systems.
BASE (Basically Available, Soft state, Eventually consistent) reflects the practice of large-scale internet systems that trade strong consistency for availability, corresponding to AP systems.
02. 2PC
Two‑Phase Commit involves a coordinator and participants. In the voting phase the coordinator asks participants to prepare; participants write undo and redo logs and reply Yes or No. In the commit phase the coordinator sends commit or rollback based on the replies. 2PC provides atomicity but suffers from a single point of failure, blocking, and possible inconsistency.
Optimizations include timeout cancellation and mutual‑inquiry mechanisms, yet the blocking problem remains. 2PC is widely used in relational databases (e.g., MySQL XA).
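As a rough sketch of the two phases described above (class and method names are illustrative, not taken from MySQL XA or any real library):

```python
# Minimal two-phase commit sketch: a coordinator polls participants
# in a voting phase, then commits only on unanimous Yes votes.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: write undo/redo logs, then vote Yes (True) or No (False).
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Voting phase: every participant must vote Yes.
    if all(p.prepare() for p in participants):
        # Commit phase: unanimous Yes, so tell everyone to commit.
        for p in participants:
            p.commit()
        return "committed"
    # Any No vote (or missing reply, in a real system) triggers rollback.
    for p in participants:
        p.rollback()
    return "aborted"
```

The blocking problem shows up exactly between the two phases: a participant that has voted Yes holds its locks until the coordinator's decision arrives.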
03. 3PC
Three‑Phase Commit splits the protocol into canCommit, preCommit, and doCommit phases and adds timeouts on both sides. A participant that has reached the pre‑committed state commits by default when the coordinator fails, which reduces blocking, though network partitions can still leave participants inconsistent.
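The participant side can be sketched as a small state machine (state and message names are illustrative):

```python
# Illustrative 3PC participant state machine. The key difference from
# 2PC is the timeout transition out of the pre-committed state.

def advance(state, message):
    """Advance a participant through canCommit -> preCommit -> doCommit."""
    transitions = {
        ("init", "canCommit"): "ready",
        ("ready", "preCommit"): "pre-committed",
        ("pre-committed", "doCommit"): "committed",
        # On coordinator timeout, a pre-committed participant commits
        # by default instead of blocking on the coordinator's decision.
        ("pre-committed", "timeout"): "committed",
        # A participant that never saw preCommit safely aborts on timeout.
        ("ready", "timeout"): "aborted",
    }
    return transitions.get((state, message), state)
```

The two timeout rows are what let 3PC make progress without the coordinator; they are also why a partition can still split participants between commit and abort.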
04. Paxos
Paxos is a complete consensus algorithm with the roles Proposer, Acceptor, and Learner. A proposer drives a value through Prepare and Accept phases, and learners learn the chosen value; agreement is reached once a majority of acceptors accept, so the protocol tolerates failures as long as a majority of nodes remain alive.
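The acceptor's rules for the Prepare and Accept phases are compact enough to sketch directly (a single-decree acceptor; names are illustrative):

```python
# Minimal single-decree Paxos acceptor: promise to ignore lower-numbered
# proposals, and report any value already accepted so a new proposer
# must re-propose it.

class Acceptor:
    def __init__(self):
        self.promised = -1         # highest proposal number promised
        self.accepted_n = -1       # proposal number of the accepted value
        self.accepted_value = None

    def prepare(self, n):
        # Phase 1: promise not to accept proposals numbered below n,
        # returning any previously accepted (number, value) pair.
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted_n, self.accepted_value)
        return ("reject", None, None)

    def accept(self, n, value):
        # Phase 2: accept unless a higher-numbered promise was made.
        if n >= self.promised:
            self.promised = n
            self.accepted_n = n
            self.accepted_value = value
            return "accepted"
        return "rejected"
```

A value is chosen once a majority of acceptors have accepted it; the promise rule is what prevents two proposers from getting conflicting values chosen.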
Google’s Chubby is built on Paxos, and ZooKeeper’s ZAB protocol is closely related to Paxos.
05. Raft
Raft offers guarantees comparable to Paxos but is designed to be easier to understand. Nodes take one of three roles: Leader, Follower, or Candidate. The leader replicates log entries to followers; once a majority acknowledges an entry, it is committed, providing strong consistency.
Raft is used in projects such as CockroachDB and TiKV.
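The majority-acknowledgement rule can be sketched as simple arithmetic over the leader's bookkeeping (names are illustrative; real Raft additionally requires the entry to be from the leader's current term):

```python
# Sketch of Raft's commit rule: the leader advances its commit index to
# the highest log index stored on a majority of the cluster.

def commit_index(match_index, total_nodes):
    """match_index: highest replicated log index per node, including
    the leader's own last index. Returns the committable index."""
    ranked = sorted(match_index, reverse=True)
    majority = total_nodes // 2 + 1
    # The majority-th highest value is held by at least `majority` nodes.
    return ranked[majority - 1]
```

For example, in a 5-node cluster where nodes have replicated up to indices 5, 5, 3, 2, 1, only index 3 is on a majority, so only entries up to 3 are committed.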
06. Gossip
Gossip is a decentralized protocol where every node periodically exchanges state with a random peer, eventually converging to a consistent view without a central coordinator.
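A toy push-style gossip round illustrates the convergence behavior (node state is modeled as a set of known updates; this is a simulation, not a wire protocol):

```python
# Toy push gossip: each round, every node shares its known state with
# one random peer. Repeated rounds spread any update to all nodes.
import random

def gossip_round(states):
    """states: list of sets, one per node, holding known updates."""
    for i in range(len(states)):
        peer = random.randrange(len(states))
        if peer != i:
            states[peer] |= states[i]   # push local knowledge to the peer

def converged(states):
    # Eventual consistency here means all nodes hold the same set.
    return all(s == states[0] for s in states)
```

Because each informed node infects roughly one peer per round, the expected number of rounds to full convergence grows only logarithmically with cluster size.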
07. NWR Mechanism
NWR defines the number of replicas (N), the number of replicas that must acknowledge a write (W), and the number that must answer a read (R). When W + R > N, every read quorum overlaps every write quorum, so a read always sees the latest acknowledged write (strong consistency); otherwise only eventual consistency is guaranteed.
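The rule is plain arithmetic and can be checked for any configuration (generic sketch, not tied to a specific store):

```python
# Classify an NWR configuration: W + R > N forces read/write quorum
# overlap, so every read touches at least one up-to-date replica.

def consistency_mode(n, w, r):
    if w + r > n:
        return "strong"
    return "eventual"
```

For example, N=3, W=2, R=2 is a common strongly consistent setting, while N=3, W=1, R=1 trades consistency for lower latency.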
08. Quorum Mechanism
The quorum mechanism (essentially NWR) requires that the read and write quorum sizes together exceed the number of replicas (R + W > N) and that the write quorum exceed half of the replicas (W > N/2). The second condition guarantees that any two write quorums intersect, which serializes updates.
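The intersection argument behind W > N/2 is a counting one, sketched below (illustrative helper, not from any library):

```python
# If W > N/2, two disjoint write quorums would need 2*W <= N replicas
# in total, which is impossible -- so any two write quorums overlap.

def write_quorums_intersect(n, w):
    return 2 * w > n
```

The overlapping replica sees both writes, so it can order them, which is what makes concurrent updates serializable.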
09. Lease Mechanism
The lease mechanism grants time-bounded ownership of data to slaves: clients may read from a slave only while its lease is valid, and the master renews or revokes leases (or simply lets them expire) before changing the data, avoiding split-brain reads of stale replicas.
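The bookkeeping is a timestamp comparison, sketched below (names are illustrative; real systems carry leases on heartbeat RPCs and budget for clock skew):

```python
# Minimal lease sketch: a slave serves reads only while its lease is
# unexpired; the master renews it periodically.
import time

class Lease:
    def __init__(self, holder, duration):
        self.holder = holder
        self.expires_at = time.monotonic() + duration

    def valid(self):
        # Safe to serve reads only before the expiry deadline.
        return time.monotonic() < self.expires_at

    def renew(self, duration):
        self.expires_at = time.monotonic() + duration
```

Expiry is what bounds the damage of a partition: once a cut-off slave's lease lapses, it stops serving reads, so the master can safely update the data.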