
Understanding Consistency in Distributed Systems: Strong vs Weak, CAP, 2PC, 3PC, and Paxos

This article explains consistency concepts in distributed systems—including strong and weak (eventual) consistency, the CAP and FLP theorems, and key protocols such as 2PC, 3PC, and Paxos—detailing their mechanisms, advantages, and challenges.

Architects Research Society

What is Consistency?

Consistency refers to the property that multiple nodes in a distributed system agree on a value; a consistency protocol is the mechanism by which they reach that agreement.

It can be divided into strong consistency and weak consistency.

Strong consistency: all nodes hold identical data at any time; a read from node A yields the same value as a read from node B.

Weak consistency: nodes may diverge; the most common implementation is eventual consistency, where data across nodes converges over time.
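As a minimal sketch of eventual consistency, consider a last-write-wins register (a common convergence strategy, used here purely for illustration): two replicas accept writes independently, then exchange state until both hold the same value. The class name, the explicit logical timestamp, and the `merge` method are assumptions of this sketch, not part of any specific system.

```python
class LWWRegister:
    """Hypothetical last-write-wins register: one simple way replicas converge."""
    def __init__(self):
        self.value, self.ts = None, 0

    def write(self, value, ts):
        # Apply a local write tagged with a logical timestamp.
        if ts > self.ts:
            self.value, self.ts = value, ts

    def merge(self, other):
        # Anti-entropy exchange: keep whichever write happened last.
        if other.ts > self.ts:
            self.value, self.ts = other.value, other.ts

# Two replicas diverge, then converge after gossiping in both directions.
a, b = LWWRegister(), LWWRegister()
a.write("x=1", ts=1)
b.write("x=2", ts=2)   # the later write wins everywhere
a.merge(b)
b.merge(a)
assert a.value == b.value == "x=2"
```

Reads served before the merge may return different values from different replicas—exactly the divergence that eventual consistency permits.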

Distributed and Consistent Application Scenarios

Multi‑node read/write services that ensure high availability and scalability, e.g., ZooKeeper, DNS, Redis clusters.

Problems Faced by Distributed Systems

Asynchronous messaging: networks are unreliable, causing delays, loss, and lack of synchrony.

Node fail‑stop: nodes crash permanently.

Fail‑recovery: nodes crash and later recover; the most common failure mode in practice.

Network partition: the network splits nodes into isolated groups.

Byzantine faults: nodes may behave arbitrarily due to bugs or malicious actions.

Designing a consistent distributed system generally assumes no Byzantine faults (trusted internal network).

The FLP impossibility result states that in an asynchronous system, no deterministic consensus protocol can guarantee termination if even one node may crash. The CAP theorem expresses the trade‑off among consistency, availability, and partition tolerance: since partitions cannot be ruled out in practice, a system must choose between consistency and availability when a partition occurs.

Several protocols enforce consistency, including 2PC, 3PC, Paxos, Raft, and PacificA.

2PC (Two‑Phase Commit)

A two‑phase commit protocol ensures atomicity across multiple data shards (distributed transaction).

It separates nodes into a coordinator and participants, and executes in two phases.

Phase 1: The coordinator sends a prepare request; each participant writes undo/redo information to its log and replies with yes or no.

Phase 2: If all participants answered yes, the coordinator issues a commit; otherwise it aborts. Participants then finalize the transaction and release resources.
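The two phases above can be sketched from the coordinator's side. The in-process `Participant` class and its `prepare`/`commit`/`abort` interface are assumptions of this sketch; a real system would exchange these messages over the network and write undo/redo logs durably.

```python
class Participant:
    """Hypothetical in-process participant in a 2PC transaction."""
    def __init__(self, vote_yes=True):
        self.vote_yes = vote_yes
        self.state = "init"

    def prepare(self):
        # Phase 1: write undo/redo log (omitted), then vote yes or no.
        self.state = "prepared"
        return self.vote_yes

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Coordinator side of 2PC."""
    # Phase 1: collect votes from every participant.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only on a unanimous yes; otherwise abort everywhere.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"

assert two_phase_commit([Participant(), Participant()]) == "committed"
assert two_phase_commit([Participant(), Participant(vote_yes=False)]) == "aborted"
```

The blocking problem is visible in the structure: a participant that has voted yes holds its locks until the coordinator's phase‑2 message arrives, and has no safe way to decide alone if that message never comes.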

Advantages: simple principle, easy to implement.

Disadvantages: synchronous blocking while participants wait for the coordinator; the coordinator is a single point of failure; participants can end up inconsistent if the coordinator crashes after sending only some commit messages; and failure handling relies on conservative timeouts.

3PC (Three‑Phase Commit)

A three‑phase commit inserts a pre‑commit phase to reduce blocking, but it still cannot guarantee consistency under network partitions.

Workflow: in phase 1 (can‑commit), the coordinator asks participants whether they are able to commit. If all vote yes, it enters phase 2 (pre‑commit) and sends a pre‑commit command; participants lock resources but can still roll back. After all acknowledgments arrive, the coordinator proceeds to phase 3 (do‑commit) and finalizes the transaction.
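A coordinator-side sketch of the three phases, under the same hypothetical in-process interface as before (`can_commit`/`pre_commit`/`do_commit`/`abort` are assumed names, not a real API):

```python
class Participant3PC:
    """Hypothetical participant exposing the three 3PC phase handlers."""
    def __init__(self, vote_yes=True):
        self.vote_yes = vote_yes
        self.state = "init"

    def can_commit(self):   # phase 1: vote; nothing is locked yet
        return self.vote_yes

    def pre_commit(self):   # phase 2: lock resources, still able to roll back
        self.state = "pre-committed"

    def do_commit(self):    # phase 3: finalize and release resources
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def three_phase_commit(participants):
    """Coordinator side of 3PC."""
    # Phase 1: can-commit — poll votes without locking anything.
    if not all(p.can_commit() for p in participants):
        for p in participants:
            p.abort()
        return "aborted"
    # Phase 2: pre-commit — every participant prepares and locks resources.
    for p in participants:
        p.pre_commit()
    # Phase 3: do-commit — a pre-committed participant that stops hearing
    # from the coordinator may commit on its own, which reduces blocking,
    # but can diverge from the others during a partition.
    for p in participants:
        p.do_commit()
    return "committed"

assert three_phase_commit([Participant3PC(), Participant3PC()]) == "committed"
```

The extra phase buys participants certainty that everyone voted yes before anything is locked, which is what makes timeout-driven unilateral commit defensible—at the cost of one more round trip.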

Paxos Algorithm (Solving Single‑Point Problems)

Paxos is the foundational consensus algorithm; many later consensus algorithms, including Raft, are simplifications or variants of Paxos.

It reaches agreement on a single value by ensuring that any two majority quorums intersect.

Roles:

Proposer: proposes values.

Acceptor: votes on proposals and may accept or reject them.

Learner: collects accepted proposals and determines the final chosen value.

Algorithm Description

Phase 1 – Prepare

Proposer selects a proposal number n and sends a prepare request to a majority of acceptors.

Acceptor: if n is greater than any proposal number it has previously seen, it promises not to accept proposals numbered lower than n and returns the highest‑numbered proposal it has already accepted, if any.

Phase 2 – Accept

Proposer collects responses; if a majority of acceptors reply with promises, it sends an accept request carrying n and a value—the value of the highest‑numbered proposal reported in the responses, or its own value if none was reported.

Acceptor: if n is at least its highest promised number, it accepts the proposal.

Learner

Learners record accepted proposals; once a proposal is accepted by a majority, consensus is reached.
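The two phases can be sketched for single-decree Paxos. This is a minimal in-process illustration of the rules above, assuming direct method calls instead of messages; the function and field names are this sketch's own.

```python
class Acceptor:
    """Single-decree Paxos acceptor implementing the two rules above."""
    def __init__(self):
        self.promised = -1      # highest proposal number promised so far
        self.accepted = None    # (number, value) of the last accepted proposal

    def prepare(self, n):
        # Phase 1: promise to ignore lower-numbered proposals, and
        # report any proposal already accepted.
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted)
        return ("reject", None)

    def accept(self, n, value):
        # Phase 2: accept iff no higher-numbered promise has been made.
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return "accepted"
        return "rejected"

def propose(acceptors, n, value):
    """One proposer round; returns the chosen value, or None on failure."""
    replies = [a.prepare(n) for a in acceptors]
    promises = [r for r in replies if r[0] == "promise"]
    if len(promises) <= len(acceptors) // 2:
        return None                               # no majority promised
    # Adopt the highest-numbered previously accepted value, if any exists.
    prior = [acc for _, acc in promises if acc is not None]
    if prior:
        value = max(prior)[1]
    oks = sum(a.accept(n, value) == "accepted" for a in acceptors)
    return value if oks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(3)]
assert propose(acceptors, n=1, value="A") == "A"
# A later proposer must converge on the already-chosen value, not overwrite it:
assert propose(acceptors, n=2, value="B") == "A"
```

The second assertion shows why quorum intersection matters: any new majority overlaps the majority that accepted "A", so at least one promise carries the old value and the new proposer is forced to adopt it.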

Tags: distributed systems, CAP theorem, 2PC, consistency, consensus, Paxos
Written by

Architects Research Society

A daily treasure trove for architects, expanding your view and depth. We share enterprise, business, application, data, technology, and security architecture, discuss frameworks, planning, governance, standards, and implementation, and explore emerging styles such as microservices, event‑driven, micro‑frontend, big data, data warehousing, IoT, and AI architecture.
