Understanding CAP, BASE, and Eventual Consistency: A Practical Guide
This article explains the CAP theorem and the trade-offs among consistency, availability, and partition tolerance; introduces the BASE model and its properties; and shows how different database systems implement consistency guarantees such as strong, eventual, and causal consistency.
CAP Theory
In 2000 Eric Brewer presented the CAP theorem (later formalized by Gilbert and Lynch in 2002), stating that a distributed system can guarantee at most two of three properties: Consistency (C), Availability (A), and Partition tolerance (P). Achieving all three simultaneously is impossible, so designers must sacrifice one.
C: Consistency – every read returns the most recent write.
A: Availability – every request receives a response within a bounded time, though not necessarily the most recent data.
P: Partition tolerance – the system continues to operate despite network partitions.
Because the three cannot coexist, systems choose one of the following combinations:
CA: Give up partition tolerance to achieve consistency and availability (e.g., traditional single‑node relational databases).
CP: Give up availability to achieve consistency and partition tolerance (e.g., systems that block reads during a partition).
AP: Give up consistency to achieve availability and partition tolerance (e.g., many NoSQL stores that return stale data).
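The CP/AP distinction above shows up most clearly in how a replica behaves when it is cut off from its peers. A minimal sketch (the Replica class and its policy flag are illustrative, not from any real system): a CP replica refuses to answer rather than risk staleness, while an AP replica always answers, possibly with old data.

```python
class Replica:
    def __init__(self, policy):
        self.policy = policy      # "CP" or "AP" (illustrative labels)
        self.value = "v1"         # last value replicated before the partition
        self.partitioned = False  # True when peers are unreachable

    def read(self):
        if self.partitioned and self.policy == "CP":
            # CP: refuse to answer rather than risk returning stale data
            raise TimeoutError("partition: cannot confirm latest value")
        # AP: always answer, even if the value may be stale
        return self.value

cp, ap = Replica("CP"), Replica("AP")
cp.partitioned = ap.partitioned = True
print(ap.read())          # "v1" -- available, but possibly stale
try:
    cp.read()
except TimeoutError as e:
    print(e)              # consistent, but unavailable during the partition
```

The trade-off is exactly the theorem's claim: once a partition occurs, a node must either answer (A, risking inconsistency) or refuse (C, sacrificing availability).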
Design Choices in Real Products
Different databases adopt different trade-offs. Traditional single-node MySQL or SQL Server deployments prioritize CA by avoiding partitioning altogether, while systems such as HBase and Redis are commonly classified as CP, accepting possible unavailability or latency in exchange for consistency. Choosing a database therefore means aligning its trade-off with the business requirements.
BASE and Eventual Consistency
BASE stands for Basically Available, Soft state, and Eventual consistency – a deliberate play on ACID's chemistry metaphor, positioning it as the "base" to ACID's "acid" for NoSQL systems. While ACID emphasizes strong consistency, BASE accepts weaker guarantees to improve availability.
Basically Available: The system remains operational despite some nodes failing.
Soft State: State may change over time without immediate synchronization.
Eventual Consistency: Updates propagate asynchronously; all replicas converge eventually.
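The three BASE properties can be seen in a minimal last-write-wins (LWW) sketch, assuming each replica stores a (timestamp, value) pair and periodically exchanges state with peers (the merge rule here is an illustrative assumption, not a specific product's protocol):

```python
def merge(a, b):
    """Keep the newer of two (timestamp, value) pairs (last-write-wins)."""
    return a if a[0] >= b[0] else b

r1 = (1, "old")
r2 = (2, "new")   # a later write landed only on replica 2: "soft state"

# An anti-entropy exchange merges both replicas toward the newer write:
r1 = merge(r1, r2)
r2 = merge(r2, r1)
assert r1 == r2 == (2, "new")   # replicas converge: eventual consistency
```

Between exchanges the replicas disagree (soft state) yet both keep serving reads (basically available); after the exchange they converge (eventual consistency).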
Types of Eventual Consistency
Causal Consistency: If operation B causally depends on operation A, then anyone who observes B also observes A's effects.
Read‑Your‑Writes: A client always sees its own updates.
Monotonic Reads: Once a value is read, later reads never return older values.
Session Consistency: Guarantees read‑your‑writes within a session.
Monotonic Writes: Writes from a single client are applied in order.
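Several of these session guarantees can be enforced on the client side by tracking the highest version the session has observed. The sketch below is an assumed design (real systems typically use vector clocks or session tokens rather than a single integer version):

```python
class SessionClient:
    def __init__(self, replicas):
        self.replicas = replicas   # list of dicts: key -> (version, value)
        self.seen = {}             # key -> highest version this session saw

    def write(self, replica, key, version, value):
        self.replicas[replica][key] = (version, value)
        # Read-your-writes: remember our own write's version
        self.seen[key] = max(self.seen.get(key, 0), version)

    def read(self, replica, key):
        version, value = self.replicas[replica].get(key, (0, None))
        if version < self.seen.get(key, 0):
            # This replica has not caught up to what the session already saw
            raise RuntimeError("replica too stale for this session")
        self.seen[key] = version   # monotonic reads: never go backwards
        return value
```

A session that writes version 2 to replica 0 will refuse a read of version 1 from a lagging replica 1, giving read-your-writes and monotonic reads within that session.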
Achieving Consistency Levels
In a replicated system, let N be the total number of replicas, W the number of replicas that must acknowledge a write, and R the number of replicas consulted for a read.
W + R > N: Guarantees strong consistency because read and write quorums overlap.
W + R ≤ N: Results in weak consistency; reads may miss recent writes.
W = N, R = 1: Reads are fast and strongly consistent, but any single replica failure causes writes to fail, reducing write availability.
For high availability (e.g., HDFS), N is typically ≥3, and the exact values of W and R depend on the application’s tolerance for latency and inconsistency.
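Why W + R > N gives strong reads: any read quorum of R replicas must overlap the W replicas that acknowledged the latest write, so at least one replica in the read set holds it. A small illustrative sketch (not a real client library):

```python
import random

def quorum_read(replicas, R):
    """Read R replicas and return the (version, value) with the highest version."""
    sample = random.sample(replicas, R)
    return max(sample, key=lambda rec: rec[0])

N, W, R = 3, 2, 2          # W + R = 4 > N = 3, so reads are strong
# The latest write (version 2) reached only W = 2 of the N = 3 replicas:
replicas = [(2, "new"), (2, "new"), (1, "old")]

# Any 2-of-3 read quorum overlaps the 2-node write set, so the
# highest-versioned record in the sample is always the latest write:
assert quorum_read(replicas, R) == (2, "new")
```

With W + R ≤ N, a read quorum could consist entirely of replicas the write never reached, which is exactly the weak-consistency case above.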
Practical Examples
HBase relies on HDFS’s strong consistency: writes must reach all N nodes (W = N) before returning, while reads need only one replica (R = 1). Cassandra allows configurable N, W, and R, letting users balance consistency versus availability per use case.
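The two examples correspond to different points on the same W + R dial. A tiny helper (an illustrative function, not part of any driver API) makes the comparison concrete:

```python
def strong(n, w, r):
    """True when a read quorum must overlap the write quorum (W + R > N)."""
    return w + r > n

N = 3
print(strong(N, w=3, r=1))   # True  -- HDFS-style: write all, read one
print(strong(N, w=2, r=2))   # True  -- Cassandra QUORUM writes + QUORUM reads
print(strong(N, w=1, r=1))   # False -- fast on both paths, eventual only
```

Cassandra exposes this choice per request, so the same cluster can serve strongly consistent reads for critical data and cheap eventually consistent reads elsewhere.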
360 Zhihui Cloud Developer
360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.