
Understanding Redis Cluster Architecture and Strong Consistency with Raft

This article explains Redis Cluster's decentralized sharding design, the role of master‑slave replication for high availability, and how strong consistency is achieved using consensus protocols such as Raft, highlighting key concepts like global log indexes and commit pointers.

Full-Stack Internet Architecture

1. Redis Cluster Architecture

Since Redis 3.0, Redis Cluster has offered a decentralized architecture that pre‑shards the keyspace into 16,384 hash slots. Each key is hashed to a slot, each slot is owned by exactly one node, and every node stores a portion of the overall data along with the routing table that maps all 16,384 slots to their owning nodes.
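The slot mapping can be sketched in a few lines of Python. Redis Cluster computes the CRC16/XMODEM checksum of the key modulo 16,384 (this sketch omits hash‑tag handling, the `{...}` syntax that real clients also implement):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, init 0x0000), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16,384 hash slots."""
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:1000"))  # a fixed slot in [0, 16383]
```

Because the checksum is deterministic, every node (and every client) computes the same slot for the same key, which is what makes decentralized routing possible.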

Because the design has no central coordinator, any node can receive client requests; if the slot for the requested key resides on another node, the receiving node redirects the client to the correct node.
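Concretely, a node that does not own the slot replies with a `MOVED` error naming the owner, and the client retries there. A minimal sketch of parsing such a redirect (the slot number and address below are illustrative):

```python
def parse_moved(error: str) -> tuple[int, str, int]:
    """Parse a Redis Cluster MOVED error, e.g. 'MOVED 3999 127.0.0.1:6381'."""
    kind, slot, addr = error.split()
    if kind != "MOVED":
        raise ValueError(f"not a MOVED redirect: {error!r}")
    host, port = addr.rsplit(":", 1)
    return int(slot), host, int(port)

slot, host, port = parse_moved("MOVED 3999 127.0.0.1:6381")
print(slot, host, port)  # 3999 127.0.0.1 6381
```

Real clients typically also cache the slot-to-node mapping learned from these redirects so subsequent requests go to the right node directly.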

When a node fails, the data stored on that node would be lost, so a master‑slave replication layer is typically added on top of the cluster to provide high availability.

In the master‑slave setup, each master has one or more slaves that replicate its data. Replication groups (shown as dashed boxes in the original diagram) aim for full data consistency, but typical client‑write flows either return as soon as the master has written (asynchronous replication) or wait for slave acknowledgements (synchronous replication); without a consensus protocol, both can still leave replicas inconsistent after a failure.

2. Strong Consistency Protocols Between Replicas

To guarantee consistency across replicas, protocols such as Raft and Paxos are employed. Raft’s data replication works as follows:

A client sends a write request to the Raft cluster; the leader appends the entry to its log, then broadcasts it to all followers. Once a majority of nodes (counting the leader itself) have persisted the entry, the leader commits it and replies success to the client. If the leader fails, a new leader is elected from the nodes with the most up‑to‑date logs, and the commit pointer ensures only entries replicated to a majority ever become visible.
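The flow above can be sketched as a toy leader that counts acknowledgements and commits only on a majority. This is a deliberate simplification of Raft (it ignores terms, elections, and retries; the class and names are invented for illustration):

```python
class ToyRaftLeader:
    """Minimal majority-commit sketch; ignores terms, elections, and retries."""

    def __init__(self, follower_count: int) -> None:
        self.log: list[str] = []
        self.commit_index = -1                   # index of the last committed entry
        self.cluster_size = follower_count + 1   # followers plus the leader itself

    def replicate(self, entry: str, acks_from_followers: int) -> str:
        self.log.append(entry)
        index = len(self.log) - 1
        votes = 1 + acks_from_followers          # the leader's own copy counts
        if votes > self.cluster_size // 2:
            self.commit_index = index            # a majority stored it: commit
            return "success"
        return "pending"                         # not yet visible to clients

leader = ToyRaftLeader(follower_count=2)
print(leader.replicate("set x 1", acks_from_followers=1))  # success (2 of 3 nodes)
print(leader.replicate("set x 2", acks_from_followers=0))  # pending (1 of 3 nodes)
```

The key property is that an entry answered with "success" exists on a majority, so any future leader elected from a majority must hold it.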

The protocol introduces two key mechanisms:

Global Log Index: each log entry receives a unique, monotonically increasing identifier, enabling precise ordering and comparison of logs across nodes.

Commit Pointer: an entry is considered committed only after a majority of nodes have stored it; only committed entries are exposed to clients, ensuring strong consistency semantics.

Example scenario: with three nodes whose highest log indexes are Node1=100, Node2=89, and Node3=88, and Node1 as leader, the commit pointer is at 89 because a majority (Node1 and Node2) have stored entries up to index 89. If Node1 crashes, Node2, the most up‑to‑date surviving node, is elected leader, and Node3 synchronizes the missing entries, so no committed data is lost.
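The commit pointer in this scenario can be computed directly: sort the nodes' highest log indexes and take the value that at least a majority of nodes have reached (the indexes below are the ones from the example):

```python
def commit_pointer(match_indexes: list[int]) -> int:
    """Highest log index that is stored on a majority of nodes."""
    majority = len(match_indexes) // 2 + 1
    # Sort descending; the (majority-1)-th value is held by at least `majority` nodes.
    return sorted(match_indexes, reverse=True)[majority - 1]

print(commit_pointer([100, 89, 88]))  # 89: Node1 and Node2 both hold entry 89
```

Entries 90 through 100 exist only on Node1, so they were never committed and were never visible to clients, which is why losing them does not violate consistency.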

3. Summary

The article starts from a seemingly simple question about Redis leaders, delves into the fundamentals of distributed data sharding, high availability, and the challenges of replica consistency, and explains how protocols like Raft provide strong consistency through global log sequencing and commit pointers.

distributed systems, Database, Redis, cluster, consistency, Raft
Written by Full-Stack Internet Architecture

Introducing full-stack Internet architecture technologies centered on Java
