Databases · 34 min read

Deep Dive into Redis Multi-Threaded Network Model: From Single-Threaded Reactor to I/O Threading

The article traces Redis's shift from its original single-threaded Reactor model to the I/O-threaded architecture introduced in version 6.0. It explains how atomic operations and round-robin client distribution let separate threads handle network I/O and command parsing while the main thread executes commands, yielding roughly a two-fold throughput gain. Command execution remains single-threaded, and the busy-wait synchronization can cause brief CPU spikes.

Tencent Cloud Developer

This article provides a comprehensive analysis of Redis's network model evolution and internal architecture. Redis, as the de facto standard for high-performance caching, achieves remarkable throughput of 80,000+ QPS for simple commands and up to 1 million QPS with pipelining on standard Linux hardware.

The high performance stems from four key factors: C language implementation, pure in-memory I/O operations, I/O multiplexing via epoll/select/kqueue, and the single-threaded event loop model. The article explains why Redis initially chose a single-threaded design: to avoid context switching overhead, eliminate synchronization mechanism complexity, and maintain code simplicity and maintainability.
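To make the I/O-multiplexing factor concrete, here is a minimal sketch of the readiness loop that Redis's AE event library implements in C. It uses Python's `selectors` module (which picks epoll on Linux and kqueue on BSD, mirroring Redis's backend selection); all names here are illustrative, not Redis source code.

```python
# Illustrative sketch of a single-threaded readiness loop, in the spirit of
# Redis's AE event library. One multiplexer fd watches many client sockets;
# the loop wakes only when at least one socket is readable.
import selectors
import socket

def run_one_iteration(sel: selectors.DefaultSelector) -> list[bytes]:
    """One pass of the event loop: wait for readable sockets, read each one."""
    payloads = []
    for key, _events in sel.select(timeout=1.0):
        payloads.append(key.fileobj.recv(64))
    return payloads

# Simulate a connected client with a socketpair instead of a real TCP client.
server_side, client_side = socket.socketpair()
sel = selectors.DefaultSelector()
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"PING\r\n")      # the "client" writes a command
print(run_one_iteration(sel))         # → [b'PING\r\n']
```

Because the loop blocks in a single `select` call rather than polling each connection, one thread can serve thousands of sockets, which is the foundation the single-threaded Reactor model rests on.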

However, Redis is not purely single-threaded. Version 4.0 introduced multi-threading for asynchronous tasks (such as UNLINK and FLUSHALL ASYNC), while version 6.0 officially added I/O threading to the core network model. The article details the multi-threaded architecture: I/O threads handle network reading/writing and command parsing, while the main thread executes commands. This design uses atomic operations and interleaved access to achieve a lock-free multi-threading model.
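The handoff described above can be sketched as follows. This is a simplified Python model, not Redis's C implementation: the main thread fills per-thread client lists, publishes a pending count per I/O thread, busy-waits until every count drops back to zero, and only then would it execute the parsed commands itself. All names are illustrative.

```python
# Simplified sketch of the Redis 6.0 handoff between the main thread and the
# I/O threads. Real Redis uses C atomics for the pending counters; this Python
# version relies on the GIL for the same interleaved, lock-free access pattern.
import threading

NUM_IO_THREADS = 3
io_lists = [[] for _ in range(NUM_IO_THREADS)]   # only thread i touches list i
pending = [0] * NUM_IO_THREADS                   # per-thread "atomic" counters
results = [[] for _ in range(NUM_IO_THREADS)]    # parsed output per thread
stop = False

def io_thread_main(tid: int) -> None:
    """Busy-wait for work, 'parse' each buffer, then publish completion."""
    while not stop:
        if pending[tid] == 0:
            continue                             # spin until main thread posts work
        for raw in io_lists[tid]:
            results[tid].append(raw.upper())     # stand-in for protocol parsing
        io_lists[tid].clear()
        pending[tid] = 0                         # signal "done" back to main thread

threads = [threading.Thread(target=io_thread_main, args=(i,), daemon=True)
           for i in range(NUM_IO_THREADS)]
for t in threads:
    t.start()

# Main thread: distribute six fake client read buffers, post the pending
# counts, then busy-wait until every I/O thread reports done. At this point
# the main thread alone would execute the parsed commands.
buffers = [b"ping", b"get k", b"set k v", b"del k", b"incr n", b"ttl k"]
for i, raw in enumerate(buffers):
    io_lists[i % NUM_IO_THREADS].append(raw)
for tid in range(NUM_IO_THREADS):
    pending[tid] = len(io_lists[tid])
while any(pending):                              # the main thread spins too
    pass
stop = True
print(results)
```

Note that no locks are taken on the hot path: correctness comes from each list having exactly one reader and one writer at a time, with the pending counters acting as the synchronization points.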

Key components include: the AE event library, client structure with querybuf and reply buffers, acceptTcpHandler for connection acceptance, readQueryFromClient for command reading, beforeSleep for handling pending writes, and sendReplyToClient for response writing. The multi-threaded implementation uses Round-Robin load balancing to distribute clients across I/O threads, with busy-waiting and mutex-based thread synchronization.
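The Round-Robin distribution mentioned above reduces to a modulo over a running client counter: client n lands on I/O thread n mod N, spreading load evenly with no coordination. A hypothetical one-liner makes the pattern plain (the function name is illustrative, not from Redis's source):

```python
# Hypothetical illustration of round-robin client distribution across
# I/O threads: client_id n maps to thread n % num_io_threads.
def assign_io_thread(client_id: int, num_io_threads: int) -> int:
    return client_id % num_io_threads

print([assign_io_thread(c, 3) for c in range(7)])   # → [0, 1, 2, 0, 1, 2, 0]
```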

Performance benchmarks show approximately 2x improvement with multi-threading enabled. The article also discusses the design's limitations: it does not follow the standard multi-Reactor pattern, since command execution remains on the main thread, and its busy-wait synchronization can cause brief CPU spikes.

Tags: performance optimization · Redis · source-code-analysis · I/O multiplexing · Reactor Pattern · Event Loop · Multi-threading · Network Model
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
