Why Redis Is So Fast: Single‑Threaded Core, Multi‑Threaded I/O and Performance Mechanics
This article explains how Redis achieves high QPS: an in-memory dataset, a single-threaded event loop for command execution, I/O multiplexing (epoll/select/kqueue), optional multi-threaded I/O for heavy network traffic, and an evolution from a pure reactor model to a hybrid multi-threaded architecture.
Official benchmarks show that a single Redis instance on average Linux hardware can handle over 80,000 QPS for simple O(1) or O(log N) commands, and more than 1,000,000 QPS when pipelining is used, making Redis a high-performance caching solution.
Typical interview answers for Redis's speed include its C implementation, a pure in-memory data store, I/O multiplexing (e.g., epoll/select/kqueue), and a single-threaded model that avoids context-switch and lock overhead.
The single‑threaded design is chosen because CPU is rarely the bottleneck; memory and network I/O dominate. Redis processes commands in a single event loop, and the CPU only becomes a limit when commands are CPU‑intensive.
Key reasons for the single‑threaded approach are:
Avoiding excessive context‑switch overhead.
Eliminating synchronization costs such as locks for complex data structures.
Keeping the codebase simple and maintainable, a philosophy of the creator Salvatore "antirez" Sanfilippo.
Redis’s “single‑threaded” claim applies to the core network model before version 6.0; Redis introduced limited multithreading in v4.0 for asynchronous background tasks and threaded network I/O in v6.0.
In the classic reactor model (v1.0 through v5.x), a single thread uses aeApiPoll to wait for events, acceptTcpHandler to accept connections, readQueryFromClient to read commands into client->querybuf, and processCommand to execute them, finally writing responses via addReply into client->buf or client->reply.
To handle heavy commands such as a DEL on a very large key, Redis v4.0 added asynchronous variants (UNLINK, FLUSHALL ASYNC, FLUSHDB ASYNC) that run in background threads, preventing the main event loop from blocking.
Starting with Redis 6.0, a true multi‑threaded I/O model was introduced: the main thread still accepts connections and executes commands, but dedicated I/O threads read client requests and write responses, improving throughput on multi‑core machines while preserving the simplicity of the command execution path.
The multi‑reactor design distributes connections across sub‑reactors (I/O threads) using a load‑balancing strategy, similar to the Master‑Workers pattern used by Nginx or Memcached.
Overall, Redis balances performance and simplicity by keeping command execution single‑threaded, offloading network I/O to additional threads, and providing asynchronous variants for expensive operations, making it a fast and reliable in‑memory database.
Selected Java Interview Questions