Redis Multi‑Threading Evolution: Lazy Free and I/O Thread Mechanisms
Redis, traditionally a single‑threaded in‑memory cache, introduced Lazy Free in version 4.0 and multi‑threaded I/O in version 6.0 to mitigate blocking during large‑key deletions and to improve throughput. This article walks through the event‑handling model, the relevant source code, the limitations of each mechanism, and a comparison with Tair's threading model.
Redis, a high‑performance in‑memory cache, has historically been single‑threaded, which limits CPU utilization and can cause blocking when processing large‑key operations. The article explains the drawbacks of this design, such as using only one CPU core, potential server pauses during massive key deletions, and limited QPS growth.
Single‑Threaded Principle
Redis operates as an event‑driven server handling two kinds of events: file events (socket operations like accept, read, write, close) and time events (periodic tasks such as key expiration). It uses a Reactor pattern with I/O multiplexing, processing file events first and then time events, all within a single thread.
Although the core command processing is single‑threaded, Redis can fork a child process for tasks like RDB snapshot creation, which is outside the scope of this discussion.
Lazy Free Mechanism
To avoid long pauses caused by slow commands (e.g., deleting a Set with millions of members or executing FLUSHDB / FLUSHALL ), Redis 4.0 introduced the Lazy Free feature. It makes heavy operations asynchronous by delegating memory reclamation to a background thread, while the main thread quickly unlinks the key and returns to processing other requests.
For example, when the lazyfree_lazy_user_del configuration is enabled, the DEL command behaves like the non‑blocking UNLINK command.
void delCommand(client *c) {
    delGenericCommand(c, server.lazyfree_lazy_user_del);
}

/* This command implements DEL and LAZYDEL. */
void delGenericCommand(client *c, int lazy) {
    int numdel = 0, j;

    for (j = 1; j < c->argc; j++) {
        expireIfNeeded(c->db, c->argv[j]);
        /* The configuration decides whether DEL executes lazily. */
        int deleted = lazy ? dbAsyncDelete(c->db, c->argv[j]) :
                             dbSyncDelete(c->db, c->argv[j]);
        if (deleted) {
            signalModifiedKey(c, c->db, c->argv[j]);
            notifyKeyspaceEvent(NOTIFY_GENERIC, "del", c->argv[j], c->db->id);
            server.dirty++;
            numdel++;
        }
    }
    addReplyLongLong(c, numdel);
}

The asynchronous path computes a "free effort" value for the object; if it exceeds LAZYFREE_THRESHOLD and the object is not shared, the deletion is queued as a background job instead of being performed synchronously.
#define LAZYFREE_THRESHOLD 64

int dbAsyncDelete(redisDb *db, robj *key) {
    /* Remove the key from the expires dict, then unlink (but do not
     * yet free) the entry from the main dict. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires, key->ptr);

    dictEntry *de = dictUnlink(db->dict, key->ptr);
    if (de) {
        robj *val = dictGetVal(de);
        size_t free_effort = lazyfreeGetFreeEffort(val);

        /* Large, unshared values are handed to a background thread;
         * the dict entry is left holding a NULL value. */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects, 1);
            bioCreateBackgroundJob(BIO_LAZY_FREE, val, NULL, NULL);
            dictSetVal(db->dict, de, NULL);
        }
    }

    /* Release the unlinked entry; if the value is still attached,
     * it is freed synchronously here. */
    if (de) {
        dictFreeUnlinkedEntry(db->dict, de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}

Multi‑Threaded I/O (Redis 6.0)
Redis 6.0 adds a dedicated I/O thread pool to offload network read/write operations. The main event‑handling thread still processes file and time events, but when a read event is ready it distributes the client sockets across multiple I/O threads, waits for them to finish, and then proceeds with command execution.
int handleClientsWithPendingReadsUsingThreads(void) {
    int processed = listLength(server.clients_pending_read);

    /* Distribute the clients across N different lists. */
    listIter li;
    listNode *ln;
    listRewind(server.clients_pending_read, &li);
    int item_id = 0;
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        int target_id = item_id % server.io_threads_num;
        listAddNodeTail(io_threads_list[target_id], c);
        item_id++;
    }

    /* ... (excerpted: the main thread sets io_threads_pending[] to start
     * the I/O threads and handles its own share of clients) ... */

    /* Wait for all the other threads to end their work. */
    while (1) {
        unsigned long pending = 0;
        for (int j = 1; j < server.io_threads_num; j++)
            pending += io_threads_pending[j];
        if (pending == 0) break;
    }

    /* ... (excerpted: command execution for the parsed queries) ... */
    return processed;
}

The I/O thread loop performs either reads or writes, depending on the operation assigned for the batch:
void *IOThreadMain(void *myid) {
    long id = (unsigned long)myid;

    while (1) {
        /* ... (excerpted: busy-wait until io_threads_pending[id] is
         * non-zero, i.e. the main thread has assigned a batch) ... */

        /* The I/O thread executes read or write operations for every
         * client in its list, as dictated by io_threads_op. */
        listIter li;
        listNode *ln;
        listRewind(io_threads_list[id], &li);
        while ((ln = listNext(&li))) {
            client *c = listNodeValue(ln);
            if (io_threads_op == IO_THREADS_OP_WRITE) {
                writeToClient(c, 0);
            } else if (io_threads_op == IO_THREADS_OP_READ) {
                readQueryFromClient(c->conn);
            } else {
                serverPanic("io_threads_op value is unknown");
            }
        }
        listEmpty(io_threads_list[id]);
        io_threads_pending[id] = 0;
    }
}

While this design improves throughput (roughly 2× over the pure single‑threaded version), it has clear limitations: within a batch the I/O threads all perform either reads or writes, never a mix, and the main thread busy‑waits while they run, adding polling overhead.
Comparison with Tair’s Threading Model
Tair implements a more elegant separation: a Main Thread for connection handling, an I/O Thread for network I/O and command parsing, and a Worker Thread for actual command execution. Communication between I/O and Worker threads uses lock‑free queues and pipes, achieving higher parallelism (about 3× speedup over Redis’s multi‑threaded version).
Conclusion
Redis 4.0’s Lazy Free introduces asynchronous deletion to eliminate long pauses caused by large‑key removals, and Redis 6.0’s I/O threading adds multi‑core utilization for network operations, though with modest performance gains and notable constraints. Future improvements may focus on more refined slow‑operation threading or leveraging Redis Cluster for scaling, as suggested by the Redis core developers.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as architecture evolution driven by internet technologies. Architects who enjoy thinking and sharing are welcome to exchange ideas and learn together.