Understanding Redis Lazy Free and Multi‑Threaded I/O: Architecture and Implementation
This article explains how Redis evolved from a single‑threaded, event‑driven cache to one that uses Lazy Free for asynchronous key deletion and multi‑threaded I/O for higher throughput. It covers the underlying mechanisms, the relevant code paths, the remaining limitations, and a comparison with Tair's threading model.
Single‑Threaded Principle
Redis operates as an event‑driven server that processes two kinds of events: file events (socket operations such as accept, read, write, close) and time events (periodic tasks like key expiration and server statistics). The server uses a Reactor pattern with I/O multiplexing, handling all events in a single thread, which gives high throughput but limits CPU usage to one core.
Because the event loop runs in a single thread, Redis avoids locks and achieves low latency, yet it cannot utilize multiple CPU cores and can block for seconds when deleting very large keys.
Lazy Free Mechanism
Introduced in Redis 4.0, Lazy Free turns slow deletions (e.g., removing a Set with millions of members, or FLUSHALL ASYNC) into asynchronous tasks. The UNLINK command detaches the key from the main dictionary and hands the actual memory reclamation to a background thread, so the main thread can return immediately.
Example of the delete command entry point:
void delCommand(client *c) {
    delGenericCommand(c, server.lazyfree_lazy_user_del);
}

/* This command implements DEL and LAZYDEL. */
void delGenericCommand(client *c, int lazy) {
    int numdel = 0, j;

    for (j = 1; j < c->argc; j++) {
        expireIfNeeded(c->db, c->argv[j]);
        int deleted = lazy ? dbAsyncDelete(c->db, c->argv[j]) :
                             dbSyncDelete(c->db, c->argv[j]);
        if (deleted) {
            signalModifiedKey(c, c->db, c->argv[j]);
            notifyKeyspaceEvent(NOTIFY_GENERIC, "del", c->argv[j], c->db->id);
            server.dirty++;
            numdel++;
        }
    }
    addReplyLongLong(c, numdel);
}

The asynchronous path uses dbAsyncDelete, which first checks the free effort of the value: if the effort exceeds LAZYFREE_THRESHOLD and the object is not shared (refcount of 1), it creates a background job via bioCreateBackgroundJob; otherwise, the deletion falls back to a synchronous free.
#define LAZYFREE_THRESHOLD 64

int dbAsyncDelete(redisDb *db, robj *key) {
    /* Deleting the entry from the expires dict does not free the key's
     * sds, because it is shared with the main dictionary. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires, key->ptr);

    dictEntry *de = dictUnlink(db->dict, key->ptr);
    if (de) {
        robj *val = dictGetVal(de);
        size_t free_effort = lazyfreeGetFreeEffort(val);

        /* If freeing the object is expensive and it is not shared,
         * hand it to a background thread and clear the entry's value. */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects, 1);
            bioCreateBackgroundJob(BIO_LAZY_FREE, val, NULL, NULL);
            dictSetVal(db->dict, de, NULL);
        }
    }

    /* Release the entry itself; if the value was handed off above it is
     * now NULL, so only cheap bookkeeping remains on the main thread. */
    if (de) {
        dictFreeUnlinkedEntry(db->dict, de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}

Multi‑Threaded I/O
Redis 6.0 adds a dedicated I/O thread pool to offload socket read/write operations while keeping command processing in the original event‑handling thread. The design distributes pending client reads across N I/O threads, waits for all threads to finish, then resumes command execution.
int handleClientsWithPendingReadsUsingThreads(void) {
    if (!server.io_threads_active || !server.io_threads_do_reads) return 0;
    int processed = listLength(server.clients_pending_read);
    if (processed == 0) return 0;

    /* Distribute the clients across N different lists. */
    listIter li;
    listNode *ln;
    listRewind(server.clients_pending_read, &li);
    int item_id = 0;
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        int target_id = item_id % server.io_threads_num;
        listAddNodeTail(io_threads_list[target_id], c);
        item_id++;
    }

    /* Give the start condition to the waiting threads by setting the
     * global operation and the per-thread pending counters. */
    io_threads_op = IO_THREADS_OP_READ;
    for (int j = 1; j < server.io_threads_num; j++)
        io_threads_pending[j] = listLength(io_threads_list[j]);

    /* The main thread processes its own share (list 0) in the meantime. */
    listRewind(io_threads_list[0], &li);
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        readQueryFromClient(c->conn);
    }
    listEmpty(io_threads_list[0]);

    /* Wait for all the other threads to end their work. */
    while (1) {
        unsigned long pending = 0;
        for (int j = 1; j < server.io_threads_num; j++)
            pending += io_threads_pending[j];
        if (pending == 0) break;
    }

    /* ... the buffered queries are then parsed and executed ... */
    return processed;
}

The I/O thread main loop performs the actual reads or writes:
void *IOThreadMain(void *myid) {
    long id = (unsigned long)myid;

    while (1) {
        /* Wait for the main thread to assign this thread some work
         * (the real loop spins briefly, then blocks on a mutex). */
        if (io_threads_pending[id] == 0) continue;

        listIter li;
        listNode *ln;
        listRewind(io_threads_list[id], &li);
        while ((ln = listNext(&li))) {
            client *c = listNodeValue(ln);
            if (io_threads_op == IO_THREADS_OP_WRITE) {
                writeToClient(c, 0);
            } else if (io_threads_op == IO_THREADS_OP_READ) {
                readQueryFromClient(c->conn);
            } else {
                serverPanic("io_threads_op value is unknown");
            }
        }
        listEmpty(io_threads_list[id]);
        io_threads_pending[id] = 0;
    }
}

Because each batch is exclusively reads or exclusively writes, and the event‑handling thread must spin‑wait until every I/O thread finishes its slice, the threads frequently sit idle, limiting scalability compared with a fully pipelined model.
Limitations
Redis 6.0's multi‑threaded I/O is not a fully multi‑threaded engine: I/O threads cannot run concurrently with command execution, which causes polling overhead and yields only modest gains (roughly 2× over the single‑threaded version).
Tair Multi‑Threaded Implementation
Tair splits responsibilities across a main thread (connection handling), I/O threads (reading, writing, and protocol parsing), and worker threads (command execution). The threads communicate through lock‑free queues and pipes, achieving higher parallelism; benchmark results show roughly a 3× speedup over Redis's single‑threaded mode.
Conclusion
Redis 4.0 introduced Lazy Free to offload large‑key deletions, while Redis 6.0 added I/O threading to improve network‑bound workloads. Although the multi‑threaded I/O provides limited gains, the architecture paves the way for future enhancements such as slow‑operation threading and module‑based key‑level locking. Compared with Tair’s more elegant threading model, Redis’s approach remains less performant but continues to evolve.