
Understanding Redis Lazy Free and Multi‑Threaded I/O: Architecture, Mechanisms, and Limitations

This article explains how Redis, originally a single‑threaded in‑memory cache, introduced Lazy Free in version 4.0 and multi‑threaded I/O in version 6.0 to mitigate blocking during large‑key deletions. It describes the underlying event‑driven architecture, walks through the relevant source code, and discusses the benefits and constraints of these mechanisms.


Redis is a high‑performance in‑memory cache that historically relied on a single‑threaded event loop, reaching roughly 110k reads/s and 81k writes/s. That single‑threaded design, however, confines Redis to one CPU core and can stall the entire server while a very large key is being deleted.

| Single‑Threaded Principle

Redis operates as an event‑driven program handling two kinds of events: file events (socket operations such as accept, read, write, close) and time events (periodic tasks like key expiration and server statistics). The server processes file events first, then time events, all within a single thread using the Reactor pattern and I/O multiplexing.

Although the core is single‑threaded, Redis forks a child process for RDB snapshot generation, which is outside the scope of this discussion.

| Lazy Free Mechanism

To avoid long pauses caused by slow commands (e.g., deleting a Set with millions of members or executing FLUSHALL), Redis 4.0 introduced Lazy Free, which offloads expensive deletions to a background thread. The UNLINK command triggers asynchronous memory reclamation, while the main thread only removes the key reference, allowing it to return quickly.

void delCommand(client *c) {
    delGenericCommand(c, server.lazyfree_lazy_user_del);
}

/* This command implements DEL and UNLINK. */
void delGenericCommand(client *c, int lazy) {
    int numdel = 0, j;
    for (j = 1; j < c->argc; j++) {
        expireIfNeeded(c->db, c->argv[j]);
        int deleted = lazy ? dbAsyncDelete(c->db, c->argv[j]) : dbSyncDelete(c->db, c->argv[j]);
        if (deleted) {
            signalModifiedKey(c, c->db, c->argv[j]);
            notifyKeyspaceEvent(NOTIFY_GENERIC, "del", c->argv[j], c->db->id);
            server.dirty++;
            numdel++;
        }
    }
    addReplyLongLong(c, numdel);
}

When the estimated free effort of an object exceeds a threshold (64 allocations) and the object is not shared (refcount of 1), Redis queues a bio (background I/O) job for asynchronous deletion; otherwise it frees the value synchronously.

#define LAZYFREE_THRESHOLD 64

int dbAsyncDelete(redisDb *db, robj *key) {
    /* Removing the entry from the expires dict does not free the key's
     * sds, which is shared with the main dictionary. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires, key->ptr);

    /* Unlink the entry from the main dict without freeing it yet. */
    dictEntry *de = dictUnlink(db->dict, key->ptr);
    if (de) {
        robj *val = dictGetVal(de);
        size_t free_effort = lazyfreeGetFreeEffort(val);

        /* Large, non-shared values are handed to the background (bio)
         * thread; the entry's value pointer is cleared so the later
         * dictFreeUnlinkedEntry() call does not free it again. */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects, 1);
            bioCreateBackgroundJob(BIO_LAZY_FREE, val, NULL, NULL);
            dictSetVal(db->dict, de, NULL);
        }

        /* Release the key; small values are freed synchronously here. */
        dictFreeUnlinkedEntry(db->dict, de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    }
    return 0;
}

| Multi‑Threaded I/O and Its Limitations

Redis 6.0 added a dedicated Lazy Free thread for large‑key reclamation and introduced multi‑threaded I/O. The I/O threads handle only read or write operations (never both in the same batch), while the main event‑handling thread distributes ready file events to the I/O threads and waits for them to finish before proceeding with command execution.

int handleClientsWithPendingReadsUsingThreads(void) {
    if (!server.io_threads_active || !server.io_threads_do_reads) return 0;
    int processed = listLength(server.clients_pending_read);
    if (processed == 0) return 0;

    /* Distribute pending clients round-robin across the I/O threads. */
    listIter li;
    listNode *ln;
    listRewind(server.clients_pending_read, &li);
    int item_id = 0;
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        int target_id = item_id % server.io_threads_num;
        listAddNodeTail(io_threads_list[target_id], c);
        item_id++;
    }

    /* Publish the per-thread counters; this wakes the I/O threads. */
    io_threads_op = IO_THREADS_OP_READ;
    for (int j = 1; j < server.io_threads_num; j++)
        io_threads_pending[j] = listLength(io_threads_list[j]);

    /* The main thread handles its own share (list 0). */
    listRewind(io_threads_list[0], &li);
    while ((ln = listNext(&li))) {
        client *c = listNodeValue(ln);
        readQueryFromClient(c->conn);
    }
    listEmpty(io_threads_list[0]);

    /* Busy-wait until every I/O thread has drained its list. */
    while (1) {
        unsigned long pending = 0;
        for (int j = 1; j < server.io_threads_num; j++)
            pending += io_threads_pending[j];
        if (pending == 0) break;
    }
    return processed;
}

The I/O thread loop reads queries or writes responses based on the operation type, then clears its task list and signals completion.

void *IOThreadMain(void *myid) {
    long id = (unsigned long)myid;

    while (1) {
        /* Spin until the main thread publishes work for this thread. */
        for (int j = 0; j < 1000000; j++) {
            if (io_threads_pending[id] != 0) break;
        }
        if (io_threads_pending[id] == 0) continue;

        /* Process each client in this thread's list. */
        listIter li;
        listNode *ln;
        listRewind(io_threads_list[id], &li);
        while ((ln = listNext(&li))) {
            client *c = listNodeValue(ln);
            if (io_threads_op == IO_THREADS_OP_WRITE) {
                writeToClient(c, 0);
            } else if (io_threads_op == IO_THREADS_OP_READ) {
                readQueryFromClient(c->conn);
            } else {
                serverPanic("io_threads_op value is unknown");
            }
        }
        /* Clear the list and signal completion to the main thread. */
        listEmpty(io_threads_list[id]);
        io_threads_pending[id] = 0;
    }
}

Because each batch of I/O thread work is exclusively reads or exclusively writes, and the main thread busy‑waits for the batch to complete, the design wastes cycles on polling and frequently leaves threads idle. Consequently, the performance gain is modest (about 2× over the pure single‑threaded version), whereas alternative designs such as Tair's multi‑threaded architecture report roughly 3× improvement.

| Comparison with Tair

Tair separates responsibilities into a Main Thread (connection handling), an I/O Thread (reading, parsing, and sending responses), and a Worker Thread (command execution). It uses lock‑free queues and pipes to exchange data, resulting in higher parallelism and better throughput.

| Conclusion

Redis 4.0’s Lazy Free thread mitigates blocking during large‑key deletions, and Redis 6.0’s I/O threads provide limited multi‑threaded I/O without altering the core event loop. The author notes that true scalability is better achieved via Redis Cluster rather than extensive threading, and future work may focus on slow‑operation threading and module‑level key‑level locking.

Performance · Database · Redis · Event Loop · Multithreaded I/O · Lazy Free
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
