
Why Redis Is So Fast: History, Architecture, and Performance

This article explains Redis's rapid growth from its 2009 inception, outlines its version history and popularity, and details the architectural choices—memory‑first design, efficient data structures, smart encoding, and a single‑threaded I/O multiplexing model—that together give Redis its exceptional performance.

JD Tech

Redis is an open‑source in‑memory data‑structure store that has become indispensable in modern development thanks to its memory‑first implementation, carefully chosen data structures, encoding strategies, and thread model.

Redis was first released in 2009 by Salvatore Sanfilippo. It was sponsored by VMware until May 2013, then by Pivotal from May 2013 to June 2015, and by Redis Labs from June 2015 onward. Major releases include 2.6.0 (2012), 2.8.0 (2013), 3.0.0 with clustering (2015), 4.0.0 with modules (2017), 5.0.0 with Streams (2018), 6.0 adding multithreaded network I/O and RESP3 (2020), and 7.0 (2022), which focused on performance and memory optimizations. Note that 6.0's multithreading applies only to reading and writing socket data; command execution remains single-threaded.

According to db‑engines.com, Redis remains the most popular key‑value storage system, consistently ranking at the top of popularity charts.

The redis-benchmark tool can simulate thousands of concurrent clients; Redis sustains over 50,000 queries per second while holding more than 60,000 open connections, a connection load under which a typical MySQL server would become unresponsive.

Key reasons for Redis's speed include pure in‑memory operations that avoid costly disk I/O, and a set of highly efficient data structures:

Simple Dynamic Strings (SDS) store the string length in a header, making length retrieval O(1); they track allocated space to prevent buffer overflows, and use space preallocation and lazy freeing to reduce reallocations.

The embstr and raw encodings choose the storage format by string length: strings of 44 bytes or less use embstr, which places the object header and the string in one contiguous allocation; longer strings use raw, with a separately allocated buffer.

Dictionary (hash table) offers O(1) key‑value lookups.

Ziplist stores small lists, hashes, and sorted sets compactly in a single contiguous buffer, trading pointer overhead for sequential scans.

Skiplist enables fast range queries for sorted sets.

Relevant code definitions:

#include <stdint.h>          // uint64_t, int64_t

typedef struct dictEntry {
    void *key;
    union {
        void *val;
        uint64_t u64;
        int64_t s64;
        double d;            // used for ZSET scores
    } v;
    struct dictEntry *next;  // chains entries in the same bucket
} dictEntry;

typedef struct dictht {
    dictEntry **table;       // pointer to the array of buckets
    unsigned long size;      // length of the array (always a power of two)
    unsigned long sizemask;  // size - 1, masks a hash into a bucket index
    unsigned long used;      // number of elements stored
} dictht;

typedef struct dict {
    dictType *type;          // interface for polymorphism
    void *privdata;          // extra callback data
    dictht ht[2];            // two hash tables for incremental rehashing
    long rehashidx;          // current rehash position (-1 when not rehashing)
} dict;

Redis employs a single event-loop thread that uses epoll (Linux) or kqueue (BSD/macOS) for I/O multiplexing, allowing one thread to efficiently handle many connections. Background threads are used only for slow tasks such as persistence and lazy freeing, keeping the command-processing path free of locks.

Typical use cases include high‑throughput caching, session storage, real‑time analytics, and any scenario requiring low‑latency data access.

In summary, Redis achieves its high performance through pure memory operations, an event‑driven single‑threaded model, carefully engineered data structures, and smart encoding, making it a top choice for demanding applications.
