
How Do Big Keys Slow Down Redis and How to Fix Them?

This article explains why oversized Redis keys (BigKeys) cause data skew, network blockage, slow queries, and high CPU load, shows how to detect them with redis-cli and other tools, and provides practical strategies—including lazy deletion, key splitting, and preventive design—to mitigate their impact on production systems.

JD Cloud Developers

Background

In JD Daojia's shopping‑cart system, user carts are stored in Redis as a Hash in which each store ID maps to all items the user has added from that store. When a single store accumulates many items, or many stores are involved, the key grows large and degrades online performance.

Definition and Causes of BigKey

A BigKey is a key whose value size or element count exceeds certain thresholds. Typical definitions are:

String: value larger than 10 KB.

Non‑String structures (Hash, Set, ZSet, List): more than 10 000 elements or total size over 100 KB.

Cluster‑wide: total number of keys exceeds 100 million.
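The per-key thresholds above can be encoded in a small helper. This is a sketch of the article's example limits, which are operational conventions rather than rules Redis itself enforces:

```java
// Sketch of a BigKey check using the thresholds above.
// The limits are illustrative conventions, not Redis-enforced rules.
public class BigKeyCheck {
    static final long STRING_MAX_BYTES = 10 * 1024;       // 10 KB
    static final long COLLECTION_MAX_ELEMENTS = 10_000;
    static final long COLLECTION_MAX_BYTES = 100 * 1024;  // 100 KB

    /** type: "string", "hash", "set", "zset", or "list". */
    public static boolean isBigKey(String type, long bytes, long elements) {
        if ("string".equals(type)) {
            return bytes > STRING_MAX_BYTES;
        }
        return elements > COLLECTION_MAX_ELEMENTS || bytes > COLLECTION_MAX_BYTES;
    }
}
```

In practice the byte size would come from a command such as MEMORY USAGE and the element count from HLEN, SCARD, ZCARD, or LLEN.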

Common causes include:

Improper data structures (e.g., using a List for a set of unique elements).

Lack of capacity planning for dynamic growth.

Missing expiration, treating the cache as a permanent store.
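The first cause is easy to reproduce: a List accepts duplicates, so repeated writes of the same element grow the key without bound, while a Set stays at one element. A minimal illustration using plain Java collections as stand-ins for the Redis structures (the same distinction holds between RPUSH and SADD):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A List keeps every duplicate; a Set deduplicates. Choosing the wrong
// one for unique elements is a classic way a BigKey is born.
public class DuplicateGrowth {
    public static int listSizeAfter(String element, int inserts) {
        List<String> list = new ArrayList<>();
        for (int i = 0; i < inserts; i++) list.add(element);  // grows without bound
        return list.size();
    }

    public static int setSizeAfter(String element, int inserts) {
        Set<String> set = new HashSet<>();
        for (int i = 0; i < inserts; i++) set.add(element);   // stays at one
        return set.size();
    }
}
```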

Hazards of BigKey

Data Skew

When a key becomes disproportionately large, its shard experiences higher CPU and bandwidth usage, affecting all keys on that shard.

Network Blockage

Redis uses a single‑threaded reactor model with I/O multiplexing. A large key prolongs the processing time of a single operation, causing the event loop to block and leading to network congestion.
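The bandwidth side is easy to estimate with back-of-the-envelope arithmetic. The numbers below are assumed for illustration, not taken from the article:

```java
// Back-of-the-envelope network cost of repeatedly reading a big value.
// Illustrative numbers: a 1 MB value read 1,000 times per second.
public class BandwidthEstimate {
    public static double gbitPerSecond(long valueBytes, long qps) {
        return valueBytes * qps * 8 / 1e9;  // bytes/s -> bits/s -> Gbit/s
    }
}
```

Under those assumptions the single key consumes about 8 Gbit/s, enough to saturate a typical NIC on its own before any other traffic is served.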

Slow Queries

Operations on big keys increase latency, lowering QPS and raising TP99, which can cascade into further slow queries.

CPU Pressure

Repeated access to a massive key can block the main thread, increase CPU load, and even affect fork‑based persistence because large objects require more memory copying.

Detecting BigKey

redis‑cli --bigkeys

Running redis-cli --bigkeys scans the entire keyspace and reports the biggest key found for each type. Example output:

<code>$ redis-cli --bigkeys
# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.
-------- Part 1 start -------
[00.00%] Biggest string found so far 'key-419' with 3 bytes
[05.14%] Biggest list   found so far 'mylist' with 100004 items
[35.77%] Biggest string found so far 'counter:__rand_int__' with 6 bytes
[73.91%] Biggest hash   found so far 'myobject' with 3 fields
-------- Part 1 end -------
... (additional summary omitted for brevity)</code>

Open‑source Tools

Tools like redis‑rdb‑tools analyze RDB files offline to locate big keys, offering detailed reports without impacting the live service.

Mitigation Strategies

Prevention

Set appropriate expiration times and stagger them.

Store data as JSON strings and prune unused fields.

Compress data when possible.

Enforce business limits (e.g., maximum items per cart).
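"Stagger them" means adding random jitter so a batch of keys written together does not expire, and get rebuilt, at the same moment. A minimal sketch; the base TTL and jitter window are assumed values:

```java
import java.util.concurrent.ThreadLocalRandom;

// Base TTL plus random jitter, so keys written in the same batch expire
// at different times instead of triggering a synchronized rebuild storm.
public class TtlJitter {
    public static long ttlSeconds(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }
}
```

For example, `ttlSeconds(3600, 300)` yields an expiration somewhere between one hour and one hour five minutes.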

Graceful Deletion

DEL

Before Redis 4.0, DEL always frees the value synchronously, which blocks the main thread on a big key. Redis 4.0 introduced a lazy-free mechanism, and since Redis 6.0 the lazyfree-lazy-user-del option can make DEL itself delete asynchronously.

<code>/* Synchronous delete: the value's memory is freed inline on the main
 * thread, which is what makes DEL on a big key slow. */
int dbDelete(redisDb *db, robj *key) {
    /* Removing the entry from the expires dict does not free the key
     * object, which is shared with the main dict. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires,key->ptr);
    if (dictDelete(db->dict,key->ptr) == DICT_OK) {
        if (server.cluster_enabled) slotToKeyDel(key);
        return 1;
    } else {
        return 0;
    }
}</code>

Lazy-Free (UNLINK)

Since Redis 4.0, UNLINK performs asynchronous deletion: the key is removed from the keyspace immediately, while the value's memory is reclaimed by a background thread. The underlying implementation calls delGenericCommand with a lazy flag.

<code>/* DEL: deletes lazily only when lazyfree-lazy-user-del is enabled. */
void delCommand(client *c) {
    delGenericCommand(c, server.lazyfree_lazy_user_del);
}
/* UNLINK: always passes lazy=1, freeing the value in the background. */
void unlinkCommand(client *c) {
    delGenericCommand(c, 1);
}</code>

SCAN‑Based Deletion

Iteratively scan keys and delete them in small batches to avoid blocking.

<code>// Sketch using the Jedis client: walk the keyspace with SCAN in small
// batches and remove matching keys with UNLINK so no single call blocks.
public void scanAndDelete(Jedis client, String pattern) {
    String cursor = ScanParams.SCAN_POINTER_START;            // "0"
    ScanParams params = new ScanParams().match(pattern).count(100);
    do {
        ScanResult<String> result = client.scan(cursor, params);
        for (String key : result.getResult()) {
            client.unlink(key);                               // non-blocking delete
        }
        cursor = result.getCursor();
    } while (!ScanParams.SCAN_POINTER_START.equals(cursor));  // "0" means done
}</code>

Divide and Conquer

Split large String values into multiple smaller keys and read them back with a single MGET or pipelined GETs.
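A minimal sketch of the splitting step; the `key:<index>` naming scheme is an assumption for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Split one oversized value into fixed-size chunks stored under
// "<key>:0", "<key>:1", ... so each piece stays under the BigKey limit.
public class StringSplitter {
    public static Map<String, String> split(String key, String value, int chunkSize) {
        Map<String, String> chunks = new LinkedHashMap<>();
        for (int i = 0, n = 0; i < value.length(); i += chunkSize, n++) {
            int end = Math.min(i + chunkSize, value.length());
            chunks.put(key + ":" + n, value.substring(i, end));
        }
        return chunks;
    }
}
```

Each entry of the returned map would be written with SET; the reader fetches all chunk keys with one MGET and concatenates the values in index order.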

For Hashes, shard fields across several keys based on a hash of the field name.
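For the Hash case, the target sub-key can be derived from the field name itself, so reads and writes stay O(1). The shard count and key format below are assumptions for illustration:

```java
// Route a Hash field to one of `shards` sub-keys based on the field's
// hash, so each sub-Hash stays below the BigKey element threshold.
public class HashSharding {
    public static String subKey(String baseKey, String field, int shards) {
        int shard = Math.floorMod(field.hashCode(), shards);  // always in [0, shards)
        return baseKey + ":" + shard;                         // e.g. "cart:1001:7"
    }
}
```

Because the routing is deterministic, HSET and HGET for a given field always land on the same sub-key; only full-cart reads need to touch all shards.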

Avoid storing massive collections in a single key; move them to external storage if necessary.

Summary

Big keys in Redis arise from unbounded growth, unsuitable data structures, and missing TTLs, leading to data skew, network blockage, slow queries, and CPU pressure. Detect them with redis-cli --bigkeys or offline RDB analysis, then mitigate by setting staggered expirations, using lazy-free deletion, scanning for incremental removal, and redesigning data models to split large keys.

Written by JD Cloud Developers

JD Cloud Developers (Developer of JD Technology) is a JD Technology Group platform offering technical sharing and communication for AI, cloud computing, IoT and related developers. It publishes JD product technical information, industry content, and tech event news. Embrace technology and partner with developers to envision the future.
