
Improving Multi-Key mget Performance in Redis Cluster by Refactoring Client Implementations

This article analyzes why Lettuce's mget performs poorly on Redis Cluster due to the cluster's slot-based architecture, presents client-side fixes using hashtags and a pipeline-based JedisCluster refactor, and shows benchmark results in which the refactored client roughly halves latency compared with Lettuce.

Zhuanzhuan Tech

Background

Redis is a widely used NoSQL database. At Zhuanzhuan, the team migrated from Codis to Redis Cluster and chose Lettuce as the client. During the migration, Lettuce's multi-key commands such as mget and mset exhibited noticeably poor performance.

Analysis of the Issue

Phenomenon

When writing data to both Codis and Redis Cluster, the same mget request to Redis Cluster took longer than the equivalent request to Codis, even though Codis adds an extra proxy hop.

Root Causes

Redis Cluster Architecture

Redis Cluster distributes keys across slots; a key's slot is computed as CRC16(key) % 16384. Multi-key operations are limited to keys that reside in the same slot. When an mget spans multiple slots, the client must split the request, query each node, and merge the results, incurring extra network round-trips.
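The slot computation is small enough to show directly. Below is a self-contained sketch of the CRC16 variant Redis uses (CCITT/XMODEM: polynomial 0x1021, initial value 0x0000) and the slot formula; it ignores hashtags, and real clients use a built-in helper such as JedisClusterCRC16 instead:

```java
import java.nio.charset.StandardCharsets;

public class KeySlot {
    // Bit-by-bit CRC16 (XMODEM variant) as used by Redis Cluster
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // slot = CRC16(key) % 16384 (hashtag extraction omitted in this sketch)
    static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slot("foo")); // a value in [0, 16383]
    }
}
```

Two keys land on the same node exactly when this function returns slots owned by that node, which is why a multi-key mget must first be partitioned by slot.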

Lettuce’s mget Implementation

Lettuce splits keys by slot, executes a separate mget per node, and then merges and sorts the results. The relevant code is:

public RedisFuture<List<KeyValue<K, V>>> mget(Iterable<K> keys) {
    // split keys by slot
    Map<Integer, List<K>> partitioned = SlotHash.partition(codec, keys);
    if (partitioned.size() < 2) {
        return super.mget(keys);
    }
    Map<K, Integer> slots = SlotHash.getSlots(partitioned);
    Map<Integer, RedisFuture<List<KeyValue<K, V>>>> executions = new HashMap<>();
    for (Map.Entry<Integer, List<K>> entry : partitioned.entrySet()) {
        RedisFuture<List<KeyValue<K, V>>> mget = super.mget(entry.getValue());
        executions.put(entry.getKey(), mget);
    }
    return new PipelinedRedisFuture<>(executions, objectPipelinedRedisFuture -> {
        List<KeyValue<K, V>> result = new ArrayList<>();
        for (K opKey : keys) {
            int slot = slots.get(opKey);
            int position = partitioned.get(slot).indexOf(opKey);
            RedisFuture<List<KeyValue<K, V>>> listRedisFuture = executions.get(slot);
            result.add(MultiNodeExecution.execute(() -> listRedisFuture.get().get(position)));
        }
        return result;
    });
}

The three steps are: split keys by slot, fetch each slot's keys with a node-local mget, then reorder the results to match the original key order. Because Lettuce sends these commands sequentially over a single Netty connection, the more slots involved, the slower the operation.
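The split-and-merge flow above can be sketched as a toy model in plain Java. Everything here is illustrative: `slot` is a stand-in hash (not Redis's CRC16), and `fetchFromNode` simulates a node-local MGET:

```java
import java.util.*;

public class SplitMerge {
    // Toy slot function, for illustration only (real clients use CRC16)
    static int slot(String key) {
        return Math.abs(key.hashCode()) % 16384;
    }

    // Stand-in for a node-local MGET: returns a value per key in one slot
    static Map<String, String> fetchFromNode(List<String> keys) {
        Map<String, String> m = new HashMap<>();
        for (String k : keys) m.put(k, "val:" + k);
        return m;
    }

    static List<String> mget(List<String> keys) {
        // 1. Split keys by slot
        Map<Integer, List<String>> partitioned = new LinkedHashMap<>();
        for (String k : keys)
            partitioned.computeIfAbsent(slot(k), s -> new ArrayList<>()).add(k);
        // 2. One MGET per slot (Lettuce issues these sequentially)
        Map<String, String> fetched = new HashMap<>();
        for (List<String> group : partitioned.values())
            fetched.putAll(fetchFromNode(group));
        // 3. Reorder results to the original key order
        List<String> result = new ArrayList<>();
        for (String k : keys) result.add(fetched.get(k));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(mget(List.of("a", "b", "c")));
    }
}
```

The sequential step 2 is the bottleneck the rest of the article attacks: with N slots involved, the client pays for N round-trips one after another.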

Solution

Using Hashtag

Placing related keys into the same slot via a hashtag (e.g., {a}) forces them onto a single node, eliminating cross-slot overhead. However, this requires business logic to be aware of Redis Cluster sharding, which is undesirable.
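The hashtag rule is simple: if a key contains a `{` followed later by a `}` with at least one character between them, only that substring is hashed. A minimal sketch of the extraction step (slot computation itself omitted):

```java
public class HashTag {
    // Returns the portion of the key that Redis Cluster actually hashes:
    // the content of the first non-empty {...} pair, or the whole key
    static String hashedPart(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                return key.substring(open + 1, close);
            }
        }
        return key;
    }

    public static void main(String[] args) {
        System.out.println(hashedPart("{user:1000}.following")); // user:1000
        System.out.println(hashedPart("{user:1000}.followers")); // user:1000
        System.out.println(hashedPart("foo{}bar")); // foo{}bar (empty tag ignored)
    }
}
```

Because both example keys hash only "user:1000", they are guaranteed to share a slot and can be fetched with a single node-local mget.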

Client Refactoring

Instead, the team refactored the client to use pipeline‑based batch GET commands per node, avoiding Lettuce’s multi‑key split. The steps are:

Group keys by the Redis node they belong to.

Issue a pipelined batch of GET commands to each node.

Collect, sort, and merge the results to preserve the original order.

JedisCluster Refactor

Because Lettuce does not expose per-node pipelining for this purpose, the team modified JedisCluster, which already supports pipelines. The refactored method is:

public List<String> mget(String... keys) {
    List<Pipeline> pipelineList = new ArrayList<>();
    List<Jedis> jedisList = new ArrayList<>();
    try {
        // 1. Group keys by the connection pool (node) that owns their slot
        Map<JedisPool, List<String>> pooling = new HashMap<>();
        for (String key : keys) {
            JedisPool pool = connectionHandler.getConnectionPoolFromSlot(JedisClusterCRC16.getSlot(key));
            pooling.computeIfAbsent(pool, k -> new ArrayList<>()).add(key);
        }
        // 2. Queue a pipelined GET per key on each node's connection
        Map<String, Response<String>> resultMap = new HashMap<>();
        for (Map.Entry<JedisPool, List<String>> entry : pooling.entrySet()) {
            Jedis jedis = entry.getKey().getResource();
            Pipeline pipelined = jedis.pipelined();
            for (String key : entry.getValue()) {
                resultMap.put(key, pipelined.get(key));
            }
            pipelineList.add(pipelined);
            jedisList.add(jedis);
        }
        // 3. Flush each pipeline and read all queued responses
        for (Pipeline pipeline : pipelineList) {
            pipeline.sync();
        }
        // 4. Reassemble values in the original key order
        List<String> list = new ArrayList<>(keys.length);
        for (String key : keys) {
            list.add(resultMap.get(key).get());
        }
        return list;
    } finally {
        pipelineList.forEach(Pipeline::close);
        jedisList.forEach(Jedis::close);
    }
}

Handling Exceptions

The refactor also adds logic to handle Redis Cluster redirection errors (MOVED and ASK) and per-command pipeline failures, retrying where possible and otherwise propagating a proper exception.
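A redirection error carries the slot and the address of the node that now owns it, in a format fixed by the Redis Cluster specification (e.g. "MOVED 3999 127.0.0.1:6381"). A minimal sketch of parsing one; the `Redirection` record is a hypothetical helper, not part of Jedis:

```java
public class MovedParser {
    // Holds the parts of a redirection error: type is "MOVED" or "ASK"
    record Redirection(String type, int slot, String host, int port) {}

    // Parses errors like "MOVED 3999 127.0.0.1:6381" or "ASK 3999 127.0.0.1:6381"
    static Redirection parse(String error) {
        String[] parts = error.split(" ");
        String[] hostPort = parts[2].split(":");
        return new Redirection(parts[0], Integer.parseInt(parts[1]),
                hostPort[0], Integer.parseInt(hostPort[1]));
    }

    public static void main(String[] args) {
        Redirection r = parse("MOVED 3999 127.0.0.1:6381");
        System.out.println(r.type() + " slot=" + r.slot()
                + " target=" + r.host() + ":" + r.port());
    }
}
```

On MOVED the client should refresh its slot cache and retry against the new node; on ASK it retries once against the target after sending the ASKING command, without updating the cache.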

Performance Evaluation

Test Scenarios

Three client configurations were benchmarked for mget with 100, 500, and 1000 keys:

Codis accessed via Jedis (baseline).

Refactored JedisCluster.

Lettuce synchronous client.

Results

Across all key counts, the refactored JedisCluster outperformed Lettuce, often achieving roughly half the latency, and was comparable to or slightly faster than Codis for average latency, though its tail latency (tp999) was sometimes higher.

Conclusion

Redis Cluster’s slot‑based design limits multi‑key commands, causing Lettuce’s mget to be slow when keys span slots. By refactoring the client to execute per‑node pipelined GET operations, performance improves significantly, demonstrating the importance of client‑side adaptation for distributed NoSQL systems.

Tags: performance, Redis, MGET, pipeline, Lettuce, JedisCluster, Redis Cluster
Written by

Zhuanzhuan Tech

A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.
