13 Proven Techniques to Supercharge Redis Performance
Learn how to dramatically boost Redis speed by shortening key/value sizes, enabling lazy free, setting expirations, disabling costly commands, using slowlog, pipelines, avoiding mass expirations, optimizing clients, limiting memory, running on physical servers, tweaking persistence, disabling THP, and adopting distributed architectures.
Redis runs on a single‑threaded model; although it uses non‑blocking I/O and most commands are O(1), its performance is highly sensitive to configuration and usage patterns. This article presents several techniques to make Redis run more efficiently.
Shorten the storage length of key‑value pairs.
Use the lazy‑free (delayed deletion) feature.
Set expiration times for keys.
Disable long‑running query commands.
Optimize slow commands with slowlog.
Batch operations with Pipeline.
Avoid massive simultaneous expirations.
Client‑side optimizations.
Limit Redis memory size.
Deploy Redis on physical machines instead of virtual machines.
Review data persistence strategy.
Disable Transparent Huge Pages (THP).
Adopt a distributed architecture to increase read/write speed.
1. Shorten key/value storage length
Key/value length is inversely related to performance: benchmarks show write throughput dropping as values grow. Redis also switches to less compact internal string encodings as data grows (int for integers, embstr for short strings, raw for longer ones), which makes large values more expensive to handle.
When the stored data is large, it also increases persistence time, network transfer volume, and memory usage, which can trigger more frequent eviction.
Therefore, keep the stored length as short as possible while preserving semantics, and consider serializing and compressing data (e.g., using Protostuff or Kryo for serialization and Snappy for compression in Java).
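As a minimal JDK-only sketch of the compression idea, java.util.zip.Deflater can shrink a large value before it is written to Redis (the article's suggestion of Snappy would be the faster choice on hot paths; the payload here is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.Deflater;

public class CompressValue {
    // Compress a value before SET so Redis stores fewer bytes.
    public static byte[] compress(String value) {
        byte[] input = value.getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();
        // Buffer sized for the worst case of barely-compressible input plus header.
        byte[] buf = new byte[input.length + 64];
        int len = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, len);
    }

    public static void main(String[] args) {
        String value = "payload,".repeat(1000); // a repetitive ~8 KB value
        byte[] packed = compress(value);
        System.out.println(value.length() + " chars -> " + packed.length + " bytes");
    }
}
```

The compressed bytes would then be stored with a binary-safe SET and inflated on read.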
2. Use lazy free feature
Lazy free, introduced in Redis 4.0, performs asynchronous deletion in a background I/O thread, reducing blocking of the main thread when deleting big keys.
Four lazy‑free settings exist (disabled by default):
<code>lazyfree-lazy-eviction no<br/>lazyfree-lazy-expire no<br/>lazyfree-lazy-server-del no<br/>slave-lazy-flush no</code>
Enable at least lazyfree-lazy-eviction, lazyfree-lazy-expire, and lazyfree-lazy-server-del to keep deletions of big keys from blocking the main thread.
3. Set expiration time for keys
Configure appropriate TTLs based on business needs so Redis can automatically remove expired keys, freeing memory and reducing the chance of triggering eviction policies.
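For example, with redis-cli (the key names and durations here are illustrative):

```
SET session:42 "data" EX 3600   # write a key that expires in one hour
EXPIRE cache:home 600           # add a 10-minute TTL to an existing key
TTL cache:home                  # inspect the remaining time in seconds
```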
4. Disable long‑running query commands
Most Redis commands range from O(1) to O(N) complexity. Avoid O(N) commands on large data sets because they can block the single main thread. Use SCAN instead of KEYS, limit the size of Hash/Set/Sorted Set structures, and perform heavy set aggregations on the client side.
For large deletions, use the asynchronous UNLINK command instead of DEL.
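A sketch of the safer alternatives with redis-cli (the pattern and key name are illustrative):

```
SCAN 0 MATCH user:* COUNT 100   # iterate keys incrementally instead of KEYS user:*
UNLINK big:hash                 # reclaim memory in a background thread instead of DEL
```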
5. Optimize slow commands with slowlog
Use the slowlog feature to identify and tune time-consuming commands. Important configuration items:
slowlog-log-slower-than: threshold (in microseconds) above which a command is logged as slow.
slowlog-max-len: maximum number of entries kept in the slow log.
Retrieve entries with slowlog get n and optimize the corresponding business logic.
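For example, via redis-cli (the thresholds are illustrative):

```
CONFIG SET slowlog-log-slower-than 10000   # log commands slower than 10 ms
CONFIG SET slowlog-max-len 128             # keep the last 128 slow entries
SLOWLOG GET 10                             # inspect the 10 most recent slow commands
SLOWLOG RESET                              # clear the log after tuning
```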
6. Use Pipeline for batch operations
Pipeline allows the client to send multiple commands without waiting for individual replies, greatly improving throughput.
Java example using Jedis:
<code>import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        long beginTime = System.currentTimeMillis();
        // Queue 100 SET/DEL pairs locally without waiting for individual replies
        Pipeline pipe = jedis.pipelined();
        for (int i = 0; i < 100; i++) {
            pipe.set("key" + i, "val" + i);
            pipe.del("key" + i);
        }
        // Flush all queued commands and read the replies in one batch
        pipe.sync();
        long endTime = System.currentTimeMillis();
        System.out.println("Execution time: " + (endTime - beginTime) + " ms");
    }
}
</code>
Result: ~297 ms with Pipeline vs. ~17,276 ms without, about 58× faster.
7. Avoid massive data expiration
Redis actively scans for expired keys 10 times per second by default (configurable via the hz setting). If a large number of keys expire at the same moment, the expiration cycle can occupy the main thread long enough to cause noticeable latency.
Mitigate this by adding a random offset to each key's TTL so expirations are spread out.
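A minimal sketch of the jitter idea, assuming a base TTL of one hour and up to five minutes of random offset (the numbers are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Spread expirations by adding a random offset to the base TTL,
    // so keys written together do not all expire in the same instant.
    public static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        long ttl = ttlWithJitter(3600, 300); // pass to e.g. jedis.setex(key, (int) ttl, value)
        System.out.println("TTL: " + ttl + " s");
    }
}
```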
8. Client‑side optimizations
Besides using Pipeline, employ a connection pool to avoid the overhead of repeatedly creating and destroying connections.
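In Jedis the pool is JedisPool; the principle can be sketched with a generic pool that hands out pre-created resources instead of building one per request (this simplified pool is illustrative, not the Jedis API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    // Create the expensive resources once up front (for Redis: the TCP connections).
    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());
        }
    }

    // Borrowing blocks until a resource is free, bounding server-side connections.
    public T borrow() throws InterruptedException {
        return idle.take();
    }

    // Return the resource for reuse instead of closing it.
    public void release(T resource) {
        idle.offer(resource);
    }
}
```

In practice, prefer JedisPool, which additionally validates connections and replaces broken ones.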
9. Limit Redis memory size
On 64-bit systems, maxmemory is often left unset, allowing Redis to consume all available RAM and eventually push the system into swap, which causes severe latency. Set a fixed memory limit so an eviction policy kicks in before the OS starts swapping.
Redis 4.0+ provides eight eviction policies:
noeviction : never evict, writes fail when out of memory.
allkeys-lru : evict least recently used keys among all keys.
allkeys-random : evict random keys.
volatile-lru : evict LRU among keys with an expiration.
volatile-random : evict random keys with an expiration.
volatile-ttl : evict keys with the nearest expiration.
volatile-lfu : evict least frequently used keys with an expiration.
allkeys-lfu : evict least frequently used keys among all keys.
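A typical redis.conf fragment (the 4gb limit is illustrative; size it to your host and workload):

```
maxmemory 4gb
maxmemory-policy allkeys-lru
```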
10. Deploy on physical machines
Running Redis on a VM shares resources with other VMs, leading to higher latency and memory pressure. Use a physical server when performance is critical. You can check intrinsic latency with:
<code>./redis-cli --intrinsic-latency 100</code>
11. Review data persistence strategy
Since version 4.0, Redis has offered three persistence options:
RDB (snapshot).
AOF (append‑only file).
Hybrid persistence (RDB snapshot plus AOF for subsequent writes).
Hybrid persistence combines fast restart with reduced data loss. Check if it is enabled:
<code>config get aof-use-rdb-preamble</code>
Enable it at runtime:
<code>config set aof-use-rdb-preamble yes</code>
or edit redis.conf to set aof-use-rdb-preamble yes and restart.
12. Disable Transparent Huge Pages (THP)
THP can inflate memory usage during fork-based persistence (RDB snapshots and AOF rewrites) and cause latency spikes. Disable it with:
<code>echo never > /sys/kernel/mm/transparent_hugepage/enabled</code>
Persist the setting across reboots by adding the same command to /etc/rc.local.
13. Use distributed architecture to increase read/write speed
Redis provides three main distributed solutions:
Master‑slave replication.
Sentinel for automatic failover.
Redis Cluster for sharding data across multiple nodes.
Cluster distributes keys into 16,384 hash slots, allowing load to be spread across many servers, dramatically improving scalability and fault tolerance.
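Each key maps to slot CRC16(key) mod 16384. A JDK-only sketch of the CRC16 variant Redis Cluster uses (CCITT/XMODEM: polynomial 0x1021, initial value 0); for simplicity it ignores hash tags ({...}), which the real implementation honors:

```java
import java.nio.charset.StandardCharsets;

public class HashSlot {
    // CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key hashing.
    public static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? (crc << 1) ^ 0x1021 : crc << 1;
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Slot for a key, ignoring hash-tag handling for brevity.
    public static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println("slot(user:1001) = " + slot("user:1001"));
    }
}
```

The Redis Cluster specification uses the string "123456789" (CRC16 = 0x31C3) as the reference check value for this variant.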
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.