Analyzing and Troubleshooting Redis Latency Issues
This article explains common causes of Redis latency spikes, such as high‑complexity commands, large keys, concentrated expirations, memory limits, fork overhead, CPU binding, AOF settings, swap usage, and network saturation, and provides step‑by‑step troubleshooting commands and best‑practice recommendations.
Redis is an in‑memory database that can handle up to 100,000 QPS per instance, but latency may increase dramatically if the underlying usage patterns or operational settings are sub‑optimal. This guide walks through typical latency‑inducing scenarios and shows how to locate and resolve them.
High-complexity commands: Commands with O(N) or worse time complexity (e.g., SORT, SUNION, ZUNIONSTORE) can become bottlenecks when operating on large data sets. First enable the slow log and set a threshold:
CONFIG SET slowlog-log-slower-than 5000
CONFIG SET slowlog-max-len 1000
Then query recent entries:
SLOWLOG GET 5
If the slow log shows frequent O(N) commands, replace them with more efficient patterns or reduce the amount of data processed per call.
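To act on the slow log programmatically, entries fetched with a client can be filtered for the O(N)-style commands named above. A minimal sketch, assuming entries shaped like redis-py's `slowlog_get()` output (dicts with a `command` bytes field and a `duration` in microseconds); the command set here is illustrative, not exhaustive:

```python
# Commands with O(N) or worse complexity that commonly show up in slow logs.
ON_COMMANDS = {b"SORT", b"SUNION", b"ZUNIONSTORE", b"SINTERSTORE", b"LRANGE"}

def flag_on_commands(entries, threshold_us=5000):
    """Return (command, duration_us) pairs for slow O(N)-style commands."""
    flagged = []
    for entry in entries:
        cmd = entry["command"].split()[0].upper()
        if cmd in ON_COMMANDS and entry["duration"] >= threshold_us:
            flagged.append((cmd.decode(), entry["duration"]))
    return flagged
```

With a live connection this would be called as `flag_on_commands(r.slowlog_get(128))`; feeding the result into an alerting pipeline turns the slow log into a continuous check rather than a one-off diagnostic.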
Large keys: Storing excessively large values increases memory allocation and deallocation time. Detect big keys with the built-in scanner:
redis-cli -h $host -p $port --bigkeys -i 0.01
The command runs a SCAN over all keys and measures their size using STRLEN, LLEN, HLEN, SCARD, and ZCARD. Avoid writing huge values and consider splitting data across multiple keys.
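The same check can be scripted when you want a custom threshold instead of --bigkeys' per-type winners. A minimal sketch, assuming you have already collected a `{key: element_count}` mapping via SCAN plus the per-type length commands (the threshold and key names below are illustrative):

```python
def find_big_keys(key_sizes, threshold=10_000):
    """Return (key, size) pairs exceeding the threshold, largest first."""
    big = [(k, n) for k, n in key_sizes.items() if n > threshold]
    return sorted(big, key=lambda kv: kv[1], reverse=True)
```

Sorting largest-first surfaces the worst offenders immediately, which is usually where splitting the data across multiple keys pays off most.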
Concentrated expirations : A burst of keys expiring at the same moment can trigger the active expiration cycle, which runs in the main thread and may pause client requests for up to 25 ms. Search the code base for EXPIREAT or PEXPIREAT and randomise the expiration time, e.g.:
redis.expireat(key, expire_time + random(300))
Monitor expired_keys via INFO and alert on sudden spikes.
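The jittered-expiration idea above can be sketched concretely. This is a minimal helper, not the article's exact code; the function name and the 300-second jitter window are illustrative:

```python
import random
import time

def expire_with_jitter(base_ttl, jitter=300):
    """Absolute Unix timestamp: now + base_ttl + up to `jitter` extra seconds.

    Spreading expirations over a window prevents a burst of keys from
    hitting the active-expire cycle at the same moment.
    """
    return int(time.time()) + base_ttl + random.randint(0, jitter)

# Illustrative usage with a redis-py client `r`:
# r.expireat("session:42", expire_with_jitter(3600))
```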
Memory limit and eviction: When maxmemory is reached, Redis evicts keys according to the configured policy (e.g., allkeys-lru, volatile-lru, allkeys-random). Eviction adds latency, especially if large keys are removed. Choose a policy that matches the workload, or shard the dataset across multiple instances.
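A typical redis.conf fragment for a cache-style workload might look like the following (the 4gb limit is an illustrative value, not a recommendation; size it to your host):

```
maxmemory 4gb
maxmemory-policy allkeys-lru
```

For datasets where only some keys are disposable, volatile-lru evicts only keys that carry a TTL, leaving the rest untouched.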
Fork overhead : RDB snapshots and AOF rewrites fork a child process. For large datasets the copy‑on‑write overhead can block the parent for seconds. Check the duration with:
redis-cli -h $host -p $port INFO stats | grep latest_fork_usec
Schedule snapshots on replica nodes during off-peak hours and disable AOF or its rewrite if the workload tolerates occasional data loss.
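For automated monitoring of fork cost, the field can be parsed out of the raw INFO text. A minimal sketch, assuming `info_text` holds the output of `redis-cli INFO stats` (the function name is illustrative):

```python
def latest_fork_usec(info_text):
    """Return the duration of the last fork in microseconds, or None if absent."""
    for line in info_text.splitlines():
        if line.startswith("latest_fork_usec:"):
            return int(line.split(":", 1)[1].strip())
    return None
```

Alerting when this value climbs into the hundreds of milliseconds gives early warning that snapshots are starting to block the parent process.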
CPU binding : Binding the Redis process to a specific CPU core can cause the forked child to compete for the same core, worsening latency. Avoid CPU pinning when persistence is enabled.
AOF configuration: Three fsync policies exist: always, everysec, and no. appendfsync always provides the highest durability but incurs the greatest latency because each write is flushed to disk in the main thread. The recommended setting for most workloads is appendfsync everysec, which loses at most one second of data while preserving performance.
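The corresponding redis.conf fragment for the recommended setup might look like this (the no-appendfsync-on-rewrite line is an additional tuning option, not something the article prescribes; it trades a window of potential data loss during rewrites for avoiding fsync stalls):

```
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite yes
```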
Swap usage : If the host runs out of RAM, Redis pages to swap, causing millisecond‑to‑second response times. Monitor memory and swap metrics, and free memory or add RAM before swap is used. If swap has already been engaged, restart the instance after clearing swap, preferably after a failover to a replica.
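On Linux, the amount of a process's memory currently in swap can be read from /proc/<pid>/smaps, where each mapping reports a `Swap:` field in kB. A minimal sketch that sums those fields, assuming the smaps content has already been read into a string (pid discovery is left out, and the function name is illustrative):

```python
def swapped_kb(smaps_text):
    """Total swapped-out memory in kB from /proc/<pid>/smaps content."""
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Swap:"):
            total += int(line.split()[1])
    return total
```

A non-trivial result for the Redis process is the signal to fail over to a replica and clear swap, as described above.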
Network saturation : Persistent high network utilization can introduce packet loss and additional latency. Track interface traffic, set alerts on bandwidth thresholds, and consider scaling out or moving heavy traffic to separate instances.
Summary: Redis latency can stem from command complexity, large keys, expiration bursts, memory pressure, fork-related blocking, CPU binding, AOF settings, swap, or network overload. Understanding these mechanisms and applying the corresponding diagnostics and mitigations ensures stable, high-performance Redis deployments.
Laravel Tech Community
Specializing in Laravel development, we continuously publish fresh content and grow alongside the elegant, stable Laravel framework.