Redis Memory Management and Optimization Practices
The article explains Redis’s in‑memory architecture, detailing memory components, object encodings, buffer limits, fragmentation, and forked‑process overhead, and offers practical optimization tips—such as using compact encodings, controlling client buffers, disabling THP, and enabling active defragmentation—illustrated by real‑world case studies.
Redis is an in‑memory key‑value database where memory plays a central role. This article analyzes Redis memory structures, introduces optimization techniques, and presents real‑world cases to help quickly locate and resolve memory‑related issues.
1. Redis Memory Management
The memory model includes several components:
used_memory : total memory allocated by Redis in bytes (INFO also reports a human‑readable used_memory_human); includes dictionaries, metadata, objects, caches, and Lua scripts.
self memory : internal dictionaries and metadata, usually small.
object memory : memory used by all key‑value objects (String, List, Hash, Set, Zset, Stream).
cache : client output buffers (normal, replication, pub/sub) and AOF buffer.
Lua memory : memory for loaded Lua scripts.
used_memory_rss : resident set size reported by the OS.
memory fragmentation : caused by jemalloc’s fixed‑size allocation strategy.
runtime memory and child process memory (forked processes for RDB/AOF rewrite).
Memory fragmentation ratio is calculated as used_memory_rss / used_memory . A value close to 1 indicates low fragmentation; a value well above 1 indicates fragmentation, while a value below 1 suggests part of Redis memory has been swapped to disk.
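As a minimal sketch, the ratio can be computed directly from the two counters reported by INFO memory (the function name here is illustrative):

```python
def fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    """mem_fragmentation_ratio as Redis reports it: resident set size
    seen by the OS divided by the bytes the allocator handed to Redis."""
    return used_memory_rss / used_memory

# A ratio well above 1 means fragmentation; below 1 suggests swapping.
print(round(fragmentation_ratio(1_500_000_000, 1_000_000_000), 2))  # 1.5
```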
2. Object Memory and Encodings
Redis chooses different encodings for objects to balance space and speed:
String : int (small integers), embstr (contiguous allocation, ≤44 bytes), raw (dynamically allocated, >44 bytes, up to 512 MB).
List : ziplist (small lists), linkedlist (fallback), quicklist (default since 3.2, a linked list of ziplists).
Set : intset (all integers, ≤512 entries), hashtable (fallback).
Hash : ziplist (small hashes), hashtable (fallback).
Zset : ziplist (small sorted sets), skiplist (default when ziplist limits are exceeded).
Choosing the proper encoding reduces memory consumption and improves performance.
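The encoding decision can be sketched for the hash type as follows. The thresholds mirror the hash-max-ziplist-entries and hash-max-ziplist-value settings (defaults 128 and 64); the simplified logic is illustrative, not Redis's actual C code, and newer Redis versions (7.x) use listpack in place of ziplist:

```python
HASH_MAX_ZIPLIST_ENTRIES = 128  # default hash-max-ziplist-entries
HASH_MAX_ZIPLIST_VALUE = 64     # default hash-max-ziplist-value (bytes)

def hash_encoding(fields: dict) -> str:
    """Return the encoding Redis would pick for a hash: compact ziplist
    while the hash stays small, hashtable once any limit is exceeded."""
    if len(fields) > HASH_MAX_ZIPLIST_ENTRIES:
        return "hashtable"
    if any(len(str(k)) > HASH_MAX_ZIPLIST_VALUE
           or len(str(v)) > HASH_MAX_ZIPLIST_VALUE
           for k, v in fields.items()):
        return "hashtable"
    return "ziplist"

print(hash_encoding({"name": "redis", "port": 6379}))  # ziplist
print(hash_encoding({"blob": "x" * 100}))              # hashtable
```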
3. Buffer Memory
Client output buffers (normal, replication, pub/sub) can grow dramatically. They are controlled by the client-output-buffer-limit configuration:
client-output-buffer-limit normal 4096mb 2048mb 120
Other relevant parameters include repl-backlog-size for the replication backlog.
The AOF buffer is not directly limited; its size depends on the appendfsync policy (always, everysec, no) and how quickly fsync keeps up with the write load.
4. Memory Fragmentation
Jemalloc allocates memory in fixed size classes (8 B, 16 B, …, 4 KB, 8 KB). Allocation to the nearest larger class creates internal fragmentation. Fragmentation is not counted in used_memory but appears in mem_fragmentation_ratio .
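The rounding-up behavior can be sketched with a simplified subset of jemalloc's size classes (the real allocator has many more classes and spacing rules):

```python
import bisect

# Simplified subset of jemalloc's small size classes, in bytes.
SIZE_CLASSES = [8, 16, 32, 48, 64, 80, 96, 128, 192, 256,
                512, 1024, 2048, 4096, 8192]

def allocated_size(request: int) -> int:
    """Round a request up to the nearest size class (requests beyond
    the last class are out of scope for this sketch)."""
    i = bisect.bisect_left(SIZE_CLASSES, request)
    return SIZE_CLASSES[i]

def internal_fragmentation(request: int) -> int:
    """Bytes wasted inside the chosen class for a single allocation."""
    return allocated_size(request) - request

print(allocated_size(24))          # 32
print(internal_fragmentation(24))  # 8
```

A 24-byte request lands in the 32-byte class, so 8 bytes per allocation are lost to internal fragmentation; across millions of objects this gap shows up in mem_fragmentation_ratio.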
Fragmentation can be reduced manually with:
memory purge
or automatically via active defragmentation parameters:
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
active-defrag-cycle-min 25
active-defrag-cycle-max 75
5. Child Process Memory
Forked child processes (RDB/AOF rewrite) use Linux Copy‑On‑Write (COW). Only pages modified by the parent are duplicated. Enabling Transparent Huge Pages (THP) can increase the COW page size from 4 KB to 2 MB, potentially raising memory consumption during heavy writes.
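The impact of the page size on COW cost can be illustrated with worst-case arithmetic (the write count below is a hypothetical workload, and the model assumes every write dirties a distinct page):

```python
def cow_copy_bytes(dirty_pages: int, page_size: int) -> int:
    """Worst-case memory duplicated by copy-on-write: one full page
    is copied for each page the parent modifies during the fork."""
    return dirty_pages * page_size

KB, MB = 1024, 1024 * 1024
writes = 10_000  # hypothetical writes, each touching a distinct page

print(cow_copy_bytes(writes, 4 * KB) // MB)  # ~39 MB with 4 KB pages
print(cow_copy_bytes(writes, 2 * MB) // MB)  # 20000 MB with THP 2 MB pages
```

The same write pattern that costs tens of megabytes with 4 KB pages can cost tens of gigabytes with 2 MB huge pages, which is why disabling THP is recommended on Redis nodes.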
6. Optimization Practices
Keep keys ≤44 bytes to use embstr encoding.
Combine small strings into hash objects using ziplist encoding.
Avoid large numbers of elements in lists, sets, hashes, or zsets that force non‑compact encodings.
Do not modify ziplist size limits unless necessary.
Limit client output buffers (5‑15 % of total memory, never exceed 20 %).
Avoid using MONITOR in production and restrict pipeline size.
Enable active defragmentation for versions ≥4.0.
Disable THP on Redis nodes.
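The tip above about combining small strings into hash objects can be sketched as a bucketing scheme; the bucket count and CRC32-modulo mapping are illustrative choices, not a Redis requirement:

```python
import zlib

BUCKETS = 1024  # keep each hash well under hash-max-ziplist-entries

def bucket_for(key: str) -> str:
    """Map a logical string key to one of BUCKETS hash keys so that
    many small strings share a few ziplist-encoded hashes."""
    return f"bucket:{zlib.crc32(key.encode()) % BUCKETS}"

# Instead of   SET user:42:name alice
# one would    HSET <bucket_for("user:42:name")> user:42:name alice
print(bucket_for("user:42:name"))
```

The mapping must be deterministic so reads and writes for the same logical key always hit the same hash.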
7. Case Study 1 – Client Buffer Explosion
A Redis cluster showed a rapid memory‑usage increase to 100 %. Analysis revealed that a few instances had normal used_memory growth, but the client output buffer ( omem ) grew in parallel. The offending client was identified with:
client list | grep -v omem=0
Key fields such as obl (fixed output buffer length), oll (number of objects queued in the dynamic output list), and omem (total output buffer bytes) were examined. The client was issuing many GET commands through large pipelines, producing oversized output buffers.
The resolution involved killing the client connection and setting a stricter client‑output‑buffer limit (2‑4 GB). The configuration command used was:
config set client-output-buffer-limit normal 4096mb 2048mb 120
Additionally, the development team was advised to reduce pipeline size.
8. Case Study 2 – Slave Node Memory Spike
In a 190‑node cluster, three slave nodes showed memory usage >95 % while masters remained stable. Investigation found:
Slave memory grew while master memory stayed constant.
Slave nodes received many APPEND commands, which trigger SDS reallocation with space pre‑allocation (the buffer is doubled when the resulting string is under 1 MB, and grown by an extra 1 MB otherwise).
Memory usage differences were confirmed with memory usage key on master and slave.
Solution: align master and slave maxmemory settings, restart affected slaves to release memory, and later increase cluster memory capacity while keeping usage below 70 %.
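The APPEND behavior in this case can be modeled with a simplified version of SDS space pre-allocation (grow-by-doubling below 1 MB, grow-by-1 MB above, ignoring any already-available space), which explains why appended values can occupy far more memory on the replica than the logical string length suggests:

```python
SDS_MAX_PREALLOC = 1024 * 1024  # 1 MB, as in Redis's sds.c

def sds_alloc_after_append(current_len: int, append_len: int) -> int:
    """Simplified model of sdsMakeRoomFor: the new buffer is doubled
    when the resulting string is under 1 MB, else grown by 1 MB."""
    new_len = current_len + append_len
    if new_len < SDS_MAX_PREALLOC:
        return new_len * 2
    return new_len + SDS_MAX_PREALLOC

print(sds_alloc_after_append(100, 100))       # 400: twice the 200-byte string
print(sds_alloc_after_append(2_000_000, 10))  # 3048586: new length + 1 MB
```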
9. Overall Summary
Understanding Redis’s internal memory allocation mechanisms enables rapid diagnosis of memory anomalies and performance bottlenecks. Proper key design, encoding selection, buffer limits, and awareness of version‑specific features (e.g., active defragmentation, quicklist) are essential for maintaining a stable, high‑performance Redis deployment.
Reference: "Redis Design and Implementation" (book).
vivo Internet Technology