Understanding Low Redis Memory Fragmentation Ratio and Its Real Causes
This article explains why a low Redis memory fragmentation ratio is not necessarily caused by swap usage, demonstrates experiments with swap disabled and varying repl-backlog-size, and concludes that large replication buffers combined with small data sets can naturally produce ratios far below 1, which is normal.
Background Issue
We received a client question: “My Redis memory fragmentation ratio is low, around 0.2. Online sources say this can slow down Redis performance. What should I do?”
The official formula for Redis memory fragmentation ratio is:
mem_fragmentation_ratio = used_memory_rss / used_memory
In plain terms, this is the ratio of the physical memory the operating system has actually assigned to the Redis process (used_memory_rss) to the total memory that Redis's allocator believes it has allocated (used_memory).
used_memory_rss is the RES memory shown by the top command for the Redis process.
used_memory is the total memory allocated by Redis’s allocator (e.g., jemalloc), including internal structures, buffers, and data objects.
A ratio well above 1 indicates fragmentation: the OS holds more physical memory for the process than the allocator accounts for. A ratio below 1 means the opposite: part of the allocator's memory is not resident, either because it was swapped out or because physical pages have not been assigned yet. Many articles on the Internet attribute a low ratio to swap usage, but is that really the case?
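The formula can be checked by hand against the fields of Redis's INFO memory output. A minimal sketch (the byte counts below are hypothetical illustrative values, not taken from the experiment):

```python
# Parse an INFO-memory-style snippet and compute the fragmentation ratio.
# The values here are made up to reproduce the client's reported ratio of 0.2.
info_memory = """\
used_memory:1000000
used_memory_rss:200000
"""

stats = {}
for line in info_memory.splitlines():
    key, _, value = line.partition(":")
    stats[key] = int(value)

# mem_fragmentation_ratio = used_memory_rss / used_memory
ratio = stats["used_memory_rss"] / stats["used_memory"]
print(f"mem_fragmentation_ratio = {ratio:.2f}")  # prints 0.20
```

With these sample numbers the ratio comes out at 0.20, matching the value the client reported.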
Verification
In the client’s production environment:
Swap was disabled.
Data volume was about 60 MB.
repl-backlog-size (replication backlog buffer) was configured to 1 GB.
We set vm.swappiness = 1 to minimize any swapping (swap was already disabled on the host), changed repl-backlog-size to 512 MB, and started an empty Redis instance.
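The experiment's settings can be reproduced roughly as follows (a sketch; hostnames and values are illustrative):

```shell
# Minimize swapping at the OS level (the client's host had swap disabled outright)
sysctl -w vm.swappiness=1

# In redis.conf: shrink the replication backlog from 1 GB to 512 MB
#   repl-backlog-size 512mb

# Or apply it to a running instance without a restart:
redis-cli config set repl-backlog-size 536870912
```

Note that CONFIG SET takes the size in bytes (536870912 = 512 MB).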
Running MEMORY STATS showed that, with no keys, no replicas, and no clients, the memory used by data objects, the replication backlog, and client buffers was zero. At this point the allocator's total allocated memory was 863,944 bytes, the memory requested from the OS was 2,789,376 bytes, and the fragmentation ratio was 3.48.
After adding a replica, the fragmentation ratio instantly dropped to 0.01.
Once the replica connected, the configured replication backlog size matched the allocator's accounting (both 512 MB): total allocated memory (used_memory) jumped by the full backlog size, while the memory actually requested from the OS changed little, causing the ratio to drop sharply.
Why doesn't Redis request the full 512 MB from the OS at this point? Although the allocator accounts for the replication backlog as soon as a replica first establishes replication (or reconnects), the operating system assigns physical pages lazily, on demand as the buffer is written, rather than in a single 512 MB chunk.
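This lazy, on-demand behavior is not Redis-specific: on Linux, a large anonymous mapping consumes virtual address space immediately but becomes resident only as its pages are touched. A small Linux-only sketch (it reads /proc/self/statm, so it will not run on macOS or Windows):

```python
import mmap
import os

SIZE = 512 * 1024 * 1024  # 512 MB, the same order as the repl-backlog-size above

def resident_bytes():
    # Field 2 of /proc/self/statm is the resident set size in pages (Linux only).
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * os.sysconf("SC_PAGE_SIZE")

before = resident_bytes()
buf = mmap.mmap(-1, SIZE)   # anonymous mapping: address space reserved, no physical pages yet
reserved = resident_bytes() - before

buf[:1024 * 1024] = b"\x00" * (1024 * 1024)  # touch the first 1 MB
touched = resident_bytes() - before

print(f"resident growth after mmap:  {reserved} bytes")
print(f"resident growth after write: {touched} bytes")
# Typically: near zero after the mmap, roughly 1 MB after the write.
```

The same mechanism explains why used_memory counts the whole backlog while used_memory_rss barely moves until the buffer is actually filled.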
Simulating Replica Disconnection
We used the debug command on the replica to simulate a crash, then applied load on the master.
Monitoring memory usage showed that the replication backlog and client output buffers gradually consumed memory, but the OS-reported used_memory_rss increased incrementally, not all at once.
Does a Ratio Below 1 Relate to Data Volume?
The experiments above show that a low fragmentation ratio is not necessarily caused by swap: an oversized replication backlog combined with a very small data set can also lower it. To see the effect of a larger data volume, we continuously inserted data.
Both used_memory and used_memory_rss grew, and the fragmentation ratio gradually approached 1.
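This trend is easy to see arithmetically: the backlog contributes a fixed amount to used_memory that used_memory_rss does not (yet) match, and that fixed gap is diluted as real data grows. A toy model (all numbers are hypothetical):

```python
BACKLOG = 512 * 1024 * 1024          # counted in used_memory as soon as it is allocated
RESIDENT_BACKLOG = 4 * 1024 * 1024   # assume only ~4 MB of it has actually been touched

def fragmentation_ratio(data_bytes: int) -> float:
    used_memory = data_bytes + BACKLOG               # allocator's view
    used_memory_rss = data_bytes + RESIDENT_BACKLOG  # data pages are resident
    return used_memory_rss / used_memory

for data_mb in (60, 512, 4096, 32768):
    ratio = fragmentation_ratio(data_mb * 1024 * 1024)
    print(f"{data_mb:>6} MB of data -> ratio {ratio:.2f}")
```

With 60 MB of data the model gives a ratio around 0.1, and as the data set grows into the tens of gigabytes the ratio climbs toward 1, matching what we observed.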
Conclusion
A low Redis memory fragmentation ratio is not necessarily caused by swap; that said, disabling swap on production Redis hosts is still generally recommended.
When the replication backlog buffer is large and the business data volume is small, the ratio can be far below 1, which is normal and does not require optimization.
Setting a relatively large repl-backlog-size in production aims to avoid frequent full synchronizations that could affect performance.
As the business data volume grows, the Redis memory fragmentation ratio will naturally tend toward 1.
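As an aside on sizing the backlog: a common rule of thumb is write traffic per second multiplied by the longest replica disconnection you want to survive, plus headroom, because a backlog smaller than the data written during the outage forces a full resync. A hedged sketch (the heuristic and numbers are illustrative, not an official Redis formula):

```python
def suggested_backlog_bytes(write_bytes_per_sec: int,
                            max_disconnect_sec: int,
                            headroom: float = 2.0) -> int:
    """The backlog must hold all writes produced while a replica is
    disconnected, otherwise reconnection degenerates into a full sync."""
    return int(write_bytes_per_sec * max_disconnect_sec * headroom)

# e.g. 5 MB/s of write traffic, tolerating a 60 s disconnect:
size = suggested_backlog_bytes(5 * 1024 * 1024, 60)
print(f"suggested repl-backlog-size ~ {size / 1024 / 1024:.0f} MB")  # 600 MB
```

Oversizing costs little in practice, since, as shown above, the OS only backs the backlog with physical pages as they are actually written.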
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise-grade open-source tools and services around MySQL, releases a premium open-source component each year on "1024" (October 24), and continuously operates and maintains these tools.