Understanding MySQL Memory Usage and the High‑Water‑Mark Phenomenon
This article investigates why MySQL often consumes far more physical memory than the configured InnoDB buffer pool size, explains the high‑water‑mark effect, and explores Linux process memory layout, memory allocators, and MySQL’s internal memory management mechanisms.
Background
In many projects, the physical memory used by MySQL far exceeds the configured innodb_buffer_pool_size even when there is no heavy concurrent SQL load. The author initially suspected performance_schema or a memory leak, but later realized the cause was a misunderstanding of how MySQL and Linux manage memory.
MySQL Memory Composition
MySQL memory consists of two parts: global_buffers (shared caches such as the InnoDB buffer pool) and all_thread_buffers (per‑thread caches). The InnoDB buffer pool is a resident memory area that stays allocated until the MySQL process exits, while thread‑local caches (join_buffer, sort_buffer, read_buffer, etc.) grow with the number of active connections and are released when a connection closes, though they are not returned to the OS.
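This split suggests a back-of-envelope peak-memory estimate: global buffers plus per-connection buffers multiplied by the connection count. The sketch below uses illustrative placeholder values, not recommendations; on a real server, read the actual values from SHOW VARIABLES.

```shell
# Rough peak-memory estimate, all values in MiB (illustrative only).
# global_buffers: shared caches that live for the life of the process.
innodb_buffer_pool_size=128
innodb_log_buffer_size=16
key_buffer_size=16

# all_thread_buffers: allocated per connection, on demand.
sort_buffer_size=2
join_buffer_size=2
read_buffer_size=1
read_rnd_buffer_size=1
max_connections=100

global=$((innodb_buffer_pool_size + innodb_log_buffer_size + key_buffer_size))
per_thread=$((sort_buffer_size + join_buffer_size + read_buffer_size + read_rnd_buffer_size))
echo "theoretical peak: $((global + per_thread * max_connections)) MiB"
```

Note that this is a worst case: thread buffers are only allocated when a query actually needs them, which is why observed memory usually sits well below this ceiling.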
Memory High‑Water‑Mark Phenomenon
Using a CentOS 7.3 host with MySQL 5.7.27, the author disabled performance_schema and innodb_buffer_pool_load_at_startup, set the buffer pool to 100 MiB, and ran a sysbench test inserting 1 million rows. Initial memory usage was about 55 MiB (≈50 MiB baseline plus a partially filled buffer pool of about 6.9 MiB). During a multi‑threaded SELECT benchmark (4 → 64 threads), memory rose sharply at low thread counts as the buffer pool filled, then stabilized around 194 MiB as all_thread_buffers grew. After the test the buffer pool was fully occupied (no free buffers), and the extra ~40 MiB was attributed to thread buffers that MySQL frees internally but never returns to the OS.
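An experiment along these lines can be reproduced with sysbench and pidstat. The connection parameters below are placeholders for a test instance; this is an operational fragment that needs a live mysqld, so it is shown for orientation rather than as an exact replay of the author's run.

```shell
# Watch mysqld RSS every 5 seconds while the benchmark runs.
pidstat -r -p "$(pidof mysqld)" 5 &

# Load 1M rows, then drive read traffic at increasing thread counts.
sysbench oltp_read_only --mysql-user=sbtest --mysql-password=secret \
    --mysql-db=sbtest --tables=1 --table-size=1000000 prepare
for threads in 4 8 16 32 64; do
    sysbench oltp_read_only --mysql-user=sbtest --mysql-password=secret \
        --mysql-db=sbtest --tables=1 --table-size=1000000 \
        --threads="$threads" --time=60 run
done
```

The high-water-mark effect shows up as RSS climbing with each thread step but never falling back after the load stops.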
Linux Process Memory Layout
The article reviews the classic 32‑bit Linux virtual address space: read‑only segment, data segment, heap (grows upward), mmap region (grows downward), and stack. Dynamic allocations via malloc() may use brk() for small blocks (<128 KiB) or mmap() for larger blocks, each with different release behavior and page‑fault characteristics.
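This layout can be observed directly on any Linux host through /proc/<pid>/maps. A minimal sketch, reading the map of the inspecting process itself:

```shell
# Show the brk-managed heap, the main-thread stack, and the libc mapping
# of a live process. /proc/self/maps here resolves to the grep process;
# anonymous regions between [heap] and [stack] come from mmap().
grep -E '\[heap\]|\[stack\]|libc' /proc/self/maps
```

Large malloc() requests (above the mmap threshold, 128 KiB by default in glibc) appear as separate anonymous mappings rather than as growth of the [heap] segment.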
Memory Allocators
Three common allocators are described: ptmalloc (glibc default), tcmalloc (Google), and jemalloc (FreeBSD). ptmalloc can cause memory waste, fragmentation, and lock contention; jemalloc and tcmalloc handle fragmentation and multithreading more efficiently. The author recommends jemalloc for MySQL to reduce waste and improve performance.
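For readers who want to try jemalloc, two common ways to switch mysqld's allocator are sketched below. The library paths are distro-dependent assumptions (libjemalloc must be installed separately); verify the actual path on your system.

```shell
# Option 1: mysqld_safe's malloc-lib option, set in my.cnf:
#   [mysqld_safe]
#   malloc-lib = /usr/lib64/libjemalloc.so.1

# Option 2: a systemd drop-in (systemctl edit mysqld), adding:
#   [Service]
#   Environment="LD_PRELOAD=/usr/lib64/libjemalloc.so.1"

# After a restart, confirm the allocator is actually mapped in:
grep -i jemalloc "/proc/$(pidof mysqld)/maps"
```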
MySQL Internal Memory Management
MySQL’s memory is split between the Server layer (managed by mem_root) and the InnoDB engine layer (managed by the free‑list, LRU‑list, FLU‑list, and unzip‑LRU‑list). The InnoDB buffer pool is allocated via mmap() in chunk units and stays resident once filled, releasing memory only when the pool size is reduced or MySQL shuts down. The Server layer uses a large pre‑allocated arena (via init_alloc_root) that obtains memory from the allocator (usually ptmalloc) and then sub‑allocates for thread‑local structures; each thread has its own mem_root, which can lead to fragmentation.
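Because the buffer pool grows in chunk units, MySQL 5.7+ can resize it online, and shrinking it is the only way, short of a restart, to hand that memory back to the OS. A sketch using the mysql client (connection credentials omitted; the 128 MiB target is illustrative):

```shell
# The total pool size is rounded to a multiple of
# innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances.
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_chunk_size';"

# Online resize to 128 MiB, then watch progress.
mysql -e "SET GLOBAL innodb_buffer_pool_size = 134217728;"
mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';"
```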
Sample output from the test host (pidstat, followed by an excerpt of SHOW ENGINE INNODB STATUS):

UID        PID   minflt/s  majflt/s      VSZ    RSS  %MEM  Command
997      11980       0.00      0.00  1240908  55536  0.69  mysqld

----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated  107380736
Dictionary memory allocated      116177
Buffer pool size                   6400
Free buffers                       5960
Database pages                      432

After the benchmark the buffer pool showed zero free buffers, confirming full utilization.
Conclusion
The apparent “extra” memory usage in MySQL is largely consumed by the memory allocator and fragmentation; the allocator keeps memory reserved for efficiency and only releases it when certain thresholds are met. Understanding the interaction between MySQL’s own memory structures, the Linux allocator, and the underlying OS helps explain the high‑water‑mark phenomenon.
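Those release thresholds are tunable in glibc's ptmalloc through environment variables (names per glibc's mallopt documentation; the values below are illustrative, not recommendations):

```shell
# Cap the number of malloc arenas (less per-thread memory waste, at the
# cost of more lock contention), and lower the trim/mmap thresholds so
# freed memory is returned to the OS sooner.
export MALLOC_ARENA_MAX=4
export MALLOC_TRIM_THRESHOLD_=131072   # trim heap top once >128 KiB is free
export MALLOC_MMAP_THRESHOLD_=131072   # serve >=128 KiB requests via mmap
# ...then start mysqld under this environment.
```

Tightening these knobs trades some allocation speed for a lower steady-state RSS, which is exactly the trade-off behind the high-water-mark behavior described above.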
Aikesheng Open Source Community
The Aikesheng Open Source Community provides stable, enterprise‑grade open‑source MySQL tools and services, releases a premium open‑source component every year on October 24 (“1024”), and continuously operates and maintains them.