Diagnosing and Resolving a Native Memory Leak in Spring Boot Applications
This article details the investigation of unexpected native memory consumption in a Spring Boot service: how JVM- and system-level tools were used to trace the leak to the Spring Boot loader's Inflater usage, and how restricting MCC scan paths or upgrading Spring Boot eliminated the problem.
Background
After migrating a project to the MDP framework (based on Spring Boot), the system began reporting high swap usage. Although the JVM was configured with a 4 GB heap, the physical memory consumption reached 7 GB, indicating abnormal native memory usage.
The JVM options used were:
-XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=256M -XX:+AlwaysPreTouch -XX:ReservedCodeCacheSize=128m -XX:InitialCodeCacheSize=128m -Xss512k -Xmx4g -Xms4g -XX:+UseG1GC -XX:G1HeapRegionSize=4M
top and jcmd pid VM.native_memory detail showed that the committed memory reported by Native Memory Tracking (NMT) was smaller than the physical usage: NMT accounts for off-heap memory allocated via unsafe.allocateMemory and DirectByteBuffer, but not for memory that native C code allocates directly with malloc.
Investigation Process
1. Java‑level tools
Used -XX:NativeMemoryTracking=detail and jcmd pid VM.native_memory detail to view memory distribution.
The output revealed that native memory allocated via unsafe.allocateMemory and DirectByteBuffer was accounted for, while memory allocated by native C code (Native Code) was missing from the report.
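To make that distinction concrete, here is a minimal sketch (class name and sizes are illustrative) of the kind of allocation NMT does report: a DirectByteBuffer created from Java, as opposed to a malloc call made inside an arbitrary native library, which NMT cannot see.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 64 MB allocated outside the Java heap. Because the JVM itself
        // performs this allocation (internally via Unsafe), it shows up
        // in jcmd <pid> VM.native_memory output; memory malloc'd by
        // arbitrary native code does not.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        buf.putLong(0, 42L);
        System.out.println(buf.getLong(0)); // prints 42
    }
}
```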
2. System‑level tools
Since the issue stemmed from native code, system tools were employed:
gperftools
Monitoring showed memory allocated via malloc spiking to 3 GB and then stabilising around 700‑800 MB.
strace
Running strace -f -e "brk,mmap,munmap" -p <pid> did not reveal suspicious allocations.
GDB dump
Used gdb -pid <pid> with dump memory mem.bin startAddress endAddress, then ran strings mem.bin on the dump. The output contained decompressed JAR contents, indicating that the memory was allocated while loading JARs.
strace during startup
Tracing the process at startup uncovered many 64 MB mmap allocations that matched the unexplained native memory.
jstack
Identified the thread performing the allocations by matching the thread ID reported by strace against the nid field in the jstack output.
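One practical detail when correlating the two tools: strace prints kernel thread IDs in decimal, while jstack shows the same ID as a hexadecimal nid value. A tiny conversion sketch (the thread ID below is a made-up example):

```java
public class TidToNid {
    public static void main(String[] args) {
        // strace reports the kernel thread ID (LWP) in decimal;
        // jstack prints that same ID as a hex "nid=0x..." field.
        long tidFromStrace = 28765; // hypothetical TID from strace output
        String nid = "0x" + Long.toHexString(tidFromStrace);
        System.out.println("search jstack output for nid=" + nid);
    }
}
```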
The culprit was the Meituan Configuration Center (MCC), which uses Reflections to scan all JARs. The scanning process employs Spring Boot's Inflater (via ZipInflaterInputStream) to decompress JARs, allocating off-heap memory that was not promptly released.
3. Why the off‑heap memory was not released
Spring Boot relied on the Inflater object's finalize method to free native memory, meaning release depended on GC. However, the underlying glibc memory allocator (and tcmalloc used by gperftools) retains freed memory in per‑thread arenas (typically 64 MB each), so the OS does not see the memory returned, giving the impression of a leak.
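The dependence on GC can be seen directly with java.util.zip.Inflater: each instance holds a native zlib stream, and unless end() is called explicitly, that native memory is only freed when the finalizer eventually runs. Below is a minimal sketch of the explicit-release pattern (the sample payload and class name are illustrative, not from the original incident):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterReleaseDemo {
    public static void main(String[] args) throws Exception {
        byte[] original = "jar entry payload".getBytes(StandardCharsets.UTF_8);

        // Compress the sample data so we have something to inflate.
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] compressed = new byte[128];
        int clen = deflater.deflate(compressed);
        deflater.end(); // releases the deflater's native state

        // Each Inflater allocates a native zlib stream via malloc.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, clen);
        byte[] restored = new byte[128];
        int n = inflater.inflate(restored);

        // Explicitly free the native memory instead of waiting for the
        // finalizer to run at some future GC -- the crux of the leak.
        inflater.end();

        System.out.println(new String(restored, 0, n, StandardCharsets.UTF_8));
    }
}
```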
Tests with a custom allocator (a simple malloc implementation using mmap ) confirmed that the process consistently held 700‑800 MB of native memory, while the OS reported a much larger resident set due to arena allocation and delayed page allocation.
Solution
Two practical fixes were applied:
Configure MCC to scan only specific JAR packages instead of the entire classpath, drastically reducing the number of Inflater calls.
Upgrade Spring Boot to version 2.0.5.RELEASE, where ZipInflaterInputStream now explicitly releases its native buffers without relying on GC.
Both approaches eliminated the excessive native memory consumption.
Summary
The native memory leak originated from Spring Boot's JAR decompression during exhaustive package scanning, which allocated off‑heap buffers that were only freed by GC. Because the underlying memory allocator kept the freed pages in per‑thread arenas, the OS memory usage remained high, appearing as a leak. Restricting scan paths or upgrading Spring Boot resolves the issue.