Investigating Excessive Native Memory Usage in a Spring Boot Application Migrated to MDP Framework
After migrating a project to the MDP framework (built on Spring Boot), the system repeatedly reported high swap usage despite a 4 GB heap configuration. The investigation traced the excess to native memory consumed by unchecked JAR scanning and by allocator behavior; the problem was resolved by limiting scan paths and by an updated inflater implementation in Spring Boot.
The author migrated a project to the MDP framework (built on Spring Boot) and observed frequent swap warnings; although the JVM was configured with a 4 GB heap, the process consumed up to 7 GB of physical memory.
The JVM options used were: -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=256M -XX:+AlwaysPreTouch -XX:ReservedCodeCacheSize=128m -XX:InitialCodeCacheSize=128m -Xss512k -Xmx4g -Xms4g -XX:+UseG1GC -XX:G1HeapRegionSize=4M
Enabling native memory tracking with -XX:NativeMemoryTracking=detail and restarting the application allowed the use of jcmd <pid> VM.native_memory detail, which showed that the memory committed by the JVM was far less than the physical usage, indicating substantial native (off-heap) allocations not accounted for by the JVM.
Further analysis with pmap revealed many 64 MB regions absent from the jcmd output, suggesting native code allocations.
System‑level tools were then employed: gperftools showed memory spikes up to 3 GB followed by a drop to ~800 MB; strace traced mmap/brk calls during startup, confirming large 64 MB allocations; GDB was used to dump suspicious memory regions for inspection.
The root cause was identified as the Meituan Config Center (MCC) using Reflections to scan all JARs. During scanning, Spring Boot's ZipInflaterInputStream inflates JAR entries via the JDK Inflater, which allocates native (zlib) memory that is released only when end() is called; by default that call happens in the finalize method. Because the default scan path covered every JAR, massive off-heap memory was retained until GC triggered finalization.
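The Inflater behavior can be reproduced in a small standalone sketch (the class name InflaterEndDemo is hypothetical, not from the original article): deflate and inflate a buffer, calling end() to release the native zlib state immediately instead of leaving it for the finalizer.

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterEndDemo {
    // Compress and decompress a buffer, releasing native zlib memory
    // explicitly with end() rather than relying on finalize().
    static byte[] roundTrip(byte[] input) throws DataFormatException {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[input.length + 64]; // ample for small inputs
        int clen = deflater.deflate(compressed);
        deflater.end();                 // free native memory now

        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, clen);
        byte[] out = new byte[input.length];
        inflater.inflate(out);
        inflater.end();                 // without this call, the native buffers
                                        // live until the finalizer eventually runs
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "native memory demo".getBytes();
        System.out.println(new String(roundTrip(data)));
    }
}
```

A scan that opens thousands of JAR entries without calling end() accumulates exactly this kind of native allocation until GC happens to run the finalizers.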
By configuring MCC to scan only specific packages, the off-heap memory consumption dropped dramatically. Spring Boot later added an explicit release in ZipInflaterInputStream, eliminating the reliance on finalization.
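The essence of that fix can be sketched as a stream wrapper that ends its Inflater in close() (a simplified illustration with a hypothetical class name, not the actual ZipInflaterInputStream source). Note that when you pass your own Inflater to InflaterInputStream, its close() does not call end() on it, so an explicit release is needed:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class EagerInflaterStream extends InflaterInputStream {
    private final Inflater inflater;

    public EagerInflaterStream(InputStream in, Inflater inflater) {
        super(in, inflater);
        this.inflater = inflater;
    }

    @Override
    public void close() throws IOException {
        super.close();
        inflater.end();  // release native memory deterministically,
                         // instead of waiting for finalize()
    }

    public static void main(String[] args) throws Exception {
        // Usage sketch: deflate a buffer, then read it back through the stream.
        byte[] src = "hello".getBytes();
        Deflater d = new Deflater();
        d.setInput(src);
        d.finish();
        byte[] comp = new byte[src.length + 64];
        int clen = d.deflate(comp);
        d.end();

        try (EagerInflaterStream in = new EagerInflaterStream(
                new ByteArrayInputStream(comp, 0, clen), new Inflater())) {
            System.out.println(new String(in.readAllBytes()));
        }
    }
}
```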
Additional investigation showed that glibc’s per‑thread memory arenas (64 MB each) and tcmalloc’s pooling behavior retained memory after free, causing the OS to report higher resident set size. A custom allocator demo (shown below) illustrated how mmap‑based allocations can lead to apparent over‑allocation due to page rounding and lazy allocation:
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Toy allocator: every allocation gets its own mmap'd region, with the
 * requested size stored in a long header just before the returned pointer.
 * Because mmap works in whole pages, even a 1-byte request consumes at
 * least one page of virtual address space, and pages are only backed by
 * physical memory when touched, illustrating rounding and lazy allocation. */
void* malloc(size_t size) {
    long* ptr = mmap(0, size + sizeof(long), PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) return NULL;
    *ptr = size;                         /* record the payload size */
    return (void*)(&ptr[1]);
}

void* calloc(size_t n, size_t size) {
    void* ptr = malloc(n * size);
    if (!ptr) return NULL;
    memset(ptr, 0, n * size);
    return ptr;
}

void* realloc(void* ptr, size_t size) {
    if (size == 0) { free(ptr); return NULL; }
    if (!ptr) return malloc(size);
    long* plen = (long*)ptr;
    plen--;                              /* step back to the size header */
    long len = *plen;
    if ((long)size <= len) return ptr;   /* existing block is big enough */
    void* rptr = malloc(size);
    if (!rptr) return NULL;              /* standard realloc keeps the old block on failure */
    memcpy(rptr, ptr, len);
    free(ptr);
    return rptr;
}

void free(void* ptr) {
    if (!ptr) return;
    long* plen = (long*)ptr;
    plen--;
    long len = *plen;
    munmap((void*)plen, len + sizeof(long));
}

The investigation concludes that unchecked JAR scanning in a Spring Boot application can cause significant native memory usage, and that memory pools in the underlying allocator may retain memory after it is freed, giving the impression of a leak. Limiting scan scopes and using newer Spring Boot versions that explicitly release native buffers resolve the issue.
Code Ape Tech Column
Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn