Investigation and Resolution of High Container Memory Usage in a Java G1GC Application
This article walks through a step-by-step investigation of a container whose memory usage spiked to 99% because large objects were being allocated directly into the old generation under G1GC. It explains the root cause, reproduces the problem with the same JVM flags and standard monitoring tools, and offers practical mitigations.
Background: Late at night, the author received alerts that a container's memory usage was at 99% and investigated the issue.
Phenomenon: JVM heap usage was normal (~50%) but at 1 am the old generation spiked, triggering a Full GC; container memory rose sharply and stayed high.
Configuration: The service was started with -Xms4g -Xmx4g -Xmn2g -XX:+UseG1GC -XX:G1HeapRegionSize=8m -XX:G1ReservePercent=15 -XX:InitiatingHeapOccupancyPercent=50 and the container had 4C/5G limits.
Root cause analysis: A scheduled task queried large batches of order data (500 orders per batch), producing orderBills lists several megabytes in size. Under G1GC, any allocation larger than half a region is a "humongous" object and is placed directly into old-generation regions; with -XX:G1HeapRegionSize=8m the cutoff is 4 MB, so these batches bypassed the young generation entirely. GC could not reclaim them fast enough, leading to heap exhaustion and a Full GC.
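The threshold arithmetic behind that root cause can be sketched in a few lines; the class name below is illustrative, not part of the original service:

```java
public class HumongousThreshold {
    public static void main(String[] args) {
        long regionSize = 8L * 1024 * 1024;   // -XX:G1HeapRegionSize=8m
        long threshold  = regionSize / 2;     // G1's humongous cutoff: half a region

        // A bills batch of roughly 5 MB exceeds the cutoff, so G1 allocates
        // it as a humongous object directly in old-generation regions.
        long batchBytes = 5L * 1024 * 1024;
        System.out.println("humongous cutoff = " + threshold + " bytes");
        System.out.println("5 MB batch is humongous: " + (batchBytes > threshold));
    }
}
```

This is why raising -XX:G1HeapRegionSize (or shrinking the per-batch payload) moves such allocations back onto the normal young-generation path.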
Investigation steps: Created a sample project, memorytest, with a method that allocates byte arrays in a loop; ran it with the same JVM flags; and used jps, jmap, jstat, and ps to monitor heap usage and GC statistics.
public void job() {
    // ... do business
    int pageSize = 500;
    while (xxx) {
        // query 500 orders per batch
        List<String> orderNoList = orderService.getOrderPage(pageSize);
        // query the bills for those orders (several MB per batch)
        List<OrderBill> orderBills = billService.findByOrderNos(orderNoList);
        // ... do business
    }
    // ... do business
}
Results showed that the large objects were placed directly in the old generation, triggering Full GC, and that memory was not released even after GC; reducing the allocation size relieved the pressure.
Solutions: Reduce the data volume returned per query; tune the G1 region size (-XX:G1HeapRegionSize) to balance allocation efficiency against pause times; enable GC logging and heap dumps on OOM (-XX:+HeapDumpOnOutOfMemoryError); monitor with JConsole/VisualVM; and troubleshoot with tools such as jstat, jmap, jcmd, and Arthas.
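The first mitigation, shrinking the per-query data volume, can be sketched as splitting each 500-order batch into smaller sub-batches so no single bills list crosses the 4 MB humongous cutoff; the helper below is a generic sketch, not the author's code:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {
    // Split a large list of order numbers into fixed-size chunks so each
    // downstream bill query returns a smaller, non-humongous result.
    static <T> List<List<T>> chunk(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(
                items.subList(i, Math.min(i + chunkSize, items.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> orderNos = new ArrayList<>();
        for (int i = 0; i < 500; i++) orderNos.add(i);
        // 500 orders -> 10 sub-batches of 50; query bills per sub-batch instead.
        System.out.println(chunk(orderNos, 50).size()); // prints 10
    }
}
```

Each sub-batch result becomes unreachable before the next query, so it stays young-generation-sized and is reclaimed by ordinary young GCs instead of accumulating as humongous regions.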
Conclusion: Proactive memory monitoring, appropriate JVM flags, and careful data handling are essential to prevent similar memory spikes.
Zhuanzhuan Tech
A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.