
Understanding and Tuning JVM ParallelGCThreads and Related Memory Parameters

This article explains the trade‑off between throughput and pause time in JVM garbage collection, details how the ParallelGCThreads, ConcGCThreads, and CICompilerCount parameters are calculated and how they affect performance, presents experimental results, and provides concrete configuration recommendations for both on‑heap and off‑heap memory in containerized Java applications.

JD Retail Technology

Before discussing the ParallelGCThreads parameter, the article introduces the two optimization goals of JVM garbage collection: throughput (the proportion of CPU time spent in business threads) and pause time (the duration of Stop‑The‑World pauses). The two goals often conflict.
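As a rough illustration of the throughput metric (this sketch is not from the article's measurements), throughput can be computed as the fraction of wall‑clock time spent in application threads rather than GC pauses:

```java
public class GcThroughput {
    // Throughput = application time / (application time + GC pause time).
    // More GC threads shorten pauses but steal CPU from the application,
    // so the two optimization goals trade off against each other.
    static double throughput(double appSeconds, double gcPauseSeconds) {
        return appSeconds / (appSeconds + gcPauseSeconds);
    }

    public static void main(String[] args) {
        // 95 s in application threads, 5 s paused for GC
        System.out.println(throughput(95.0, 5.0)); // prints 0.95
    }
}
```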

ParallelGCThreads controls the number of threads used during parallel GC phases. If it is set too low, GC pauses grow longer; if set too high, CPU is consumed by GC threads and throughput drops. The default is derived from the number of logical CPUs (ncpus): when ncpus < 8, ParallelGCThreads = ncpus; otherwise ParallelGCThreads = 8 + (ncpus − 8) * 5/8. In containers running JDK versions earlier than 1.8.0_131, the JVM cannot detect Docker CPU limits, so it computes this default from the host's CPU count, producing overly large values.
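The default‑value heuristic above can be sketched as follows (a simplified model of the HotSpot calculation described in the text, not the JVM source itself):

```java
public class ParallelGcThreadDefault {
    // ncpus < 8  -> use all logical CPUs
    // ncpus >= 8 -> 8 plus 5/8 of the CPUs beyond 8 (integer arithmetic)
    static int defaultParallelGcThreads(int ncpus) {
        return ncpus < 8 ? ncpus : 8 + (ncpus - 8) * 5 / 8;
    }

    public static void main(String[] args) {
        // A container-unaware JVM on a 128-core host computes:
        System.out.println(defaultParallelGcThreads(128)); // prints 83
        // whereas the container's 8 CPUs warrant only:
        System.out.println(defaultParallelGcThreads(8));   // prints 8
    }
}
```

This is why the article's 8C12G container on a 128‑core host ends up with dozens of GC threads unless the parameter is set explicitly.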

Experimental scenarios with an 8C12G container on a 128‑core host show that setting ParallelGCThreads to 8 reduces CPU usage by about 5 % under steady load and allows faster recovery when CPU spikes, compared with the default value.

Recommended mitigations for risky ParallelGCThreads settings:

1. Upgrade the JDK to 1.8.0_131 or later (preferably 1.8.0_191+).

2. Explicitly set the parameter, e.g. -XX:ParallelGCThreads=8 (choose the value from the table provided in the article).

Other important thread‑related parameters are ConcGCThreads (concurrent marking threads) and CICompilerCount (JIT compilation threads). Their defaults also depend on ParallelGCThreads and can be too high in pre‑1.8.0_131 containers, affecting throughput.
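As a rough sketch of how these defaults cascade (the exact derivations vary by JDK version and collector; the quarter‑of‑ParallelGCThreads formula below is an approximation for G1, an assumption to verify against your JDK, not JVM source):

```java
public class DerivedGcDefaults {
    // Same heuristic as described earlier for ParallelGCThreads.
    static int defaultParallelGcThreads(int ncpus) {
        return ncpus < 8 ? ncpus : 8 + (ncpus - 8) * 5 / 8;
    }

    // G1's concurrent marking threads default to roughly a quarter of
    // ParallelGCThreads (approximation; check your JDK version's behavior).
    static int defaultConcGcThreads(int parallelGcThreads) {
        return Math.max(1, (parallelGcThreads + 2) / 4);
    }

    public static void main(String[] args) {
        int parallel = defaultParallelGcThreads(128); // 83 on a 128-core host
        System.out.println(defaultConcGcThreads(parallel)); // prints 21
        // With an explicit -XX:ParallelGCThreads=8 the derived value shrinks too:
        System.out.println(defaultConcGcThreads(8)); // prints 2
    }
}
```

The practical consequence is that fixing ParallelGCThreads alone is not enough on old JDKs: ConcGCThreads and CICompilerCount should be pinned explicitly as well, as the consolidated example later in the article does.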

Heap memory tuning is covered next. The article explains the on‑heap memory model (young, old, and metaspace), the role of -Xms, -Xmx, -XX:NewRatio, and provides default calculation rules for containers (e.g., Xmx = ½ of container memory when ≤2 GB, otherwise ¼). It advises setting Xms = Xmx for services and choosing NewRatio between 2 and 3.
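The container heap default described above can be sketched as follows (values in MB; the 2 GB threshold and the half/quarter split follow the article's rule):

```java
public class ContainerHeapDefaults {
    // Default Xmx per the article's rule: half of container memory when the
    // container has at most 2 GB, otherwise a quarter of it.
    static long defaultXmxMb(long containerMemMb) {
        return containerMemMb <= 2048 ? containerMemMb / 2 : containerMemMb / 4;
    }

    public static void main(String[] args) {
        System.out.println(defaultXmxMb(2048));  // 2 GB container -> prints 1024
        System.out.println(defaultXmxMb(12288)); // 12 GB container -> prints 3072
    }
}
```

For long‑running services, the article recommends overriding this default and setting -Xms equal to -Xmx rather than relying on the computed quarter.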

Off‑heap memory considerations focus on Direct Byte Buffers and Metaspace. Direct Byte Buffers improve I/O efficiency but can leak if not reclaimed; the article recommends setting -XX:MaxDirectMemorySize explicitly and ensuring the sum of Xmx × 1.1, MaxDirectMemorySize, thread stacks, and reserved system memory stays within container limits.
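The budget rule above can be expressed as a quick sanity check (a sketch: the 1.1 heap overhead factor comes from the article, while the thread count and per‑thread stack size are assumptions you should match to your own workload and -Xss setting):

```java
public class ContainerMemoryBudget {
    // Returns true if heap, direct buffers, thread stacks, and reserved
    // system memory fit inside the container limit (all values in MB).
    static boolean fitsInContainer(long containerMb, long xmxMb, long directMb,
                                   int threadCount, long stackMbPerThread,
                                   long reservedMb) {
        long heapWithOverhead = (long) Math.ceil(xmxMb * 1.1);
        long total = heapWithOverhead + directMb
                   + (long) threadCount * stackMbPerThread + reservedMb;
        return total <= containerMb;
    }

    public static void main(String[] args) {
        // 12 GB container, 4 GB heap, 2 GB direct memory,
        // 200 threads with 1 MB stacks, 512 MB reserved for the system:
        System.out.println(fitsInContainer(12288, 4096, 2048, 200, 1, 512)); // true
    }
}
```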

Metaspace replaces the permanent generation in JDK 8. The article suggests fixing -XX:MetaspaceSize and -XX:MaxMetaspaceSize to the same value (e.g., 256m) to avoid frequent full GCs caused by metaspace expansion.

Finally, a consolidated configuration example is provided:

-server -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=4096m

If using JDK 1.8.0_131 or earlier, add:

-XX:ParallelGCThreads=8 -XX:ConcGCThreads=2 -XX:CICompilerCount=2

For metaspace tuning, consider:

-XX:MaxMetaspaceSize=256m -XX:MetaspaceSize=256m

An example environment variable setting is also shown:

export JAVA_OPTS="-Djava.library.path=/usr/local/lib -server -Xms4096m -Xmx4096m -XX:MaxMetaspaceSize=512m -XX:MetaspaceSize=512m -XX:MaxDirectMemorySize=2048m -XX:ParallelGCThreads=8 -XX:ConcGCThreads=2 -XX:CICompilerCount=2 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/export/Logs -XX:+UseG1GC [other_options...] -jar jarfile [args...]"
Tags: Java, JVM, performance, Docker, GC, Memory Tuning, ParallelGCThreads
Written by

JD Retail Technology

Official platform of JD Retail Technology, delivering insightful R&D news and a deep look into the lives and work of technologists.
