Understanding Java Object Allocation: Stack Allocation, TLAB, and Performance Impact
This article examines a Java micro‑benchmark that measures object creation time, explains why printing objects dramatically slows execution, and details JVM allocation strategies such as stack allocation, escape analysis, scalar replacement, and Thread‑Local Allocation Buffers (TLAB) with relevant parameters.
Code Test
import com.google.common.base.Stopwatch;
import java.util.concurrent.TimeUnit;

public class StackTest {
    public static void main(String[] args) {
        // Guava's Stopwatch constructor is not public; use the factory method.
        Stopwatch stopwatch = Stopwatch.createStarted();
        User user = null;
        for (long i = 0; i < 1_000_000_000L; i++) {
            user = new User();
        }
        stopwatch.stop();
        System.out.println(stopwatch.elapsed(TimeUnit.MILLISECONDS) + "ms");
        // without printing the object: ~300 ms
        // with printing: ~3000 ms
        // System.out.println(user);
    }
}
class User {
    private int age;
    private String userName;

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
}

The benchmark creates one billion User objects in a loop and measures the elapsed time. Uncommenting the final print raises the runtime from about 300 ms to roughly 3000 ms, a ten-fold slowdown caused by the object escaping the method.
Object Allocation Rules
The JVM can allocate some objects on the stack instead of the heap, avoiding both heap allocation and the garbage collection that follows. Three conditions must hold: the object is small, it does not escape the allocating method (detected by escape analysis, -XX:+DoEscapeAnalysis), and it is eligible for scalar replacement (-XX:+EliminateAllocations).
Stack Allocation
Stack allocation places thread-private objects directly in the stack frame, so they are reclaimed automatically when the frame is popped, with no GC involvement. This is highly efficient, but it is subject to the conditions above:
Stack space is limited, so large objects are ineligible.
The object must not escape the method (verified by escape analysis, -XX:+DoEscapeAnalysis).
Scalar replacement must be possible (-XX:+EliminateAllocations).
In the demo, executing System.out.println(user) makes the User object escape the method, which rules out stack allocation.
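To make the escape distinction concrete, here is a minimal sketch (class and method names are illustrative, not from the benchmark above): one method keeps its object entirely local, so after escape analysis the JIT may scalar-replace it and allocate nothing; the other returns the object, forcing a real allocation.

```java
class EscapeDemo {
    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never leaves this method: escape analysis can prove it is
    // method-local, so the JIT may scalar-replace it (keep x and y in
    // registers) and skip the allocation entirely.
    static int noEscape(int a, int b) {
        Point p = new Point(a, b);
        return p.x + p.y;
    }

    // Returning the object makes it escape the method, so it must be
    // allocated for real (on the heap, typically via the thread's TLAB).
    static Point escapes(int a, int b) {
        return new Point(a, b);
    }

    public static void main(String[] args) {
        System.out.println(noEscape(1, 2));  // prints 3
        System.out.println(escapes(1, 2).x); // prints 1
    }
}
```

Both versions compute the same result; the difference is only visible in allocation profiles (for example, comparing runs with -XX:+DoEscapeAnalysis and -XX:-DoEscapeAnalysis).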
TLAB Allocation
TLAB (Thread‑Local Allocation Buffer) is a per‑thread region inside the Eden space that speeds up heap allocation by reducing synchronization. When enabled (default), each thread gets its own TLAB.
TLAB improves allocation throughput, especially under multithreaded contention, but like stack allocation it cannot handle large objects that exceed the TLAB size.
Allocation strategy example: a 100 KB TLAB with 80 KB already used receives a request for a 30 KB object; only 20 KB remains free, so the object does not fit. The JVM must then either discard the current TLAB and request a new one, or allocate the 30 KB object directly on the heap while keeping the existing TLAB for smaller future allocations.
The decision is driven by an internal refill_waste threshold. If the space still free in the current TLAB exceeds refill_waste, discarding the TLAB would waste too much of it, so the JVM allocates the object directly on the heap and keeps the TLAB; otherwise it retires the current TLAB and requests a new one. Both refill_waste and the TLAB size are tuned dynamically at runtime.
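The refill decision can be sketched as follows. This is an illustrative model, not HotSpot source code; the names (decide, tlabFree, refillWasteLimit) are mine, and sizes are in bytes:

```java
class TlabDecision {
    enum Decision { ALLOCATE_OUTSIDE_TLAB, RETIRE_AND_REFILL }

    // If the space still free in the current TLAB is larger than the waste
    // limit, throwing the TLAB away would waste too much, so the object is
    // allocated directly in Eden and the TLAB is kept. Otherwise the TLAB
    // is retired and a fresh one is requested.
    static Decision decide(long tlabFree, long refillWasteLimit) {
        return (tlabFree > refillWasteLimit)
                ? Decision.ALLOCATE_OUTSIDE_TLAB
                : Decision.RETIRE_AND_REFILL;
    }

    public static void main(String[] args) {
        long free = 100_000 - 80_000; // 20 KB left in the 100 KB TLAB example
        System.out.println(decide(free, 1_000));  // ALLOCATE_OUTSIDE_TLAB
        System.out.println(decide(free, 50_000)); // RETIRE_AND_REFILL
    }
}
```

In the 100 KB example above, whether the 20 KB of free space counts as "too much to waste" depends entirely on the current waste limit, which the JVM adjusts as the program runs.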
JVM Parameter Analysis
Key JVM flags influencing allocation:
-XX:+DoEscapeAnalysis – enables escape analysis.
-XX:+EliminateAllocations – allows scalar replacement.
-XX:+UseTLAB – toggles TLAB usage.
Disabling TLAB with -XX:-UseTLAB noticeably reduces allocation throughput, especially in multithreaded scenarios, because every thread must then synchronize on the shared Eden allocation pointer.
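When experimenting with these flags, it helps to confirm which ones the running JVM was actually launched with. A small sketch using the standard java.lang.management API (the class name ShowVmArgs is mine):

```java
import java.lang.management.ManagementFactory;
import java.util.List;

class ShowVmArgs {
    public static void main(String[] args) {
        // Returns the -XX and other options passed on the command line,
        // e.g. [-XX:-UseTLAB, -XX:-DoEscapeAnalysis] for a test run.
        List<String> vmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
        System.out.println("JVM arguments: " + vmArgs);
    }
}
```

Running this alongside the benchmark makes it easy to pair each timing with the exact flag combination that produced it.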
Demo Analysis
The performance degradation observed when printing the User object stems from the object escaping the method: escape prevents stack allocation and scalar replacement, so every iteration performs a real heap allocation, and the resulting allocation pressure triggers frequent GC cycles.
Understanding and configuring these JVM options can help developers optimize object creation paths and improve overall application performance.