Understanding Java Memory Model: Why JMM Differs from Sequential Consistency
This article explains the core concepts of the Java Memory Model (JMM), compares its semantics with sequential consistency, analyzes sample code illustrating reordering and data races, and outlines the synchronization and happens-before principles that ensure correct multithreaded behavior.
1. Brief Overview of JMM
Semantic Specification
The semantics of the Java programming language allow compilers and processors to perform optimizations, and those optimizations can interact with incorrectly synchronized code in surprising ways.
Thread‑local semantics describe the behavior of a single‑threaded program: given the values seen by its reads, a thread's behavior can be fully predicted, and evaluating the thread in isolation determines whether its actions are legal.
Programs obey thread‑local semantics: considered in isolation, each thread's actions are governed by the semantics of that thread, while the value seen by each read is determined by the memory model.
JMM Specification
For data storage, JMM models shared reads and writes through each thread's working memory and the JVM heap (main memory); by analogy, working memory plays the role of a cache such as Redis, and the heap that of the backing database.
For code optimization, JMM permits reordering code from its program order, and even eliminating unnecessary synchronization, to improve performance.
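As a minimal sketch of why thread‑local semantics make such reordering legal (the class and method names here are illustrative, not from the article): two writes with no data dependence between them may be executed in either order, because no statement in the same thread can observe the difference.

```java
public class ReorderDemo {
    // Two independent writes: thread-local semantics let the compiler or CPU
    // execute them in either order, since nothing in this thread can tell.
    static int sum() {
        int a = 1; // no data dependence on b
        int b = 2; // swapping these two assignments is invisible within this thread
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(sum()); // prints 3 in every legal execution
    }
}
```

Another thread reading `a` and `b` without synchronization, however, could observe them in either order, which is exactly where the memory model steps in.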
JMM Overview
Given a program and an execution trace, JMM checks the trace's legality by examining each read in the trace and verifying, according to certain rules, that the write it observes is valid.
It guarantees only that every execution result is one the memory model allows; it does not care how the implementation produces the program's behavior.
The memory model decides which values may be read at each point in the program. In isolation, each thread’s operations are governed by that thread’s semantics, but the values seen by reads are decided by the memory model.
Whenever a thread's evaluation requires reading a variable, an inter‑thread read action is generated; it must be matched with the inter‑thread write action whose value it observes, and the value the thread reads is the one determined by JMM.
2. JMM and Sequential Consistency Model
Program Order and Sequential Consistency
Program Order: the collection of a thread's actions in the order given by thread‑local semantics. In short, it is "what you see is what you get" within a thread, i.e., the order of the program code.
Sequential Consistency Memory Model: every thread's operations execute in program order, and, regardless of synchronization, all threads observe a single total order of operations, each operation being atomic and immediately visible to all other threads.
Sequential Consistency Issues: if the memory model required sequential consistency, many compiler and processor optimizations would become illegal.
JMM’s Efforts Regarding Sequential Consistency
```java
// shared.java
int pwrite = 0;
int cwrite = 0;

// producer.java
int pread = 0;
int r1 = 0;
void run() {
    r1 = 20;        // --- 1
    pread = cwrite; // --- 2
    pwrite = 10;    // --- 3
}

// consumer.java
int cread = 0;
int r2 = 0;
void run() {
    cread = pwrite; // --- 4
    r2 = 21;        // --- 5
    cwrite = 20;    // --- 6
}
```

Code Analysis: Under the JMM, because of the data race, the compiler and processor may reorder the statements (for example, moving 3 before 2 in the producer and 6 before 4 in the consumer), allowing an execution in which pread == cwrite == 20 and cread == pwrite == 10 hold at the same time. Under a sequentially consistent model that outcome is impossible: threads may still interleave, but since 2 precedes 3 and 4 precedes 6 in program order, at least one of the two reads must see the initial value 0.
Locking Scheme: Under JMM, adding a lock guarantees the correct output, but statements inside a thread's critical section may still be reordered. Under sequential consistency, the order is never disturbed and the output is likewise correct.
Summary: In the presence of data races, JMM cannot guarantee the execution order between threads, whereas sequential consistency guarantees an order consistent with the code even when threads interleave. Within a single thread, JMM may still reorder actions inside critical sections, while sequential consistency never reorders and preserves program order.
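The locking scheme can be sketched as follows, reusing the field names from the example above; the wrapper class `SharedDemo` and the `lock` object are my additions. Whichever critical section runs second must see the first one's writes, so the anomalous outcome disappears even though JMM may still reorder the two independent statements inside each critical section.

```java
public class SharedDemo {
    static int pwrite = 0, cwrite = 0; // shared state from the example
    static int pread = 0, cread = 0;   // values each thread observed
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            synchronized (lock) {   // actions 2 and 3 now form one atomic unit
                pread = cwrite;     // --- 2
                pwrite = 10;        // --- 3
            }
        });
        Thread consumer = new Thread(() -> {
            synchronized (lock) {   // actions 4 and 6 now form one atomic unit
                cread = pwrite;     // --- 4
                cwrite = 20;        // --- 6
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        // The second critical section sees the first one's writes, so
        // pread == 20 and cread == 10 can no longer both hold.
        System.out.println(pread == 20 && cread == 10); // prints false
    }
}
```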
3. JMM Specification Summary
Shared Data Rules
Memory regions that can be shared by multiple threads are called shared memory or heap memory.
Thread‑shared data includes all object instance fields, static fields, array elements, etc.
Thread‑confined data includes local variables, method parameters, exception‑handler parameters, and ThreadLocal/ThreadLocalRandom values, etc.
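A small sketch contrasting the two categories (the class name `ConfinementDemo` and counter fields are illustrative): the static field is shared and racy when written without synchronization, while the local variable and the ThreadLocal copy are confined to each thread by construction.

```java
public class ConfinementDemo {
    static int sharedCounter = 0;              // shared: unsynchronized writes here race
    static final ThreadLocal<Integer> localCounter =
            ThreadLocal.withInitial(() -> 0);  // confined: each thread gets its own copy

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            int amount = 5;                                // local variable: never shared
            sharedCounter += amount;                       // racy shared write
            localCounter.set(localCounter.get() + amount); // race-free by construction
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // The main thread never touched its own localCounter copy, so it is still 0,
        // no matter what the worker threads did with theirs.
        System.out.println(localCounter.get()); // prints 0
    }
}
```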
Thread Operation Rules
A thread's behavior that can be observed or influenced by other threads (its inter‑thread actions) includes:
Normal read operations can be performed.
Normal write operations can be performed.
Synchronization actions: a volatile read of a variable (the value is fetched from main memory and reflects writes by other threads); a volatile write (flushed directly to main memory, making the variable visible to other threads); lock, which acquires a monitor; and unlock, which releases a monitor.
The first and last actions of a thread (e.g., those observed when waiting for subtasks to finish).
Actions that start a thread or detect that a thread has terminated.
Synchronization Principles (synchronizes‑with relations, which produce observable, cross‑thread visibility effects)
An unlock of monitor m synchronizes‑with every subsequent lock of m.
A write to a volatile variable v synchronizes‑with every subsequent read of v by any thread.
The action of starting a thread synchronizes‑with the first action of that thread.
The write of the default value to each field synchronizes‑with the first action of every thread.
The final action of a thread T1 synchronizes‑with any action in another thread T2 that detects T1's termination (e.g., T2 returning from T1.join() or reading T1.isAlive() as false); similarly, interrupting a thread synchronizes‑with the point at which any other thread learns of the interrupt (via InterruptedException, Thread.interrupted, or Thread.isInterrupted).
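The volatile rule above is the classic safe‑publication idiom; this sketch (class and field names are my own) shows a plain field published through a volatile flag. Because the producer's volatile write synchronizes‑with the consumer's read that sees `true`, the plain write to `payload` is guaranteed visible afterwards.

```java
public class VolatileHandoff {
    static int payload = 0;                // plain field with no synchronization of its own
    static volatile boolean ready = false; // the publishing flag
    static int observed = -1;              // what the consumer ends up seeing

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            payload = 42;   // 1: plain write
            ready = true;   // 2: volatile write publishes action 1
        });
        Thread consumer = new Thread(() -> {
            while (!ready) { }  // 3: volatile read; the loop exits once it sees action 2
            // hb(1,2) by program order, 2 synchronizes-with 3, hence hb(1, here):
            observed = payload; // guaranteed to read 42
        });
        consumer.start();
        producer.start();
        producer.join();
        consumer.join();
        System.out.println(observed); // prints 42
    }
}
```

Without `volatile` on `ready`, both the visibility of `payload` and the termination of the spin loop would be unspecified.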
Happens‑Before Principle (Specification)
Defines a happens‑before relation hb(x, y) between actions x and y:
If x and y are actions of the same thread and x precedes y in program order, then hb(x, y).
The end of an object's constructor happens‑before the start of its finalizer.
If action x synchronizes‑with a subsequent action y, then hb(x, y).
Transitivity: if hb(x, y) and hb(y, z), then hb(x, z).
The happens‑before principle mainly orders conflicting accesses and defines when a data race occurs. VM implementations follow these rules:
An unlock of a monitor happens‑before every subsequent lock of that same monitor.
A write to a volatile variable happens‑before each subsequent read of that variable.
A call to Thread.start() happens‑before every action in the started thread.
All actions of a thread happen‑before another thread successfully returns from join() on that thread.
The initialization of any object happens‑before any other action involving that object.
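The Thread.start()/join() rules can be sketched in a few lines (the class name `StartJoinDemo` is my own): the plain field needs no volatile or lock because the start() and join() edges already order every access to it.

```java
public class StartJoinDemo {
    static int data = 0; // plain field: no volatile, no locks

    public static void main(String[] args) throws InterruptedException {
        data = 1;                                // happens-before t.start()
        Thread t = new Thread(() -> data += 10); // therefore the thread sees data == 1
        t.start();
        t.join(); // every action of t happens-before join() returns
        System.out.println(data); // prints 11, guaranteed with no explicit synchronization
    }
}
```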
Following these rules, certain code cannot be reordered and certain values cannot be cached in registers or working memory; this is how JMM's visibility guarantees are satisfied.
Xiaokun's Architecture Exploration Notes
10 years of backend architecture design | AI engineering infrastructure, storage architecture design, and performance optimization | Former senior developer at NetEase, Douyu, Inke, etc.