Understanding the Java Memory Model (JMM) and JSR‑133: Concepts, Guarantees, and Memory Barriers
This article explains the Java Memory Model (JMM) defined by JSR‑133, covering its core theory, the role of keywords like volatile, synchronized, and final, the underlying hardware memory models, cache‑coherence protocols, happens‑before rules, memory barriers, and how they ensure atomicity, visibility, and ordering in multithreaded Java programs.
Multithreading and high concurrency are essential topics for every Java developer. The Java Memory Model (JMM), specified by JSR‑133, defines the guarantees that language constructs such as volatile, synchronized, and final provide so that program results are predictable in a multithreaded environment.
1. Common ground before discussing the JMM
The JMM is an abstract memory model, not the physical hardware memory model.
It does not directly correspond to the Java runtime data area.
It serves both programmers (users) and JVM implementers.
2. Computer memory model background
Modern CPUs execute instructions much faster than main memory can be accessed, so caches are placed between the CPU and main memory. In multiprocessor systems each CPU has its own cache, which creates cache‑coherence problems solved by protocols such as MSI, MESI, and MOSI.
3. Multithreaded programming challenges
Thread communication (how threads exchange information).
Thread synchronization (how threads enforce ordering).
Two common concurrency models are:
Shared memory: threads read and write a common memory area.
Message passing: threads exchange explicit messages without shared state.
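The two models above can be sketched in Java. This is a minimal illustration, not code from the article: the shared‑memory half communicates through a plain field guarded by a monitor lock, while the message‑passing half uses a BlockingQueue as the channel (the class name and the value 42 are arbitrary choices for the demo).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CommunicationModels {
    // Shared memory: threads communicate implicitly through a common field
    // and must synchronize explicitly (here via a monitor lock).
    static int sharedCounter = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            synchronized (lock) { sharedCounter = 42; }
        });
        writer.start();
        writer.join(); // join() establishes happens-before, so the write is visible
        synchronized (lock) { System.out.println("shared memory: " + sharedCounter); }

        // Message passing: threads exchange explicit messages and
        // synchronize implicitly through the channel (a blocking queue).
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1);
        Thread sender = new Thread(() -> {
            try { channel.put(42); } catch (InterruptedException ignored) { }
        });
        sender.start();
        System.out.println("message passing: " + channel.take()); // blocks until a message arrives
        sender.join();
    }
}
```

Note that in the message‑passing half no field is shared between the two threads; the queue itself carries both the data and the synchronization.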
4. JSR‑133 overview
JSR‑133 is the Java Memory Model and Thread Specification, part of JSR‑176. It is integrated into the Java Language Specification, JVM Specification, and java.lang package.
The specification targets two audiences:
Programmers: provides happens‑before guarantees for correct synchronization.
JVM implementers: restricts compiler and processor optimizations.
5. JMM guarantees
Atomicity: reads and writes of primitive types are atomic, except that accesses to non‑volatile long and double fields may be split into two 32‑bit operations; synchronized, Lock, and the atomic classes (e.g., AtomicInteger) provide stronger atomicity guarantees.
Visibility: volatile, synchronized, and Lock ensure that one thread's writes become visible to other threads.
Ordering: the JMM prevents certain reorderings; the happens‑before principle defines the partial order that compilers and processors must respect.
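The atomicity guarantee can be seen in a short sketch (illustrative code, not from the article; the class name and iteration count are arbitrary). An AtomicInteger increment is a single atomic read‑modify‑write, whereas `plainCount++` is three separate steps that two threads can interleave:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    static final AtomicInteger atomicCount = new AtomicInteger();
    static int plainCount = 0; // i++ on this field is read-modify-write, not atomic

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                atomicCount.incrementAndGet(); // one atomic operation
                plainCount++;                  // three steps: read, add, write back
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("atomic: " + atomicCount.get()); // always 200000
        System.out.println("plain:  " + plainCount);        // frequently less than 200000
    }
}
```

The atomic counter always reaches 200000; the plain counter usually does not, because concurrent increments can overwrite each other.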
6. Happens‑before rules
Program order rule.
Monitor lock rule.
Volatile variable rule.
Thread start rule.
Thread join rule.
Thread interruption rule.
Finalizer rule.
Transitivity.
These rules guarantee that if operation A happens‑before operation B, the effects of A are visible to B, regardless of actual execution order.
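Two of the rules above, the thread start rule and the thread join rule, can be demonstrated directly. This is an illustrative sketch (class name and values are arbitrary); note that the field is deliberately not volatile, because the start/join rules alone supply the needed ordering:

```java
public class HappensBeforeStartJoin {
    static int data = 0; // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        data = 1; // (1) written before start()
        Thread t = new Thread(() -> {
            // Thread start rule: (1) happens-before every action in this thread,
            // so this read is guaranteed to see 1.
            data = data + 1; // (2)
        });
        t.start();
        t.join();
        // Thread join rule: every action in t happens-before join() returning,
        // so this read is guaranteed to see (2).
        System.out.println(data); // prints 2
    }
}
```

Without start() and join() ordering the reads, neither thread would have any guarantee about what value of data it sees.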
7. Memory barriers
Memory barriers (store, load, full) enforce ordering at the CPU level. JMM inserts barriers around volatile reads/writes to prevent prohibited reorderings.
volatile write sequence (simplified): StoreStore barrier → volatile write → StoreLoad barrier
volatile read sequence (simplified): volatile read → LoadLoad barrier → LoadStore barrier
These barriers ensure that writes before a volatile write become visible before it, and that reads and writes after a volatile read cannot be reordered ahead of it.
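The classic flag‑and‑payload pattern shows these barriers at work. In this illustrative sketch (names and values are arbitrary), the StoreStore barrier before the volatile write keeps the payload store in front of the flag store, so once the reader observes the flag it is guaranteed to see the payload:

```java
public class VolatileFlag {
    static int payload = 0;                 // plain, non-volatile field
    static volatile boolean ready = false;  // volatile flag

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            payload = 42;  // ordinary write
            ready = true;  // volatile write: cannot be reordered before payload = 42
        });
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin on the volatile read */ }
            // The volatile variable rule makes the write to ready happen-before
            // this point, so payload is guaranteed to be 42 here.
            System.out.println(payload);
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

If ready were not volatile, the reader could spin forever on a stale cached value, or observe ready == true while still reading payload == 0.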
8. final semantics
Writes to a final field in a constructor cannot be reordered with the publication of the object reference.
Reading a final field after obtaining a reference cannot be reordered with the reference read.
JMM inserts StoreStore and LoadLoad barriers to enforce these guarantees.
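The practical payoff of final‑field semantics is safe publication without locks. A minimal sketch (the Config class and port value are invented for illustration): any thread that sees a non‑null reference to a properly constructed object is guaranteed to see its final fields fully initialized, even though the reference itself is published through a plain field.

```java
public class FinalPublication {
    static final class Config {
        final int port;      // final: the constructor's write to port cannot be
        Config(int port) {   // reordered with publishing the object reference
            this.port = port;
        }
    }

    static Config shared; // plain field, published without locks or volatile

    public static void main(String[] args) throws InterruptedException {
        Thread publisher = new Thread(() -> shared = new Config(8080));
        publisher.start();
        publisher.join(); // join only ensures the reference is visible for this demo

        // Final-field semantics guarantee that any thread seeing a non-null
        // shared also sees port == 8080 — no additional synchronization needed.
        System.out.println(shared.port); // prints 8080
    }
}
```

The caveat is that the reference must not escape the constructor (e.g., by registering `this` with a listener before construction finishes); final‑field guarantees only hold for properly constructed objects.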
Overall, the article provides a detailed walkthrough of how JMM, JSR‑133, and related language constructs work together to give Java developers reliable concurrency primitives.
JD Tech
Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.