Understanding Java Thread Concurrency: Concepts, Lifecycle, and the Java Memory Model
This article explains key Java concurrency concepts—including synchronization, parallelism, critical sections, blocking vs non‑blocking, thread lifecycle, priority, common thread methods, interrupt handling, and the Java Memory Model’s guarantees of atomicity, visibility, ordering, and happens‑before relations—providing practical examples and code snippets for backend developers.
Introduction
Java backend services routinely handle heavy concurrent workloads, which makes concurrency a core skill in server‑side Java programming.
Key Concepts
1. Synchronous and Asynchronous
Synchronous calls block until the method finishes; asynchronous calls return immediately while the actual work runs in another thread.
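The contrast can be sketched in a few lines. This is an illustrative example (the `compute` method and class name are not from the article); it uses `CompletableFuture` as one common way to run work asynchronously.

```java
import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {
    static int compute() {
        return 42; // stands in for some expensive work
    }

    public static void main(String[] args) throws Exception {
        // Synchronous: the caller blocks until compute() returns.
        int syncResult = compute();

        // Asynchronous: compute() runs on another thread; the caller is
        // free to do other work until it actually needs the result.
        CompletableFuture<Integer> future = CompletableFuture.supplyAsync(SyncVsAsync::compute);
        int asyncResult = future.get(); // block only when the result is required

        System.out.println(syncResult + " " + asyncResult); // prints "42 42"
    }
}
```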
2. Concurrency vs Parallelism
Concurrency means multiple tasks make progress by interleaving execution (only one runs at any instant), whereas parallelism means tasks truly run simultaneously on multiple CPUs.
3. Critical Section
A critical section is a region of code that accesses a shared resource and may be executed by only one thread at a time; other threads must wait until the section is released.
4. Blocking and Non‑Blocking
Blocking occurs when a thread cannot proceed (e.g., it fails to acquire a lock) and the OS suspends it, incurring context switches; non‑blocking approaches never suspend threads, instead letting them keep running and retry on conflict while preserving data integrity.
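A critical section guarded by a monitor lock is the classic blocking approach: only one thread at a time executes the synchronized method, and others are suspended until the lock is released. The `Counter` class below is illustrative, not from the article.

```java
public class Counter {
    private int count = 0;

    public synchronized void increment() { // critical section: one thread at a time
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // prints 20000 — no updates are lost
    }
}
```

Without `synchronized`, the two threads would race on `count++` (a read‑modify‑write) and the final total would usually fall short of 20000.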
Concurrency Control Strategies
Blocking (pessimistic: a thread that cannot enter is suspended until the resource is released)
Starvation‑free (fair scheduling guarantees every waiting thread eventually proceeds)
Non‑blocking (optimistic: threads are never suspended and retry on conflict)
Lock‑free (non‑blocking, with the guarantee that at least one thread always makes progress)
Wait‑free (the strongest guarantee: every thread completes in a bounded number of steps)
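The optimistic, retry-on-conflict idea can be sketched with `AtomicInteger`, which exposes compare‑and‑set (CAS) directly. This is an illustrative class, not from the article.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        int current;
        do {
            current = count.get();  // optimistic read: no lock taken
            // CAS succeeds only if no other thread changed the value
            // in between; on conflict, loop and retry.
        } while (!count.compareAndSet(current, current + 1));
        return current + 1;
    }

    public static void main(String[] args) {
        LockFreeCounter c = new LockFreeCounter();
        System.out.println(c.increment()); // prints 1
        System.out.println(c.increment()); // prints 2
    }
}
```

No thread is ever suspended here; a thread that loses a CAS race simply retries, which is why this style avoids the context‑switch cost of blocking.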
Java Memory Model (JMM)
The JMM defines rules to guarantee atomicity, visibility, and ordering for multithreaded programs.
What Is a Thread?
A thread is the basic unit of CPU scheduling, sharing the process’s resources while having its own stack, program counter, and registers.
Thread State Diagram
1. New – thread object created.
2. Runnable – start() called, thread is ready to run.
3. Running – thread has CPU and executes code.
4. Blocked – thread waits (e.g., for I/O, lock, sleep) and releases the CPU.
5. Dead – thread has finished execution or terminated due to an exception.
Blocking Situations
Waiting block – thread calls wait().
Synchronized block – thread cannot acquire a monitor lock.
Other block – sleep(), join(), or I/O request.
Creating and Starting Threads in Java
Two ways: extend java.lang.Thread or implement java.lang.Runnable.
When extending Thread, override run(). When implementing Runnable, pass the instance to a Thread and start it.
Best practice: give each thread a meaningful name for easier debugging.
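Both creation styles, each with a meaningful thread name, can be sketched as follows (class and thread names are illustrative):

```java
public class StartThreads {
    // Style 1: extend Thread and override run().
    static class Worker extends Thread {
        Worker(String name) {
            super(name); // meaningful name for easier debugging
        }
        @Override
        public void run() {
            System.out.println(getName() + " running");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Worker("worker-extend");

        // Style 2: implement Runnable (here via a lambda) and hand it to a Thread.
        Thread t2 = new Thread(
                () -> System.out.println(Thread.currentThread().getName() + " running"),
                "worker-runnable");

        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```

Implementing Runnable is usually preferred: it keeps the task separate from the thread and leaves the class free to extend something else.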
Thread Priority
Java thread priority ranges from 1 (MIN_PRIORITY) to 10 (MAX_PRIORITY), with the default being 5 (NORM_PRIORITY). Higher priority threads are more likely to receive CPU time, but priority does not guarantee execution order and is platform‑dependent.
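The constants above map directly to the Thread API; a short illustrative sketch:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread low = new Thread(() -> {}, "low");
        Thread high = new Thread(() -> {}, "high");

        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10
        // Threads default to Thread.NORM_PRIORITY (5) if not set.

        System.out.println(low.getPriority() + " " + high.getPriority()); // prints "1 10"
        // Priority is only a scheduling hint: the JVM maps it to OS
        // priorities, so behavior varies by platform.
    }
}
```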
Common Thread Methods
sleep() – pauses the current thread without releasing locks; may throw InterruptedException.
join() – waits for another thread to finish.
yield() – hints that the scheduler may run other threads of the same priority.
interrupt() – sets the thread’s interrupt flag; if the thread is blocked in calls such as sleep(), wait(), or join(), an InterruptedException is thrown and the flag is cleared.
interrupted() – a static method that checks and clears the current thread’s interrupt status; the instance method isInterrupted() checks without clearing.
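A small illustrative sketch of sleep() and join() together (class and thread names are not from the article):

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(100); // simulate work; note: sleep holds on to any locks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag
            }
        }, "worker");

        worker.start();
        worker.join(); // main blocks here until worker terminates
        System.out.println(worker.getState()); // prints TERMINATED
    }
}
```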
Interrupt Handling Example
```java
public void run() {
    try {
        while (!Thread.currentThread().isInterrupted() && moreWork()) {
            // do more work
            Thread.sleep(100); // a blocking call that responds to interrupt
        }
    } catch (InterruptedException e) {
        // thread was interrupted during sleep or wait
        Thread.currentThread().interrupt(); // restore the flag for callers
    } finally {
        // cleanup if required
    }
}
```
The loop checks isInterrupted() on each iteration; when another thread calls interrupt(), either the flag becomes true and the loop exits, or the blocked sleep() throws InterruptedException, which also ends the loop.
Thread Safety
Thread safety means that shared data is accessed in a way that prevents race conditions, typically using locks or other synchronization mechanisms. Not all concurrency control relies on locks; lock‑free and non‑blocking techniques also exist.
JMM Guarantees
Atomicity
Reads and writes of variables are atomic, except for non‑volatile long and double fields; declaring a long or double volatile makes its reads and writes atomic as well. Note that compound actions such as count++ (read‑modify‑write) are never atomic on their own.
Visibility
Changes to a shared variable become visible to other threads according to the happens‑before rules.
Ordering
Compilers and processors may reorder instructions, but they must respect data dependencies and the JMM’s ordering constraints.
Happens‑Before Rules
Program order: each action happens‑before any later action in the same thread.
Monitor lock: unlock happens‑before subsequent lock.
Volatile: write to a volatile variable happens‑before any later read of that variable.
Transitivity: if A happens‑before B and B happens‑before C, then A happens‑before C.
Note: a happens‑before relation does not require the first action to execute before the second; it only guarantees that the result of the first is visible to the second.
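The volatile and program‑order rules combine via transitivity in the classic safe‑publication pattern. A minimal illustrative sketch (field and method names are assumptions, not from the article):

```java
public class VolatilePublish {
    int data;               // plain field
    volatile boolean ready; // publication flag

    void writer() {
        data = 42;    // 1. program order: happens-before the volatile write below
        ready = true; // 2. volatile write
    }

    void reader() {
        if (ready) {  // 3. volatile read that observes the write
            // By transitivity (1 → 2 → 3), data = 42 is visible here:
            System.out.println(data); // prints 42, never 0
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VolatilePublish p = new VolatilePublish();
        Thread w = new Thread(p::writer);
        w.start();
        w.join();
        new Thread(p::reader).start();
    }
}
```

Without `volatile` on `ready`, the reader could legally observe `ready == true` but a stale `data == 0`, because no happens‑before edge would connect the two threads.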
Data Dependency
When two operations access the same variable and at least one is a write, a data‑dependency exists (write‑after‑read, write‑after‑write, read‑after‑write), preventing reordering that would change program results.
As‑If‑Serial Semantics
Even with reordering, a single‑threaded program must appear to execute in program order, protecting developers from low‑level memory model complexities.
Conclusion
The article provides an overview of Java threading fundamentals, common APIs, interrupt handling, and the memory model rules that ensure correct concurrent behavior in backend applications.