
Understanding Asynchronous and Concurrent Programming Models on the JVM

The article explains why asynchronous programming improves resource utilization, compares synchronous and asynchronous styles, and reviews common JVM concurrency models—including threads, thread pools, futures, reactive extensions, async‑await, fibers, and actors—while discussing their trade‑offs and suitability for distributed systems.

Architecture Digest

Programs can be written in a synchronous style, which often blocks threads and wastes CPU cycles, or in an asynchronous style that enables non‑blocking, more flexible scheduling and better utilization of hardware resources.

In a synchronous example the code must wait for each HTTP request before proceeding:

val res1 = get("http://server1")
val res2 = get("http://server2")
compute(res1, res2)

With asynchronous programming the two requests are launched independently and only synchronized when their results are needed, allowing the CPU to perform other work while I/O is in progress.
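On the JVM this non-blocking style can be sketched with `CompletableFuture`; a minimal sketch in which `get` and `compute` are hypothetical stand-ins for the HTTP call and the combining step:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Hypothetical stand-in for an HTTP GET; here it just echoes the URL.
    static String get(String url) {
        return "response from " + url;
    }

    // Hypothetical combining step.
    static String compute(String a, String b) {
        return a + " | " + b;
    }

    static String fetchBoth() {
        // Both requests start immediately on the common pool; neither blocks the caller.
        CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> get("http://server1"));
        CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> get("http://server2"));
        // Synchronize only at the point where both results are actually needed.
        return f1.thenCombine(f2, AsyncSketch::compute).join();
    }

    public static void main(String[] args) {
        System.out.println(fetchBoth());
    }
}
```

The calling thread is free to do other work between `supplyAsync` and the final `join`; only the last step waits.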

The article then examines the hardware basis for parallelism: multiple CPU cores, I/O devices, and distributed machines can all execute work concurrently, so forcing a program to wait for a single resource is inefficient.

Basic concurrency models are described next. The Thread model creates a new OS thread per task, which incurs overhead from thread creation, context switching, cache misses, per-thread memory consumption, and resource contention. For I/O-bound workloads a common rule of thumb is to size the thread count at roughly twice the number of database connections; for CPU-bound workloads a few threads per core are typical.
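These sizing heuristics can be written down directly; a sketch of the rules of thumb above, where the exact multipliers are assumptions to be tuned by measurement:

```java
public class PoolSizing {
    // I/O-bound rule of thumb from the text: ~2x the number of DB connections.
    static int ioBoundThreads(int dbConnections) {
        return dbConnections * 2;
    }

    // CPU-bound rule of thumb: a small number of threads per core.
    static int cpuBoundThreads(int threadsPerCore) {
        return Runtime.getRuntime().availableProcessors() * threadsPerCore;
    }

    public static void main(String[] args) {
        System.out.println("I/O-bound (20 connections): " + ioBoundThreads(20));
        System.out.println("CPU-bound (2 per core): " + cpuBoundThreads(2));
    }
}
```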

The thread pool model reuses a limited set of threads to execute many tasks, amortizing creation overhead. The article stresses the importance of keeping blocking tasks out of the pool and points to Java's Executors factory methods and Scala's blocking construct as mitigations.
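A minimal sketch of the thread-pool model using Java's `Executors` factory; the task and pool size are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    static int runOnPool() {
        // A fixed pool reuses 4 threads across all submitted tasks,
        // avoiding the cost of creating one OS thread per task.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // submit() with a value-returning lambda is a Callable<Integer>.
            return pool.submit(() -> 1 + 2).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnPool());
    }
}
```

A blocking task submitted here would pin one of the 4 threads for its whole duration, which is why the article warns against blocking inside the pool.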

Future represents a value that will become available later. Example:

// two futures run in parallel
val f1 = Future { get("http://server1") }
val f2 = Future { get("http://server2") }
compute(Await.result(f1, 10.seconds), Await.result(f2, 10.seconds))

Advanced models include Reactive Extensions (Rx) and Reactor, which treat data streams as first‑class objects that can be transformed and scheduled on thread pools. Example RxJava code:

Flowable.just("file.txt")
  .map(name -> Files.readAllLines(Paths.get(name)))  // java.nio.file.Files
  .subscribe(lines -> System.out.println(lines.size()), Throwable::printStackTrace);

Reactor example:

Flux.fromIterable(getSomeLongList())
  .mergeWith(Flux.interval(Duration.ofMillis(100)))
  .doOnNext(serviceA::someObserver)
  .map(d -> d * 2)
  .take(3)
  .onErrorResume(errorHandler::fallback)
  .doAfterTerminate(serviceM::incrementTerminate)
  .subscribe(System.out::println);

The async-await style (C# and the Scala Async macro) lets the programmer write sequential-looking code that the compiler rewrites into non-blocking future chains, so it runs asynchronously without explicit callbacks.

val future = async {
  println("Begin blocking")
  await { async { Thread.sleep(1000) } }
  println("End blocking")
}
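Java has no async-await, but the transformation the macro performs can be approximated by hand: each `await` becomes a continuation chained onto a future. A rough sketch of what the example above desugars to, with illustrative method names:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AwaitDesugared {
    static CompletableFuture<String> run() {
        // Code before the await runs immediately on the caller's thread.
        System.out.println("Begin blocking");
        // The await point becomes a continuation chained after the future.
        return CompletableFuture
            .runAsync(() -> sleepQuietly(100))
            .thenApply(v -> "End blocking");
    }

    static void sleepQuietly(long millis) {
        try {
            TimeUnit.MILLISECONDS.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        System.out.println(run().join());
    }
}
```

The macro automates exactly this kind of chaining, which is why async-await code reads top-to-bottom even though it never blocks between steps.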

Fiber (coroutine‑like) provides cooperative scheduling; on the JVM Quasar implements fibers by bytecode transformation. Example Java fiber:

new Fiber<Void>() {
  @Override
  protected Void run() throws SuspendExecution, InterruptedException {
    // your code
    return null;
  }
}.start();

And Kotlin example:

fiber @Suspendable {
  // your code
}

Actor model (originating from Erlang) treats each actor as an isolated entity that processes one message at a time and communicates via asynchronous messages. Akka is the primary JVM implementation. A simplified actor pseudocode is shown:

class MyActor extends BasicActor {
  var halfDoneResult: Option[XXX] = None

  def receive: Receive = {
    case A =>
      halfDoneResult = Some(firstPart())
      doIO(halfDoneResult).onComplete { _ => self ! B }
    case B => secondPart(halfDoneResult)
  }
}
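The core actor guarantee, that each actor processes one message at a time from a private mailbox, can be sketched without any framework using a queue and a single consumer thread. A toy illustration, not Akka's or Quasar's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    // Actor state: only ever touched by the single mailbox thread, so no locks.
    final List<String> processed = new ArrayList<>();
    private final Thread loop = new Thread(this::runLoop);

    void start() { loop.start(); }

    // Asynchronous send: enqueue and return immediately.
    void tell(String msg) { mailbox.add(msg); }

    private void runLoop() {
        try {
            while (true) {
                String msg = mailbox.take();       // one message at a time
                if (msg.equals("stop")) return;
                processed.add(msg.toUpperCase());  // the "receive" body
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    void awaitStop() {
        try {
            loop.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        ToyActor actor = new ToyActor();
        actor.start();
        actor.tell("a");
        actor.tell("b");
        actor.tell("stop");
        actor.awaitStop();
        System.out.println(actor.processed);  // handled strictly in order
    }
}
```

Real actor libraries add supervision, location transparency, and typed messages on top of this basic mailbox loop.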

The article compares actors with RPC and message queues in distributed architectures, noting that actors provide built‑in distribution but can add complexity when direct responses are needed.

In conclusion, the piece surveys several JVM concurrency abstractions, highlights their performance characteristics, and suggests choosing the model that best matches the workload and architectural constraints.

Tags: JVM, Concurrency, asynchronous, Reactive, Threads, actors, futures
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
