
Comparing BIO, NIO, and Asynchronous Models Using a Bank Process Analogy

The article uses a simple bank workflow with ten employees to illustrate how BIO, NIO, and asynchronous processing differ in task allocation and throughput, showing that dividing work among specialized roles dramatically increases the number of customers served per hour.

Java Captain

A bank with ten employees processes each customer request in four steps: the customer fills in a form (5 min), a teller reviews it (1 min), security fetches the cash (3 min), and the teller prints a receipt (1 min), for 10 minutes in total. The author uses this scenario to compare three processing models.

1. BIO (Blocking I/O) model – each arriving customer is handled end to end by a single employee who performs all four steps. At 10 minutes per customer, one employee serves six customers per hour, so ten employees handle at most 60 customers per hour, and each employee sits idle for the 5 minutes the customer spends filling in the form.
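The BIO model above maps directly onto the classic thread-per-connection server. Below is a minimal Java sketch of that pattern, assuming a toy protocol where the client sends one line (the "form") and the dedicated thread performs every remaining step for that client before it is free again; the class and method names are illustrative, not from the original article.

```java
import java.io.*;
import java.net.*;

// Minimal BIO sketch: each connection is handled end to end by one dedicated
// thread, mirroring one bank employee performing all four steps per customer.
public class BioServer {
    public static ServerSocket start(int port) throws IOException {
        ServerSocket server = new ServerSocket(port);
        Thread acceptor = new Thread(() -> {
            try {
                while (!server.isClosed()) {
                    Socket client = server.accept();          // blocks until a customer arrives
                    new Thread(() -> handle(client)).start(); // one employee per customer
                }
            } catch (IOException ignored) { /* server closed */ }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server;
    }

    // The "employee" performs every step for this customer before taking another.
    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line = in.readLine();   // blocks: waiting for the form to be filled in
            out.println("DONE: " + line);  // review, fetch cash, print receipt
        } catch (IOException ignored) { }
    }
}
```

The thread spends most of its life blocked in `readLine`, which is exactly the idle employee waiting for the customer's form.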

2. NIO (Non‑blocking I/O) model – the work is split: employee A only collects the forms and then distributes them to the remaining nine employees for the subsequent steps. Assuming employee A is saturated, the nine workers each process a customer in 5 minutes, yielding 9 × (60/5) = 108 customers per hour. This mirrors the classic NIO architecture with a main reactor, sub‑reactors, and worker threads, where each thread specializes in a specific task, eliminating idle time.
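The role of employee A corresponds to the selector thread in Java NIO: it watches every connection and only reacts when one is actually ready, never blocking on a slow customer. Below is a minimal single-reactor sketch, assuming a toy echo exchange that serves one connection and returns; the class name and `serveOnce` helper are illustrative. A production reactor would hand the readable channels to a worker pool, as the article's nine remaining employees suggest.

```java
import java.io.IOException;
import java.net.*;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// Minimal single-reactor sketch: one Selector thread (employee A) multiplexes
// all connections and only dispatches events that are ready.
public class NioReactor {
    public static void serveOnce(ServerSocketChannel server) throws IOException {
        Selector selector = Selector.open();
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        boolean done = false;
        while (!done) {
            selector.select();                       // wait until some connection is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {            // a customer arrived: hand out a form
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {       // the form is filled in: process it
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    client.read(buf);
                    buf.flip();
                    client.write(buf);               // echo back (review + receipt)
                    client.close();
                    done = true;
                }
            }
            selector.selectedKeys().clear();
        }
        selector.close();
    }
}
```

Note that the selector thread never waits on any single client: an unfilled form simply produces no event, so the thread is free to serve whoever is ready.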

3. Asynchronous model – building on the NIO setup, an additional employee B is dedicated to the third step (security fetching cash). While B retrieves the cash, the teller immediately starts the next customer instead of waiting. Each teller now spends only 2 minutes per customer (review plus receipt), so the eight remaining tellers process 8 × (60/2) = 240 customers per hour. The author notes that this pattern corresponds to asynchronous RPC/HTTP calls in modern web services, and mentions Jetty Continuations as a concrete implementation.
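The asynchronous hand-off can be sketched with Java's `CompletableFuture`: the teller (caller thread) starts the slow cash-fetching step on a separate executor (employee B) and is free immediately, with the receipt step attached as a callback that runs when the cash arrives. The class, the `serve` method, and the `fetchCash` stub are illustrative stand-ins, not from the original article.

```java
import java.util.concurrent.*;

// Sketch of the asynchronous hand-off: the slow step runs on its own executor
// (employee B), and the final step is a callback rather than a blocking wait.
public class AsyncTeller {
    static final ExecutorService security = Executors.newSingleThreadExecutor(); // employee B

    public static CompletableFuture<String> serve(String customer) {
        return CompletableFuture
            .supplyAsync(() -> fetchCash(customer), security)             // step 3, off the teller's thread
            .thenApply(cash -> "receipt for " + customer + ": " + cash);  // step 4 as a callback
    }

    private static String fetchCash(String customer) {
        // stands in for the 3-minute security run to the vault
        return "$100";
    }
}
```

Because `serve` returns immediately, the teller thread can call it for the next customer while the previous customer's cash is still being fetched, which is exactly where the 10-to-2-minute reduction in per-customer teller time comes from.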

Conclusion – The “divide‑and‑conquer” principle, assigning specialized staff (or threads) to specific tasks, dramatically improves throughput and applies both to computer systems and broader societal processes.

Tags: backend development, concurrency, asynchronous, NIO, throughput, BIO
Written by Java Captain

Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.
