
Understanding the Disruptor Framework: Core Concepts, Wait Strategies, and Usage Examples

This article introduces the open‑source Disruptor framework, explains its ring‑buffer architecture, sequencer, and wait‑strategy options, and provides complete Java code examples showing how to build high‑performance producer‑consumer pipelines without locks.

Rare Earth Juejin Tech Community

Introduction

Disruptor is an open‑source framework created by LMAX to solve the performance bottleneck of traditional locked queues, offering lock‑free, high‑concurrency operations and claiming up to six million orders per second in a single thread.

Official site: http://lmax-exchange.github.io/disruptor/.

Many well‑known projects such as Apache Storm, Camel, and Log4j2 adopt Disruptor for high performance.

Why Disruptor Was Created

Java’s built‑in thread‑safe queues (ArrayBlockingQueue, LinkedBlockingQueue, ConcurrentLinkedQueue) rely on locks or CAS, which can degrade performance under heavy contention, prompting the development of a lock‑free solution.

Core Concepts

RingBuffer – the underlying circular array that stores events.

Sequencer – manages sequence numbers for producers and consumers, providing coordination algorithms for single‑ and multi‑producer modes.

Sequence – a numeric cursor used to track the progress of producers and consumers.

SequenceBarrier – ensures producers do not overwrite entries that consumers have not yet processed.

EventProcessor – listens to the RingBuffer and dispatches events to the actual consumer implementation.

EventHandler – the business‑logic interface implemented by consumers.

Producer – the interface used by threads that publish events into the RingBuffer.

WaitStrategy – determines how a consumer waits for a producer to publish an event.

Wait Strategies

BlockingWaitStrategy – uses a lock and condition variable; lowest CPU usage but higher latency.

SleepingWaitStrategy – a CPU profile similar to BlockingWaitStrategy, but it retries with LockSupport.parkNanos(1) in a loop, reducing the impact on producer threads.

YieldingWaitStrategy – spins and calls Thread.yield(), suitable for low‑latency systems where consumer threads are fewer than CPU cores.

BusySpinWaitStrategy – pure spin‑wait, offering the best latency for ultra‑low‑latency environments.

PhasedBackoffWaitStrategy – combines spinning, yielding, and a custom back‑off, useful when CPU resources are scarce.
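To make the trade-offs concrete, here is a small pure-Java sketch (a hypothetical helper class, not Disruptor's actual wait-strategy implementations) showing the three basic waiting tactics a consumer can use while polling a producer cursor:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

public class WaitTactics {
    // Busy spin: lowest latency, but burns a CPU core (cf. BusySpinWaitStrategy).
    static long busySpinWait(AtomicLong cursor, long target) {
        while (cursor.get() < target) { /* spin */ }
        return cursor.get();
    }

    // Park briefly between checks: cheaper on CPU, slightly higher latency
    // (cf. SleepingWaitStrategy's use of LockSupport.parkNanos(1)).
    static long sleepingWait(AtomicLong cursor, long target) {
        while (cursor.get() < target) {
            LockSupport.parkNanos(1);
        }
        return cursor.get();
    }

    // Yield the time slice between checks (cf. YieldingWaitStrategy).
    static long yieldingWait(AtomicLong cursor, long target) {
        while (cursor.get() < target) {
            Thread.yield();
        }
        return cursor.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicLong cursor = new AtomicLong(-1);
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 10; i++) cursor.incrementAndGet();
        });
        producer.start();
        long seen = sleepingWait(cursor, 9); // blocks until sequence 9 is visible
        producer.join();
        System.out.println("cursor reached " + seen);
    }
}
```

The choice is a latency-versus-CPU trade-off: busy spinning reacts fastest but monopolizes a core, while parking or yielding frees the core at the cost of wake-up delay.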

Usage Example

Reference: https://github.com/LMAX-Exchange/disruptor/wiki/Getting-Started

<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.4</version>
</dependency>
// Define the event type
public class LongEvent {
    private Long value;
    public Long getValue() { return value; }
    public void setValue(Long value) { this.value = value; }
}

// Factory used by the RingBuffer to preallocate events
public class LongEventFactory implements EventFactory<LongEvent> {
    public LongEvent newInstance() { return new LongEvent(); }
}

// Consumer business logic
public class LongEventHandler implements EventHandler<LongEvent> {
    public void onEvent(LongEvent event, long sequence, boolean endOfBatch) {
        System.out.println("Consumer: " + event.getValue());
    }
}
public class LongEventProducer {
    private final RingBuffer<LongEvent> ringBuffer;

    public LongEventProducer(RingBuffer<LongEvent> ringBuffer) {
        this.ringBuffer = ringBuffer;
    }

    public void onData(ByteBuffer byteBuffer) {
        long sequence = ringBuffer.next();      // claim the next slot
        try {
            LongEvent event = ringBuffer.get(sequence);
            event.setValue(byteBuffer.getLong(0));
        } finally {
            System.out.println("Producer ready to publish data");
            ringBuffer.publish(sequence);       // always publish the claimed slot
        }
    }
}
import java.nio.ByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.lmax.disruptor.EventFactory;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.YieldingWaitStrategy;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;

public class DisruptorMain {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();
        EventFactory<LongEvent> factory = new LongEventFactory();
        int ringBufferSize = 1024 * 1024; // must be a power of 2
        Disruptor<LongEvent> disruptor = new Disruptor<>(factory, ringBufferSize, executor,
                ProducerType.SINGLE, new YieldingWaitStrategy());
        disruptor.handleEventsWith(new LongEventHandler());
        disruptor.start();
        RingBuffer<LongEvent> ringBuffer = disruptor.getRingBuffer();
        LongEventProducer producer = new LongEventProducer(ringBuffer);
        ByteBuffer bb = ByteBuffer.allocate(8);
        for (int i = 1; i <= 100; i++) {
            bb.putLong(0, i);
            producer.onData(bb);
        }
        disruptor.shutdown();
        executor.shutdown();
    }
}

Core Design Principles

Ring‑buffer array structure – entries are preallocated and reused, so steady‑state operation produces no garbage, and the sequential layout is friendly to CPU cache lines.

Index calculation uses power‑of‑two sizes and bit‑wise operations for fast element location.

Lock‑free design relies on atomic CAS operations to claim slots and publish events safely.
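The power‑of‑two requirement pays off in the index calculation: the slot for a sequence number reduces to a single bitwise AND instead of a modulo. A minimal illustration (hypothetical helper, not Disruptor's internal code):

```java
public class IndexCalc {
    // Equivalent to sequence % ringSize when ringSize is a power of two,
    // because (ringSize - 1) is then an all-ones bit mask.
    static int indexFor(long sequence, int ringSize) {
        return (int) (sequence & (ringSize - 1));
    }

    public static void main(String[] args) {
        int size = 8;
        for (long seq : new long[] {0, 7, 8, 9, 1024}) {
            System.out.println("sequence " + seq + " -> slot " + indexFor(seq, size));
        }
    }
}
```

Sequence 8 wraps back to slot 0, sequence 9 to slot 1, and so on, without the cost of a division instruction.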

Data Structure

The framework uses a customizable RingBuffer (circular array) together with a sequence number that points to the next available slot for producers and consumers.

Write Data Flow (Single‑Threaded)

Request to write *m* elements.

If *m* slots are available, obtain the highest sequence number, ensuring no overwrite of unread data.

Producer writes the elements into the claimed slots.
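This write flow can be sketched in plain Java. The MiniRing class below is a simplified, hypothetical single-threaded model (not Disruptor's actual Sequencer) that shows the claim-then-publish discipline and the wrap-around check:

```java
public class MiniRing {
    final long[] slots;
    final int mask;
    long next = 0;       // next sequence to claim
    long published = -1; // highest published sequence
    long consumed = -1;  // highest sequence the consumer has read

    MiniRing(int size) { // size must be a power of two
        slots = new long[size];
        mask = size - 1;
    }

    // Steps 1-2: claim m slots, refusing to wrap over unread entries.
    long claim(int m) {
        if (next + m - 1 - consumed > slots.length)
            throw new IllegalStateException("would overwrite unread data");
        long hi = next + m - 1;
        next += m;
        return hi; // highest claimed sequence
    }

    // Step 3: write into the claimed slots, then make them visible.
    void write(long hi, int m, long[] values) {
        for (int i = 0; i < m; i++)
            slots[(int) ((hi - m + 1 + i) & mask)] = values[i];
        published = hi;
    }

    public static void main(String[] args) {
        MiniRing ring = new MiniRing(8);
        long hi = ring.claim(3);
        ring.write(hi, 3, new long[] {10, 20, 30});
        System.out.println("published up to sequence " + ring.published);
    }
}
```

In the real framework the same ordering holds, but the claim and publish steps use memory barriers and (for multiple producers) CAS so that concurrent threads coordinate without locks.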

Use Cases

Disruptor delivers lower latency and higher throughput than traditional queues such as ArrayBlockingQueue, making it suitable for scenarios where a single producer feeds multiple consumers that must process events in order, e.g., reading MySQL binlog and indexing into Elasticsearch.

Performance results and further documentation can be found at the official GitHub wiki and related technical blogs.

Tags: Java, concurrency, Disruptor, high performance, RingBuffer, WaitStrategy
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
