
Understanding the Disruptor In-Memory Message Queue: Architecture, Features, and Tuning

This article introduces the Disruptor in‑memory message queue, explains its core components such as Ring Buffer, Sequence, Sequencer and Wait Strategies, describes its distinctive features like multicast events and lock‑free concurrency, and provides tuning guidelines and a complete Java example.

Code Ape Tech Column

Disruptor is a popular in‑memory message queue originating from LMAX's research on concurrency, performance, and non‑blocking algorithms. Unlike distributed queues such as RocketMQ, Disruptor runs within a single process and therefore has a very different architecture.

1. Core Concepts

1.1 Ring Buffer

The Ring Buffer is the primary design element of Disruptor. Since version 3.0 it only stores and updates data that passes through the Disruptor, and in advanced scenarios it can be replaced by user‑provided implementations.

1.2 Sequence

Disruptor uses a Sequence to identify the position of each component. Every consumer (event processor) holds its own Sequence, which behaves like an AtomicLong but without false sharing between different sequences.
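The false-sharing point can be sketched with a hand-padded counter. This is an illustration of the idea, not Disruptor's actual field layout; the class and field names are hypothetical, and a 64-byte cache line is assumed.

```java
// Sketch of the idea behind Disruptor's Sequence: pad a hot counter so two
// counters owned by different threads never share a CPU cache line.
// Illustrative only -- Disruptor's real Sequence uses its own padding scheme.
class PaddedCounter {
    long p1, p2, p3, p4, p5, p6, p7;   // 56 bytes of padding before
    volatile long value;               // the hot field
    long p9, p10, p11, p12, p13, p14, p15; // 56 bytes of padding after

    long get() { return value; }
    void set(long v) { value = v; }
}

public class FalseSharingSketch {
    public static long demo() {
        PaddedCounter producerCursor = new PaddedCounter();
        PaddedCounter consumerCursor = new PaddedCounter();
        producerCursor.set(10);
        consumerCursor.set(3);
        // available events = producer cursor - consumer cursor
        return producerCursor.get() - consumerCursor.get();
    }

    public static void main(String[] args) {
        System.out.println("available = " + demo()); // prints: available = 7
    }
}
```

Without padding, the producer's and consumer's counters could land on the same cache line, so every write by one thread would invalidate the line in the other thread's cache.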

1.3 Sequencer

The Sequencer is the true core of Disruptor. It has two implementations, SingleProducerSequencer and MultiProducerSequencer, supporting single-producer and multi-producer scenarios respectively, and implements all required concurrency algorithms.

1.4 Sequence Barrier

The Sequencer creates a SequenceBarrier that holds references to the producer’s sequences and the consumers’ sequences. The barrier determines whether there are events available for a consumer to process.

1.5 Wait Strategy

Wait strategies define how consumers wait for new events. Options include BlockingWaitStrategy, SleepingWaitStrategy, YieldingWaitStrategy, and BusySpinWaitStrategy, each with different latency and CPU usage characteristics.

1.6 Event Processor

An event processor repeatedly handles events from the Disruptor and owns the consumer's Sequence. The provided BatchEventProcessor offers efficient looping and can invoke a user-implemented EventHandler after processing.

1.7 Event Handler

Users implement the EventHandler interface to define how each event is processed.

2. Disruptor Features

2.1 Multicast Events

Unlike most queues where an event is consumed by a single consumer, Disruptor can broadcast an event to multiple consumers, allowing all listeners to receive the same message.
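The multicast semantics can be sketched without the Disruptor library at all. The sketch below is a minimal stdlib-only simulation of the behaviour (class and method names are made up for illustration): one published event is delivered to every registered handler, unlike a work queue where each event goes to exactly one consumer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongConsumer;

// Minimal illustration of multicast delivery (not Disruptor's implementation):
// every registered handler sees every published event.
public class MulticastSketch {
    private final List<LongConsumer> handlers = new ArrayList<>();

    public void register(LongConsumer handler) { handlers.add(handler); }

    public void publish(long event) {
        for (LongConsumer h : handlers) h.accept(event); // each handler gets the event
    }

    public static int demo() {
        MulticastSketch bus = new MulticastSketch();
        int[] deliveries = new int[1];
        bus.register(e -> deliveries[0]++);
        bus.register(e -> deliveries[0]++);
        bus.register(e -> deliveries[0]++);
        bus.publish(42L); // one event, three deliveries
        return deliveries[0];
    }

    public static void main(String[] args) {
        System.out.println("deliveries = " + demo()); // prints: deliveries = 3
    }
}
```

In Disruptor itself this is what you get when several handlers are registered in parallel on the same ring buffer: all of them observe every event.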

2.2 Consumer Dependency Graph

Consumers can be coordinated via a gating mechanism. By registering consumer sequences with RingBuffer.addGatingSequences(), a SequenceBarrier can be built to enforce that certain consumers finish processing an event before others start on it.
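The gating idea can be shown with a stdlib-only sketch: consumer B may only process a sequence after consumer A has published that it finished it. This is a hypothetical simplification of what a SequenceBarrier enforces, not Disruptor's actual code.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of gating: B spins until A's sequence has reached the slot it wants,
// so "B n" can never appear in the log before "A n".
public class GatingSketch {
    public static String demo() {
        AtomicLong aSequence = new AtomicLong(-1); // A's published progress
        StringBuilder log = new StringBuilder();

        Thread b = new Thread(() -> {
            for (long seq = 0; seq < 3; seq++) {
                while (aSequence.get() < seq) Thread.onSpinWait(); // gate on A
                synchronized (log) { log.append("B").append(seq); }
            }
        });
        b.start();

        for (long seq = 0; seq < 3; seq++) {
            synchronized (log) { log.append("A").append(seq); } // A processes seq
            aSequence.set(seq);                                 // then publishes it
        }
        try { b.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // e.g. A0A1B0B1A2B2 -- An always precedes Bn
    }
}
```

Disruptor generalizes this pairwise ordering into an arbitrary dependency graph of consumers over the same ring buffer.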

2.3 Memory Pre‑allocation

To achieve low latency, Disruptor pre‑allocates memory for events. Users provide an EventFactory that creates objects for each slot in the ring buffer, allowing producers to reuse these objects without additional allocation.
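A minimal sketch of the pre-allocation pattern, with illustrative names: every slot in the ring is filled with an event object up front, and producers overwrite slots in place rather than allocating a new object per publish.

```java
// Sketch of pre-allocation: slots are created once by a factory-style loop,
// then reused forever. Class and method names here are illustrative.
public class PreallocSketch {
    static final class Event { long value; }

    private final Event[] slots;

    public PreallocSketch(int size) {
        slots = new Event[size];
        for (int i = 0; i < size; i++) slots[i] = new Event(); // one factory call per slot
    }

    // Overwrite the slot for this sequence in place -- no new allocation.
    public void publish(long sequence, long value) {
        slots[(int) (sequence % slots.length)].value = value;
    }

    public long read(long sequence) {
        return slots[(int) (sequence % slots.length)].value;
    }

    public static boolean demo() {
        PreallocSketch ring = new PreallocSketch(4);
        Event before = ring.slots[1];
        ring.publish(1, 99);   // lands in slot 1
        ring.publish(5, 100);  // sequence 5 wraps around and reuses slot 1
        return before == ring.slots[1] && ring.read(5) == 100;
    }

    public static void main(String[] args) {
        System.out.println("slot reused = " + demo()); // prints: slot reused = true
    }
}
```

Because no objects are allocated on the hot path, the garbage collector has nothing to reclaim there, which removes a major source of latency jitter.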

2.4 Lock‑free Concurrency

Disruptor achieves low latency primarily through lock-free algorithms using memory barriers and CAS operations. The only place a lock is used is in the BlockingWaitStrategy.
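The CAS pattern the multi-producer path relies on can be sketched with java.util.concurrent.atomic alone. This is a simplified illustration of lock-free slot claiming, not Disruptor's actual sequencer: each producer atomically bumps a shared cursor, so no two producers ever claim the same sequence and no lock is taken.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of lock-free sequence claiming with a CAS retry loop.
public class CasClaimSketch {
    private final AtomicLong cursor = new AtomicLong(-1);

    public long next() {
        long current, next;
        do {
            current = cursor.get();
            next = current + 1;
        } while (!cursor.compareAndSet(current, next)); // retry on contention, never block
        return next;
    }

    public static long demo() {
        CasClaimSketch seq = new CasClaimSketch();
        Runnable claimer = () -> { for (int i = 0; i < 1000; i++) seq.next(); };
        Thread t1 = new Thread(claimer), t2 = new Thread(claimer);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return seq.cursor.get(); // 2000 unique claims starting at 0 -> cursor ends at 1999
    }

    public static void main(String[] args) {
        System.out.println("last sequence = " + demo()); // prints: last sequence = 1999
    }
}
```

A failed compareAndSet simply means another producer won that slot; the loser re-reads the cursor and tries the next one, so progress never depends on a lock holder being scheduled.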

3. Tuning Options

3.1 Single vs. Multi‑Producer

Disruptor<LongEvent> disruptor = new Disruptor<>(factory, bufferSize, DaemonThreadFactory.INSTANCE, ProducerType.SINGLE, new BlockingWaitStrategy());

Setting ProducerType.SINGLE creates a sequencer optimized for a single producer; ProducerType.MULTI enables a multi‑producer sequencer. Benchmarks show higher throughput for the single‑producer configuration on typical hardware.

3.2 Wait Strategies

BlockingWaitStrategy: Uses a lock and a Condition variable; the gentlest on the CPU but the slowest to wake.

SleepingWaitStrategy: Spins briefly, then yields, then calls LockSupport.parkNanos(1) instead of signalling a condition.

YieldingWaitStrategy: Performs a busy-spin with Thread.yield(), suitable when available CPU cores exceed the number of event-handling threads.

BusySpinWaitStrategy: Pure busy-spin, offering the lowest latency but requiring a dedicated CPU core, ideally with hyper-threading disabled.
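The spin-then-yield-then-park back-off behind SleepingWaitStrategy can be sketched in plain Java. The thresholds and names below are illustrative assumptions, not Disruptor's actual constants: the loop burns a few iterations spinning, then yields its time slice, then parks for a nanosecond at a time, trading a little wake-up latency for much lower CPU use.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

// Sketch of a spin -> yield -> park wait loop, in the style of SleepingWaitStrategy.
public class BackoffWaitSketch {
    public static long waitFor(long sequence, AtomicLong cursor) {
        int retries = 200; // illustrative threshold
        while (cursor.get() < sequence) {
            if (retries > 100) {
                retries--;                 // phase 1: hot spin
            } else if (retries > 0) {
                retries--;
                Thread.yield();            // phase 2: give up the time slice
            } else {
                LockSupport.parkNanos(1L); // phase 3: sleep very briefly
            }
        }
        return cursor.get();
    }

    public static long demo() {
        AtomicLong cursor = new AtomicLong(-1);
        Thread producer = new Thread(() -> {
            for (long s = 0; s <= 5; s++) cursor.set(s);
        });
        producer.start();
        long seen = waitFor(5, cursor); // blocks until the cursor reaches 5
        try { producer.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println("cursor = " + demo()); // prints: cursor = 5
    }
}
```

YieldingWaitStrategy and BusySpinWaitStrategy correspond to stopping at phase 2 and phase 1 respectively: less back-off means lower latency but a core pinned at 100%.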

4. Official Example

The following Java code demonstrates a minimal Disruptor setup that publishes long values from a producer to a consumer.

import com.lmax.disruptor.EventFactory;
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;
import java.nio.ByteBuffer;

public class LongEvent {
    private long value;
    public void set(long value) { this.value = value; }
    @Override public String toString() { return "LongEvent{value=" + value + '}'; }
}

public class LongEventFactory implements EventFactory<LongEvent> {
    @Override public LongEvent newInstance() { return new LongEvent(); }
}

public class LongEventHandler implements EventHandler<LongEvent> {
    @Override public void onEvent(LongEvent event, long sequence, boolean endOfBatch) {
        System.out.println("Event: " + event);
    }
}

public class LongEventMain {
    public static void main(String[] args) throws Exception {
        int bufferSize = 1024; // must be a power of two
        Disruptor<LongEvent> disruptor = new Disruptor<>(LongEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);
        disruptor.handleEventsWith((event, sequence, endOfBatch) -> System.out.println("Event: " + event));
        disruptor.start();

        RingBuffer<LongEvent> ringBuffer = disruptor.getRingBuffer();
        ByteBuffer bb = ByteBuffer.allocate(8);
        for (long l = 0; ; l++) {
            bb.putLong(0, l);
            // Translate the long into the pre-allocated event inside the ring buffer
            ringBuffer.publishEvent((event, seq, buffer) -> event.set(buffer.getLong(0)), bb);
            Thread.sleep(1000);
        }
    }
}

5. Conclusion

As a high‑performance in‑memory queue, Disruptor offers valuable design ideas such as memory pre‑allocation and lock‑free concurrency. Its API is straightforward, making it a recommended choice for latency‑critical Java applications.

Tags: Performance Tuning, Disruptor, Low Latency, Ring Buffer, Java Concurrency, In-Memory Queue
Written by Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full-stack Java, job-interview and career advice through a column. Site: java-family.cn
