
Performance Comparison of LMAX Disruptor and LinkedBlockingQueue in Java

This article presents a comprehensive performance comparison between LMAX Disruptor and Java's LinkedBlockingQueue, detailing test setups, producer and consumer configurations, various object sizes, benchmark results, and practical conclusions for high‑throughput backend systems.

FunTester

Conclusion

Overall, the consumer performance of com.lmax.disruptor.dsl.Disruptor is extremely strong: strong enough that a ceiling is almost impossible to measure. Production performance, however, degrades as the number of events increases, settling around 500k QPS, which still satisfies current pressure‑test needs. The key take‑aways are:

Disruptor's consumer capability is outstanding, maintaining high throughput even with a very large number of consumers (e.g., 1000).

Provided there is no message backlog, the size of com.lmax.disruptor.AbstractSequencer#bufferSize has little impact on performance.

In a single‑producer scenario, Disruptor's production rate suffers the same instability as java.util.concurrent.LinkedBlockingQueue.

The bottleneck of Disruptor lies in the producer; larger message objects affect performance more than the number of producers, while queue backlog has little effect.

Reducing the event payload size dramatically improves performance—using java.lang.String for the payload is recommended, as confirmed by the log‑replay system.

Introduction

Here are a few additional remarks.

Disruptor is a high‑performance queue developed by the UK foreign‑exchange firm LMAX to solve memory‑queue latency issues (its latency was found to be on the same order as I/O operations). A system built on Disruptor can handle six million orders per second on a single thread.
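Much of this speed comes from the ring buffer design: Disruptor requires the buffer size to be a power of two, so mapping an ever‑increasing sequence number to a slot is a single bitwise AND rather than a modulo. A minimal stdlib Java sketch of that indexing trick (the class name `RingIndexDemo` is mine; this illustrates the idea only and is not the Disruptor API):

```java
// Sketch of power-of-two sequence-to-slot mapping, as used by ring buffers.
public class RingIndexDemo {
    public static void main(String[] args) {
        int bufferSize = 8;            // must be a power of two
        int mask = bufferSize - 1;     // 0b0111
        for (long sequence = 0; sequence < 20; sequence++) {
            // Bitwise AND with (size - 1) is equivalent to sequence % bufferSize
            int slot = (int) (sequence & mask);
            if (slot != (int) (sequence % bufferSize)) throw new AssertionError();
        }
        System.out.println("slot of sequence 13 = " + (13 & mask)); // 13 % 8 = 5
    }
}
```

The AND avoids an integer division on every publish and keeps slots pre‑allocated, which is one reason the consumer side scales so well in the tables below.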

Test Results

Performance is measured solely by the number of messages (objects) processed per millisecond. The tests use producer mode com.lmax.disruptor.dsl.ProducerType#MULTI and register consumers with com.lmax.disruptor.dsl.Disruptor#handleEventsWithWorkerPool.

Data Description

Three sizes of org.apache.http.client.methods.HttpGet object are used, created with the native API; size is varied by adding headers and lengthening the URL.

Small object:

def get = new HttpGet()

Medium object:

def get = new HttpGet(url)
get.addHeader("token", token)
get.addHeader(HttpClientConstant.USER_AGENT)
get.addHeader(HttpClientConstant.CONNECTION)

Large object:

def get = new HttpGet(url + token)
get.addHeader("token", token)
get.addHeader("token1", token)
get.addHeader("token5", token)
get.addHeader("token4", token)
get.addHeader("token3", token)
get.addHeader("token2", token)
get.addHeader(HttpClientConstant.USER_AGENT)
get.addHeader(HttpClientConstant.CONNECTION)

Producer

| Object Size | Queue Length (million) | Thread Count | Rate (/ms) |
| --- | --- | --- | --- |
| Small | 1 | 1 | 890 |
| Small | 1 | 5 | 1041 |
| Small | 1 | 10 | 1100 |
| Small | 0.5 | 1 | 755 |
| Small | 0.5 | 5 | 597 |
| Small | 0.5 | 10 | 612 |
| Medium | 1 | 1 | 360 |
| Medium | 1 | 5 | 394 |
| Medium | 1 | 10 | 419 |
| Medium | 1 | 20 | 401 |
| Medium | 0.5 | 1 | 256 |
| Medium | 0.5 | 5 | 426 |
| Large | 1 | 1 | 201 |
| Large | 1 | 5 | 243 |
| Large | 1 | 10 | 242 |
| Large | 0.5 | 1 | 194 |
| Large | 0.5 | 5 | 215 |
| Large | 0.5 | 10 | 195 |

During testing, an excessively large com.lmax.disruptor.AbstractSequencer#bufferSize caused the Disruptor to take a very long time to start, so values above 1024 × 1024 were not tested; all runs used a buffer size of 1024 × 1024.

Observed regularities:

Higher total message volume yields higher QPS.

Increasing producer thread count has little impact on QPS.

Keeping the message payload as small as possible improves performance.
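The measurement pattern behind these producer numbers is simple: start N producer threads, wait on a CountDownLatch until all have finished publishing, then divide total messages by elapsed milliseconds. A stdlib‑only Java sketch of that pattern, using LinkedBlockingQueue as a stand‑in for the ring buffer (class and variable names are mine, chosen to mirror the Groovy harness shown later):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerRateDemo {
    public static void main(String[] args) throws InterruptedException {
        int total = 500_000, threadNum = 5;
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(); // unbounded stand-in
        CountDownLatch latch = new CountDownLatch(threadNum);
        long start = System.currentTimeMillis();
        for (int t = 0; t < threadNum; t++) {
            new Thread(() -> {
                for (int i = 0; i < total / threadNum; i++) {
                    queue.offer("payload");        // publish one message
                }
                latch.countDown();                 // this producer is done
            }).start();
        }
        latch.await();                             // wait for all producers
        long elapsed = Math.max(1, System.currentTimeMillis() - start);
        System.out.println("Rate per ms: " + total / elapsed);
        if (queue.size() != total) throw new AssertionError();
    }
}
```

Because only enqueue cost is timed here, the measured rate isolates the producer side, which is exactly the bottleneck the observations above point at.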

Consumer

For the Disruptor framework, constructing a single‑consumer scenario is difficult, so a shortcut was used that may affect the results slightly; however, Disruptor's extraordinary consumption ability makes the error negligible.

| Object Size | Queue Length (million) | Thread Count | Rate (/ms) |
| --- | --- | --- | --- |
| Small | 1 | 1 | 10526 |
| Small | 1 | 5 | 6060 |
| Small | 1 | 10 | 5376 |
| Small | 1 | 20 | 4672 |
| Medium | 1 | 1 | 12345 |
| Medium | 1 | 5 | 8130 |
| Medium | 1 | 10 | 5586 |
| Large | 1 | 1 | 16129 |
| Large | 1 | 5 | 5681 |
| Large | 1 | 10 | 5649 |
| Large | 0.5 | 1 | 8474 |
| Large | 0.5 | 5 | 4761 |
| Large | 0.5 | 10 | 3846 |

The conclusions are similar to those for java.util.concurrent.LinkedBlockingQueue :

Longer queue length yields higher throughput.

Fewer consumer threads give better performance.

Keep the message payload as small as possible.

PS: In a test with a million large‑object messages and 1000 consumer threads, the QPS reached 3412/ms, showing that even with 1000 consumer threads Disruptor maintains very high performance.
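The consumer numbers above follow a drain pattern: pre‑fill the queue, start the clock, and measure how long the consumer threads take to empty it. A stdlib Java sketch of that pattern (again with LinkedBlockingQueue standing in for the ring buffer; names are mine):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class ConsumerDrainDemo {
    public static void main(String[] args) throws InterruptedException {
        int total = 200_000, threadNum = 4;
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < total; i++) queue.offer("payload"); // pre-fill the backlog
        long start = System.currentTimeMillis();
        Thread[] consumers = new Thread[threadNum];
        for (int t = 0; t < threadNum; t++) {
            consumers[t] = new Thread(() -> {
                // poll() returns null once the backlog is drained (no live producers)
                while (queue.poll() != null) { /* consume */ }
            });
            consumers[t].start();
        }
        for (Thread c : consumers) c.join();       // wait until the queue is empty
        long elapsed = Math.max(1, System.currentTimeMillis() - start);
        System.out.println("Drain rate per ms: " + total / elapsed);
        if (!queue.isEmpty()) throw new AssertionError();
    }
}
```

Note the table's trend reproduces here too: with no producer contention, fewer consumer threads often drain faster because they contend less on the queue head.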

Producer & Consumer

Here the thread count refers to the number of producers or consumers; the total thread count is twice this number.

| Object Size | Count (million) | Thread Count | Queue Length (million) | Rate (/ms) |
| --- | --- | --- | --- | --- |
| Small | 1 | 1 | 0.1 | 16949 |
| Small | 1 | 1 | 0.2 | 8403 |
| Small | 1 | 1 | 0.5 | 5555 |
| Small | 1 | 5 | 0.1 | 5181 |
| Small | 1 | 10 | 0.1 | 1295 |
| Medium | 1 | 1 | 0.1 | 21276 |
| Medium | 1 | 1 | 0.2 | 16949 |
| Medium | 1 | 5 | 0.2 | 15625 |
| Medium | 1 | 10 | 0.2 | 574 |
| Medium | 2 | 1 | 0.2 | 34920 |
| Medium | 2 | 5 | 0.2 | 24752 |
| Medium | 2 | 10 | 0.2 | 789 |
| Large | 1 | 1 | 0.1 | 44000 |
| Large | 1 | 1 | 0.2 | 25000 |
| Large | 1 | 5 | 0.2 | 11764 |
| Large | 1 | 10 | 0.2 | 278 |

This round of testing was nearly abandoned: the same case produced up to a two‑fold variance between runs. The following conclusions are for reference only:

Fewer accumulated messages in the queue lead to higher rates.

Consumption speed increases over time.

Keep the message payload as small as possible.

When the thread count exceeds 10, a noticeable performance drop occurs; at that point consumption far outpaces production, so throughput is limited by the producers.

Benchmark

Please refer to the previous test article "Java & Go High‑Performance Queue – LinkedBlockingQueue Performance Test" for details.
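One behavioral difference worth keeping in mind when comparing against that baseline: a bounded LinkedBlockingQueue applies backpressure by blocking put() when full, while offer() fails fast (or waits only for a timeout). A small stdlib demonstration of those semantics:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2); // capacity 2
        queue.put(1);
        queue.put(2);
        boolean accepted = queue.offer(3);                    // full: fails immediately
        System.out.println("offer on full queue: " + accepted);           // false
        boolean timed = queue.offer(3, 50, TimeUnit.MILLISECONDS);        // waits, then gives up
        System.out.println("timed offer on full queue: " + timed);        // false
        queue.take();                                         // consumer makes room
        if (!queue.offer(3)) throw new AssertionError();      // now succeeds
    }
}
```

Disruptor's ring buffer handles the full case through its wait strategy instead (here YieldingWaitStrategy), which trades CPU for latency rather than parking the producer thread.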

Test Cases

The test cases are written in Groovy. They rely on the author's custom asynchronous keyword fun, the custom timing keyword time, and a closure‑based wait method; Groovy's closure syntax made it easy to explore several multithreaded variants of the harness.

Producer

import com.funtester.config.HttpClientConstant
import com.funtester.frame.SourceCode
import com.funtester.frame.execute.ThreadPoolUtil
import com.funtester.utils.Time
import com.lmax.disruptor.EventHandler
import com.lmax.disruptor.RingBuffer
import com.lmax.disruptor.WorkHandler
import com.lmax.disruptor.YieldingWaitStrategy
import com.lmax.disruptor.dsl.Disruptor
import com.lmax.disruptor.dsl.ProducerType
import org.apache.http.client.methods.HttpGet
import org.apache.http.client.methods.HttpRequestBase
import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicInteger

class DisProduce extends SourceCode {
    static AtomicInteger index = new AtomicInteger(1)
    static int total = 50_0000
    static int size = 10
    static int threadNum = 10
    static int piece = total / size
    static def url = "http://localhost:12345/funtester"
    static def token = "FunTesterFunTesterFunTesterFunTesterFunTesterFunTesterFunTester"

    public static void main(String[] args) {
        Disruptor<FunEvent> disruptor = new Disruptor<>(
                FunEvent::new,
                1024 * 1024,
                ThreadPoolUtil.getFactory(),
                ProducerType.MULTI,
                new YieldingWaitStrategy()
        );
        disruptor.start();
        RingBuffer<FunEvent> ringBuffer = disruptor.getRingBuffer();
        def latch = new CountDownLatch(threadNum)
        def ss = Time.getTimeStamp()
        def funtester = {
            fun {
                (total / threadNum).times {
                    if (index.getAndIncrement() % piece == 0) {
                        def l = Time.getTimeStamp() - ss
                        output("${formatLong(index.get())} add cost ${formatLong(l)}")
                        ss = Time.getTimeStamp()
                    }
                    def get = new HttpGet(url + token)
                    get.addHeader("token", token)
                    get.addHeader("token1", token)
                    get.addHeader("token5", token)
                    get.addHeader("token4", token)
                    get.addHeader("token3", token)
                    get.addHeader("token2", token)
                    get.addHeader(HttpClientConstant.USER_AGENT)
                    get.addHeader(HttpClientConstant.CONNECTION)
                    ringBuffer.publishEvent((event, sequence) -> event.setRequest(get))
                }
                latch.countDown()
            }
        }
        def start = Time.getTimeStamp()
        threadNum.times { funtester() }
        latch.await()
        def end = Time.getTimeStamp()
        outRGB("Rate per ms ${total / (end - start)}")
        disruptor.shutdown();
    }

    private static class FunEventHandler implements EventHandler<FunEvent>, WorkHandler<FunEvent> {
        public void onEvent(FunEvent event, long sequence, boolean endOfBatch) {}
        public void onEvent(FunEvent event) {}
    }

    private static class FunEvent {
        HttpRequestBase request
        HttpRequestBase getRequest() { return request }
        void setRequest(HttpRequestBase request) { this.request = request }
    }
}

Consumer

import com.funtester.config.HttpClientConstant
import com.funtester.frame.SourceCode
import com.funtester.frame.event.EventThread
import com.funtester.frame.execute.ThreadPoolUtil
import com.funtester.utils.Time
import com.lmax.disruptor.EventHandler
import com.lmax.disruptor.RingBuffer
import com.lmax.disruptor.WorkHandler
import com.lmax.disruptor.YieldingWaitStrategy
import com.lmax.disruptor.dsl.Disruptor
import com.lmax.disruptor.dsl.ProducerType
import org.apache.http.client.methods.HttpGet
import org.apache.http.client.methods.HttpRequestBase
import java.util.concurrent.atomic.AtomicInteger
import java.util.stream.Collectors

class DisConsumer extends SourceCode {
    static AtomicInteger index = new AtomicInteger(1)
    static int total = 50_0000
    static int threadNum = 10
    static def url = "http://localhost:12345/funtester"
    static def token = "FunTesterFunTesterFunTesterFunTesterFunTesterFunTesterFunTester"
    static def key = true

    public static void main(String[] args) {
        Disruptor<FunEvent> disruptor = new Disruptor<>(
                FunEvent::new,
                1024 * 1024,
                ThreadPoolUtil.getFactory(),
                ProducerType.MULTI,
                new YieldingWaitStrategy()
        );
        def funs = range(threadNum).mapToObj(f -> new FunEventHandler()).collect(Collectors.toList())
        disruptor.handleEventsWithWorkerPool(funs as FunEventHandler[])
        disruptor.start();
        RingBuffer<FunEvent> ringBuffer = disruptor.getRingBuffer();
        time {
            total.times {
                def get = new HttpGet(url + token)
                get.addHeader("token", token)
                get.addHeader("token1", token)
                get.addHeader("token5", token)
                get.addHeader("token4", token)
                get.addHeader("token3", token)
                get.addHeader("token2", token)
                get.addHeader(HttpClientConstant.USER_AGENT)
                get.addHeader(HttpClientConstant.CONNECTION)
                ringBuffer.publishEvent((event, sequence) -> event.setRequest(get));
            }
        }
        output("Data $total built!")
        def start = Time.getTimeStamp()
        key = false
        waitFor({ !disruptor.hasBacklog() }, 0.01)
        def end = Time.getTimeStamp()
        outRGB("Rate per ms ${total / (end - start)}")
        disruptor.shutdown();
    }

    private static class FunEventHandler implements EventHandler<FunEvent>, WorkHandler<FunEvent> {
        public void onEvent(FunEvent event, long sequence, boolean endOfBatch) { if (key) sleep(0.05) }
        public void onEvent(FunEvent event) { if (key) sleep(0.05) }
    }

    private static class FunEvent {
        HttpRequestBase request
        HttpRequestBase getRequest() { return request }
        void setRequest(HttpRequestBase request) { this.request = request }
    }
}

Producer & Consumer

import com.funtester.config.HttpClientConstant
import com.funtester.frame.SourceCode
import com.funtester.frame.execute.ThreadPoolUtil
import com.funtester.utils.Time
import com.lmax.disruptor.EventHandler
import com.lmax.disruptor.RingBuffer
import com.lmax.disruptor.WorkHandler
import com.lmax.disruptor.YieldingWaitStrategy
import com.lmax.disruptor.dsl.Disruptor
import com.lmax.disruptor.dsl.ProducerType
import org.apache.http.client.methods.HttpGet
import org.apache.http.client.methods.HttpRequestBase
import java.util.concurrent.atomic.AtomicInteger
import java.util.stream.Collectors

class DisBoth extends SourceCode {
    static AtomicInteger index = new AtomicInteger(1)
    static int total = 100_0000
    static int threadNum = 5
    static int buffer = 20_0000
    static def url = "http://localhost:12345/funtester"
    static def token = "FunTesterFunTesterFunTesterFunTesterFunTesterFunTesterFunTester"
    static def key = true

    public static void main(String[] args) {
        Disruptor<FunEvent> disruptor = new Disruptor<>(
                FunEvent::new,
                1024 * 256,
                ThreadPoolUtil.getFactory(),
                ProducerType.MULTI,
                new YieldingWaitStrategy()
        );
        def funs = range(threadNum).mapToObj(f -> new FunEventHandler()).collect(Collectors.toList())
        disruptor.handleEventsWithWorkerPool(funs as FunEventHandler[])
        disruptor.start();
        RingBuffer<FunEvent> ringBuffer = disruptor.getRingBuffer();
        def produces = {
            fun {
                while (true) {
                    if (index.getAndIncrement() > total) break
                    def get = new HttpGet(url + token)
                    get.addHeader("token", token)
                    get.addHeader("token1", token)
                    get.addHeader("token5", token)
                    get.addHeader("token4", token)
                    get.addHeader("token3", token)
                    get.addHeader("token2", token)
                    get.addHeader(HttpClientConstant.USER_AGENT)
                    get.addHeader(HttpClientConstant.CONNECTION)
                    ringBuffer.publishEvent((event, sequence) -> event.setRequest(get));
                }
            }
        }
        time {
            buffer.times {
                def get = new HttpGet(url + token)
                get.addHeader("token", token)
                get.addHeader("token1", token)
                get.addHeader("token5", token)
                get.addHeader("token4", token)
                get.addHeader("token3", token)
                get.addHeader("token2", token)
                get.addHeader(HttpClientConstant.USER_AGENT)
                get.addHeader(HttpClientConstant.CONNECTION)
                ringBuffer.publishEvent((event, sequence) -> event.setRequest(get));
            }
        }
        output("Data $buffer built!")
        def start = Time.getTimeStamp()
        key = false
        threadNum.times { produces() }
        waitFor({ !disruptor.hasBacklog() }, 0.01)
        def end = Time.getTimeStamp()
        outRGB("Rate per ms ${(total + buffer) / (end - start)}")
        disruptor.shutdown();
    }

    private static class FunEventHandler implements EventHandler<FunEvent>, WorkHandler<FunEvent> {
        public void onEvent(FunEvent event, long sequence, boolean endOfBatch) { if (key) sleep(0.05) }
        public void onEvent(FunEvent event) { if (key) sleep(0.05) }
    }

    private static class FunEvent {
        HttpRequestBase request
        HttpRequestBase getRequest() { return request }
        void setRequest(HttpRequestBase request) { this.request = request }
    }
}

Have Fun ~ Tester!


Tags: Java, Performance Testing, Benchmark, Disruptor, Concurrent Queue