
Performance Evaluation of Go Channels Compared with Java Disruptor and LinkedBlockingQueue

This article presents a comprehensive performance test of Go language channels, comparing them with Java's Disruptor and LinkedBlockingQueue across various object sizes, queue lengths, and producer/consumer thread counts, and provides practical recommendations based on the benchmark results.


Conclusion

Overall, Go channel performance is high enough to meet current stress-test requirements. Key takeaways: Go channels are fast enough for most production scenarios; smaller message bodies yield better performance; channel (queue) size has little effect on throughput; and creating fasthttp.Request objects per call can be slower than creating net/http.Request objects, because fasthttp expects requests to be reused through its acquire/release pooling rather than allocated fresh each time.

Introduction

In Go, a channel is a FIFO communication mechanism that connects goroutines, allowing one goroutine to send values to another. It behaves like a conveyor belt or queue, preserving the order of sent data.
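As a minimal illustration of that FIFO behavior (a sketch, not part of the original test code; `sendInts` is a hypothetical helper):

```go
package main

import "fmt"

// sendInts pushes values into a channel in order, then closes it
// so the receiver's range loop terminates.
func sendInts(ch chan<- int, vals []int) {
	for _, v := range vals {
		ch <- v
	}
	close(ch)
}

func main() {
	ch := make(chan int, 3) // buffered channel of length 3
	go sendInts(ch, []int{1, 2, 3})
	for v := range ch {
		fmt.Println(v) // values arrive in send order: 1, 2, 3
	}
}
```

The buffer acts like the conveyor belt described above: sends queue at the back, receives pop from the front.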

Test Results

Performance is measured by the number of messages processed per millisecond. The tests cover three object types (small, medium, large) and vary queue length, producer/consumer thread counts, and total message count.

Data Explanation

Three net/http.Request variants are used, differing in header size and URL length.

// Small object
get, _ := http.NewRequest("GET", base.Empty, nil)

// Medium object
get, _ := http.NewRequest("GET", base.Empty, nil)
get.Header.Add("token", token)
get.Header.Add("Connection", base.Connection_Alive)
get.Header.Add("User-Agent", base.UserAgent)

// Large object
get, _ := http.NewRequest("GET", base.Empty, nil)
get.Header.Add("token", token)
get.Header.Add("token1", token)
get.Header.Add("token2", token)
get.Header.Add("token3", token)
get.Header.Add("token4", token)
get.Header.Add("token5", token)
get.Header.Add("Connection", base.Connection_Alive)
get.Header.Add("User-Agent", base.UserAgent)

Producer Findings

Increasing the number of producers improves throughput up to about 20 threads; beyond that the gain diminishes.

Smaller message bodies achieve higher rates.

Queue length (size) has little impact on performance.
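A rough sketch of such a producer sweep (the `produce` helper, message counts, and thread counts here are illustrative assumptions, not the article's actual harness):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// produce runs n producer goroutines that push `total` messages into ch
// and returns the observed rate in messages per millisecond.
func produce(n int, total int32, ch chan int) float64 {
	var index int32
	var wg sync.WaitGroup
	wg.Add(n)
	start := time.Now()
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			for {
				// claim the next slot; stop once the quota is exhausted
				if atomic.AddInt32(&index, 1) > total {
					return
				}
				ch <- 1
			}
		}()
	}
	wg.Wait()
	ms := float64(time.Since(start).Microseconds()) / 1000.0
	if ms == 0 {
		ms = 0.001 // guard against a zero elapsed time on fast runs
	}
	return float64(total) / ms
}

func main() {
	for _, n := range []int{1, 5, 10, 20, 40} {
		// buffer exceeds total so producers never block on the channel
		ch := make(chan int, 200_000)
		fmt.Printf("producers=%2d rate=%.0f msg/ms\n", n, produce(n, 100_000, ch))
	}
}
```

On a typical machine, the rate climbs with producer count and flattens past roughly 20 goroutines, matching the finding above.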

Consumer Findings

A total message count between 500 k and 1 M shows no significant difference in throughput.

Consumer concurrency peaks between 10 and 20 threads.

Smaller messages are preferable.

Consumer concurrency beyond the optimal range yields diminishing returns, whereas the Disruptor scales differently as consumer count grows.

Producer & Consumer Combined

In the combined scenario, producer and consumer counts are equal, so the total thread count is twice either number. Results show that message backlog in the queue has little impact on throughput, though excess consumer cycles can slightly degrade performance because producers cap the delivery rate.
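The combined scenario can be sketched as follows (`runCombined` and the counts used are illustrative assumptions, not the article's harness):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runCombined starts n producers and n consumers on one shared channel
// and returns the number of messages consumed.
func runCombined(n int, total int32, buf int) int32 {
	ch := make(chan int, buf)
	var produced, consumed int32
	var pw, cw sync.WaitGroup
	pw.Add(n)
	cw.Add(n)
	for i := 0; i < n; i++ {
		go func() { // producer: claim slots until the quota is used up
			defer pw.Done()
			for atomic.AddInt32(&produced, 1) <= total {
				ch <- 1
			}
		}()
		go func() { // consumer: count messages until the channel closes
			defer cw.Done()
			for range ch {
				atomic.AddInt32(&consumed, 1)
			}
		}()
	}
	pw.Wait()
	close(ch) // all producers done; let consumers drain and exit
	cw.Wait()
	return consumed
}

func main() {
	fmt.Println(runCombined(10, 100_000, 1000)) // prints 100000
}
```

Closing the channel after producers finish is what lets the consumer `range` loops terminate cleanly, instead of the timeout-based break used in the consumer test below.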

Test Cases

The Go implementation mirrors the earlier Java and Groovy test cases, with some differences: sync.WaitGroup replaces java.util.concurrent.CountDownLatch, and Go's standard library has no direct replacement for java.util.concurrent.CyclicBarrier.
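For cases that do need barrier semantics, a CyclicBarrier-like construct can be approximated with sync.Cond; the `Barrier` type below is an illustrative sketch under that assumption, not a drop-in replacement:

```go
package main

import (
	"fmt"
	"sync"
)

// Barrier blocks goroutines in Await until `parties` of them have
// arrived, then releases all of them and resets for the next cycle.
type Barrier struct {
	mu      sync.Mutex
	cond    *sync.Cond
	parties int
	waiting int
	cycle   int
}

func NewBarrier(parties int) *Barrier {
	b := &Barrier{parties: parties}
	b.cond = sync.NewCond(&b.mu)
	return b
}

func (b *Barrier) Await() {
	b.mu.Lock()
	defer b.mu.Unlock()
	cycle := b.cycle
	b.waiting++
	if b.waiting == b.parties {
		// last arrival: start a new cycle and release everyone
		b.waiting = 0
		b.cycle++
		b.cond.Broadcast()
		return
	}
	for cycle == b.cycle {
		b.cond.Wait() // releases the lock while blocked
	}
}

func main() {
	const parties = 3
	b := NewBarrier(parties)
	var wg sync.WaitGroup
	wg.Add(parties)
	for i := 0; i < parties; i++ {
		go func(id int) {
			defer wg.Done()
			b.Await() // all three goroutines pass this point together
			fmt.Println("goroutine", id, "released")
		}(i)
	}
	wg.Wait()
}
```

Tracking the cycle number (rather than only a counter) is what makes the barrier reusable across rounds, mirroring CyclicBarrier's cyclic behavior.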

Producer Scenario

func TestQueue(t *testing.T) {
    var index int32 = 0
    // buffered channel large enough to hold every message without blocking
    rs := make(chan *http.Request, total+10000)
    var group sync.WaitGroup
    group.Add(threadNum)
    milli := futil.Milli()
    funtester := func() {
        go func() {
            for {
                // atomically claim the next message number
                l := atomic.AddInt32(&index, 1)
                if l%piece == 0 {
                    // log elapsed time per batch; this write to milli is
                    // racy across goroutines but only affects logging
                    m := futil.Milli()
                    log.Println(m - milli)
                    milli = m
                }
                if l > total {
                    break
                }
                get := getRequest()
                rs <- get
            }
            group.Done()
        }()
    }
    start := futil.Milli()
    for i := 0; i < threadNum; i++ {
        funtester()
    }
    group.Wait()
    end := futil.Milli()
    log.Println(atomic.LoadInt32(&index))
    log.Printf("average rate per ms %d", total/(end-start))
}

Consumer Scenario

func TestConsumer(t *testing.T) {
    // pre-fill phase: 10 goroutines generate requests until the channel
    // holds at least `total` messages
    rs := make(chan *http.Request, total+10000)
    var group sync.WaitGroup
    group.Add(10)
    funtester := func() {
        go func() {
            for {
                if len(rs) > total {
                    break
                }
                get := getRequest()
                rs <- get
            }
            group.Done()
        }()
    }
    for i := 0; i < 10; i++ {
        funtester()
    }
    group.Wait()
    log.Printf("data generated! total %d", len(rs))
    totalActual := int64(len(rs))
    var conwait sync.WaitGroup
    conwait.Add(threadNum)
    consumer := func() {
        go func() {
        FUN:
            for {
                select {
                case <-rs:
                case <-time.After(10 * time.Millisecond):
                    // no message for 10 ms: treat the channel as drained
                    break FUN
                }
            }
            conwait.Done()
        }()
    }
    start := futil.Milli()
    for i := 0; i < threadNum; i++ {
        consumer()
    }
    conwait.Wait()
    end := futil.Milli()
    log.Printf("average rate per ms %d", totalActual/(end-start))
}

Benchmark Results

Benchmarking with net/http.Request and fasthttp.Request shows that net/http often outperforms fasthttp and both are slower than Java implementations for large objects.
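fasthttp is designed around recycling request objects via fasthttp.AcquireRequest and fasthttp.ReleaseRequest, which the benchmark above did not use. The sync.Pool sketch below mimics that acquire/release pattern with net/http objects; `acquireRequest` and `releaseRequest` are hypothetical helpers for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// requestPool recycles request objects instead of allocating one per call,
// the same idea behind fasthttp's acquire/release API.
var requestPool = sync.Pool{
	New: func() any {
		r, _ := http.NewRequest("GET", "http://example.com", nil)
		return r
	},
}

func acquireRequest() *http.Request {
	return requestPool.Get().(*http.Request)
}

func releaseRequest(r *http.Request) {
	// reset mutable state before returning the object to the pool
	for k := range r.Header {
		delete(r.Header, k)
	}
	requestPool.Put(r)
}

func main() {
	req := acquireRequest()
	req.Header.Add("token", "FunTester")
	fmt.Println(len(req.Header)) // 1
	releaseRequest(req)

	req2 := acquireRequest()
	fmt.Println(len(req2.Header)) // 0: recycled or fresh, headers are empty
}
```

Skipping the release step forfeits the pooling benefit, which is consistent with the observation that freshly created fasthttp.Request objects can be slower than net/http.Request.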

Observations

Go channels provide high throughput but are not always faster than Java Disruptor.

Message size heavily influences performance; keep payloads minimal.

Increasing producer threads yields better gains than increasing consumer threads beyond a certain point.


Tags: performance, concurrency, Go, HTTP, benchmark, Disruptor, channel
Written by FunTester