
Designing a High‑Concurrency Ticket Spike System with Load Balancing, Redis, and Go

This article explores the architecture and implementation of a high‑traffic ticket‑spike service, covering load‑balancing strategies, Nginx weighted round‑robin configuration, local and remote stock deduction using Go and Redis, fault‑tolerant buffering, and performance testing results.

Top Architect

The article begins by describing the extreme concurrency challenges faced by the Chinese railway ticketing platform 12306 during peak periods such as the Spring Festival, where millions of users compete for a limited number of tickets.

It then introduces a three‑layer load‑balancing architecture: OSPF for internal routing, LVS (Linux Virtual Server) for IP‑level load distribution, and Nginx for HTTP reverse‑proxy and weighted round‑robin scheduling. The Nginx configuration for weighted round‑robin is shown:

# Configure load balancing
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}
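The article does not show the backend nodes behind the upstream block, but the weighted distribution is easy to observe if each node identifies itself in its response. A minimal, hypothetical sketch (the `nodeBanner` helper and the ephemeral test server are assumptions, not from the original):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// nodeBanner builds a response body that identifies the serving node,
// so the client can see which upstream answered each request.
func nodeBanner(port string) string {
	return "served by node :" + port
}

func main() {
	// Spin up a throwaway server on an ephemeral port; in a real
	// deployment one instance would listen on each of :3001..:3004
	// to match the upstream block above.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, nodeBanner("3001"))
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints: served by node :3001
}
```

With four such instances running, repeated requests through the proxy should land on the nodes in roughly a 1:2:3:4 ratio, matching the configured weights.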

To avoid database bottlenecks, the design adopts a pre‑deduction (pre‑stock) strategy: each server holds a local in‑memory stock pool, and a global stock is maintained in Redis. When a request arrives, the server first decrements its local stock; if successful, it then atomically decrements the Redis stock using a Lua script:

local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
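The decision the script makes can be modeled in plain Go to make its semantics explicit: a sale succeeds only while the sold count is strictly below the total, so the last ticket is sellable but nothing beyond it. This is a model of the intended logic under that assumption, not the actual Redis call:

```go
package main

import "fmt"

// deduct models the Lua stock-deduction decision: sell only while the
// sold count is strictly below the total. It returns the new sold count
// on success and 0 when sold out, mirroring the script's return
// convention (the HINCRBY result, or 0).
func deduct(total, sold int64) int64 {
	if total > sold {
		return sold + 1
	}
	return 0
}

func main() {
	var sold int64
	total := int64(3)
	// Five attempts against a total of 3: the first three succeed,
	// the remaining two report sold out.
	for i := 0; i < 5; i++ {
		if n := deduct(total, sold); n > 0 {
			sold = n
			fmt.Println("sold ticket", sold)
		} else {
			fmt.Println("sold out")
		}
	}
}
```

Keeping this comparison strict is what prevents overselling by one: with `>=` the script would still sell when the sold count already equals the total.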

The Go implementation comprises two packages: localSpike for in‑memory stock deduction and remoteSpike for the Redis operations. Sample Go code for local deduction:

// LocalDeductionStock decrements the node-local pre-allocated stock.
// Note: the method is not goroutine-safe on its own; callers must
// serialize access, e.g. through a channel or mutex.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume = spike.LocalSalesVolume + 1
    return spike.LocalSalesVolume <= spike.LocalInStock
}
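The excerpt does not show the LocalSpike struct or how concurrent requests are serialized. A minimal sketch, assuming two int64 fields named after the method above and single-goroutine ownership via a channel (the article's takeaways mention goroutines, but this exact wiring is an assumption):

```go
package main

import "fmt"

// LocalSpike holds the node-local pre-allocated stock. Field names
// follow the deduction method; the struct definition itself is assumed.
type LocalSpike struct {
	LocalInStock     int64
	LocalSalesVolume int64
}

// LocalDeductionStock is not goroutine-safe by itself, so below a
// single goroutine owns the struct and serializes all requests.
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume++
	return spike.LocalSalesVolume <= spike.LocalInStock
}

func main() {
	spike := &LocalSpike{LocalInStock: 2}
	requests := make(chan chan bool)

	// Owner goroutine: the only code that touches the struct.
	go func() {
		for reply := range requests {
			reply <- spike.LocalDeductionStock()
		}
	}()

	// Three requests against a local stock of 2.
	for i := 0; i < 3; i++ {
		reply := make(chan bool)
		requests <- reply
		fmt.Println(<-reply) // prints: true, true, false
	}
}
```

Funneling every deduction through one goroutine avoids a data race on LocalSalesVolume without the cost of a lock on the hot path.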

And the Redis connection pool initialization:

func NewPool() *redis.Pool {
    return &redis.Pool{
        MaxIdle:   10000,
        MaxActive: 12000,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", ":6379")
            if err != nil {
                // Surface the error through the pool instead of panicking.
                return nil, err
            }
            return c, nil
        },
    }
}

The HTTP handler combines both deductions, returns a JSON response, and logs the result. Performance testing with ApacheBench shows the single‑node service handling over 4,000 requests per second, confirming the effectiveness of the design.
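The handler described above can be sketched as follows. The function names and JSON shape are assumptions; the two deduction steps are injected as plain functions so the local and remote stages stay pluggable, and the remote (Redis) step only runs after the cheap local check succeeds:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// spikeResult is an assumed JSON response shape, not from the original.
type spikeResult struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// decide combines the two deduction outcomes: a request wins a ticket
// only when both the local and the global (Redis) stock permit it.
func decide(localOK, remoteOK bool) spikeResult {
	if localOK && remoteOK {
		return spikeResult{Code: 1, Message: "success"}
	}
	return spikeResult{Code: 0, Message: "sold out"}
}

// makeSpikeHandler wires the decision into an HTTP handler; the remote
// step is attempted only after the local deduction succeeds.
func makeSpikeHandler(localDeduct, remoteDeduct func() bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		localOK := localDeduct()
		remoteOK := false
		if localOK {
			remoteOK = remoteDeduct()
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(decide(localOK, remoteOK))
	}
}

func main() {
	// Stubs stand in for the real localSpike/remoteSpike packages:
	// local stock remains, but the global stock is exhausted.
	handler := makeSpikeHandler(
		func() bool { return true },
		func() bool { return false },
	)
	rec := httptest.NewRecorder()
	handler(rec, httptest.NewRequest("GET", "/buy", nil))
	fmt.Print(rec.Body.String()) // prints: {"code":0,"message":"sold out"}
}
```

Short-circuiting on the local check is what keeps most requests off the network: once a node's local pool is empty, it rejects without touching Redis at all.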

Finally, the article summarizes key takeaways: use load balancing to distribute traffic, employ asynchronous and concurrent processing (Go goroutines, Redis Lua scripts) to minimize blocking I/O, and allocate buffer stock to tolerate server failures, thereby achieving a reliable, high‑throughput ticket‑spike system.

Tags: load balancing, redis, Go, high concurrency, nginx, ticketing system
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.