
High-Concurrency Ticket Booking System: Architecture, Load Balancing, and Go Implementation

This article explores the design and implementation of a high‑concurrency train‑ticket flash‑sale ("spike") system, detailing load‑balancing strategies with Nginx, distributed inventory management using Redis, Go‑based services, and performance testing, while addressing challenges such as overselling, fault tolerance, and efficient resource utilization.


During holidays, millions of users compete for train tickets, creating extreme spikes in traffic that require a robust, high‑throughput system. The author analyzes the 12306 ticketing service architecture and presents a simulated scenario where 1 million users attempt to purchase 10 000 tickets simultaneously.

Load‑Balancing Overview

The traffic passes through three layers of load balancing: OSPF (routing), LVS (IP load balancing), and Nginx (HTTP reverse proxy). Each layer distributes requests across a cluster of servers, ensuring high availability and fault tolerance.
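At the last of these hops, Nginx acts as an HTTP reverse proxy in front of the Go services. That role can be made concrete with Go's standard library, which ships the same primitive; the sketch below (backend response and path are illustrative, not from the article) forwards one request through an in‑process proxy:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// forwardOnce starts a stand-in ticket backend plus a reverse proxy in
// front of it, sends one request through the proxy, and returns the body.
func forwardOnce() string {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ticket service")
	}))
	defer backend.Close()

	// httputil.NewSingleHostReverseProxy plays the Nginx role for a
	// single upstream: it rewrites and forwards each incoming request.
	target, _ := url.Parse(backend.URL)
	proxy := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer proxy.Close()

	resp, err := http.Get(proxy.URL + "/buy/ticket")
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(forwardOnce()) // the backend's reply, forwarded by the proxy
}
```

In production the proxy layer also handles health checks and connection pooling, which is why a dedicated balancer like Nginx sits here rather than application code.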

Nginx Weighted Round‑Robin

# Configure weighted round‑robin
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

The configuration assigns different weights to four backend instances, allowing traffic to be proportionally distributed.
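Nginx implements this proportional distribution with smooth weighted round‑robin. As a sketch (the `upstream` type and `pickN` helper are mine; the addresses and weights are taken from the configuration above), the algorithm fits in a few lines of Go:

```go
package main

import "fmt"

// upstream mirrors one server line of the Nginx block: an address,
// its configured weight, and a running counter for the smooth algorithm.
type upstream struct {
	addr          string
	weight        int
	currentWeight int
}

// next implements smooth weighted round-robin: every peer's counter grows
// by its weight, the largest counter wins, and the winner is penalized by
// the total weight so the other peers catch up on later rounds.
func next(peers []*upstream) *upstream {
	total := 0
	var best *upstream
	for _, p := range peers {
		p.currentWeight += p.weight
		total += p.weight
		if best == nil || p.currentWeight > best.currentWeight {
			best = p
		}
	}
	best.currentWeight -= total
	return best
}

// pickN runs n selections over the four weighted backends from the config.
func pickN(n int) map[string]int {
	peers := []*upstream{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	counts := map[string]int{}
	for i := 0; i < n; i++ {
		counts[next(peers).addr]++
	}
	return counts
}

func main() {
	// Over 100 requests the weights 1:2:3:4 (total 10) yield exactly
	// 10, 20, 30, and 40 selections respectively.
	fmt.Println(pickN(100)["127.0.0.1:3004"]) // 40
}
```

Unlike naive weighted round‑robin, the smooth variant interleaves picks instead of sending bursts to the heaviest server, which matters under spike traffic.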

Go Service Implementation

Four HTTP services (ports 3001‑3004) are written in Go, each handling /buy/ticket requests, logging results to ./stat.log, and exposing a simple JSON response.

package main

import (
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    http.ListenAndServe(":3001", nil)
}

// handleReq records the purchase attempt and returns a simple JSON response.
func handleReq(w http.ResponseWriter, r *http.Request) {
    writeLog("result: success", "./stat.log")
    w.Header().Set("Content-Type", "application/json")
    w.Write([]byte(`{"code": 1, "msg": "ok"}`))
}

// writeLog appends one line per request to the stat file.
func writeLog(msg string, logPath string) {
    fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return
    }
    defer fd.Close()
    fd.Write([]byte(msg + "\r\n"))
}

Requests are stress‑tested with ApacheBench ( ab -n 1000 -c 100 http://www.load_balance.com/buy/ticket ), and the per‑instance logs confirm that traffic reaches the four backends in the configured 1:2:3:4 ratio.

Stock Deduction Strategies

The article compares three ordering flows: (1) create order then deduct stock, (2) deduct stock then create order, and (3) pre‑deduct stock. The third approach minimizes database I/O by performing inventory operations in memory and asynchronously generating orders.
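Flow (3) can be sketched in Go: buyers only touch an in‑memory counter, and each successful deduction is handed to an asynchronous worker that stands in for the database order writer (the `sell` helper and its numbers are illustrative, not the article's code):

```go
package main

import (
	"fmt"
	"sync"
)

// sell pre-deducts an in-memory stock counter for `users` buy attempts and
// hands winners to an asynchronous order writer, returning the number of
// orders created. The hot path never performs database I/O.
func sell(stock, users int) int {
	var mu sync.Mutex
	orders := make(chan int, users) // buffered: buyers never block on the writer
	created := 0

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // stands in for a worker that batch-inserts orders into the DB
		defer wg.Done()
		for range orders {
			created++
		}
	}()

	for user := 1; user <= users; user++ {
		mu.Lock()
		ok := stock > 0
		if ok {
			stock-- // deduct in memory only
		}
		mu.Unlock()
		if ok {
			orders <- user // order generation happens asynchronously
		}
	}
	close(orders)
	wg.Wait()
	return created
}

func main() {
	fmt.Println(sell(3, 5)) // 3 tickets, 5 buyers → 3 orders
}
```

Because the buyer's critical section is a single counter check, throughput is bounded by memory speed rather than database writes.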

Local stock is kept in memory per server, while a global stock counter resides in Redis. The Go code uses a channel as a lightweight lock to serialize stock updates.

// Local stock deduction: succeed only while sales remain below local stock.
func (spike *LocalSpike) LocalDeductionStock() bool {
    if spike.LocalSalesVolume >= spike.LocalInStock {
        return false
    }
    spike.LocalSalesVolume++
    return true
}
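The channel‑as‑lock idea from above can be sketched as a standalone program. The struct fields follow the article; the one‑slot `ticketLock` channel and the `...Safe` wrapper are my additions for illustration:

```go
package main

import "fmt"

type LocalSpike struct {
	LocalInStock     int64
	LocalSalesVolume int64
}

// ticketLock is a one-slot channel used as a mutex: a goroutine must
// receive the token before touching the counters and return it afterwards.
var ticketLock = make(chan struct{}, 1)

func init() { ticketLock <- struct{}{} } // seed the single token

// LocalDeductionStockSafe serializes the deduction through the channel,
// so concurrent request handlers never race on LocalSalesVolume.
func (spike *LocalSpike) LocalDeductionStockSafe() bool {
	<-ticketLock
	defer func() { ticketLock <- struct{}{} }()
	if spike.LocalSalesVolume >= spike.LocalInStock {
		return false
	}
	spike.LocalSalesVolume++
	return true
}

func main() {
	spike := &LocalSpike{LocalInStock: 2}
	fmt.Println(spike.LocalDeductionStockSafe()) // true
	fmt.Println(spike.LocalDeductionStockSafe()) // true
	fmt.Println(spike.LocalDeductionStockSafe()) // false: local stock exhausted
}
```

A buffered channel of size one behaves like a mutex but composes naturally with `select` and timeouts, which is why it suits Go request handlers.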

Remote stock deduction is performed atomically with a Lua script executed on Redis:

local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local total = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local sold = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- succeed only while sold is strictly below total; `total >= sold` would oversell by one
if sold < total then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0

If both local and remote deductions succeed, the user receives a "ticket purchased successfully" (抢票成功) response; otherwise "sold out" (已售罄).
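To see why the Lua script's check and increment must run as one atomic step, its logic can be mirrored in plain Go, with a mutex standing in for Redis's single‑threaded script execution (the `globalStock` type is illustrative, not the article's code):

```go
package main

import (
	"fmt"
	"sync"
)

// globalStock simulates the Redis hash holding total and sold counts.
// In the real system the Lua script makes the compare-and-increment
// atomic; here a mutex provides the same guarantee in-process.
type globalStock struct {
	mu    sync.Mutex
	total int64
	sold  int64
}

// deduct mirrors the script: increment sold only while sold < total.
// Without the lock, two goroutines could both pass the check and
// together sell one more ticket than exists.
func (g *globalStock) deduct() bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.sold >= g.total {
		return false
	}
	g.sold++
	return true
}

func main() {
	g := &globalStock{total: 2}
	for i := 0; i < 3; i++ {
		if g.deduct() {
			fmt.Println("ticket purchased successfully")
		} else {
			fmt.Println("sold out")
		}
	}
}
```

The same reasoning explains the choice of a Lua script over separate HGET/HINCRBY calls: Redis executes the script without interleaving other commands.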

Performance Results

Single‑machine testing on a low‑spec Mac shows the service handling over 4 000 requests per second (≈ 2.3 s for 10 000 requests, 23 ms average latency). Log analysis confirms correct request distribution and stock exhaustion.

Conclusions

The design demonstrates how to achieve high concurrency without heavy database load by combining Nginx weighted load balancing, in‑memory stock management, Redis for global consistency, and Go’s lightweight concurrency model. It also discusses fault tolerance via buffer stock and the importance of distributing load across many nodes.

Tags: load balancing, Redis, Go, high concurrency, Nginx, ticketing
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
