Designing a High‑Concurrency Ticket Booking System with Nginx Load Balancing, Redis Stock Management, and Go
This article explains how to handle millions of simultaneous train‑ticket purchase requests during holiday peaks. A senior architect presents a design that combines Nginx weighted load balancing, per‑node in‑memory stock deduction, and a centralized Redis stock counter, implemented in Go, to guarantee no overselling, high availability, and efficient performance.
The three‑layer load‑balancing architecture (OSPF, LVS, Nginx) is introduced, followed by a discussion of three typical stock‑deduction strategies (order‑then‑deduct, deduct‑then‑pay, pre‑deduct) and their drawbacks.
The chosen solution uses pre‑deduction: each server keeps a local stock pool and, after a successful local deduction, atomically increments the global sold counter in Redis via a Lua script, keeping stock consistent across the cluster.
Sample Nginx configuration for weighted round‑robin:
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}

server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

Key Go code snippets:
// Local stock deduction. Note: the counter must be guarded by a mutex (or an
// equivalent) when handlers run concurrently; that guard is omitted here.
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume++
	// '<=' (not '<') so the last locally allocated ticket can still be sold.
	return spike.LocalSalesVolume <= spike.LocalInStock
}

// Redis Lua script for atomic global deduction
const LuaScript = `
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- Sell only while sold is strictly below total; '>' (not '>=') prevents
-- overselling by one ticket.
if ticket_total_nums > ticket_sold_nums then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`
// Atomic global deduction: runs the Lua script against the Redis stock hash
// and reports whether a ticket was reserved.
func (r *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
	lua := redis.NewScript(1, LuaScript)
	result, err := redis.Int(lua.Do(conn, r.SpikeOrderHashKey, r.TotalInventoryKey, r.QuantityOfOrderKey))
	if err != nil {
		return false
	}
	return result != 0
}

// HTTP handler
func handleReq(w http.ResponseWriter, r *http.Request) {
	redisConn := redisPool.Get()
	defer redisConn.Close()
	// A purchase succeeds only if both deductions succeed. A failed remote
	// deduction is not rolled back locally; the local pool only pre-filters
	// traffic before it reaches Redis, which remains the source of truth.
	if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
		util.RespJson(w, 1, "Ticket purchase successful", nil)
	} else {
		util.RespJson(w, -1, "Sold out", nil)
	}
}

Performance testing with ApacheBench shows that a single low‑spec machine can process over 4,000 requests per second, and log analysis confirms correct stock accounting.
The article concludes with two lessons: (1) use load balancing to divide traffic and let each node handle its share, and (2) exploit concurrency and asynchronous processing (Go goroutines, Redis atomic operations) to avoid database bottlenecks.
Top Architect
Top Architect shares practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, as well as architecture evolution driven by internet technologies. Architects who enjoy thinking and sharing are welcome to exchange ideas and learn together.