Designing a High-Concurrency Ticket Flash-Sale System with Load Balancing, Redis, and Go
This article explains how to build a high-concurrency ticket flash-sale system by analyzing 12306's extreme load, introducing multi-layer load balancing (OSPF, LVS, Nginx), comparing stock-deduction strategies, and presenting a Go prototype that combines in-memory local stock, centralized Redis inventory, and weighted Nginx routing for scalable, fault-tolerant performance.
During holidays, ticket purchasing spikes dramatically; the Chinese railway platform 12306 experiences millions of QPS, illustrating the challenges of flash‑sale systems.
The article first examines the backend architecture, highlighting that large‑scale high‑concurrency services typically employ distributed clusters with multiple layers of load balancing, including OSPF routing, LVS IP load balancing, and Nginx HTTP reverse proxy.
It introduces Nginx's three load‑balancing methods—round‑robin, weighted round‑robin, and IP‑hash—and provides a concrete weighted configuration example:
upstream load_rule {
server 127.0.0.1:3001 weight=1;
server 127.0.0.1:3002 weight=2;
server 127.0.0.1:3003 weight=3;
server 127.0.0.1:3004 weight=4;
}
server {
listen 80;
server_name load_balance.com www.load_balance.com;
location / {
proxy_pass http://load_rule;
}
}

Next, three stock-deduction approaches are discussed: (1) create the order first, then reduce stock; (2) reduce stock only after payment; and (3) pre-deduct stock before creating the order. The analysis shows that pre-deduction avoids heavy database I/O and prevents both overselling and underselling.
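The pre-deduction idea can be captured in a minimal sketch: a mutex-guarded in-memory counter that reserves a ticket without touching the database on the hot path. The struct and field names below are illustrative assumptions, not the article's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// LocalSpike holds one node's pre-allocated share of the inventory.
// Field names are hypothetical; the article does not show its struct.
type LocalSpike struct {
	mu               sync.Mutex
	LocalInStock     int64 // tickets assigned to this node
	LocalSalesVolume int64 // tickets already reserved
}

// LocalDeductionStock reserves one ticket if any remain, entirely in
// memory, so no database round-trip happens on the request path.
func (s *LocalSpike) LocalDeductionStock() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.LocalSalesVolume < s.LocalInStock {
		s.LocalSalesVolume++
		return true
	}
	return false
}

func main() {
	spike := &LocalSpike{LocalInStock: 3}
	for i := 0; i < 5; i++ {
		fmt.Println(spike.LocalDeductionStock()) // true, true, true, false, false
	}
}
```

Because the check and the increment happen under one lock, the counter can never exceed the allocated stock, which is the property that makes pre-deduction safe against overselling at the node level.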
A prototype implementation in Go demonstrates the complete flow. Four local HTTP services listen on ports 3001‑3004, each assigned a different weight, and an Nginx upstream distributes requests accordingly.
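Nginx's weighted scheduling is a smooth weighted round-robin; the selection logic behind `weight=1..4` can be sketched in Go as follows (a simplified illustration, not Nginx's actual C implementation):

```go
package main

import "fmt"

// backend mirrors one upstream server entry and its configured weight.
type backend struct {
	addr          string
	weight        int
	currentWeight int // scheduling state, starts at 0
}

// pick implements smooth weighted round-robin: every backend gains its
// weight, the current leader is selected, and the leader then pays back
// the total weight, which spreads its selections evenly over the cycle.
func pick(backends []*backend) *backend {
	total := 0
	var best *backend
	for _, b := range backends {
		b.currentWeight += b.weight
		total += b.weight
		if best == nil || b.currentWeight > best.currentWeight {
			best = b
		}
	}
	best.currentWeight -= total
	return best
}

// simulate counts how often each backend is chosen over n requests.
func simulate(backends []*backend, n int) map[string]int {
	counts := make(map[string]int)
	for i := 0; i < n; i++ {
		counts[pick(backends).addr]++
	}
	return counts
}

func main() {
	backends := []*backend{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	// Over one cycle of sum(weights)=10 picks, each backend is chosen
	// exactly weight times.
	fmt.Println(simulate(backends, 10))
}
```

With the weights from the configuration above, ten consecutive requests land 1, 2, 3, and 4 times on ports 3001 through 3004 respectively, interleaved rather than bunched.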
package main

import (
    "net/http"
    "os"
    "strings"
)

func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    http.ListenAndServe(":3001", nil) // each instance listens on its own port (3001-3004)
}
func handleReq(w http.ResponseWriter, r *http.Request) {
// request handling logic
}
func writeLog(msg string, logPath string) {
    fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return // the error is silently dropped here; a real service should report it
    }
    defer fd.Close()
    content := strings.Join([]string{msg, "\r\n"}, "")
    fd.Write([]byte(content))
}

Local stock is kept in memory, while a Redis hash stores the global inventory. An atomic Lua script guarantees that the remote deduction is performed safely:
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- sell only while the sold count is strictly below the total,
-- otherwise the final increment would oversell by one
if (ticket_total_nums > ticket_sold_nums) then
return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0

The request handler first acquires a channel-based lock, performs local deduction, then invokes the Redis Lua script for remote deduction. If both succeed, it returns a success JSON response; otherwise it reports "sold out".
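The channel-based lock can be modeled as a one-slot buffered channel used as a binary semaphore: receive to acquire the token, send to release it. A standalone sketch (illustrative, not the article's exact code):

```go
package main

import (
	"fmt"
	"sync"
)

// runWithChannelLock increments a shared counter from n goroutines,
// serializing the critical section with a one-slot buffered channel.
func runWithChannelLock(n int) int {
	done := make(chan int, 1)
	done <- 1 // seed a single token so the first goroutine can proceed

	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-done    // acquire: take the token
			counter++ // critical section, e.g. the local stock deduction
			done <- 1 // release: return the token
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(runWithChannelLock(100)) // 100: no increments are lost
}
```

Only one goroutine can hold the token at a time, so updates to shared state inside the acquire/release pair never race; this is the same pattern the handler below uses with `<-done` and `done <- 1`.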
func handleReq(w http.ResponseWriter, r *http.Request) {
    redisConn := redisPool.Get()
    defer redisConn.Close() // return the connection to the pool
    <-done // acquire the channel-based lock
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "抢票成功", nil) // "ticket grabbed successfully"
    } else {
        util.RespJson(w, -1, "已售罄", nil) // "sold out"
    }
    done <- 1 // release the lock
    writeLog(...)
}

Load testing with ApacheBench (ab -n 10000 -c 100) shows the single-node service can handle over 4,000 requests per second, and the distributed setup with Nginx can spread 1,000,000 requests across 100 servers (about 10,000 per node, well within a single node's measured capacity), each holding a buffer inventory to survive node failures.
In conclusion, by combining multi‑layer load balancing, in‑memory local stock, Redis‑based centralized stock, and Go’s native concurrency, the system achieves high throughput, prevents overselling/underselling, and remains tolerant to server crashes without relying on heavyweight database transactions.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, as well as architecture evolution with internet technologies. Architects who value ideas and sharing are welcome to exchange and learn together.