Designing a High‑Concurrency Ticket Spike System with Load Balancing, Redis, and Go
This article explains how to build a high‑concurrency ticket seckill (flash‑sale) system that can absorb millions of simultaneous requests by combining multi‑layer load balancing, Nginx weighted round‑robin, local stock deduction, atomic Redis Lua scripts, and Go's native concurrency, and it walks through the implementation with complete code and performance tests.
During holidays, millions of users compete for train tickets, creating a massive spike in traffic that can overwhelm the 12306 service; the article analyzes this problem and proposes a system capable of serving 1 million concurrent users buying 10 000 tickets while remaining stable.
The proposed architecture follows a classic high‑concurrency pattern: a distributed cluster with multiple layers of load balancers (OSPF, LVS, Nginx) provides redundancy and traffic distribution; three types of load balancing—OSPF link‑cost, LVS IP‑hash, and Nginx weighted round‑robin—are introduced and compared.
```nginx
# Nginx weighted round-robin configuration
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}

server {
    listen      80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}
```
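Under the hood, Nginx implements weighted round‑robin as the "smooth" weighted algorithm: on every request each backend gains its configured weight, the heaviest backend wins, and the winner then pays back the sum of all weights. A minimal Go sketch of that selection loop (the addresses and weights mirror the upstream block above; this is illustrative code, not Nginx's source):

```go
package main

import "fmt"

// server mirrors one entry of the upstream block: a backend address,
// its configured weight, and the algorithm's running current weight.
type server struct {
	addr          string
	weight        int // configured weight from the config file
	currentWeight int // adjusted on every pick
}

// next implements smooth weighted round-robin: every server gains its
// weight, the heaviest is picked, and the winner pays back the total.
func next(servers []*server) *server {
	total := 0
	var best *server
	for _, s := range servers {
		s.currentWeight += s.weight
		total += s.weight
		if best == nil || s.currentWeight > best.currentWeight {
			best = s
		}
	}
	best.currentWeight -= total
	return best
}

func main() {
	servers := []*server{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	counts := map[string]int{}
	for i := 0; i < 10; i++ { // 10 = sum of weights, one full cycle
		counts[next(servers).addr]++
	}
	fmt.Println(counts) // each backend is hit in proportion to its weight
}
```

The "smooth" variant spreads picks evenly inside one cycle (a backend with weight 4 is not hit four times in a row), which avoids bursty load on the heaviest instance.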
A simple Go HTTP service is used to simulate the ticket‑buying endpoint; four instances listen on ports 3001‑3004, each logging requests to ./stat.log and responding with JSON status.
```go
package main

import (
	"net/http"
	"os"
	"strings"
)

func main() {
	http.HandleFunc("/buy/ticket", handleReq)
	http.ListenAndServe(":3001", nil)
}

// handleReq writes a log entry and returns success/failure.
func handleReq(w http.ResponseWriter, r *http.Request) {
	// ... implementation omitted for brevity ...
}

// writeLog appends a message to the given log file.
func writeLog(msg string, logPath string) {
	fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
	if err != nil {
		return
	}
	defer fd.Close()
	content := strings.Join([]string{msg, "\r\n"}, "")
	fd.Write([]byte(content))
}
```
To avoid database bottlenecks, the article introduces a two‑stage stock deduction strategy: a fast local in‑memory counter and a remote atomic deduction using Redis. The Redis Lua script guarantees atomicity when checking total inventory and incrementing the sold count.
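The fast local stage can be sketched as a mutex‑guarded counter holding this node's share of the stock. The type and method names below follow the pattern the article describes but are an illustrative sketch, not necessarily its exact code:

```go
package main

import (
	"fmt"
	"sync"
)

// LocalSpike holds one node's share of the total stock; the mutex makes
// the check-then-increment an atomic unit across goroutines.
type LocalSpike struct {
	mu               sync.Mutex
	LocalInStock     int64
	LocalSalesVolume int64
}

// LocalDeductionStock takes one ticket from the local pool, returning
// false once this node's share is exhausted.
func (s *LocalSpike) LocalDeductionStock() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.LocalSalesVolume < s.LocalInStock {
		s.LocalSalesVolume++
		return true
	}
	return false
}

func main() {
	spike := &LocalSpike{LocalInStock: 3}
	var wg sync.WaitGroup
	var mu sync.Mutex
	sold := 0
	for i := 0; i < 10; i++ { // 10 buyers race for 3 tickets
		wg.Add(1)
		go func() {
			defer wg.Done()
			if spike.LocalDeductionStock() {
				mu.Lock()
				sold++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(sold) // exactly 3: the mutex prevents local overselling
}
```

Only requests that pass this cheap local gate go on to the Redis script, so the remote store sees at most one round‑trip per ticket actually available on the node.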
```go
const LuaScript = `
local ticket_key        = KEYS[1]
local ticket_total_key  = ARGV[1]
local ticket_sold_key   = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums  = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- Sell only while the sold count is strictly below total stock; a '>='
-- comparison would sell one extra ticket when the two counts are equal.
if ticket_total_nums > ticket_sold_nums then
	return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0
`
```
The Go code calls this script via the Redigo client, returning true when the stock is successfully deducted.
Performance is validated with ApacheBench (`ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket`), achieving over 4,000 requests per second on a modest Mac, with uniform request distribution and no failed requests; the log shows the point where local stock runs out and further requests are rejected.
In conclusion, the article demonstrates that by combining multi‑layer load balancing, in‑memory stock handling, Redis atomic operations, and Go’s lightweight goroutines, a ticket‑seckill system can avoid database I/O, prevent overselling, tolerate partial node failures, and scale to high QPS levels.
IT Architects Alliance