Designing a High‑Concurrency Ticket‑Seckill System with Load Balancing, Redis, and Go
This article explains how to build a high‑concurrency ticket‑seckill service capable of absorbing millions of requests by combining three layers of distributed load balancing (OSPF, LVS, and Nginx weighted round‑robin), a pre‑deduction stock strategy that pairs local memory with atomic Redis Lua scripts, and a Go‑based HTTP server; it closes with performance‑test results.
When a massive number of users try to buy train tickets at the same moment, the 12306 service faces a QPS that surpasses any typical flash‑sale system; the article uses this scenario to illustrate how to design a stable, high‑throughput backend capable of handling 1 million concurrent users buying 10 000 tickets.
The core architecture relies on a distributed cluster with three layers of load balancing: OSPF for internal routing, LVS for IP‑level load distribution, and Nginx for HTTP reverse‑proxy weighted round‑robin, each described with its cost calculation and balancing capabilities.
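The weighted distribution at the Nginx layer can be made concrete with a small sketch. This is not Nginx source, just a Go illustration of the smooth weighted round‑robin selection algorithm Nginx uses for weighted upstreams; the addresses match the configuration shown below, the type and function names are mine:

```go
package main

import "fmt"

// backend holds an address, its configured weight, and the running
// currentWeight used by smooth weighted round-robin.
type backend struct {
	addr          string
	weight        int
	currentWeight int
}

// next implements smooth weighted round-robin: every pick raises each
// backend's currentWeight by its weight, selects the largest, then
// lowers the winner by the total weight.
func next(backends []*backend) *backend {
	total := 0
	var best *backend
	for _, b := range backends {
		b.currentWeight += b.weight
		total += b.weight
		if best == nil || b.currentWeight > best.currentWeight {
			best = b
		}
	}
	best.currentWeight -= total
	return best
}

func main() {
	backends := []*backend{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	counts := map[string]int{}
	for i := 0; i < 10; i++ { // one full cycle: the sum of the weights
		counts[next(backends).addr]++
	}
	fmt.Println(counts) // each backend is picked exactly its weight times
}
```

Over one full cycle (sum of the weights) each backend is chosen in exact proportion to its weight, which is why the 1/2/3/4 weights below split traffic 10%/20%/30%/40%.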
Nginx weighted round‑robin is configured by assigning different weights to backend servers; the article provides an example configuration:
# configure load balancing
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / { proxy_pass http://load_rule; }
}

Three typical stock‑deduction flows are compared: create‑order‑then‑deduct, pay‑then‑deduct, and pre‑deduct. The first risks database I/O bottlenecks under bursty writes, the second can oversell under extreme concurrency, while pre‑deduction holds a temporary inventory in memory and creates orders asynchronously.
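The pre‑deduction flow can be sketched in Go: deduct from an in‑memory counter on the hot path and hand order creation to a background goroutine over a buffered channel, so no request ever waits on the database. All names, the mutex, and the channel size here are illustrative, not the article's code:

```go
package main

import (
	"fmt"
	"sync"
)

// order is a placeholder for whatever a real order record would hold.
type order struct{ userID int }

var (
	mu      sync.Mutex
	stock   = 3                     // inventory pre-loaded into memory
	orderCh = make(chan order, 100) // buffer decouples selling from DB writes
)

// tryBuy deducts stock synchronously and queues order creation
// asynchronously, so the hot path never touches the database.
func tryBuy(userID int) bool {
	mu.Lock()
	defer mu.Unlock()
	if stock <= 0 {
		return false
	}
	stock--
	orderCh <- order{userID: userID}
	return true
}

func main() {
	go func() {
		// background worker: a real service would insert into the DB here
		for o := range orderCh {
			fmt.Println("creating order for user", o.userID)
		}
	}()
	sold := 0
	for u := 1; u <= 5; u++ {
		if tryBuy(u) {
			sold++
		}
	}
	fmt.Println("sold:", sold) // 3: never more than the in-memory stock
}
```

The deduction is the serialization point; order persistence happens off the hot path, which is exactly what keeps this flow from becoming a DB bottleneck.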
The optimized solution stores a portion of the total inventory locally (in‑memory) and the rest in Redis. A Lua script guarantees atomic remote deduction:
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- sell only while the sold count is strictly below the total; with >=
-- the final HINCRBY would push sales one past the inventory
if ticket_total_nums > ticket_sold_nums then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0

The Go implementation defines LocalSpike and RemoteSpikeKeys structs, and initializes the local stock, a Redis connection pool, and a single‑capacity channel used as a lock. Local deduction increments a counter and checks it against the in‑memory limit, while remote deduction executes the Lua script via Redigo.
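The two structs are described but not shown in full; here is a sketch consistent with the field names the methods below use. The key strings and the stock count are assumptions, and the Redis pool is only noted in a comment:

```go
package main

import "fmt"

// LocalSpike tracks the slice of inventory held in this node's memory.
type LocalSpike struct {
	LocalInStock     int64 // tickets assigned to this node
	LocalSalesVolume int64 // tickets this node has sold so far
}

// RemoteSpikeKeys names the Redis hash and the two fields the
// Lua script reads and updates.
type RemoteSpikeKeys struct {
	SpikeOrderHashKey  string // Redis hash key for this sale
	TotalInventoryKey  string // hash field: total stock
	QuantityOfOrderKey string // hash field: sold count
}

func main() {
	// Illustrative initialization; the real service would also build a
	// Redigo connection pool and a capacity-1 channel used as a lock.
	localSpike := LocalSpike{LocalInStock: 150, LocalSalesVolume: 0}
	remoteSpikeKeys := RemoteSpikeKeys{
		SpikeOrderHashKey:  "ticket_hash_key",
		TotalInventoryKey:  "ticket_total_nums",
		QuantityOfOrderKey: "ticket_sold_nums",
	}
	fmt.Println(localSpike.LocalInStock, remoteSpikeKeys.SpikeOrderHashKey)
	// prints "150 ticket_hash_key"
}
```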
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume++
    // <= (not <) so the last ticket of the local quota can still be sold;
    // callers must hold the channel lock, since this is not atomic on its own
    return spike.LocalSalesVolume <= spike.LocalInStock
}

func (keys *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, keys.SpikeOrderHashKey, keys.TotalInventoryKey, keys.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}

The HTTP handler combines both checks; on success it returns the JSON response 抢票成功 ("ticket grabbed successfully") and logs the result, otherwise it returns 已售罄 ("sold out"). The service is started with http.ListenAndServe(":3005", nil).
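Putting the pieces together, the handler might look like the following self‑contained sketch. The Redis pool is elided and both deduction calls are stubbed so it compiles on its own; the route, JSON shape, and English messages are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
)

// done is a capacity-1 channel used as a lock around the two stock
// checks, mirroring the article's design.
var done = make(chan struct{}, 1)

func init() { done <- struct{}{} }

// localDeduct and remoteDeduct stand in for LocalDeductionStock and
// RemoteDeductionStock; stubbed here so the sketch is self-contained.
var (
	localDeduct  = func() bool { return true }
	remoteDeduct = func() bool { return true }
)

// tryGrab runs both checks under the lock and reports success.
func tryGrab() bool {
	<-done // acquire
	ok := localDeduct() && remoteDeduct()
	done <- struct{}{} // release
	return ok
}

func handleReq(w http.ResponseWriter, r *http.Request) {
	if tryGrab() {
		fmt.Fprint(w, `{"code":1,"msg":"ticket grabbed"}`) // 抢票成功
	} else {
		fmt.Fprint(w, `{"code":0,"msg":"sold out"}`) // 已售罄
	}
}

func main() {
	http.HandleFunc("/buy/ticket", handleReq)
	// In the real service this blocks and serves:
	// http.ListenAndServe(":3005", nil)
	fmt.Println("first attempt:", tryGrab())
}
```

The local check is cheap and filters most traffic; only requests that pass it pay the round trip to Redis, and the channel lock keeps the check‑then‑act pair race‑free within one node.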
Performance is validated with ApacheBench (10 000 requests, 100 concurrent connections), achieving ~4 300 requests per second and confirming that the local sales counter stops at the configured limit while Redis remains stable.
Finally, the article emphasizes two key takeaways: (1) use load balancing to split traffic and let each node operate at its peak, and (2) leverage concurrency‑friendly designs (epoll, async processing, Go goroutines) to avoid database bottlenecks and achieve fault‑tolerant, oversell‑free ticketing.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.