
Optimizing a High‑Concurrency Ticket Reservation System for the "Travel with Love" Campaign

This article presents a case study of the technical challenges and optimization strategies (traffic surges, cache penetration, cache breakdown, purchase-limit handling, and inventory deduction) encountered during a large-scale ticket reservation event, and shows how systematic backend improvements delivered response-time gains of over 50% and a cache hit rate above 98%.

Ctrip Technology
Background: After the pandemic, Hubei launched the "Travel with Love" campaign offering free admission to all A‑level scenic spots, causing massive traffic spikes for the online ticket reservation system.

Risks and challenges identified include a 100‑fold increase in entry traffic, reduced service stability under high concurrency, purchase‑limit errors, and inventory‑deduction hotspots.

1. Traffic Surge (100×) – The system could not simply scale horizontally to absorb the spike; solutions involved reducing external dependencies, merging duplicate I/O calls, and implementing interface-level caching with fixed expiration and lazy loading.
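The interface-level cache described above can be sketched as a lazy-loading wrapper with a fixed TTL: entries are filled only on first access and expire on a fixed schedule. The class and parameter names below are illustrative, not from the original system.

```python
import time

class InterfaceCache:
    """Minimal lazy-loading cache with a fixed expiration (illustrative sketch)."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader        # fallback that calls the real service/DB
        self.ttl = ttl_seconds      # fixed expiration for every entry
        self.store = {}             # key -> (value, expires_at)

    def get(self, key):
        now = time.time()
        entry = self.store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]         # cache hit: no downstream call
        value = self.loader(key)    # lazy load on miss or expiry
        self.store[key] = (value, now + self.ttl)
        return value
```

With lazy loading, only keys that are actually requested occupy cache memory, at the cost of one slow request per key per TTL window.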

Cache issues – Addressed cache breakdown, cache penetration, and abnormal degradation by adding passive refresh mechanisms, short‑lived empty objects for missed keys, and modular cache management with per‑module versioning and monitoring.
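The short-lived empty-object defense against cache penetration can be illustrated as negative caching: a miss in the database is itself cached, with a much shorter TTL than real hits, so repeated lookups for nonexistent keys stop reaching the database. The names and TTL values below are assumptions for illustration.

```python
import time

_MISS = object()  # sentinel marking a cached "not found"

class PenetrationSafeCache:
    """Caches database misses as short-lived empty objects (illustrative sketch)."""

    def __init__(self, db_lookup, hit_ttl=300, miss_ttl=30):
        self.db_lookup = db_lookup
        self.hit_ttl = hit_ttl
        self.miss_ttl = miss_ttl    # short TTL so real data is picked up soon
        self.store = {}             # key -> (value_or_sentinel, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None and entry[1] > time.time():
            value = entry[0]
            return None if value is _MISS else value
        value = self.db_lookup(key)  # may return None for missing rows
        if value is None:
            self.store[key] = (_MISS, time.time() + self.miss_ttl)
            return None
        self.store[key] = (value, time.time() + self.hit_ttl)
        return value
```

The short miss TTL bounds the staleness window: if the key is created in the database later, the empty object ages out quickly.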

2. Service instability under high concurrency – Analyzed DB connection pool saturation caused by cache breakdown; mitigated by separating visible and sellable states, avoiding data updates during peak times, and switching cache refresh from delete‑then‑add to overwrite updates via Canal‑driven MQ rebuilding.
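The difference between delete-then-add and overwrite refresh can be sketched in a few lines. With delete-then-add there is a window in which the key is absent, so concurrent readers all fall through to the database (the breakdown that saturated the connection pool); an overwrite update driven by a change message never leaves the key empty. This is a simplified sketch, not the production Canal/MQ consumer.

```python
def refresh_delete_then_add(cache, key, load_from_db):
    # Problematic pattern: between the delete and the reload, concurrent
    # readers miss the cache and stampede the database (cache breakdown).
    cache.pop(key, None)
    cache[key] = load_from_db(key)

def refresh_overwrite(cache, key, new_value):
    # Overwrite update: the key never disappears, so readers always see
    # either the old or the new value and never fall through to the DB.
    cache[key] = new_value
```

In the article's design, `new_value` would be rebuilt from a binlog change event delivered via Canal and MQ, so the cache is repaired without any reader-triggered reload.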

3. Limit‑purchase (quota) problems – Identified inconsistency between Redis and DB writes under heavy load; introduced delayed‑message compensation to ensure eventual consistency of cancel‑purchase operations.
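The delayed-message compensation idea can be sketched as follows: the database write is the source of truth, and every cancel-purchase also enqueues a delayed message whose consumer later forces the cached quota counter back into agreement with the database, repairing any Redis write lost under load. All class and field names here are hypothetical.

```python
class QuotaCompensator:
    """Sketch of delayed-message compensation for purchase-limit counters."""

    def __init__(self, redis_counts, db_counts):
        self.redis_counts = redis_counts  # user_id -> purchase count (cache)
        self.db_counts = db_counts        # user_id -> purchase count (truth)
        self.delayed_queue = []           # stand-in for a delayed MQ topic

    def cancel_purchase(self, user_id):
        self.db_counts[user_id] -= 1      # DB write is authoritative
        # The matching Redis decrement may be lost under heavy load, so we
        # enqueue a compensation message instead of trusting the cache.
        self.delayed_queue.append(user_id)

    def run_compensation(self):
        # Delayed-message consumer: reconcile the cache with the database.
        while self.delayed_queue:
            user_id = self.delayed_queue.pop(0)
            self.redis_counts[user_id] = self.db_counts[user_id]
```

The compensation is idempotent (it copies the DB value rather than decrementing again), which is what makes eventual consistency safe even if a message is redelivered.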

4. Inventory deduction – Fixed data inconsistency by wrapping deduction and detail records in a single transaction; introduced asynchronous Redis‑based pre‑deduction with MQ‑driven DB updates, and discussed cache hotspot sharding for high‑traffic keys.
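The asynchronous pre-deduction flow can be sketched with an in-memory stand-in for Redis and the message queue: stock is checked and decremented in the cache first (in production this check-and-decrement would be a single atomic operation, e.g. a Lua script), sold-out requests fail fast without touching the database, and an MQ consumer applies the durable deduction later. Names are illustrative.

```python
from collections import deque

class StockPreDeductor:
    """Sketch of Redis-style pre-deduction with MQ-driven DB updates."""

    def __init__(self, initial_stock):
        self.cached_stock = initial_stock  # stands in for the Redis counter
        self.db_stock = initial_stock      # stands in for the database row
        self.mq = deque()                  # stands in for the message queue

    def reserve(self, order_id):
        if self.cached_stock <= 0:
            return False                   # sold out: fail fast, DB untouched
        self.cached_stock -= 1             # atomic decrement in Redis (Lua)
        self.mq.append(order_id)           # async message drives the DB write
        return True

    def consume_mq(self):
        # MQ consumer: apply the deduction and its detail record together,
        # in one DB transaction, as the article describes.
        while self.mq:
            self.mq.popleft()
            self.db_stock -= 1
```

Because the fast path never blocks on the database, peak write load on the DB is flattened into the consumer's pace; hotspot sharding would further split one high-traffic counter key into several sub-keys summed on read.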

Results: the cache hit rate exceeded 98%, interface response time improved by over 50%, the upstream-to-downstream call ratio fell from 1:3.9 to 1:1.3, and database call volume dropped by 70%; overall QPS reached 210k with stable response times.

Conclusion: The systematic risk analysis, traffic estimation, full‑link stress testing, rate‑limiting, monitoring, and post‑event review enabled a robust, high‑performance reservation system.

Recruitment notice: Ctrip Travel R&D team is hiring backend, frontend, testing, SRE, and data mining positions; contact via [email protected].

Tags: distributed systems, performance, caching, high concurrency, backend optimization, ticket reservation
Written by Ctrip Technology

Official Ctrip Technology account, sharing and discussing growth.