
Case Study: TiDB Deployment for the 2021 "818 Global Auto Festival"

This case study details how Car Home leveraged TiDB 5.1.1 with a three‑data‑center, five‑replica HTAP architecture to support the high‑traffic 818 Global Auto Festival, covering background, business requirements, database selection, system design, performance challenges, solutions, and post‑event insights.


The 818 Global Auto Festival, organized by Car Home and Hunan TV, is a large‑scale automotive shopping event comparable to China’s Double‑11 and 618 sales festivals, requiring massive real‑time interaction, flash sales, and prize draws.

To meet the event’s stringent consistency, safety, and sub‑second data reporting needs, the team selected TiDB 5.0/5.1.1 as the core database, complemented by Kafka, Redis, Elasticsearch, and LVS‑based load balancing across multiple availability zones.

The deployment used high‑performance cloud hosts from a leading Chinese provider, distributing TiDB clusters across three data centers in Beijing (zones A, C, D) with a five‑replica, multi‑IDC topology. TiFlash provided MPP‑based OLAP acceleration, while TiCDC handled real‑time change data capture to downstream MySQL for disaster recovery.

Key architectural highlights include:

Three‑data‑center, five‑replica TiDB cluster with Raft‑based synchronous replication.

TiFlash MPP architecture for large‑scale analytical queries.

TiCDC for incremental data sync and high‑availability backup.

Cross‑region deployment of MySQL as an emergency fallback.
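To make the availability claim concrete, here is a minimal sketch (my own illustration, not from the article) of why a three-data-center, five-replica Raft group survives the loss of any single data center. The 2+2+1 replica placement across the Beijing zones is an assumption consistent with a five-replica, three-zone layout:

```python
# Sketch: Raft commit requires a majority of the full replica group,
# so the question is whether losing any one zone still leaves a quorum.

def majority(replicas: int) -> int:
    """Minimum votes a Raft group needs to commit a write."""
    return replicas // 2 + 1

def survives_zone_loss(placement: dict) -> bool:
    """True if losing any single zone still leaves a quorum of the full group."""
    total = sum(placement.values())
    return all(total - lost >= majority(total) for lost in placement.values())

# Assumed 2+2+1 placement across the three Beijing zones:
placement = {"beijing-a": 2, "beijing-c": 2, "beijing-d": 1}
print(survives_zone_loss(placement))  # True: worst case leaves 3 of 5 replicas
```

With five replicas the quorum is three, so losing a zone holding two replicas still leaves exactly a quorum; a two-zone layout with the same replica count would not have this property.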

During stress testing, several issues were discovered and resolved:

Index hotspot: Adding a secondary index on user_id caused TPS to drop from 100k to 20k; the solution was to add a hash column and a composite index to disperse hotspot keys, restoring TPS to >50k.
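The hotspot fix above can be sketched as follows. This is a hypothetical illustration: the bucket count (64) and the CRC32 hash are my assumptions, not details from the article; the idea is that the stored hash column leads the composite index as `(shard, user_id)`, so monotonically arriving `user_id` values spread across many TiKV regions instead of piling onto one index range:

```python
# Hypothetical sketch of dispersing an index hotspot with a hash column.
import zlib

BUCKETS = 64  # assumed bucket count; tune to cluster size

def shard_of(user_id: int) -> int:
    """Stable hash bucket for a user_id; stored as an extra column and
    indexed as (shard, user_id) rather than (user_id) alone."""
    return zlib.crc32(str(user_id).encode()) % BUCKETS

# Sequential ids no longer land on a single region's index range:
print([shard_of(i) for i in range(1_000_000, 1_000_008)])
```

The trade-off is that point lookups must compute and include the shard predicate (`WHERE shard = ? AND user_id = ?`) to stay index-efficient.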

Two‑phase commit (2PC) slowdown: Secondary indexes forced many transactions into full 2PC, roughly tripling request volume; recommendations included increasing gRPC thread counts and enabling batch‑write optimizations.
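The request growth has a simple arithmetic core, sketched below under my own framing: each secondary index adds one index-key write per row, and once a transaction's keys span more than one region it must run full two-phase commit rather than the single-region fast path. The index counts here are illustrative, not the project's actual schema:

```python
# Illustrative sketch of write amplification from secondary indexes.

def key_writes_per_row(secondary_indexes: int) -> int:
    """One row-key write, plus one index-key write per secondary index."""
    return 1 + secondary_indexes

# With no secondary index, a row insert is one key write; with two
# secondary indexes it becomes three, i.e. a 3x request multiplier.
print(key_writes_per_row(0))  # 1
print(key_writes_per_row(2))  # 3
```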

TiCDC node imbalance: A single Changefeed caused uneven TPS across TiCDC nodes; the fix was to create multiple Changefeeds and manually distribute large tables.
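The rebalancing idea above can be sketched as a load-aware assignment of tables to changefeeds. The table names and row rates below are invented for illustration; the greedy heaviest-first strategy is one reasonable way to keep per-changefeed traffic even, not necessarily the exact procedure the team used:

```python
# Hedged sketch: split tables across several changefeeds so no single
# TiCDC node carries all the replication traffic.
import heapq

def assign(tables: dict, changefeeds: int) -> list:
    """Greedily place the heaviest table on the least-loaded changefeed."""
    heap = [(0, i) for i in range(changefeeds)]   # (load, changefeed index)
    heapq.heapify(heap)
    groups = [[] for _ in range(changefeeds)]
    for name, rate in sorted(tables.items(), key=lambda kv: -kv[1]):
        load, idx = heapq.heappop(heap)
        groups[idx].append(name)
        heapq.heappush(heap, (load + rate, idx))
    return groups

# Invented per-table write rates (rows/s):
tables = {"orders": 90_000, "draws": 60_000, "users": 30_000, "logs": 10_000}
print(assign(tables, 2))  # [['orders', 'logs'], ['draws', 'users']]
```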

Best practices derived from the project include keeping inter‑data‑center latency below 2 ms, using RAID‑0 SSDs for TiKV I/O, scaling down to three replicas during a zone outage, and carefully designing schema and indexes to avoid hotspots.
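The "scale down to three replicas during a zone outage" practice also has a quorum rationale, sketched here under my own framing: with three of five replicas alive, the quorum is still three, so one more failure halts writes; reconfigured to three replicas, the quorum drops to two and the group tolerates a further single failure.

```python
# Sketch: additional failures a group can absorb while keeping a
# majority of its *configured* replica count.

def extra_failures_tolerated(alive: int, configured: int) -> int:
    quorum = configured // 2 + 1
    return max(alive - quorum, 0)

print(extra_failures_tolerated(alive=3, configured=5))  # 0: no headroom
print(extra_failures_tolerated(alive=3, configured=3))  # 1 after scale-down
```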

During the live event, the TiDB cluster sustained nearly 400k rows per second writes, with 99th‑percentile SQL latency under 30 ms, TiCDC syncing up to 130k rows/s to MySQL, and TiFlash supporting near‑real‑time dashboards for user participation statistics.

Looking forward, the three‑center, five‑replica TiDB architecture offers high availability, real‑time HTAP capabilities, easy horizontal scaling, and robust TiCDC replication, though the team noted areas for improvement such as per‑second write‑row metrics and TiCDC stability.

The authors thank the PingCAP team members who assisted with testing, debugging, and on‑site support throughout the festival.

Tags: High Availability, Performance Testing, Distributed Database, TiDB, HTAP, TiCDC, TiFlash
Written by

HomeTech

HomeTech tech sharing
