
JD.com Multi‑Center Transaction System Architecture for the 11.11 Shopping Festival

The article explains how JD.com designed and deployed a multi‑center transaction architecture, using a high‑performance data bus and strict consistency and routing controls, to handle the massive traffic spikes of the 11.11 e‑commerce event while ensuring scalability and disaster recovery.


JD.com recently held a technical preparation conference for the upcoming 11.11 shopping festival, unveiling the first phase of its "Multi‑Center Transaction Project" that aims to cope with the expected traffic surge.

The need for multiple transaction centers is likened to opening more checkout counters in a supermarket; in e‑commerce, distributing traffic across several data centers can alleviate congestion, improve efficiency, and provide essential disaster‑recovery capabilities.

The project focuses on optimizing the transaction system’s architecture to enhance scalability and fault tolerance, addressing classic distributed‑system challenges such as data partitioning, routing, replication, read‑write consistency, and latency—issues famously illustrated by the failures of China’s 12306 train‑ticket booking site under peak load.

Data consistency is highlighted as the foremost concern: as traffic grows, updates to seller inventory, pricing, and order status must be reflected instantly across all centers to avoid stale or incorrect information.

Consistent routing rules are equally critical; a user’s request must follow the same path from authentication through service access to database queries to ensure a coherent experience.

JD’s solution employs a hierarchy of a main center and multiple sub‑centers linked by a high‑performance data bus (Jingobus). Master data (products, merchants, users) flows from the main center to sub‑centers in real time, while transaction data (orders) syncs back to the main center; sub‑centers handle order processing locally, and the main center validates overall data integrity.
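The two sync directions described above can be sketched as a simple routing rule. This is an illustrative model only; all names (`MASTER_DATA`, `route_sync`, the sub‑center identifiers) are hypothetical and not JD’s actual implementation:

```python
# Illustrative model of the main-center / sub-center sync topology.
# All names and the list of sub-centers are hypothetical.

MASTER_DATA = {"product", "merchant", "user"}   # flows main -> sub-centers
TRANSACTION_DATA = {"order"}                    # flows sub-center -> main

SUB_CENTERS = ["sub-north", "sub-east", "sub-south"]  # hypothetical

def route_sync(record_type: str, origin: str) -> list:
    """Return the destination centers for a change of the given type."""
    if record_type in MASTER_DATA and origin == "main":
        return list(SUB_CENTERS)        # broadcast master data to all subs
    if record_type in TRANSACTION_DATA and origin != "main":
        return ["main"]                 # orders sync back to the main center
    return []                           # no cross-center replication needed

print(route_sync("product", "main"))    # -> all sub-centers
print(route_sync("order", "sub-east"))  # -> ['main']
```

The asymmetry is the point of the design: master data fans out so every sub‑center can serve reads locally, while orders converge so the main center can validate overall integrity.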

New users are assigned a data‑center identifier based on IP location, stored in a cookie and a centralized session cache; existing users receive the identifier gradually during traffic‑shifting phases.
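A minimal sketch of that assignment logic, assuming a hypothetical IP‑range table and an in‑memory stand‑in for the centralized session cache (none of these names come from JD):

```python
# Hypothetical sketch of assigning a data-center identifier by client IP
# and caching it in the session; not JD's actual code.
import ipaddress

# Hypothetical mapping of IP ranges to data-center IDs.
DC_RANGES = [
    (ipaddress.ip_network("10.0.0.0/8"), "dc-north"),
    (ipaddress.ip_network("172.16.0.0/12"), "dc-east"),
]
DEFAULT_DC = "dc-main"

session_cache = {}  # stands in for the centralized session cache

def assign_datacenter(user_id, client_ip):
    """Pick a DC once per user; later requests reuse the cached value so
    every tier (auth, services, database) routes consistently."""
    if user_id in session_cache:            # existing assignment wins
        return session_cache[user_id]
    ip = ipaddress.ip_address(client_ip)
    dc = next((name for net, name in DC_RANGES if ip in net), DEFAULT_DC)
    session_cache[user_id] = dc             # would also be set in a cookie
    return dc
```

Because the cached identifier takes precedence, a user who first resolves to `dc-north` keeps routing there even if a later request arrives from a different network, which is what keeps the path consistent from login through database query.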

The data bus design delivers synchronization speeds more than three times that of native MySQL replication, with high availability and a flexible architecture, solving cross‑region database replication and heterogeneous data‑source synchronization.

Jingobus consists of three components—Relay, Snapshot, and Replicator—mirroring MySQL’s slave I/O, relay log, and SQL threads. Relay extracts transaction logs, Snapshot creates persistent snapshots for new subscribers, and Replicator applies logs to target databases, handling potential lag and cold‑start issues.
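The three stages can be illustrated with a toy pipeline. The classes below are sketches named after the roles described above, not the real Jingobus components:

```python
# Toy Relay -> Snapshot -> Replicator pipeline; all classes are
# illustrative stand-ins, not the actual Jingobus implementation.
from collections import deque

class Relay:
    """Pulls transaction-log events from the source (like a slave I/O thread)."""
    def __init__(self, source_log):
        self.queue = deque(source_log)

class Snapshot:
    """Folds the log into current state so a new subscriber can cold-start
    from a consistent snapshot instead of replaying the full history."""
    def __init__(self):
        self.state = {}
    def apply(self, event):
        key, value = event
        self.state[key] = value

class Replicator:
    """Applies relayed events to a target store (like the SQL thread)."""
    def __init__(self, target):
        self.target = target
    def apply(self, event):
        key, value = event
        self.target[key] = value

def run_bus(source_log):
    snapshot, target = Snapshot(), {}
    replicator = Replicator(target)
    for event in Relay(source_log).queue:
        snapshot.apply(event)    # keep the snapshot fresh for late subscribers
        replicator.apply(event)  # push the change to the target database
    return snapshot.state, target
```

A lagging or newly added subscriber would bootstrap from the snapshot and then resume log consumption from the matching position, which is how the cold‑start problem mentioned above is avoided.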

By providing a consistent snapshot and high‑throughput log consumption, the data bus serves as a generic CDC (Change Data Capture) layer that underpins the multi‑center transaction system and other asynchronous replication scenarios.

The deployment has already improved nationwide access speed, enabled near‑real‑time local access, and strengthened JD’s capacity to scale and recover during peak events; the first phase is live, with a second phase planned before the 618 promotion and full completion slated for October 2016.

Source: it168 website.

distributed systems, e-commerce, data replication, backend scalability, transaction architecture
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
