OceanBase 2.0 Partitioning and Performance Optimizations for Ant Group’s Double‑11 Peak
This article explains how OceanBase 2.0’s partitioning architecture, its partition‑group feature, and a series of performance optimizations enabled Ant Group’s payment core to handle millions of transactions per second during the Double‑11 shopping festival. It also covers the system’s advantages, measured results, and upcoming capabilities.
Background: With Ant Group’s rapid business growth and the ever‑increasing traffic of the Double‑11 shopping festival, the payment core needed to support a future capacity of millions of payments per second. To meet this goal, OceanBase 2.0, a distributed database with native sharding and distributed‑transaction optimizations, was introduced.
Million‑Payment Challenge: Traditional horizontal scaling relied on splitting tables across 100 databases with 100 tables each (the “100 × 100” sharding scheme). However, the 2017 Double‑11 peak exceeded the capacity of a single machine, leading to an elastic architecture that added multiple database sets. This approach complicated data routing and maintenance, prompting the search for a more elegant solution that could handle the peak with a single logical database.
Principle Analysis: OceanBase 2.0’s partitioning follows the same idea as classic sharding: tables are further split by user UID into many partitions. Applications see only one logical table, while data is distributed across an arbitrary number of machines, achieving automatic load balancing and breaking the single‑machine performance bottleneck.
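The UID-based splitting can be sketched as simple hash partitioning. This is a minimal illustration, not OceanBase’s internal routing logic; the partition count and function names are assumptions.

```python
# Minimal sketch of UID-based hash partitioning (illustrative only;
# OceanBase's actual partitioning function is internal to the server).
PARTITION_COUNT = 100  # assumed fixed partition count

def partition_for_uid(uid: int) -> int:
    """Map a user ID to a partition so that every row for one user
    lands in the same partition, regardless of which table it is in."""
    return uid % PARTITION_COUNT

# The application still writes SQL against one logical table; only
# the routing layer needs to know which partition a UID maps to.
print(partition_for_uid(2088001234))
```

Because the mapping is deterministic, any node (client library or server) can compute it independently, which is what makes the first-hop routing described later possible.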
Partition‑Group Feature: To maximize performance, OceanBase groups related logical tables that share the same partition key onto the same physical server, reducing distributed‑transaction overhead.
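The colocation idea can be sketched as a placement rule that depends only on the partition index, never on the table. The server names and placement function below are hypothetical, assuming four servers and a shared partition count.

```python
# Hypothetical sketch of partition-group placement: the same partition
# index of every table in the group is pinned to the same server, so a
# cross-table transaction on one UID stays on one machine.
PARTITION_COUNT = 100
SERVERS = ["obs-01", "obs-02", "obs-03", "obs-04"]  # assumed cluster

def server_for_partition(partition_id: int) -> str:
    # Placement is a function of the partition index only, independent
    # of which table the partition belongs to.
    return SERVERS[partition_id % len(SERVERS)]

def server_for_row(table: str, uid: int) -> str:
    # Every table in the partition group shares the same partition key
    # (UID), so the table name does not influence placement.
    return server_for_partition(uid % PARTITION_COUNT)

# A payment transaction touching both tables for one user is local:
print(server_for_row("trade_order", 42), server_for_row("payment_order", 42))
```

With this property, a transaction that updates several tables for the same user never needs to cross servers, which is exactly the distributed‑transaction overhead the partition‑group feature removes.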
Key Points
Client SQL includes the partition field (e.g., UID or payment_id).
If the partition field is missing, OBServer falls back to scanning the candidate partitions, optimized to run in parallel.
OBClient calculates routing to the correct partition, ensuring the first hop lands on the right server.
OBServer uses a generated column (partition_id) for internal partitioning without affecting business logic.
Multi‑dimensional partition constraints enable partition pruning.
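The routing behavior in the points above can be sketched from the client’s side: with the partition field present, the first hop goes straight to the owning server; without it, the statement fans out to all partitions. The route table and server names here are illustrative assumptions.

```python
# Sketch of first-hop routing, assuming the client library caches a
# partition -> server map (names and map contents are illustrative).
PARTITION_COUNT = 3
ROUTE_TABLE = {0: "obs-01", 1: "obs-02", 2: "obs-03"}

def route(partition_key=None):
    """Return the server(s) a statement must contact.

    With the partition field in the SQL, the client computes the
    partition itself and the first hop lands on the right server.
    Without it, the statement fans out to every partition, which the
    server can then scan in parallel.
    """
    if partition_key is None:
        return sorted(set(ROUTE_TABLE.values()))  # fan-out to all
    return [ROUTE_TABLE[partition_key % PARTITION_COUNT]]

print(route(7))     # targeted: one server
print(route(None))  # fan-out: all servers
```

This is also why including the partition field (UID or payment_id) in client SQL matters: it turns a fan-out query into a single-server lookup.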
Advantages
Business‑friendly: No change to SQL semantics; applications are unaware of partitioning.
Architecturally generic: Constraint‑based pruning and fallback access provided by OBServer.
High performance, low cost: Uses low‑spec servers with automatic load balancing and high resource utilization.
Performance Optimizations in OceanBase 2.0
Elimination of distributed transactions via partition‑group placement.
Two‑phase commit improvements: prepare state kept in logs, commit persisted asynchronously.
Asynchronous commit to avoid blocking workers.
Memory allocator redesign for supporting massive partitions.
Storage compression techniques for space savings.
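The two‑phase‑commit optimizations above can be sketched as follows: once every participant has durably logged its prepare record, the outcome is decided, so the coordinator can acknowledge the client immediately and drive the commit phase asynchronously instead of blocking a worker. This is a simplified model under those assumptions, not OceanBase’s actual protocol code.

```python
# Illustrative sketch: 2PC with prepare state kept in the log and the
# commit phase driven asynchronously (simplified; no failure handling).
from concurrent.futures import ThreadPoolExecutor

class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "init"

    def log_prepare(self):
        self.state = "prepared"   # prepare record made durable in the log
        return True

    def commit(self):
        self.state = "committed"  # commit record persisted asynchronously

def commit_transaction(participants, pool):
    # Phase 1 (synchronous): the outcome is decided once all prepare
    # records are durable.
    if not all(p.log_prepare() for p in participants):
        return "ABORT"
    # Phase 2 (asynchronous): fire-and-forget, since the decision is
    # already recoverable from the prepare logs.
    for p in participants:
        pool.submit(p.commit)
    return "COMMIT"  # client is acknowledged before phase 2 finishes
```

The worker that ran the transaction returns to the client after phase 1, which is the “avoid blocking workers” benefit listed above.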
Optimization Results: Compared with OceanBase 1.4, version 2.0 delivers a 50% performance boost and reduces storage consumption by 30%.
Summary: During the 2018 Double‑11 event, OceanBase 2.0 successfully sustained the payment core’s peak load, confirming its ability to underpin Ant Group’s “million‑payment” strategy. Additional features such as global distributed indexes, global snapshots, distributed stored procedures, real‑time index activation, and flashback further strengthen the platform for diverse enterprise workloads.
Community Invitation: Readers are encouraged to join the OceanBase Double‑11 technical discussion group via the QR code provided.
AntTech
Technology is the core driver of Ant’s future.