Distributed Transaction Solutions and Seata AT Mode Implementation
This article explains why single-database transactions fall short when a business flow spans multiple data sources in a microservice architecture, reviews common distributed-transaction models (2PC, 3PC, TCC, transaction status tables, and message-queue-based eventual consistency), and details how Seata's AT mode achieves global ACID properties.
0. Introduction
Computing systems are built from unreliable components; mechanisms such as TCP retransmission, RAID, and ARIES recovery exist to preserve correctness despite failures. This article first reviews the ACID properties of single-node database transactions, then discusses the challenges of multi-data-source operations in distributed scenarios, introduces common distributed-transaction solutions, and finally presents the industry-proven Seata AT mode implementation.
1. Single‑Data‑Source vs Multi‑Data‑Source Transactions
Describes local (single‑node) transactions and their ACID semantics, and explains why local mechanisms cannot guarantee global ACID when a business flow spans multiple independent databases in a micro‑service architecture.
2. Common Distributed Transaction Solutions
2.1 Distributed Transaction Model
Defines participants, coordinator, resource manager, transaction manager and how a global transaction manager coordinates local transactions.
2.2 Two Generals' Problem and Idempotency
Illustrates the classic Two Generals Problem over an unreliable network, its impact on request-response reliability, and why retried requests make idempotent operations necessary.
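Because a lost acknowledgment forces the sender to retry, the receiver must apply each logical request exactly once. A minimal sketch of such deduplication, keyed by a client-supplied request id (the class and method names here are illustrative, not from any library):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical idempotent receiver: a retried request (resent because the ack
// may have been lost) is recognized by its request id and applied only once.
class IdempotentAccount {
    private final Map<String, Boolean> processed = new ConcurrentHashMap<>();
    private long balance;

    IdempotentAccount(long initial) { this.balance = initial; }

    // Returns true if the deposit was applied, false if it was a duplicate retry.
    boolean deposit(String requestId, long amount) {
        if (processed.putIfAbsent(requestId, Boolean.TRUE) != null) {
            return false; // already handled; safe to re-ack without re-applying
        }
        balance += amount;
        return true;
    }

    long balance() { return balance; }
}
```

With this in place, the client can retry freely until it sees an acknowledgment.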
2.3 Two‑Phase Commit (2PC) & Three‑Phase Commit (3PC)
Explains the 2PC workflow, required interfaces (Prepare, Commit, Rollback), and drawbacks such as performance loss, coordinator failure, and uncertainty in the commit phase. Briefly mentions 3PC’s timeout‑based improvement.
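The participant interface and the coordinator's voting logic can be sketched as follows; this is a naive in-memory illustration (all names are hypothetical), omitting the durable logging and failure handling a real coordinator needs:

```java
import java.util.List;

// Hypothetical 2PC participant interface and a naive coordinator.
interface Participant {
    boolean prepare();  // phase 1: persist enough to commit, then vote yes/no
    void commit();      // phase 2: make the prepared change final
    void rollback();    // phase 2: undo the prepared work (must tolerate
                        // being called on a branch that never prepared)
}

class TwoPhaseCoordinator {
    // Returns true if the global transaction committed.
    boolean run(List<Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {                       // any "no" vote aborts all
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        participants.forEach(Participant::commit);    // unanimous "yes": commit
        return true;
    }
}
```

The drawbacks discussed above live exactly in this loop: participants hold locks while waiting between `prepare` and `commit`, and if the coordinator crashes after some commits were sent, the rest are left in doubt.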
2.4 TCC (Try‑Confirm‑Cancel)
Describes the TCC pattern, its two‑phase workflow, retry‑based handling of coordinator crashes, and the necessity of idempotent confirm/cancel operations.
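A TCC branch for inventory might look like the sketch below: Try reserves stock, Confirm consumes the reservation, Cancel releases it, and both second-phase operations are idempotent so the coordinator may safely retry them after a crash. The class is an illustrative assumption, not a real framework API:

```java
// Hypothetical TCC inventory branch with idempotent Confirm/Cancel.
class InventoryTcc {
    private int available;
    private int reserved;
    private boolean finished; // guards against repeated confirm/cancel

    InventoryTcc(int stock) { this.available = stock; }

    // Try: reserve resources without making the change visible as final.
    boolean tryReserve(int n) {
        if (available < n) return false;
        available -= n;
        reserved += n;
        return true;
    }

    // Confirm: consume the reservation for real; safe to call repeatedly.
    void confirm() {
        if (finished) return;
        reserved = 0;
        finished = true;
    }

    // Cancel: return the reservation; safe to call repeatedly.
    void cancel() {
        if (finished) return;
        available += reserved;
        reserved = 0;
        finished = true;
    }

    int available() { return available; }
}
```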
2.5 Transaction Status Table
Presents a status‑table approach where the coordinator records progress and a background task retries unfinished branches.
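The mechanism can be sketched in memory as below, assuming the table holds one row per branch and a background sweep re-invokes anything still pending; in practice the map would be a database table and the branch actions must be idempotent. All names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical transaction status table: the coordinator records each branch's
// state, and a background sweeper retries branches left in PENDING.
class StatusTable {
    enum State { PENDING, DONE }
    private final Map<String, State> branches = new LinkedHashMap<>();

    void register(String branchId) { branches.put(branchId, State.PENDING); }
    void markDone(String branchId) { branches.put(branchId, State.DONE); }

    // One sweep: re-invoke every unfinished branch (action must be idempotent).
    void retryPending(Predicate<String> action) {
        for (Map.Entry<String, State> e : branches.entrySet()) {
            if (e.getValue() == State.PENDING && action.test(e.getKey())) {
                e.setValue(State.DONE);
            }
        }
    }

    boolean allDone() {
        return branches.values().stream().allMatch(s -> s == State.DONE);
    }
}
```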
2.6 Message-Queue-Based Eventual Consistency
Analyzes a naïve MQ implementation, points out its pitfalls (the Two Generals Problem on the network, long-running local transactions), then presents a correct design that prevents message loss and guarantees idempotent consumption using a local message table and retries.
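The corrected design can be sketched as follows, under the assumption that the business update and the outbound message are written in the same local transaction, a relay republishes unacked rows, and the consumer deduplicates by message id. This is an in-memory stand-in (hypothetical names), not a broker integration:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Hypothetical local-message-table pattern: no message is lost because rows are
// deleted only after delivery, and duplicates are absorbed by consumer-side dedup.
class LocalMessageTable {
    static class Msg {
        final String id; final String payload;
        Msg(String id, String payload) { this.id = id; this.payload = payload; }
    }

    private final List<Msg> unsent = new ArrayList<>();   // stands in for the DB table
    private final Set<String> consumed = new HashSet<>(); // consumer-side dedup by id
    final List<String> delivered = new ArrayList<>();

    // In reality this insert shares a local transaction with the business update.
    void saveWithBusinessTx(String id, String payload) {
        unsent.add(new Msg(id, payload));
    }

    // Relay loop: publish, and delete the row only once delivery is acked.
    void relay() {
        Iterator<Msg> it = unsent.iterator();
        while (it.hasNext()) {
            Msg m = it.next();
            if (consumed.add(m.id)) {   // duplicate deliveries are dropped
                delivered.add(m.payload);
            }
            it.remove();                // ack received; row can be deleted
        }
    }
}
```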
3. Seata AT Mode Implementation
3.1 Overview
Seata’s AT mode builds on relational‑database local transactions, intercepts SQL to record undo logs, and replaces XA’s blocking two‑phase commit with a lighter‑weight two‑stage process.
3.2 Detailed Workflow
Shows the global transaction flow in an e‑commerce scenario (shopping‑service, repo‑service, order‑service) with steps from global XID registration to branch commit/rollback.
start global_trx
call inventory service to deduct stock
call order service to create order
commit global_trx
Describes the undo_log table schema and how AT releases local locks after the first phase, using undo logs for rollback.
Explains commit path (deleting undo logs) and rollback path (replaying undo logs after consistency checks).
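The two second-phase paths can be illustrated with an in-memory sketch: phase one applies the change and records a before-image in the undo log within the same local transaction; global commit merely discards the logs, while global rollback replays the before-images in reverse. This is a toy model of the idea (all names hypothetical), not Seata's actual data structures:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical AT branch: before-image undo logging and the two phase-two paths.
class AtBranch {
    private final Map<String, Integer> table = new HashMap<>();          // business row
    private final Deque<Map.Entry<String, Integer>> undoLog = new ArrayDeque<>();

    AtBranch(String key, int value) { table.put(key, value); }

    // Phase one: update the row and keep its before-image, then the local
    // transaction commits and the local lock is released.
    void update(String key, int newValue) {
        undoLog.push(Map.entry(key, table.get(key)));
        table.put(key, newValue);
    }

    // Global commit: the change already stands; just discard the undo logs.
    void globalCommit() { undoLog.clear(); }

    // Global rollback: replay before-images in reverse order.
    void globalRollback() {
        while (!undoLog.isEmpty()) {
            Map.Entry<String, Integer> e = undoLog.pop();
            table.put(e.getKey(), e.getValue());
        }
    }

    int get(String key) { return table.get(key); }
}
```

A real implementation also checks that the current row still matches the after-image before replaying, which is the consistency check mentioned above.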
Discusses isolation in AT: write isolation via global locks, read isolation defaults to read‑uncommitted but can be upgraded with SELECT FOR UPDATE.
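The global-lock idea behind write isolation can be sketched as a lock table keyed by row: a branch must hold the global lock on a row before its local commit counts, and a second global transaction touching the same row must wait or retry until the first releases it. A minimal in-memory illustration (hypothetical names, fail-fast instead of waiting):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical global lock table for AT write isolation: one owner (XID) per row.
class GlobalLockTable {
    private final Map<String, String> locks = new HashMap<>(); // rowKey -> xid

    // Acquire succeeds if the row is free or already owned by this transaction.
    synchronized boolean acquire(String xid, String rowKey) {
        String owner = locks.putIfAbsent(rowKey, xid);
        return owner == null || owner.equals(xid);
    }

    // Released when the owning global transaction finishes phase two.
    synchronized void releaseAll(String xid) {
        locks.values().removeIf(xid::equals);
    }
}
```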
4. Conclusion
Summarizes that all presented solutions (2PC, 3PC, TCC, status‑table, Seata AT, MQ‑based final consistency) aim to achieve global ACID by coordinating local transactions, and that practical innovations often diverge from strict standards like XA.