System Splitting and Architectural Evolution: Strategies for Scaling, Decoupling, and Performance Optimization
The article explains how increasing business complexity and throughput demands drive system splitting, architectural evolution, and the adoption of scaling, sharding, caching, and asynchronous messaging techniques to improve capacity, robustness, and maintainability of large‑scale backend services.
As business complexity and system throughput grow, unified deployment becomes difficult, leading to the need for system splitting, decoupling, and architectural upgrades to enhance capacity and robustness.
System splitting covers horizontal scaling, vertical splitting, business splitting, and horizontal splitting. Horizontal scaling is the first solution: add application instances behind a load balancer (clustering) and separate database reads from writes with master‑slave replication.
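The read/write-separation idea can be sketched as a small routing rule; the class and host names below are illustrative, not from the article, and a real deployment would do this at the driver or proxy layer:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical router: writes always go to the master, reads round-robin
// across slave replicas to spread load.
public class ReadWriteRouter {
    private final String master;
    private final List<String> slaves;
    private final AtomicInteger next = new AtomicInteger();

    public ReadWriteRouter(String master, List<String> slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    // Every write hits the master so there is a single source of truth.
    public String routeWrite() {
        return master;
    }

    // Reads are spread evenly across the replicas.
    public String routeRead() {
        int i = Math.floorMod(next.getAndIncrement(), slaves.size());
        return slaves.get(i);
    }

    public static void main(String[] args) {
        ReadWriteRouter r = new ReadWriteRouter("db-master", List.of("db-slave-1", "db-slave-2"));
        System.out.println(r.routeWrite()); // db-master
        System.out.println(r.routeRead());  // db-slave-1
        System.out.println(r.routeRead());  // db-slave-2
    }
}
```

Note that replicas lag the master, so reads that must see their own writes still belong on the master.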
Vertical splitting separates business functions into independent services such as user, product, and transaction systems, introducing service governance to manage inter‑service dependencies while improving stability.
Business splitting targets the application layer, dividing functionality into modules such as shopping cart, checkout, order, and flash sale; caching frequently accessed data (e.g., product info) in the JVM reduces external calls and improves performance.
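A minimal sketch of such an in-JVM cache, assuming a TTL-based expiry policy; the class name and the loader function (standing in for a remote product-service call) are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical in-JVM cache with a TTL; the loader stands in for the
// remote product-service call we want to keep off the hot path.
public class LocalProductCache {
    private record Entry(String value, long expiresAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final Function<String, String> loader;

    public LocalProductCache(long ttlMillis, Function<String, String> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    public String get(String productId) {
        Entry e = cache.get(productId);
        if (e != null && e.expiresAt() > System.currentTimeMillis()) {
            return e.value(); // served from JVM memory, no external call
        }
        String v = loader.apply(productId); // miss or expired: go remote once
        cache.put(productId, new Entry(v, System.currentTimeMillis() + ttlMillis));
        return v;
    }

    public static void main(String[] args) {
        LocalProductCache c = new LocalProductCache(60_000, id -> "product:" + id);
        System.out.println(c.get("42")); // first call loads, later calls hit the cache
        System.out.println(c.get("42"));
    }
}
```

The trade-off is staleness: the TTL bounds how long the JVM may serve outdated product data after the source changes.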
Database splitting takes four forms: vertical table partitioning (splitting a wide table into narrower ones by column), vertical sharding (moving each business domain into its own database), horizontal table partitioning (spreading one table's rows across several tables), and horizontal sharding (spreading rows across several databases), illustrated with diagrams of product table and database/table splits.
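Horizontal sharding needs a stable routing rule from a key to a physical database and table. A minimal sketch, assuming modulo-based routing on an order id; the database/table counts and the `order_db_X.order_tab_Y` naming are assumptions for illustration:

```java
// Hypothetical routing rule for horizontal database/table sharding:
// an order id maps deterministically to one of dbCount databases
// and tablesPerDb tables within that database.
public class ShardRouter {
    private final int dbCount;
    private final int tablesPerDb;

    public ShardRouter(int dbCount, int tablesPerDb) {
        this.dbCount = dbCount;
        this.tablesPerDb = tablesPerDb;
    }

    public String route(long orderId) {
        long slots = (long) dbCount * tablesPerDb;
        int slot = (int) Math.floorMod(orderId, slots); // same id, same shard, always
        int db = slot / tablesPerDb;
        int table = slot % tablesPerDb;
        return "order_db_" + db + ".order_tab_" + table;
    }

    public static void main(String[] args) {
        ShardRouter r = new ShardRouter(2, 4); // 2 databases x 4 tables = 8 slots
        System.out.println(r.route(5L)); // order_db_1.order_tab_1
    }
}
```

Because the mapping is fixed by the modulus, changing the shard count later requires data migration, which is why shard counts are usually chosen with generous headroom.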
Horizontal splitting emphasizes service layering and componentization, exemplified by the “mid‑platform” concept where core services act as building blocks and the front‑end composes these blocks to quickly respond to business changes.
Architectural evolution moves from direct application-database connections to remote service calls, with caches and indexes introduced to relieve performance bottlenecks; a 2014 upgrade used Solr + Redis, later evolving to ES + HBase.
Local JVM caching and thread‑local caches further reduce database reads, achieving noticeable latency reductions (≈20 ms) and millions fewer reads per minute.
To mitigate instability of dependent third‑party services, their data is cached locally, turning external services into reliable data sources and reducing risk.
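One way to sketch this "serve the last known value" pattern, assuming the remote dependency is wrapped behind a function; the class name and failure model are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical wrapper around an unstable third-party call: every success
// refreshes a local copy; on failure the last known value is served instead.
public class ResilientClient {
    private final Map<String, String> lastKnown = new ConcurrentHashMap<>();
    private final Function<String, String> remote;

    public ResilientClient(Function<String, String> remote) {
        this.remote = remote;
    }

    public String fetch(String key) {
        try {
            String v = remote.apply(key);
            lastKnown.put(key, v); // refresh the local copy on every success
            return v;
        } catch (RuntimeException ex) {
            String cached = lastKnown.get(key);
            if (cached != null) {
                return cached; // degrade to possibly stale data instead of failing
            }
            throw ex; // nothing cached yet: the failure must surface
        }
    }
}
```

This trades freshness for availability: callers keep working through a third-party outage, at the cost of seeing data from before the outage began.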
Asynchronous processing via message middleware (e.g., order creation messages) decouples user actions from backend persistence, improving responsiveness.
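The decoupling can be sketched with an in-memory queue standing in for real message middleware such as a broker; the class and method names are illustrative assumptions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical stand-in for message middleware: the user-facing path only
// enqueues an order event, while a background consumer persists it later.
public class OrderPipeline {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Fast path: acknowledge the user as soon as the message is accepted,
    // without waiting for the database write.
    public boolean submitOrder(String orderId) {
        return queue.offer(orderId);
    }

    // Slow path: a consumer thread polls the queue and writes to storage;
    // returns null when the queue is empty.
    public String takeNext() {
        return queue.poll();
    }
}
```

With a real broker the queue also survives process restarts and absorbs traffic spikes, but the user-visible benefit is the same: response time no longer includes backend persistence.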
The article concludes that system architecture inevitably becomes more complex, requiring technology choices that align with business pain points, technical expertise, and resource constraints to achieve stable, robust solutions.
Architects' Tech Alliance
Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.