
System Splitting and Architectural Evolution: Strategies for Scaling and Decoupling

The article explains how increasing business complexity and throughput require system splitting, decoupling, and architectural evolution—covering horizontal and vertical scaling, database sharding, caching, and micro‑service patterns—to improve capacity, robustness, and performance of large‑scale applications.

Architecture Digest

As business complexity and system throughput grow, a single unified deployment becomes unwieldy and fragile. The business must therefore be split, systems decoupled, and the internal architecture upgraded to increase capacity and robustness.

System Splitting

System splitting can be viewed along two axes: from a resource perspective, it comprises application splitting and database splitting; in order of implementation, it proceeds through horizontal scaling, vertical splitting, business splitting, and horizontal splitting.

1. Horizontal Scaling

Horizontal scaling is the first resort when a system hits a bottleneck, and it takes two main forms:

Adding instances to applications and forming clusters to increase throughput.

Using master‑slave replication for read/write separation in databases, protecting the most critical resource.
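The read/write-separation idea above can be sketched as a small router that sends every write to the primary and round-robins reads across replicas. This is a minimal illustration, not a production data-source implementation; all names (`primary`, `replica-1`, etc.) are assumptions.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of read/write separation: writes always go to the primary,
// reads are round-robined across replicas. Node names are illustrative.
public class ReadWriteRouter {
    private final String primary;
    private final List<String> replicas;
    private final AtomicInteger counter = new AtomicInteger();

    public ReadWriteRouter(String primary, List<String> replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    /** All writes hit the primary so replication remains the single source of truth. */
    public String routeWrite() {
        return primary;
    }

    /** Reads are spread over replicas; fall back to the primary if none exist. */
    public String routeRead() {
        if (replicas.isEmpty()) return primary;
        int i = Math.floorMod(counter.getAndIncrement(), replicas.size());
        return replicas.get(i);
    }

    public static void main(String[] args) {
        ReadWriteRouter r = new ReadWriteRouter("primary", List.of("replica-1", "replica-2"));
        System.out.println(r.routeWrite()); // primary
        System.out.println(r.routeRead());  // replica-1
        System.out.println(r.routeRead());  // replica-2
    }
}
```

In practice this logic usually lives in a data-source proxy or middleware layer rather than application code, but the routing decision is the same.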

2. Vertical Splitting

Vertical splitting truly begins the decomposition by separating business functions, such as extracting user, product, and transaction systems. Service governance is introduced to handle inter‑service dependencies, improving stability while increasing complexity. Corresponding databases are also split into user, product, transaction databases, etc.

3. Business Splitting

Business splitting targets the application layer, dividing functionality like shopping cart, checkout, order, and flash‑sale systems. For flash‑sale scenarios, product information can be pre‑loaded into JVM cache to reduce external calls and improve performance.
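The pre-loading idea can be sketched as a JVM-local cache warmed at startup so the flash-sale hot path never makes a remote call. The product IDs and values below are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of pre-loading flash-sale product data into an in-JVM cache so the
// hot path avoids any call to the product service. Data here is illustrative.
public class FlashSaleCache {
    private static final Map<Long, String> PRODUCTS = new ConcurrentHashMap<>();

    /** Called once at startup (or on a schedule) to warm the cache. */
    public static void preload(Map<Long, String> fromProductService) {
        PRODUCTS.putAll(fromProductService);
    }

    /** Hot path: pure in-memory lookup, no remote call. */
    public static String get(long productId) {
        return PRODUCTS.get(productId);
    }

    public static void main(String[] args) {
        preload(Map.of(1001L, "Phone X", 1002L, "Laptop Y"));
        System.out.println(get(1001L)); // Phone X
    }
}
```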

Database splitting follows several steps: vertical table partitioning, vertical database partitioning, horizontal table partitioning, and horizontal database‑table partitioning.

Vertical Table Partitioning splits a large table into smaller ones based on update or query frequency.

Vertical Database Partitioning separates databases by business, e.g., order, product, and user databases.

Horizontal Table Partitioning divides a large table into multiple tables to handle massive data volumes.

Horizontal Database‑Table Partitioning refines the previous step by distributing the horizontally partitioned tables across multiple databases, combining both dimensions.
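The final step above can be sketched as a routing function: a shard key (here, a user ID) is mapped to both a database and a table within it. The 4-database, 8-table layout and the naming scheme are assumptions for illustration.

```java
// Sketch of horizontal database-table routing: a user id selects one of
// 4 databases and one of 8 tables per database (32 shards total).
// Counts and table names are illustrative assumptions.
public class ShardRouter {
    private static final int DB_COUNT = 4;
    private static final int TABLE_COUNT = 8;

    public static String route(long userId) {
        long slot = Math.floorMod(userId, (long) DB_COUNT * TABLE_COUNT); // 0..31
        long db = slot / TABLE_COUNT;
        long table = slot % TABLE_COUNT;
        return "user_db_" + db + ".user_" + table;
    }

    public static void main(String[] args) {
        System.out.println(route(123L)); // user_db_3.user_3
    }
}
```

Choosing the shard key carefully matters: queries that lack the key must fan out to every shard, which is why sharding is treated as the last refinement rather than the first.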

4. Horizontal Splitting

Service layering turns system services into modular building blocks, separating functional and non‑functional systems and composing business‑centric systems such as middle‑platform or front‑platform architectures. The front‑end aggregates components (e.g., main image, price, stock, coupons) to quickly respond to business changes.

Database hot‑cold data separation can archive outdated items (e.g., discontinued phones) while keeping recent data readily accessible.
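Hot-cold separation can be sketched as a date-based routing rule: records older than a cutoff are read from an archive store, recent ones from the primary store. The 90-day window and table names are illustrative assumptions.

```java
import java.time.LocalDate;

// Sketch of hot-cold separation: orders older than a cutoff live in an
// archive table on cheap storage; recent orders stay in the hot table.
// The window and table names are illustrative.
public class HotColdRouter {
    private static final int HOT_DAYS = 90;

    public static String tableFor(LocalDate orderDate, LocalDate today) {
        return orderDate.isBefore(today.minusDays(HOT_DAYS))
                ? "orders_archive"   // cold: cheap storage, slower access
                : "orders";          // hot: fast storage, recent data
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2024, 6, 1);
        System.out.println(tableFor(LocalDate.of(2024, 5, 20), today)); // orders
        System.out.println(tableFor(LocalDate.of(2023, 1, 1), today));  // orders_archive
    }
}
```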

Structural Evolution

Structural evolution occurs as system complexity and performance demands increase, prompting internal architecture upgrades. Early systems directly linked applications to databases; after splitting, services depend on remote calls, leading to the introduction of service governance.

Performance bottlenecks in databases are addressed by adding caches and indexes. For example, a 2014 upgrade of a system with 300 million hot records used Solr + Redis, storing only indexes in Solr and primary keys in Redis, with Redis holding a subset of data and falling back to the database when a cache miss occurs. Modern alternatives include ES + HBase.
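The cache-with-fallback flow described above is essentially the cache-aside pattern. Below is a minimal sketch in which both Redis and the database are simulated with in-memory maps (an assumption for self-containment); the lookup logic — hit the cache, fall back to the database on a miss, then back-fill the cache — is the part that matters.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the cache-aside pattern: the cache holds a hot subset keyed by
// primary key; a miss falls back to the database and back-fills the cache.
// Both stores are simulated with in-memory maps for illustration.
public class CacheAside {
    private final Map<Long, String> redis = new HashMap<>();    // hot subset
    private final Map<Long, String> database = new HashMap<>(); // full data

    public CacheAside(Map<Long, String> dbRows) {
        database.putAll(dbRows);
    }

    public Optional<String> findById(long id) {
        String cached = redis.get(id);
        if (cached != null) return Optional.of(cached); // cache hit
        String row = database.get(id);                  // cache miss: go to DB
        if (row != null) redis.put(id, row);            // back-fill for next time
        return Optional.ofNullable(row);
    }
}
```

With real Redis, the same flow applies, plus an expiry on the cached entry so the hot subset tracks actual access patterns.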

Frequently accessed data can be cached locally in the JVM (e.g., category information) or using ThreadLocal for per‑thread caching, with careful handling of data eviction and validity.

When modifying product information, unified validation often reads the same product record multiple times; thread‑local caching can collapse these into a single read per request, reducing latency by up to 20 ms and cutting read operations by nearly ten thousand per minute.
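A minimal sketch of this per-request caching, assuming one request runs on one thread: the first lookup loads the record, repeated validations reuse it, and the cache is cleared at the end of the request. The loader counter stands in for real database reads.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of per-request ThreadLocal caching: within one request the product
// record is loaded once and repeated validations reuse it. The counter and
// the stand-in row value are illustrative.
public class ProductRequestCache {
    private static final ThreadLocal<Map<Long, String>> CACHE =
            ThreadLocal.withInitial(HashMap::new);
    static final AtomicInteger dbReads = new AtomicInteger();

    public static String load(long id) {
        return CACHE.get().computeIfAbsent(id, k -> {
            dbReads.incrementAndGet(); // one real read per thread/request
            return "product-" + k;     // stand-in for the actual DB row
        });
    }

    /** Must be called at request end to avoid stale data on pooled threads. */
    public static void clear() {
        CACHE.remove();
    }
}
```

The `clear()` call is the "careful handling of data eviction and validity" mentioned above: on a thread pool, a forgotten `remove()` leaks one request's data into the next.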

To mitigate instability of dependent third‑party services, treat them as data sources and cache their responses locally, reducing external risk.

As user experience expectations rise, asynchronous processing via message middleware (e.g., order placement) improves responsiveness by decoupling front‑end actions from back‑end persistence.
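The decoupling above can be sketched with a queue standing in for real message middleware (an assumption; a production system would publish to a broker): order placement only enqueues a message and acknowledges immediately, while a consumer persists the order later.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of asynchronous order placement: the front-end call only enqueues a
// message and returns; a consumer persists the order later. A BlockingQueue
// stands in for real message middleware such as a broker topic.
public class AsyncOrderPlacement {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    /** Fast path: acknowledge immediately, defer persistence. */
    public String placeOrder(String orderId) {
        queue.offer(orderId);          // publish to the "broker"
        return "accepted:" + orderId;  // respond before any DB write
    }

    /** Consumer side: take one message and persist it (simulated). */
    public String consumeOne() {
        String orderId = queue.poll();
        return orderId == null ? null : "persisted:" + orderId;
    }
}
```

The trade-off is eventual consistency: the user sees "accepted" before the order exists in the database, so the consumer must be reliable and the message durable.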

The business layer can be divided into basic services and composite services, while the data layer consists of data sources and indexed caches; selecting appropriate technologies and middleware is essential to solve various system challenges.

Conclusion

System structures become increasingly complex, but stability and robustness improve; technology choices must align with business pain points, technical expertise, and resource constraints to avoid unrealistic solutions.

The author, a JD.com system architect, summarizes recent technical transformations and plans to share detailed points in future posts.

Tags: microservices, caching, database sharding, horizontal scaling, system splitting, vertical splitting
Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.