
Design and Implementation of Database Table Partitioning for Loan and Repayment Systems

This article details the design, planning, and step-by-step implementation of a database table-partitioning solution for loan and repayment request tables, covering schema decisions, the sharding strategy, historical data migration, dynamic synchronization, code examples built on ShardingSphere and SpringBoot, and operational considerations for a seamless migration.

Top Architect

Introduction

After a year of business growth, the team was asked to split the loan and repayment application tables, which had grown to tens of millions of rows, in order to improve query performance.

Design Plan

The design calls for creating a new database instance without splitting it into multiple databases, using 50 split tables, naming the new database tcbiz_ins_cash_mas_split, and sharding on the memberId column with a suffix-based algorithm.

Sharding Configuration

Tables are named CashRepayApplySplit0${0..9} and CashRepayApplySplit${10..49}, i.e. CashRepayApplySplit00 through CashRepayApplySplit49. The sharding column is memberId; the algorithm takes the last two digits of memberId, applies modulo 50, and zero-pads the result to two digits.
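The suffix computation above can be sketched as a small helper. SplitTableRouter is a hypothetical name for illustration; in the real system this logic lives in the ShardingSphere algorithm configuration.

```java
// Hypothetical helper mirroring the suffix-based algorithm described above:
// take the last two digits of memberId, apply modulo 50, and zero-pad to
// two digits to form the split-table name.
public final class SplitTableRouter {

    public static String route(String memberId) {
        String lastTwo = memberId.substring(memberId.length() - 2);
        long suffix = Long.parseLong(lastTwo) % 50;
        return String.format("CashRepayApplySplit%02d", suffix);
    }

    public static void main(String[] args) {
        System.out.println(route("10000087")); // last two digits 87 -> 87 % 50 = 37
        System.out.println(route("10000005")); // last two digits 05 -> 5, zero-padded
    }
}
```

Because the suffix space is 0-49 and names are zero-padded, the result always matches one of the 50 tables CashRepayApplySplit00 through CashRepayApplySplit49.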

Historical Data Synchronization

Existing data from CashRepayInfo (full data) and CashRepayApply (current data) must be migrated to the split tables. The process involves:

Sync missing historical rows from CashRepayInfo to CashRepayApplySplit.

Sync remaining CashRepayApply rows to the split tables.
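The steps above imply a resumable, id-ordered backfill. A minimal sketch, modeled with in-memory maps so it can run standalone (a real job would read and write via JDBC; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;

// Hypothetical backfill sketch: copy rows from the source table in id order,
// batch by batch, returning the last synced id as a resume checkpoint.
public final class BackfillJob {

    public static long backfill(NavigableMap<Long, String> source,
                                Map<Long, String> target,
                                long startId, int batchSize) {
        long lastId = startId;
        boolean more = true;
        while (more) {
            // Equivalent of: select * from source where id > lastId order by id limit batchSize
            List<Map.Entry<Long, String>> batch = new ArrayList<>();
            for (Map.Entry<Long, String> row : source.tailMap(lastId, false).entrySet()) {
                batch.add(row);
                if (batch.size() == batchSize) break;
            }
            for (Map.Entry<Long, String> row : batch) {
                target.put(row.getKey(), row.getValue()); // insert into the split table
                lastId = row.getKey();
            }
            more = batch.size() == batchSize; // a short batch means we reached the end
        }
        return lastId;
    }
}
```

Checkpointing on the last copied id is what lets the same routine serve both the CashRepayInfo historical sync and the later CashRepayApply catch-up pass.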

Backend Query Refactor

Current admin systems join against a single table. After sharding, the plan is to keep the most recent two to three years of data in the original table so simple joins keep working, while older data is queried by memberId against the split tables.

DBA Synchronization Process

Because the DBA cannot directly sync all of the data, the development team provides a starting id, and the DBA syncs up to a deterministic boundary defined by uptTime < operationDate - 5 days. Any rows generated during the DBA's window are synchronized afterwards by the development team.

Three‑Write Synchronization

During the migration period, each write is performed against three tables in order: CashRepayApply, CashRepayInfo, and CashRepayApplySplit, with a dedicated transaction manager for the split database.
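The write ordering can be illustrated with a small Spring-free sketch. In the real system each write goes through a MyBatis mapper, and the split write runs under the dedicated transactionManagerSplit; the class and table model here are purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the triple-write ordering during migration:
// the legacy tables are written first and the split table last, so a
// split-side failure can be detected and retried without losing the
// primary write.
public final class TripleWrite {

    static final List<String> WRITE_ORDER =
            List.of("CashRepayApply", "CashRepayInfo", "CashRepayApplySplit");

    public static List<String> writeAll(Map<String, List<String>> tables, String row) {
        List<String> written = new ArrayList<>();
        for (String table : WRITE_ORDER) {
            tables.computeIfAbsent(table, k -> new ArrayList<>()).add(row);
            written.add(table);
        }
        return written;
    }
}
```

Writing the split table last, under its own transaction boundary, keeps the legacy write path authoritative until the cut-over.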

Sample Code

import java.util.List;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;

@Mapper
@Transactional(propagation = Propagation.REQUIRES_NEW, transactionManager = "transactionManagerSplit")
public interface CashRepayApplyMapper extends BaseMapper<CashRepayApply> {

    // Routed by ShardingSphere from the logical table to the matching CashRepayApplySplitXX table.
    @Select("select * from cash_repay_apply where member_id = #{memberId}")
    List<CashRepayApply> testSharding(@Param("memberId") String memberId);
}

Pitfalls and Solutions

Issues encountered include an incompatibility between SpringBoot 3.2.4 and ShardingSphere, a missing no-arg constructor in SnakeYAML's Representer, and Groovy expression errors. Workarounds involve adding the jaxb-impl dependency, providing a custom Representer, and using Long.parseLong(member_id) % 50 in the inline algorithm expression.

Demo Configuration

Key Maven dependencies, application.yml, and sharding-config.yaml files are provided, showing data source definitions, sharding rules, and inline algorithm expressions.
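As a rough shape of what such a sharding-config.yaml can look like for ShardingSphere 5.x, the sketch below shows a data source, the 50-table rule, and an inline algorithm. Data source details, naming, and the zero-padded variant of the article's Long.parseLong(member_id) % 50 expression are assumptions; mode and props sections are omitted for brevity.

```yaml
# Hypothetical sharding-config.yaml sketch; adapt names and credentials
# to the real environment.
dataSources:
  ds_split:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://localhost:3306/tcbiz_ins_cash_mas_split
    username: root
    password: root
rules:
- !SHARDING
  tables:
    CashRepayApplySplit:
      # 50 split tables: CashRepayApplySplit00..09 and 10..49
      actualDataNodes: ds_split.CashRepayApplySplit0${0..9},ds_split.CashRepayApplySplit${10..49}
      tableStrategy:
        standard:
          shardingColumn: member_id
          shardingAlgorithmName: member-id-inline
  shardingAlgorithms:
    member-id-inline:
      type: INLINE
      props:
        # Zero-padded variant of the article's Long.parseLong(member_id) % 50
        algorithm-expression: CashRepayApplySplit${String.format('%02d', Long.parseLong(member_id) % 50)}
```

Whatever expression is used, its output must match the actualDataNodes list exactly, which is why the padding convention matters.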

Conclusion

The article demonstrates that large‑scale table partitioning can be achieved with careful design, coordination across teams, and incremental migration, turning a complex requirement into a manageable implementation.

Tags: data migration, sharding, ShardingSphere, SpringBoot, database partitioning
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
