How We Transformed a Monolithic System into Scalable Microservices
The article details a three‑phase migration of a large monolithic Java application—splitting its database, refactoring code for multi‑datasource and custom transaction handling, and decomposing the application into nine independent services—while addressing data safety, routing, testing, and deployment risks to achieve a robust microservice architecture.
Background
The department maintained an old monolithic system with over 300 interfaces and 200 tables in a single database, causing single‑point failures, performance bottlenecks, limited extensibility, and high complexity.
Migration Plan
The transformation was divided into three stages:
Database splitting: vertically separate databases per business domain.
Application splitting: vertically separate services per business domain.
Data‑access permission consolidation: each application accesses only its own database, prohibiting cross‑database calls.
Database Splitting
The plan is to split the monolithic database into nine business‑specific databases, using master‑slave replication and binlog filtering to synchronize data.
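The binlog-filtering step can rely on MySQL's standard replication filter options; a hedged sketch (database and table names are assumptions, not from the source):

```ini
# my.cnf on the replica that will become the standalone order database:
# apply only binlog events for tables owned by the order domain.
[mysqld]
replicate-wild-do-table = monolith_db.order_%
```

Each of the nine replicas filters for its own domain's tables; once it is caught up and writes are cut over, it is promoted to the domain's primary.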
Code Refactoring Challenges
After splitting databases, existing services that accessed multiple tables could span several databases, leading to two main problems:
Data‑source selection: services used annotations to switch data sources, but a single service might now need to access multiple databases.
Transaction management: the existing @Transactional annotation caches a single connection, which cannot handle operations across multiple databases.
Refactoring points were identified:
6 interfaces write to multiple databases within the same transaction – require data‑source changes, transaction changes, and distributed‑transaction considerations.
50+ interfaces write to multiple databases but not within the same transaction – require data‑source and transaction changes only.
200+ interfaces read from or write to a single database – only data‑source changes needed.
8 cross‑database join queries – require rewriting the query logic, since a SQL join can no longer span the separated databases.
Refactoring Approach
An aspect tool was used to capture entry points and table call relationships, identifying interfaces that operate on tables belonging to different business databases.
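As a simplified illustration of that analysis (the real tool captured entry points and table call chains via AOP, not via this naive regex), one can map the tables a SQL statement touches to their owning business databases and flag statements that span more than one:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: given a table -> owning-database map, report which
// business databases a SQL statement touches. Interfaces whose statements
// touch more than one database are the refactoring candidates.
public class TableOwnershipScanner {

    // Naive table-name extraction; a production tool would use a SQL parser.
    private static final Pattern TABLE =
            Pattern.compile("(?i)(?:from|join|into|update)\\s+(\\w+)");

    private final Map<String, String> tableToDb;

    public TableOwnershipScanner(Map<String, String> tableToDb) {
        this.tableToDb = tableToDb;
    }

    /** Returns the set of business databases the statement touches. */
    public Set<String> databasesTouched(String sql) {
        Set<String> dbs = new TreeSet<>();
        Matcher m = TABLE.matcher(sql);
        while (m.find()) {
            String db = tableToDb.get(m.group(1).toLowerCase());
            if (db != null) {
                dbs.add(db);
            }
        }
        return dbs;
    }
}
```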
Distributed Transaction Decision
After application splitting and data‑access consolidation, distributed transactions are avoided; consistency is ensured through application logic.
Solution 1
Extract each multi‑database mapper into separate services and add appropriate data‑source and transaction annotations. This approach involves many code changes and high risk.
Solution 2
Move the data‑source annotation to the mapper level and implement a custom transaction manager to handle multiple connections. Issues addressed include:
Service‑level transaction annotations not acquiring the correct data source when the mapper switches.
MyBatis transaction caching preventing new connections for additional databases.
A custom transaction class resolves these problems.
Below is a brief explanation of the two components.
Multi‑DataSource Component
This component enables a single application to connect to multiple data sources. It initializes connections at startup and switches them via an aspect based on annotations.
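The routing key behind such annotation-based switching is typically held in a thread-local stack. Below is a minimal, self-contained sketch of that holder; the method names mirror the StackRoutingDataSource calls in the aspect, but the production class would extend Spring's AbstractRoutingDataSource and return this key from determineCurrentLookupKey():

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the per-thread routing-key holder behind a multi-datasource
// component. A stack (rather than a single slot) lets nested switches
// restore the outer data source on clear().
public class StackRoutingKeyHolder {

    private static final ThreadLocal<Deque<String>> KEYS =
            ThreadLocal.withInitial(ArrayDeque::new);

    /** Pushes a new routing key for the current thread. */
    public static void setTargetDs(String dsName) {
        KEYS.get().push(dsName);
    }

    /** The active routing key; null means "use the default data source". */
    public static String getCurrentTargetKey() {
        return KEYS.get().peek();
    }

    /** Pops the current key, restoring whatever was active before. */
    public static void clear() {
        Deque<String> stack = KEYS.get();
        if (!stack.isEmpty()) {
            stack.pop();
        }
    }
}
```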
<code>/**
 * Around advice that switches the data source before the target method runs.
 */
public Object switchDataSourceAroundAdvice(ProceedingJoinPoint pjp) throws Throwable {
    // Resolve the data-source name from the annotation on the target method/class
    String dsName = getDataSourceName(pjp);
    boolean dataSourceSwitched = false;
    if (StringUtils.isNotEmpty(dsName) && !StringUtils.equals(dsName, StackRoutingDataSource.getCurrentTargetKey())) {
        StackRoutingDataSource.setTargetDs(dsName);
        dataSourceSwitched = true;
    }
    try {
        return pjp.proceed();
    } finally {
        // Restore the previous data source even if the target method throws
        if (dataSourceSwitched) {
            StackRoutingDataSource.clear();
        }
    }
}
</code>
Custom Transaction Implementation
The default MyBatis transaction (SpringManagedTransaction) caches a single connection, which cannot handle cross‑database operations. The custom MultiDataSourceManagedTransaction maintains a map of connections per data source.
<code>public class MultiDataSourceManagedTransaction extends SpringManagedTransaction {

    private static final String DEFAULT_KEY = "default";

    private final DataSource dataSource;
    // One connection per data-source key; all are committed or rolled back together
    private final ConcurrentHashMap&lt;String, Connection&gt; CON_MAP = new ConcurrentHashMap&lt;&gt;();

    public MultiDataSourceManagedTransaction(DataSource dataSource) {
        super(dataSource);
        this.dataSource = dataSource;
    }

    @Override
    public Connection getConnection() throws SQLException {
        // The routing key set by the switching aspect; null means the default data source
        String dataSourceKey = StackRoutingDataSource.getCurrentTargetKey();
        if (dataSourceKey == null) {
            dataSourceKey = DEFAULT_KEY;
        }
        Connection connection = CON_MAP.get(dataSourceKey);
        if (connection == null) {
            connection = dataSource.getConnection();
            // Outside a Spring-managed transaction, let each statement auto-commit
            connection.setAutoCommit(!TransactionSynchronizationManager.isActualTransactionActive());
            CON_MAP.put(dataSourceKey, connection);
        }
        return connection;
    }

    @Override
    public void commit() throws SQLException {
        for (Connection conn : CON_MAP.values()) {
            if (!conn.isClosed() && !conn.getAutoCommit()) {
                conn.commit();
            }
        }
    }

    @Override
    public void rollback() throws SQLException {
        for (Connection conn : CON_MAP.values()) {
            if (!conn.isClosed() && !conn.getAutoCommit()) {
                conn.rollback();
            }
        }
    }

    @Override
    public void close() throws SQLException {
        for (Connection conn : CON_MAP.values()) {
            DataSourceUtils.releaseConnection(conn, this.dataSource);
        }
        CON_MAP.clear();
    }
}
</code>
Data Safety Measures
Cross‑database transactions (6 cases) – handled by code‑level consistency guarantees and thorough pre‑release testing.
Single‑database transactions – rely on the custom transaction implementation, which is fully tested.
Other single‑table operations – hundreds of mapper annotations added; runtime monitoring records any missing or incorrect data‑source annotations and alerts the team.
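The runtime monitoring for that last category can be sketched as an ownership check against the currently routed key; this is a simplified illustration (the real system hooks such a check into the data-access layer and its alerting pipeline):

```java
import java.util.Map;

// Hedged sketch of the runtime monitor: before a statement executes, compare
// the table's owning database with the data source the request was routed to,
// and produce an alert message when a mapper annotation is missing or wrong.
public class DataSourceAnnotationMonitor {

    private final Map<String, String> tableToDb; // table name -> owning database key

    public DataSourceAnnotationMonitor(Map<String, String> tableToDb) {
        this.tableToDb = tableToDb;
    }

    /** Returns null when routing is consistent, otherwise an alert message. */
    public String check(String tableName, String currentDsKey) {
        String owner = tableToDb.get(tableName);
        if (owner == null || owner.equals(currentDsKey)) {
            return null; // unknown table or correctly routed: nothing to report
        }
        return "table " + tableName + " belongs to " + owner
                + " but was routed to " + currentDsKey;
    }
}
```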
Application Splitting
The monolith posed systemic risks, high complexity, and testing environment conflicts. The same nine‑business‑domain division was applied to the application layer.
Splitting Options
Option 1: Build empty new services and manually move code – high risk, long cycle.
Option 2: Clone the old system into nine new services, route traffic, then gradually remove redundant code – chosen for faster rollout and lower risk.
Implementation Steps
Build new services by copying the old codebase and adjusting the system name.
Traffic routing filter – intercepts requests and forwards them to the appropriate new service based on a mapping table.
<code>@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain filterChain) throws ServletException, IOException {
    HttpServletRequest servletRequest = (HttpServletRequest) request;
    HttpServletResponse servletResponse = (HttpServletResponse) response;
    // 0 = routing off; 1 = route only test traffic carrying the systemRoute header; otherwise route all mapped traffic
    int systemRouteSwitch = configUtils.getInteger("system_route_switch", 1);
    if (systemRouteSwitch == 0) {
        filterChain.doFilter(request, response);
        return;
    }
    if (systemRouteSwitch == 1) {
        String systemRoute = servletRequest.getHeader("systemRoute");
        if (systemRoute == null || !systemRoute.equals("1")) {
            filterChain.doFilter(request, response);
            return;
        }
    }
    // Look up which new service now owns this URI
    String mapJson = configUtils.getString("route.map", "");
    Map&lt;String, String&gt; map = JSONObject.parseObject(mapJson, Map.class);
    String rootUrl = map.get(servletRequest.getRequestURI());
    if (StringUtils.isEmpty(rootUrl)) {
        filterChain.doFilter(request, response);
        return;
    }
    String targetURL = rootUrl + servletRequest.getRequestURI();
    if (servletRequest.getQueryString() != null) {
        targetURL += "?" + servletRequest.getQueryString();
    }
    // request forwarding to targetURL via servletResponse omitted for brevity
    // ...
}
</code>
Generate the interface‑to‑service mapping using a custom @TargetSystem annotation on controllers.
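The marker annotation itself can be minimal; a sketch (retention and targets are assumptions, since the source only shows its usage):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical definition of the @TargetSystem marker: a scanner reads its
// value from each annotated controller to build the URI-to-service routing map.
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface TargetSystem {
    /** Base URL of the new service that owns the annotated interface(s). */
    String value();
}
```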
<code>@TargetSystem(value = "http://order.demo.com")
@GetMapping("/order/info")
public ApiResponse orderInfo(String orderId) {
    return ApiResponse.success();
}
</code>
Identify test traffic via a special request header and route only that traffic to the new services.
Merge ongoing development code using Git remote repositories so that changes in the old system can be pushed to all new services.
Launch Risks and Mitigation
Duplicate JOB execution in old and new systems – mitigated by a dynamic switch that disables jobs in the new system until the old system’s jobs are turned off after QA approval.
Duplicate MQ consumption – the same dynamic switch controls MQ listeners.
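Both mitigations reduce to the same pattern: wrap every job entry point and MQ listener in a dynamically configured switch, so exactly one of the two systems processes at any time. A minimal sketch (the supplier wiring, e.g. reading a config key, is an assumption):

```java
import java.util.function.IntSupplier;

// Hedged sketch of the dynamic switch guarding jobs and MQ listeners during
// the cutover: the task runs only while the switch reads 1, and the old and
// new systems flip their switches in opposite order after QA approval.
public class JobSwitchGuard {

    private final IntSupplier switchValue; // e.g. () -> configUtils.getInteger("job_enabled", 0)

    public JobSwitchGuard(IntSupplier switchValue) {
        this.switchValue = switchValue;
    }

    /** Runs the task only when the switch is on; returns whether it ran. */
    public boolean runIfEnabled(Runnable task) {
        if (switchValue.getAsInt() != 1) {
            return false; // disabled: the other system is still processing
        }
        task.run();
        return true;
    }
}
```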
System Slimming
After generating the entry‑mapping map, each new service retains only the interfaces, jobs, and MQ code it owns; all other code is removed.
Benefits of the Split
More reasonable architecture and higher availability – failure of one service does not bring down the whole system.
Controlled complexity – each service has a single responsibility and clear logic.
Higher performance ceiling – each service can be individually optimized, e.g., adding caches.
Elimination of testing environment conflicts – parallel development no longer competes for the same environment.
Data‑Access Permission Refactoring
Previously, some business databases were accessed directly by other applications, breaking data ownership boundaries. The refactor introduced RPC interfaces to centralize data access.
Process
Statistically analyze DAO methods that access databases owned by other services; these calls must be wrapped behind RPC interfaces.
Generate RPC interfaces using templates (ftl) and a code‑generation tool that parses DAO files, extracts class names, methods, imports, and creates corresponding API, implementation, and RPC classes.
Gray‑release strategy: route traffic through the RPC layer with a switch, perform dual reads (DAO vs RPC) for verification, then fully switch to RPC once confidence is achieved.
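The dual-read step can be sketched as a small comparator that keeps the DAO path authoritative while recording mismatches against the new RPC path (class and method names are illustrative, not from the source):

```java
import java.util.Objects;
import java.util.function.Supplier;

// Hedged sketch of gray-release dual reads: serve the DAO result, invoke the
// new RPC path for comparison, and record whether the two results agreed.
// Only after a sustained run of matches is traffic switched fully to RPC.
public class DualReadVerifier<T> {

    private boolean lastMatched = true;

    public T read(Supplier<T> daoRead, Supplier<T> rpcRead) {
        T daoResult = daoRead.get();
        try {
            lastMatched = Objects.equals(daoResult, rpcRead.get());
        } catch (RuntimeException e) {
            lastMatched = false; // an RPC failure counts as a mismatch to investigate
        }
        return daoResult; // the DAO stays authoritative until the final switch
    }

    public boolean lastMatched() {
        return lastMatched;
    }
}
```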
Conclusion
The three‑step optimization—database splitting, application splitting, and data‑access permission consolidation—smoothly migrated the monolithic system to a microservice architecture, eliminating single‑point failures, improving performance, simplifying business logic, and enabling independent scaling and optimization of each service.
JD Cloud Developers
JD Cloud Developers is a JD Technology Group platform for technical sharing and exchange among AI, cloud computing, IoT, and related developers. It publishes JD product technical information, industry content, and tech‑event news.