
Optimizing Traditional Commercial Bank Architecture in the Era of Internet Finance

This article analyzes how rapid changes such as big data, cloud computing, and distributed architectures challenge traditional core banking systems, and proposes a multi-step modernization strategy (mainframe transaction off-loading, service-oriented design, open-platform development, and big-data integration) to improve performance, cost efficiency, and scalability.


Over decades of development, China's four major commercial banks have built core business systems that serve billions of customers and handle daily transaction volumes of up to 200 million. The rapid emergence of big data, cloud computing, social networks, O2O, and distributed architectures is now putting pressure on these mature IT infrastructures.

To reduce reliance on mainframes, the bank has already migrated many query transactions to an open platform, cutting mainframe CPU usage by about 70% while handling 60% of daily transactions; it plans to shift further less-critical workloads and to replicate static data onto the platform.
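The off-loading idea can be sketched as simple read/write routing: read-only queries whose data has been replicated to the open platform are served there, while everything else stays on the mainframe. This is an illustrative sketch only; the transaction codes, the `Backend` enum, and the `route` function are hypothetical names, not the bank's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Backend(Enum):
    MAINFRAME = "mainframe"          # authoritative system of record, costly MIPS
    OPEN_PLATFORM = "open_platform"  # x86 cluster holding replicated static data

@dataclass
class Transaction:
    code: str
    read_only: bool

# Hypothetical query codes whose data is replicated to the open platform
# and can tolerate slight replication lag (balance inquiry, statements, rates).
OFFLOADABLE = {"BAL_INQ", "STMT_QRY", "RATE_QRY"}

def route(txn: Transaction) -> Backend:
    """Send replicated, read-only queries to the open platform;
    all writes and non-replicated queries stay on the mainframe."""
    if txn.read_only and txn.code in OFFLOADABLE:
        return Backend.OPEN_PLATFORM
    return Backend.MAINFRAME
```

The point of the sketch is that off-loading is a routing decision made per transaction type, which is why the bank can move workloads incrementally rather than in one migration.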

Traditional banking emphasizes strict transaction consistency, whereas internet companies accept eventual consistency to achieve high concurrency; the article suggests decomposing mainframe transactions into atomic, reusable services, using an ESB, web-service gateways, and database replication to improve scalability.
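The trade-off between the two consistency models can be made concrete with a toy model (the class and method names here are illustrative, not from the article): a write is applied to the primary immediately, but reaches the read replica only when an asynchronous replication queue is drained, so replica reads may briefly be stale.

```python
from collections import deque

class ReplicatedBalance:
    """Toy model of eventual consistency: writes hit the primary at once
    and reach the read replica only after asynchronous replication."""

    def __init__(self):
        self.primary = 0
        self.replica = 0
        self._queue = deque()  # pending replication events

    def deposit(self, amount):
        self.primary += amount      # strongly consistent on the primary
        self._queue.append(amount)  # replication happens asynchronously

    def replicate(self):
        """Drain the replication queue; afterwards replica == primary."""
        while self._queue:
            self.replica += self._queue.popleft()
```

Between `deposit` and `replicate`, a query against the replica returns a stale balance; that window of staleness is exactly what internet architectures accept in exchange for concurrency, and what a bank must confine to non-critical query traffic.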

It outlines a four-point SOA-based roadmap: (1) atomic, service-oriented transaction design exposed through a web-service gateway (BWG); (2) a unified service catalog and service management; (3) cross-system service invocation, sharing, and composition; (4) a generic application service platform to replace ad-hoc point solutions.
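Points (1) through (3) of the roadmap can be sketched with a minimal in-memory catalog: atomic services are registered by name, invoked uniformly, and chained into composite services. The `ServiceCatalog` class and the example services are hypothetical stand-ins for the BWG and the unified catalog, not a real gateway API.

```python
class ServiceCatalog:
    """Minimal sketch of a unified service catalog: atomic services are
    registered by name, invoked uniformly, and composed into new services."""

    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        self._services[name] = fn

    def invoke(self, name, *args):
        return self._services[name](*args)

    def compose(self, *names):
        """Chain registered atomic services into one composite service."""
        def composite(value):
            for n in names:
                value = self.invoke(n, value)
            return value
        return composite
```

A caller composes, say, a fee-deduction service with a currency-conversion service without knowing which backend system hosts either one; that location transparency is what the catalog and gateway layers buy.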

The open platform should evolve from a front-office integration point into a core processing engine, leveraging cloud resources, distributed multi-active architectures coordinated by services such as ZooKeeper, and frameworks such as Dubbo for asynchronous messaging and high availability.
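The role a coordination service plays in a multi-active deployment can be illustrated with a toy registry (this is not the ZooKeeper or Dubbo API; real ZooKeeper uses ephemeral znodes and session timeouts, modeled here as heartbeats with a TTL): providers keep their registrations alive with heartbeats, and consumers only see instances whose heartbeat is recent, so a failed node drops out of rotation automatically.

```python
import time

class Registry:
    """Toy stand-in for a coordination service such as ZooKeeper:
    providers register via heartbeats; consumers see only instances
    whose last heartbeat is within the TTL."""

    TTL = 3.0  # seconds without a heartbeat before an instance is dropped

    def __init__(self):
        self._last_beat = {}

    def heartbeat(self, instance, now=None):
        self._last_beat[instance] = time.monotonic() if now is None else now

    def live_instances(self, now=None):
        now = time.monotonic() if now is None else now
        return [i for i, t in self._last_beat.items() if now - t < self.TTL]
```

In a multi-active setup, each site's providers heartbeat into the shared registry; when one site goes dark, its entries expire and traffic flows to the surviving sites with no manual failover step.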

In the big-data domain, the bank has established Hadoop, MPP, Spark/Storm, and SAS labs and identifies five functional areas: multi-source data acquisition, massive data storage, customer insight via machine learning, credit profiling, and real-time consumption analytics. Recommendations include consolidating the ODS database, creating a unified customer data mart, advancing SQL-on-Hadoop tools, and building a dedicated analyst team.
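Of the five areas, real-time consumption analytics is the one that maps most directly onto the Storm/Spark Streaming labs: each card-consumption event is folded into per-customer running aggregates as it arrives, rather than batch-computed overnight. The sketch below shows only that folding step in plain Python; the class name and event shape are illustrative, not part of any Storm or Spark API.

```python
from collections import defaultdict

class ConsumptionAggregator:
    """Minimal streaming aggregation in the spirit of a Storm/Spark
    Streaming job: fold each consumption event into per-customer
    running totals the moment it arrives."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def on_event(self, customer_id, amount):
        """Process one consumption event; O(1) per event."""
        self.totals[customer_id] += amount
        self.counts[customer_id] += 1

    def profile(self, customer_id):
        """Current running profile for a customer."""
        n = self.counts[customer_id]
        return {"total": self.totals[customer_id],
                "avg": self.totals[customer_id] / n if n else 0.0}
```

Because state is updated per event, the same running profile can feed credit profiling and customer-insight models without waiting for a batch window, which is the practical payoff of the streaming labs.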

Overall, the optimization strategy aligns IT architecture with the bank’s strategic goals, balancing innovation with stability while embracing service‑oriented, cloud‑native, and big‑data technologies.

Tags: distributed systems, big data, cloud computing, service-oriented architecture, banking architecture, core banking
Written by

Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
