Building High‑Performance, High‑Availability Container Networks for Banking in a Two‑Site‑Three‑Center Architecture
This article explains the challenges of container networking in banks, especially under a two‑site‑three‑center architecture, and provides practical guidance on using underlay and overlay approaches, Kube‑OVN solutions, and best‑practice recommendations to achieve high‑availability, high‑concurrency, and high‑performance cloud‑native networks.
When discussing cloud‑native infrastructure, container networking is inevitably a core topic. It is a foundational component of cloud‑native platforms and presents one of the biggest challenges when building such platforms, especially for banks.
Banking applications have unique requirements: they must interoperate with traditional network architectures; retain fixed IPs after containerization; manage multi-network-plane, multi-NIC, multi-tenant, and cross-cloud scenarios; and handle traffic monitoring, scheduling, and QoS. Today many banks run a "black-box" container network in which internal and external traffic are not integrated, which blocks communication across clouds and across multi-network clusters.
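As one concrete illustration of the fixed-IP requirement, Kube-OVN (the solution discussed later in this article) lets a workload pin its address through pod annotations. The sketch below is illustrative only: the subnet name, IP, and image are assumptions, not values from the article.

```yaml
# Illustrative pod with a fixed IP under Kube-OVN.
# The annotation pins the pod's address so legacy firewall
# and audit policies keyed on IP continue to work.
apiVersion: v1
kind: Pod
metadata:
  name: core-banking-app          # hypothetical name
  annotations:
    ovn.kubernetes.io/logical_switch: legacy-subnet   # target Kube-OVN subnet (assumed)
    ovn.kubernetes.io/ip_address: 10.20.0.15          # fixed IP (assumed)
spec:
  containers:
  - name: app
    image: example.com/core-banking:1.0   # placeholder image
```

With a fixed IP, the pod survives rescheduling without invalidating IP-based policies in the surrounding traditional network.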
In a two‑site‑three‑center architecture, improving container‑network availability is critical, and the design depends on the underlying IaaS capabilities. If the container platform runs on traditional virtualization without SDN, host‑network mode allows Pods to communicate directly with VMs and physical machines. When the host nodes are virtual machines with SDN and CNI enabled, communication with IaaS VMs works out of the box, but interaction with legacy networks requires NAT or EIP configuration.
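The host‑network mode mentioned above is plain Kubernetes: the pod shares the node's network namespace, so VMs and physical machines reach it at the node's own IP. A minimal sketch (name and image are illustrative):

```yaml
# Illustrative host-network pod: no overlay involved, the container
# binds directly on the node's interfaces, so legacy systems can
# reach it at <node IP>:8080 with no NAT or EIP.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-bridge-app          # hypothetical name
spec:
  hostNetwork: true                # share the node's network namespace
  containers:
  - name: app
    image: example.com/legacy-bridge:1.0   # placeholder image
    ports:
    - containerPort: 8080
```

The trade-off is port conflicts and lost network isolation, which is why this mode suits only the no-SDN scenario described above.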
The biggest obstacle is the loss of stable, visible container IPs: exposing services via host IP + port or a domain name hides the real container address, breaking the flat‑IP access that legacy systems expect and making IP‑based network management difficult.
To address this, banks can adopt the Kube‑OVN underlay solution. By leveraging OpenStack OVS layer‑2 switching, Kube‑OVN flattens container workloads and legacy workloads onto a single layer‑2 plane, enabling direct IP communication and preserving existing network management policies.
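In Kube‑OVN, the underlay approach described above is expressed through a few custom resources that bind a container subnet to a physical NIC and VLAN. The sketch below is a minimal illustration under assumed names, interfaces, and CIDRs; field names follow recent Kube‑OVN releases and may differ in older versions:

```yaml
# Illustrative Kube-OVN underlay wiring (names/values are assumptions):
# a ProviderNetwork maps a physical NIC, a Vlan tags traffic on it,
# and a Subnet places pods on that VLAN in the legacy address space.
apiVersion: kubeovn.io/v1
kind: ProviderNetwork
metadata:
  name: provider-net
spec:
  defaultInterface: eth1           # physical NIC bridged to the legacy network (assumed)
---
apiVersion: kubeovn.io/v1
kind: Vlan
metadata:
  name: vlan100
spec:
  id: 100                          # VLAN tag used by the legacy layer-2 plane (assumed)
  provider: provider-net
---
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: legacy-subnet
spec:
  protocol: IPv4
  cidrBlock: 10.20.0.0/24          # a slice of the existing legacy subnet (assumed)
  gateway: 10.20.0.1               # the physical gateway, not a virtual one (assumed)
  vlan: vlan100
```

Pods scheduled into this subnet receive addresses from the legacy CIDR and talk to VMs and physical machines at layer 2, so existing firewall and audit policies apply unchanged.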
For high‑concurrency scenarios, banks should consider both underlay and overlay networks. Overlay provides virtual IPs and aligns with platform‑centric thinking, while underlay reuses the physical network and matches traditional line‑based management. Depending on the cluster, a mix of both (including multi‑NIC deployments) can satisfy diverse requirements.
Operational recommendations include: following IaaS networking principles such as three‑ or four‑network separation; using unified ingress/egress points for traffic control; applying NetworkPolicy for per‑application security; and tailoring cluster networking—using OVS‑based networks for performance‑critical clusters and CNI‑based VPC‑isolated networks for scalability.
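The per‑application NetworkPolicy recommendation above is standard Kubernetes and works with Kube‑OVN's policy enforcement. A minimal sketch, with namespace, labels, and port as illustrative assumptions:

```yaml
# Illustrative per-application policy: only frontend pods may reach
# the payments API, and only on its service port. All other ingress
# to the selected pods is denied once the policy applies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-frontend    # hypothetical name
  namespace: payments              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments-api            # pods the policy protects (assumed label)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # allowed caller (assumed label)
    ports:
    - protocol: TCP
      port: 8443                   # assumed service port
```

Scoping one such policy per application gives the fine-grained, auditable segmentation that banking security reviews typically require.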
Kube‑OVN has demonstrated performance comparable to Calico, with support for OVS, DPDK, and hardware acceleration, meeting the stringent performance testing required by financial institutions. It also integrates security and regulatory controls needed in banking environments.
In summary, by carefully planning underlay/overlay strategies, leveraging Kube‑OVN, and aligning network design with both cloud‑native and traditional banking policies, banks can achieve high‑availability, high‑concurrency, and high‑performance container networks for their cloud‑native transformation.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.