
Key Network Requirements for Dual‑Active Data Center Storage Deployment

This article explains the evolution of storage, the shift toward cloud‑native storage solutions, and details the network architecture, latency considerations, and operational challenges involved in building dual‑active data center storage systems for high‑availability enterprise environments.

Architects' Tech Alliance

Another "combined" holiday (National Day overlapping with Mid‑Autumn Festival) is here, and the Spring Festival break will be only eight days. First of all, happy holidays and safe travels to everyone. Over the break I plan to share some worthwhile storage content (perhaps the last few articles I will write about storage boxes), summarize the mainstream vendors' dual‑active storage solutions, and point to related learning resources.

Storage is a technology that is both old and new. Its evolution from integrated to separated to converged architectures neatly illustrates how storage services have moved from inside the server to outside it. The biggest change from DAS to today's SDS/HCI is the gain in reliability and efficiency. Much like a universe born from a pre‑big‑bang singularity that will eventually collapse back into a black hole, what matters in both stories is the process in between.

Considering the acquisition of EMC, the folding of HDS into its parent, and the recurring acquisition rumors around NetApp, standalone storage arrays are no longer a business on their own. Data increasingly lives on VMs, in containers, and in public and hybrid clouds. Future storage therefore has to be designed from the cloud platform's point of view to serve cloud‑native applications, with a focus on data mobility across clouds, sharing across vendors, intelligent operations, service‑level compliance, and Storage‑as‑a‑Service. Customers will favor new commercial models such as Pay‑U and Pay‑G, and sooner or later every attribute of cloud computing will be applied to storage.

Traditional storage arrays, long since split off from the mainframe, still carry plenty of memories and stories. In this article we focus on the network requirements of data‑center dual‑active deployments.

Building a dual‑active data center requires close cooperation among network systems, storage systems, compute resources, and application systems. User business systems run simultaneously in two data centers, involving front‑end global load balancing, server‑side load balancing, server‑cluster HA technologies, as well as back‑end database and storage system technologies, to provide continuous services.

When one data center has a problem, the other continues to provide services. Storage dual‑active is only one part of an overall dual‑active data center, and the deployment details differ for physical and virtual machines; see the articles How to Build Dual‑Active Data Centers and How to Deploy Application Dual‑Active.

Besides storage dual‑active, the most critical technical factor for dual‑active data centers is the network interconnection between the sites, which includes:

Network topology: Direct bare‑fiber or DWDM connections; intra‑city sites interconnect over the metro core network; inter‑city data centers connect over the backbone; the storage layers of the two DCs are linked via DWDM or bare fiber.

Cluster heartbeat: Requires a Layer‑2 network.

VM vMotion: Allows live migration of VMs across data centers while maintaining business continuity.

Broadcast isolation: Broadcast traffic must be isolated between data centers.

Because VMs need to migrate freely between the two data centers, both sites generally have to sit inside one large Layer‑2 domain. Common ways to build this interconnect include:

Direct fiber connection: suited to short distances; the two sites are cabled together much as in a traditional single‑site network.

Large Layer‑2 interconnect: multiple solutions exist; reply with the keyword "Large Layer‑2 Network" on the public account for more detail.

Software‑defined redirection decoupled from the network devices (essentially a VPN): as long as IP reachability exists, traffic can be re‑encapsulated to form a unified isolation domain, removing the VLAN count limit.
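To make the overlay option above concrete, here is a minimal sketch assuming a VXLAN‑style encapsulation (VXLAN is only one common way to implement the re‑encapsulation described; the field sizes and header list below come from the VXLAN specification, not from this article). It shows why such an overlay removes the VLAN count limit and what per‑frame overhead the extra encapsulation costs.

```python
# Why a re-encapsulating overlay removes the VLAN count limit, using VXLAN as
# an assumed example of the technique (other encapsulations behave similarly).

VLAN_ID_BITS = 12   # 802.1Q tag: at most 4094 usable segments
VNI_BITS = 24       # VXLAN Network Identifier: roughly 16 million segments

# Outer headers added to every frame tunneled between the data centers over IP:
OVERHEAD_BYTES = {
    "outer Ethernet": 14,
    "outer IPv4": 20,
    "outer UDP": 8,
    "VXLAN header": 8,
}

print(f"802.1Q segments: {2 ** VLAN_ID_BITS - 2}")                   # 4094
print(f"VXLAN segments:  {2 ** VNI_BITS}")                           # 16777216
print(f"Per-frame overhead: {sum(OVERHEAD_BYTES.values())} bytes")   # 50 bytes
```

The roughly 50 bytes of extra headers are also why the MTU on the inter‑site links usually has to be raised when this option is chosen.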

Data‑center interconnects typically run over fiber; for links longer than about 25 km, DWDM equipment is added to increase the bandwidth available on the link.

Data replication is handled by the storage layer, so the latency of the storage dual‑active network must be watched closely. Dual‑active sites are typically kept within about 100 km of each other; some vendors claim support for up to 500 km, with latency growing by roughly 1 ms per 100 km. In practice the hard limit is not distance itself but network latency, error rate, and how much extra round‑trip time the application can tolerate.
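The "1 ms per 100 km" rule of thumb can be sanity‑checked with a back‑of‑the‑envelope model. The sketch below is only an estimate under stated assumptions: light in fiber propagates at roughly two thirds of c, a synchronous dual‑active write needs at least one full round trip before it is acknowledged, and the fixed delay of the optical and storage gear is an illustrative placeholder.

```python
# Rough estimate of the write latency added by inter-site distance in a
# synchronous (dual-active) replication setup. All constants are assumptions
# for illustration, not vendor specifications.

C_VACUUM_KM_PER_MS = 300.0      # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 2.0 / 3.0        # typical propagation speed in optical fiber

def replication_rtt_ms(distance_km: float, device_delay_ms: float = 0.2) -> float:
    """Estimated round trip for one synchronous write acknowledgment."""
    one_way_ms = distance_km / (C_VACUUM_KM_PER_MS * FIBER_FACTOR)
    return 2 * one_way_ms + device_delay_ms   # there and back, plus fixed gear delay

if __name__ == "__main__":
    for km in (25, 100, 300, 500):
        print(f"{km:>4} km  ->  ~{replication_rtt_ms(km):.2f} ms added per write")
```

On this model a 100 km link already adds on the order of 1 ms to every write, which is why the application's tolerance for extra round‑trip time, rather than raw distance, ends up being the practical limit.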

Besides the network, other factors to consider in dual‑active data centers include:

Split‑brain risk: preventing split‑brain is critical, especially for storage dual‑active, since a split can leave I/O hung for a long time (a common countermeasure, third‑site quorum arbitration, is sketched after this list).

Performance impact: Dual writes double the data traffic; link performance directly affects overall system performance.

Data consistency risk: Replication occurs in cache for speed; if a crash happens, cached data may be lost, leading to inconsistency.

Inter‑site communication uncertainty: Unstable links and unpredictable I/O latency can cause severe disruptions, including database hangs and frequent cluster arbitration.

Replication logic errors: Block‑level replication cannot detect corrupted blocks, potentially propagating bad data to the disaster‑recovery site.

Storage network fault propagation: A SAN fault in one site can cascade across the combined SAN fabric, affecting the entire storage network.

Cluster arbitration consistency: Inconsistent arbitration results between dual‑active storage and database clusters can cause catastrophic service impact.

Multipath control strategy: Vendor‑specific multipathing may cause compatibility issues; many dual‑active solutions rely on OS‑provided multipathing to mitigate this.
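As a concrete illustration of the split‑brain and arbitration points above, here is a minimal sketch of third‑site quorum arbitration. The site names, the witness, and the rule that a statically preferred site wins a tie are illustrative assumptions, not any particular vendor's algorithm.

```python
# Minimal sketch of third-site quorum arbitration to avoid split-brain when the
# inter-site link fails. Illustrative only; real storage and database clusters
# each implement their own variant of this logic.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    sees_witness: bool       # heartbeat to the third-site quorum witness still alive?
    preferred: bool = False  # static tie-breaker when both sites still see the witness

def arbitrate(site_a: Site, site_b: Site) -> str:
    """Return the site allowed to keep serving I/O, or 'none' to suspend I/O."""
    alive = [s for s in (site_a, site_b) if s.sees_witness]
    if len(alive) == 2:
        # Inter-site link is down but both sites reach the witness:
        # only the statically preferred site keeps writing, to avoid dual writers.
        return next((s.name for s in alive if s.preferred), alive[0].name)
    if len(alive) == 1:
        return alive[0].name    # the single site holding quorum survives
    return "none"               # no quorum anywhere: suspend I/O rather than diverge

if __name__ == "__main__":
    a = Site("DC-A", sees_witness=True, preferred=True)
    b = Site("DC-B", sees_witness=False)
    print(f"I/O continues at: {arbitrate(a, b)}")   # -> DC-A
```

Real deployments run this kind of logic at several layers at once (storage array, database cluster, application cluster), which is exactly why the arbitration‑consistency point matters: if the storage layer and the database layer pick different winners, the surviving database can find its storage frozen.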

Architects' Tech Alliance: focused on technology architecture and industry solutions, providing a professional exchange platform and sharing frontline practice and trend insights across cloud computing, big data, hyper‑convergence, software‑defined networking, and data protection. Follow the account to download the original technical materials for free.

<Related Reading>

In‑Depth Analysis of the SVC Stretch Cluster Dual‑Active Solution

In‑Depth Analysis of the Clustered MetroCluster Dual‑Active Solution

In‑Depth Analysis of the PowerHA/SVC HyperSwap Dual‑Active Solution

In‑Depth Analysis of the HAM/GAD Dual‑Active Solution

In‑Depth Analysis of the VIS/HyperMetro Dual‑Active Solution

In‑Depth Analysis of the VPLEX Dual‑Active Data Center Storage Solution

In‑Depth Analysis of the SRDF/Metro and MetroSync Dual‑Active Solutions

In‑Depth Analysis of the HPE, Dell, and Fujitsu Dual‑Active Solutions

