Evolution of Traditional Enterprise Storage Architecture: From Multi‑Controller Systems to Modern VMAX
The article traces the historical evolution of traditional enterprise storage—from early multi‑controller platforms like HDS USP and EMC DMX, through HDS Hi‑Star VSP, to the engine‑based VMAX architecture—highlighting architectural innovations, scalability limits, and the shift toward cloud‑centric storage services.
The development history of traditional storage has been analyzed many times before, and terms such as private cloud, storage-as-a-service, and intelligent operations now dominate attention. Revisiting that history remains worthwhile: the heyday of the hardware storage array still offers lessons for today.
Architecture evolution runs through the whole storage timeline. Enterprise storage originated with high-end systems, so we start with the multi-controller architecture. The earliest multi-controller product was HDS's Universal Storage Platform (USP/USP V): HDS launched a fully switched architecture in 2000, and three years later EMC introduced its direct-matrix DMX storage.
This is the well‑known three‑layer storage architecture: front‑end, cache, and back‑end, a tightly coupled, highly reliable multi‑controller design. Below we examine DMX’s main characteristics.
DMX's Front-end Directors (FED), Global Memory Directors (GMD) and Back-end Directors (BED) are fully interconnected and shared through direct point-to-point links, eliminating the internal switches that HDS had already adopted. Since its birth in 2003, DMX evolved through four generations to DMX-4. In DMX, each GMD connects directly to every FED and BED, so all memory forms a single global cache.
However, multiple directors accessing the shared memory can conflict, and because the front end and back end are linked across one large physical backplane, the number of FED, BED and GMD boards is capped, preventing further scale-out.
Architecture continued to evolve: at the end of 2010, HDS upgraded its high-end storage to the fifth generation, built on the fully switched Hi-Star network, and introduced the Virtual Storage Platform (VSP). VSP supports scale-out by interconnecting two control chassis into a single system over a switch-based, fully interconnected backplane, a design that remains in use today.
In VSP, the Front‑end units (FED), Back‑end units (BED), Control units (VSD) and Cache units (DCA) are fully redundant within each chassis and interconnected through switch units (GSW) forming the Hi‑Star Network; inter‑chassis connections are also realized via GSW. Consequently, components such as FED, BED and DCA achieve global sharing across control chassis.
Further evolution brought VMAX, which introduced the concept of an engine. Each engine contains two controllers, and each controller integrates FE, BE and memory (global memory up to 128 GB). Controllers interconnect via a RapidIO switch called MIBE (Matrix Interface Board Enclosure), dramatically enhancing scale‑out capability (up to 8 engines and 16 controllers).
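As a rough, illustrative model (our own naming, not any vendor API), the engine-based scale-out described above can be sketched in Python: each engine houses two controllers, and the system grows to eight engines, i.e. sixteen controllers, over an implicit matrix fabric.

```python
# Toy model of the engine-based scale-out described above.
# Names (Engine, Controller) and the implicit fabric are illustrative only.
from dataclasses import dataclass, field
from typing import List

MAX_ENGINES = 8  # up to 8 engines, hence 16 controllers

@dataclass
class Controller:
    engine_id: int
    slot: int                 # 0 or 1 within the engine
    global_mem_gb: int = 128  # per-controller global memory, per the figure above

@dataclass
class Engine:
    engine_id: int
    controllers: List[Controller] = field(default_factory=list)

    def __post_init__(self) -> None:
        # each engine integrates two controllers, each with FE, BE and memory
        self.controllers = [Controller(self.engine_id, s) for s in range(2)]

def build_array(n_engines: int) -> List[Engine]:
    """Assemble the array; the RapidIO switch (MIBE) is implicit here:
    every controller is assumed reachable from every other one."""
    if not 1 <= n_engines <= MAX_ENGINES:
        raise ValueError(f"engine count must be between 1 and {MAX_ENGINES}")
    return [Engine(i) for i in range(n_engines)]

array = build_array(MAX_ENGINES)
print(sum(len(e.controllers) for e in array))  # 16 at full scale-out
```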
VMAX's controller memory is divided into three parts, global memory, temporary store-and-forward memory, and control store, as illustrated in the sketch after this list.
Global memory stores data that is shared across the system, such as user data and global variables.
Temporary store‑and‑forward memory caches data exchanged between the controller and the host or backend disks, acting as an isolation layer so that global memory does not interact directly with hosts or disks.
Control store holds private information on the controller, such as software packages.
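A minimal sketch of the three partitions and of the isolation role played by the store-and-forward area (class and method names are ours, purely for illustration): a host write is staged first and only then promoted into global memory, so global memory never exchanges data with hosts or disks directly.

```python
# Illustrative model of the three memory partitions described above.
class ControllerMemory:
    def __init__(self) -> None:
        self.global_memory = {}      # system-wide shared data: user data, global state
        self.store_and_forward = {}  # staging area between host/disks and global memory
        self.control_store = {}      # controller-private data, e.g. software packages

    def host_write(self, block_id: int, data: bytes) -> None:
        # stage the incoming data first: this is the isolation layer
        self.store_and_forward[block_id] = data
        # then promote it into global memory, where it becomes globally visible
        self.global_memory[block_id] = self.store_and_forward.pop(block_id)

mem = ControllerMemory()
mem.host_write(42, b"payload")
print(42 in mem.global_memory)  # True: data reached global memory via staging
```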
For each VMAX controller, access to local and remote global memory follows the DMX model; remote accesses travel over RapidIO channels, reducing contention on global memory.
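To make the local/remote distinction concrete, here is a hedged sketch: a slot owned by the local controller is read directly, while a slot owned elsewhere requires a hop over the RapidIO fabric. The round-robin ownership mapping is an assumption of ours, not documented VMAX behavior.

```python
# Sketch of local vs. remote global-memory access across 16 controllers.
N_CONTROLLERS = 16

def owner_of(slot: int) -> int:
    """Assumed placement: global-memory slots striped round-robin."""
    return slot % N_CONTROLLERS

def access(local_ctrl: int, slot: int) -> str:
    if owner_of(slot) == local_ctrl:
        return "local access to global memory"
    # remote: cross the RapidIO fabric to the owning controller, which
    # spreads traffic and reduces contention on any single memory board
    return f"RapidIO hop to controller {owner_of(slot)}"

print(access(0, 16))  # local: 16 % 16 == 0
print(access(0, 5))   # remote: hop to controller 5
```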
VMAX's loosely coupled Virtual Matrix Architecture (the latest VMAX generation carries only minor updates) consolidates the CPUs that previously sat on dedicated FED, BED and GMD boards onto the controllers. On the back end, each disk enclosure is shared by the two controllers of its engine, markedly improving extensibility.
The overall evolution of multi-controller storage can be summarized as follows: loosely coupled single-controller nodes are typical of distributed share-nothing architectures built for large capacity and easy scale-out, as in IBM's XIV (limited to 15 nodes), while the DS8000 series (now DS8888) shows that a traditional dual-cluster design still delivers high reliability and performance.
Product tiers (entry‑level, mid‑range, high‑end) lack a unified industry definition; vendors differentiate based on architecture, front‑end active‑active capability, cache mechanisms, back‑end interconnect, and reliability. Although specifications such as snapshot counts are often used in marketing battles, more substantive factors include support for mainframes, QoS, continuous mirroring, data‑plane separation, and active‑active operation, which directly impact data safety and user experience.
With the advent of the cloud era, customers care less about the underlying storage box or vendor and more about SLA fulfillment, portal usability, and resource provisioning. The underlying devices can be commodity servers; performance can scale out with web-scale techniques, reliability can come from active-active cloud hosts, and networking and security can be software-defined, all of which feeds into intelligent operations.
In summary, the long history of traditional storage has left a wealth of architectural knowledge worth revisiting; the accompanying diagrams illustrate key milestones.
For more in‑depth reading, click the original link to obtain the compiled e‑book “Complete Analysis of Traditional Enterprise Storage”.