
Understanding NVMe over Fabrics (NVMe‑oF) with InfiniBand and NetApp EF570/E5700 Architecture

This article explains the fundamentals of NVMe‑oF, the role of InfiniBand components such as HCAs, switches, subnet managers and gateways, and why NetApp chose InfiniBand for its EF570/E5700 storage systems, highlighting performance benefits and protocol coexistence.


NVMe has become a competitive focus for storage vendors, especially in all-flash arrays, with products such as IBM FlashSystem 9110/9150, HPE Nimble AF Series, Dell EMC PowerMax, and NetApp AFF A800.

Typically these solutions add an NVMe driver to the storage back end while keeping the host interface SCSI. NetApp E‑Series takes the opposite approach: the front end uses NVMe‑oF over InfiniBand, while the back end remains SCSI-based, using SAS drives.

InfiniBand architecture consists of Host Channel Adapters (HCAs), switches, Subnet Managers (SMs), and gateways. Every node attaches to the fabric through an HCA, switches forward packets between ports within a subnet, and the SM discovers the fabric and configures forwarding.
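The Subnet Manager's core job can be sketched as follows: discover every port on the fabric and hand each one a unique Local Identifier (LID) that switches use to forward packets. This is a toy model; node names and the sequential-assignment policy are illustrative and not how a real SM such as OpenSM actually performs discovery.

```python
# Toy sketch of a Subnet Manager assigning LIDs to discovered ports.
# Real SMs sweep the fabric over management datagrams; here discovery
# is just a list of hypothetical node names.

def assign_lids(nodes, base_lid=1):
    """Map each discovered node to a unique LID within the subnet."""
    return {node: base_lid + i for i, node in enumerate(nodes)}

fabric = ["host-hca-1", "host-hca-2", "storage-hca-1"]
lid_table = assign_lids(fabric)
for node, lid in lid_table.items():
    print(f"{node}: LID {lid}")
```

Because every LID is unique within the subnet, switches can forward on LID alone, which is part of why a single subnet scales to tens of thousands of nodes without routing.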

Routers forward packets between subnets when needed, but InfiniBand can efficiently handle up to 40,000 nodes without routers; only extremely large deployments might require InfiniBand routers such as Mellanox IBTM switches.

Gateways act as bridges between InfiniBand and other protocols (e.g., Ethernet or storage interfaces), enabling mixed‑protocol environments through technologies like Mellanox Virtual Protocol Interconnect (VPI).

NVMe is the industry‑standard PCIe SSD interface, offering lower latency and higher efficiency than SCSI‑based protocols. NVMe‑oF extends these benefits over fabrics (Ethernet, RDMA, Fibre Channel), providing scalable, low‑overhead storage connectivity.
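One concrete source of NVMe's efficiency advantage is command parallelism. The arithmetic below uses the widely cited NVMe spec limits (up to 65,535 I/O queues, each up to 64K commands deep) against a typical single-queue SCSI path; the SCSI queue depth of 256 is an illustrative ballpark, and these are theoretical upper bounds, not measured numbers.

```python
# Back-of-the-envelope comparison of outstanding-command limits,
# NVMe multi-queue vs a legacy single-queue SCSI path.

NVME_MAX_QUEUES = 65_535    # NVMe spec: max I/O queue pairs
NVME_QUEUE_DEPTH = 65_536   # NVMe spec: max entries per queue
SCSI_QUEUE_DEPTH = 256      # typical single-queue depth (illustrative)

nvme_outstanding = NVME_MAX_QUEUES * NVME_QUEUE_DEPTH
print(f"NVMe theoretical outstanding commands: {nvme_outstanding:,}")
print(f"Legacy single-queue SCSI: {SCSI_QUEUE_DEPTH:,}")
```

The multi-queue design also lets each CPU core own its own submission/completion queue pair, avoiding the lock contention that a single shared SCSI queue incurs.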

NetApp EF570/E5700 uses NVMe‑oF over InfiniBand on the front end while retaining SAS drives on the back end, delivering over 1M IOPS at microsecond-scale latency and supporting multiple high‑speed host interfaces (32 Gb FC, 25 Gb iSCSI, 100 Gb InfiniBand, 12 Gb SAS, 100 Gb NVMe‑oF).
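To put the headline figure in context, the arithmetic below works out the front-end bandwidth implied by 1M IOPS. The 4 KiB I/O size is an assumption for illustration; the source does not state the block size behind the figure.

```python
# Rough bandwidth arithmetic for the 1M IOPS headline number,
# assuming 4 KiB per I/O (an assumption, not stated by NetApp here).

iops = 1_000_000
block_size = 4 * 1024           # bytes per I/O, assumed
throughput = iops * block_size  # bytes per second
print(f"~{throughput / 2**30:.1f} GiB/s at 4 KiB per I/O")
```

A single 100 Gb/s InfiniBand EDR link carries 12.5 GB/s raw, so one link can absorb this load with headroom, which helps explain the choice of 100 Gb host interfaces.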

The EF570/E5700 platform supports SLES 12 SP3 and RHEL 7.4 as host operating systems, Mellanox FDR/EDR HCAs, Mellanox InfiniBand switches, and both switched and direct‑connect topologies.

NetApp chose InfiniBand for NVMe‑oF because it provides built‑in RDMA, already supports other InfiniBand‑based protocols (iSER, SRP), allows coexistence of these protocols on the same hardware and HCA ports, and can negotiate lower speeds to accommodate legacy devices.

In summary, NVMe‑oF over InfiniBand, iSER, and SRP can coexist on the same network and even share the same HCA ports, enabling existing iSER/SRP customers to connect to EF570/E5700 via NVMe‑oF without additional infrastructure changes.
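From the host side, attaching to an NVMe‑oF target over an RDMA transport is typically done with the standard `nvme-cli` tool. The sketch below only builds and prints the command rather than executing it; the target address and NQN are placeholders, and 4420 is the IANA-registered NVMe‑oF port.

```python
# Builds an nvme-cli connect command for an RDMA (e.g. InfiniBand)
# transport. Address and NQN below are illustrative placeholders.

def nvme_connect_cmd(traddr, nqn, trsvcid=4420, transport="rdma"):
    """Return the nvme-cli command a host would run to attach a namespace."""
    return (f"nvme connect -t {transport} -a {traddr} "
            f"-s {trsvcid} -n {nqn}")

print(nvme_connect_cmd("192.168.1.10", "nqn.example:ef570-target"))
```

Because iSER and SRP initiators use the same HCA ports, a host already cabled for those protocols would need only this software-level attach step, not new infrastructure.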

Tags: storage architecture, RDMA, data center, InfiniBand, NVMe-oF, SAS
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
