Understanding OpenFabrics Enterprise Distribution (OFED) and InfiniBand Software Architecture
This article provides a comprehensive overview of OpenFabrics Enterprise Distribution (OFED), its history, component stack, and the layered InfiniBand software architecture, explaining how various protocols such as IPoIB, SDP, and iSER enable high‑performance, low‑latency networking for Linux and Windows applications.
OpenFabrics Enterprise Distribution (OFED) is an open‑source collection of drivers, kernel code, middleware, and user‑level interfaces that support InfiniBand fabrics. The first version was released in 2005 by the OpenFabrics Alliance (OFA), and Mellanox OFED provides Linux and Windows (WinOF) implementations with diagnostic and performance tools for monitoring bandwidth and congestion.
The OpenFabrics Alliance, founded in 2004 as the OpenIB Alliance, develops and promotes a vendor‑independent, Linux‑based InfiniBand software stack, later extending support to Windows, iWARP, RoCE, and other high‑performance networks.
Mellanox OFED bundles drivers, middleware, user interfaces, and standard protocols such as IPoIB, SDP, SRP, iSER, RDS, and DAPL, supporting MPI, Lustre/NFS over RDMA, and exposing the Verbs programming interface. It is distributed as an ISO image containing source code, binary RPMs, firmware, utilities, installation scripts, and documentation.
From a developer’s perspective, the InfiniBand software architecture is designed so that existing IP/TCP socket applications run unchanged while gaining InfiniBand performance. The stack consists of three logical layers: the HCA (Host Channel Adapter) driver at the bottom, the core InfiniBand module in the middle, and upper‑layer protocols on top. The core module provides services such as the Communication Manager (CM), the Subnet Administrator (SA) client, the Subnet Management Agent (SMA), the Performance Management Agent (PMA), MAD services, the General Service Interface (GSI), queue pair (QP) management, the Subnet Management Interface (SMI), and the Verbs API; it also tracks resources so they can be cleaned up reliably.
High‑level protocols such as IP over InfiniBand (IPoIB) let any IP‑based application benefit from InfiniBand bandwidth and low latency without code changes; Linux kernels 2.6.11 and later include IPoIB support. Developers can also use the Sockets Direct Protocol (SDP) or other socket‑based APIs to exploit advanced InfiniBand features while keeping the familiar sockets interface. InfiniBand also carries SCSI, iSCSI, and NFS traffic via protocols such as iSER, which run over an additional abstraction layer (CMA) that provides transparent access to RDMA.
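To make the "no code changes" point concrete, here is a minimal sketch of an ordinary TCP echo exchange using nothing but standard Berkeley sockets. Over IPoIB, exactly this code would work as-is; the only difference is that the server would bind to the IP address configured on the InfiniBand interface (e.g. an address on `ib0` — the interface name and any addresses here are illustrative assumptions, not from the original article). Loopback stands in for that address so the sketch is self-contained:

```python
import socket
import threading

# Plain, unmodified TCP sockets. With IPoIB, substituting the address
# assigned to the IB interface (e.g. something on ib0) for 127.0.0.1 is
# the only change needed -- the application code is untouched.
HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def echo_once(listener):
    # Accept one connection and echo the payload back unchanged.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

# The client side is likewise plain TCP; over IPoIB it would simply
# connect to the server's IB-interface address instead of loopback.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"hello over IPoIB")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())
```

This transparency is the design point of IPoIB: the kernel presents the fabric as a regular network interface. SDP took the idea a step further on Linux by transparently redirecting such socket traffic onto RDMA (historically via a preloaded interposer library), again without touching application code.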
The InfiniBand stack is supported on major Linux distributions, Windows Server, Windows Compute Cluster Server, and hypervisor platforms like VMware, making it a versatile solution for high‑performance computing, storage, and networking workloads.
Architects' Tech Alliance