Understanding NVMe over Fabric: Advantages of Fibre Channel and Technical Insights
The article provides a comprehensive technical analysis of NVMe over Fabric, highlighting the advantages of Fibre Channel over Ethernet, discussing RDMA, zero‑copy mechanisms, and the impact of SCSI translation layers on performance and adoption in modern data‑center storage architectures.
Recent discussions by Brocade on NVMe over Fabric emphasize that Fibre Channel (FC) offers significant advantages compared to Ethernet, especially for data‑center transmission and security. The article analyzes Brocade’s key viewpoints, encouraging readers to follow the public account for full technical documents.
As all-flash and hybrid arrays come to dominate the data center, the NVMe standard, designed for PCIe-attached solid-state devices, delivers lower latency and a far deeper, more parallel queue model than traditional SAS, improving both random and sequential performance.
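To make the queue argument concrete, here is a minimal sketch of the queue-pair idea in C. It is illustrative only, not NVMe driver code: the struct layout is simplified and the names QUEUE_DEPTH, submit_read and sq_doorbell are invented for the example. The actual standard allows up to 64K queues, each up to 64K entries deep, so every CPU core can own its own queue pair and submit I/O without locking.

```c
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 64          /* per-queue depth chosen for the example */

struct nvme_sq_entry {          /* simplified submission-queue entry */
    uint8_t  opcode;            /* e.g. 0x01 write, 0x02 read */
    uint16_t command_id;        /* echoed back in the completion entry */
    uint64_t slba;              /* starting logical block address */
    uint16_t nlb;               /* number of blocks (0-based in NVMe) */
};

struct nvme_queue_pair {
    struct nvme_sq_entry sq[QUEUE_DEPTH];
    uint16_t sq_tail;           /* host writes commands at the tail ... */
    uint32_t sq_doorbell;       /* ... then rings the doorbell register */
};

/* Each core submits into its own queue pair, so submissions scale with
 * core count instead of serializing on a single request queue. */
static void submit_read(struct nvme_queue_pair *qp, uint64_t slba, uint16_t nlb)
{
    struct nvme_sq_entry *e = &qp->sq[qp->sq_tail];
    e->opcode       = 0x02;
    e->command_id   = qp->sq_tail;
    e->slba         = slba;
    e->nlb          = nlb;
    qp->sq_tail     = (uint16_t)((qp->sq_tail + 1) % QUEUE_DEPTH);
    qp->sq_doorbell = qp->sq_tail;   /* in hardware this is an MMIO write */
}

int main(void)
{
    struct nvme_queue_pair per_core_qp = {0};
    submit_read(&per_core_qp, 0, 7);          /* read 8 blocks from LBA 0 */
    printf("sq_tail = %u\n", per_core_qp.sq_tail);
    return 0;
}
```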
NVMe over Fabric extends the NVMe protocol beyond the PCIe bus to challenge SCSI’s dominance in SANs, supporting multiple fabric transports, primarily FC, InfiniBand, RoCE v2, and iWARP.
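As a rough illustration of how those transports are presented to a host, the enum below reflects my reading of the NVMe-OF transport-type (TRTYPE) field; the exact numeric values should be checked against the specification. The point is that InfiniBand, RoCE v2, and iWARP all arrive as the single RDMA transport type, while FC has its own binding.

```c
#include <stdio.h>

enum nvmf_transport_type {
    NVMF_TRTYPE_RDMA = 1,   /* InfiniBand, RoCE v2 and iWARP all use this type */
    NVMF_TRTYPE_FC   = 2    /* native NVMe over FC (FC-NVMe) */
};

int main(void)
{
    printf("RDMA transport type = %d, FC transport type = %d\n",
           NVMF_TRTYPE_RDMA, NVMF_TRTYPE_FC);
    return 0;
}
```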
As NVMe over Fabric matures, it is positioned as a SCSI alternative in the SAN market, opening opportunities for flash-module vendors. Vendors of any new technology naturally tout its advantages while the potential drawbacks receive less attention; the article aims to give readers a balanced view of both.
1. FC is a legitimate NVMe fabric
FC is a valid NVMe fabric option. The NVMe-OF whitepaper describes two classes of fabric: RDMA-based and FC-based. Despite claims in some quarters that FC is not a legitimate NVMe fabric, the whitepaper confirms its suitability, citing FC's credit-based flow control and lossless delivery, properties that standard Ethernet-based fabrics lack.
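A minimal sketch of that credit-based flow control, assuming the simplified model in which a sender may transmit only while it holds buffer-to-buffer credits and each R_RDY from the receiver returns one credit. Struct and function names are invented for the example, not taken from any FC stack.

```c
#include <stdbool.h>
#include <stdio.h>

struct fc_link {
    int bb_credit;              /* credits granted at login (BB_Credit) */
};

static bool send_frame(struct fc_link *l)
{
    if (l->bb_credit == 0)
        return false;           /* hold the frame: no receive buffer is free */
    l->bb_credit--;             /* one buffer consumed at the far end */
    return true;
}

static void on_r_rdy(struct fc_link *l)
{
    l->bb_credit++;             /* receiver freed a buffer: credit returned */
}

int main(void)
{
    struct fc_link link = { .bb_credit = 2 };
    printf("%d %d %d\n", send_frame(&link), send_frame(&link),
           send_frame(&link));  /* third send waits until an R_RDY arrives */
    on_r_rdy(&link);
    printf("%d\n", send_frame(&link));
    return 0;
}
```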
2. RDMA is not essential for NVMe fabrics
Proponents of Ethernet-based fabrics argue that RDMA is crucial, yet the NVMe whitepaper does not list RDMA among the desirable attributes of a fabric; it is merely one way of implementing them. Neither NVMe nor NVMe-OF inherently depends on RDMA.
3. SCSI is not the only FC‑native protocol
Comparisons that pit NVMe-over-Ethernet latency against "FC latency" are really measuring SCSI over FC: FC carries SCSI as one FC-4 upper-layer protocol (FCP), but SCSI is not the only protocol FC can carry natively. The FC-NVMe standard defines a native NVMe traffic type on FC, so NVMe can be transported directly, with no SCSI in the path.
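For illustration, the fragment below dispatches on the FC-4 TYPE field carried in the frame header. The values shown (0x08 for SCSI FCP, 0x28 for FC-NVMe) match my reading of the standards but should be verified against them; the dispatch itself is a toy.

```c
#include <stdint.h>
#include <stdio.h>

#define FC_TYPE_FCP     0x08    /* SCSI over FC (FCP) */
#define FC_TYPE_FC_NVME 0x28    /* native NVMe over FC (FC-NVMe) */

static const char *classify(uint8_t fc4_type)
{
    switch (fc4_type) {
    case FC_TYPE_FCP:     return "SCSI (FCP): hand off to the SCSI mid-layer";
    case FC_TYPE_FC_NVME: return "NVMe (FC-NVMe): hand off to the NVMe host stack";
    default:              return "some other FC-4 upper-layer protocol";
    }
}

int main(void)
{
    printf("%s\n", classify(FC_TYPE_FC_NVME));
    return 0;
}
```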
4. Impact of the SCSI‑to‑NVMe translation layer
An NVMe fabric aims for minimal latency, so ideally no translation layer sits in the I/O path. Running NVMe end to end avoids spending extra clock cycles on every I/O to translate commands, and FC supports this native mode through FC-NVMe.
Nevertheless, many users cannot redesign applications, so a translation layer can facilitate NVMe adoption. Major HBA vendors provide drivers for both translation and native NVMe‑OF support.
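To show what such a translation layer costs per I/O, here is a hedged sketch that parses a SCSI READ(10) CDB and repacks it as an NVMe Read command. The struct is simplified and the function is hypothetical, not taken from any vendor driver, but it illustrates the per-request parsing and repacking that native FC-NVMe avoids.

```c
#include <stdint.h>
#include <stdio.h>

struct nvme_read_cmd {
    uint8_t  opcode;    /* 0x02 = NVMe Read */
    uint64_t slba;      /* starting logical block address */
    uint16_t nlb;       /* number of blocks, 0-based per NVMe convention */
};

/* Per-I/O work of a translation layer: decode the big-endian fields of a
 * SCSI READ(10) CDB (LBA in bytes 2-5, length in bytes 7-8) and rebuild
 * them as an NVMe Read. Zero-length and error handling omitted for brevity. */
static void translate_read10(const uint8_t cdb[10], struct nvme_read_cmd *out)
{
    uint32_t lba = ((uint32_t)cdb[2] << 24) | ((uint32_t)cdb[3] << 16) |
                   ((uint32_t)cdb[4] << 8)  |  (uint32_t)cdb[5];
    uint16_t len = (uint16_t)(((uint16_t)cdb[7] << 8) | cdb[8]);

    out->opcode = 0x02;
    out->slba   = lba;
    out->nlb    = (uint16_t)(len - 1);  /* SCSI counts blocks, NVMe is 0-based */
}

int main(void)
{
    /* READ(10), LBA 0x1000, 8 blocks */
    uint8_t cdb[10] = { 0x28, 0, 0x00, 0x00, 0x10, 0x00, 0, 0x00, 0x08, 0 };
    struct nvme_read_cmd cmd;
    translate_read10(cdb, &cmd);
    printf("NVMe Read: slba=%llu nlb=%u\n",
           (unsigned long long)cmd.slba, cmd.nlb);
    return 0;
}
```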
5. Can FC achieve zero‑copy?
Historically, IP stacks copy data through intermediate buffers, but FC's leaner stack lets the adapter move data directly between the wire and application memory using DMA and scatter-gather lists (SGLs), achieving zero-copy much as RDMA does.
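A small sketch of that scatter-gather idea: the HBA is handed a list of (address, length) segments and DMAs payloads straight into the application's buffers instead of staging them in an intermediate copy. The struct and function names are illustrative, loosely modelled on an SGL data-block descriptor.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

struct sgl_segment {
    uint64_t addr;      /* DMA address of one buffer segment */
    uint32_t length;    /* bytes the device may place in this segment */
};

/* Describe a (possibly non-contiguous) application buffer to the adapter.
 * A real driver would pin the pages and translate to bus addresses; here
 * we only record the segments the hardware would walk while DMAing frame
 * payloads straight into application memory. */
static size_t build_sgl(struct sgl_segment *sgl,
                        const uint64_t *addrs, const uint32_t *lens, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        sgl[i].addr   = addrs[i];
        sgl[i].length = lens[i];
    }
    return n;
}

int main(void)
{
    uint64_t addrs[2] = { 0x100000, 0x240000 };   /* two scattered segments */
    uint32_t lens[2]  = { 4096, 8192 };
    struct sgl_segment sgl[2];
    size_t n = build_sgl(sgl, addrs, lens, 2);
    printf("SGL with %zu segments, first at 0x%llx\n",
           n, (unsigned long long)sgl[0].addr);
    return 0;
}
```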
6. Zero‑copy on IP does not require RDMA
iWARP extends RDMA to TCP, using Direct Data Placement (DDP) for zero‑copy. TCP Offload Engines (TOEs) with DDP support can achieve zero‑copy performance comparable to FC.
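The sketch below shows the direct data placement idea in miniature: an incoming payload carries a steering tag and offset, and the adapter writes it straight into the pre-registered buffer that tag refers to, rather than into a socket buffer that must be copied out later. Field names are illustrative, not the iWARP wire format.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct registered_buffer {
    uint32_t stag;      /* steering tag handed out when the buffer is registered */
    uint8_t *base;      /* application buffer the tag refers to */
    uint32_t length;
};

/* "NIC side": place a received payload directly at base + offset of the
 * tagged buffer, so it never waits in a socket buffer to be copied. */
static int ddp_place(struct registered_buffer *rb, uint32_t stag,
                     uint32_t offset, const uint8_t *payload, uint32_t len)
{
    if (stag != rb->stag || (uint64_t)offset + len > rb->length)
        return -1;                               /* bad tag or out of bounds */
    memcpy(rb->base + offset, payload, len);     /* the single, final placement */
    return 0;
}

int main(void)
{
    uint8_t app_buf[16] = {0};
    struct registered_buffer rb = { .stag = 42, .base = app_buf, .length = 16 };
    const uint8_t payload[4] = { 'N', 'V', 'M', 'e' };
    printf("placed: %d, buf[0..3] = %.4s\n",
           ddp_place(&rb, 42, 0, payload, 4), app_buf);
    return 0;
}
```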
7. Routable RoCE v2 is a better RoCE variant
RoCE v2 encapsulates RDMA in UDP/IP, which makes it routable across Layer 3 networks and avoids TCP's slow-start behavior, but it relies on ECN marking in the network for congestion control. It is positioned as a scalable, high-performance RDMA option for data-center networks.
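As a rough sketch of why ECN matters here: switches mark packets instead of dropping them, the receiver echoes a congestion notification, and the sender cuts its rate before probing back up. This mirrors, in spirit only, the DCQCN-style reaction commonly paired with RoCE v2; the constants and names below are invented for the example.

```c
#include <stdio.h>
#include <stdbool.h>

struct roce_flow {
    double rate_gbps;   /* current sending rate of one RDMA flow */
};

/* Receiver side: an ECN-marked packet means a switch queue is filling, so
 * echo a congestion notification to the sender instead of waiting for drops
 * (UDP provides no retransmit or slow-start machinery of its own). */
static bool should_send_cnp(bool ecn_marked) { return ecn_marked; }

/* Sender side: back off multiplicatively on a notification, then probe the
 * rate back up while the network stays quiet. */
static void on_cnp(struct roce_flow *f)            { f->rate_gbps *= 0.5; }
static void on_quiet_interval(struct roce_flow *f) { f->rate_gbps += 1.0; }

int main(void)
{
    struct roce_flow f = { .rate_gbps = 40.0 };
    if (should_send_cnp(true))
        on_cnp(&f);                 /* congestion seen: 40 -> 20 Gb/s */
    on_quiet_interval(&f);          /* recovery step:   20 -> 21 Gb/s */
    printf("rate now %.1f Gb/s\n", f.rate_gbps);
    return 0;
}
```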
The article concludes by inviting readers to obtain Brocade’s full technical materials for deeper analysis of SSD, flash technology, product status, and NVMe trends.