
Understanding FC‑NVMe: How NVMe over Fibre Channel Works, Its Benefits and Drawbacks

This article explains the FC‑NVMe standard: how it operates over Fibre Channel, its performance advantages over traditional SCSI, the infrastructure it requires, and its trade‑offs compared with Ethernet- and InfiniBand-based NVMe‑oF solutions.

Architects' Tech Alliance

Continuing from yesterday's discussion on the development history of NVMe over Fabrics, today we explore FC‑NVMe. The Fibre Channel implementation of NVMe (FC‑NVMe standard) is a technical specification that enables transmission of NVMe command messages and information between host and target storage subsystems over a Fibre Channel network.

Fibre Channel is one transport option for NVMe‑oF, a specification developed by the non‑profit organization NVM Express Inc., which released version 1.0 of the NVMe‑oF specification on June 5, 2016. Other NVMe‑oF transport options include Ethernet and InfiniBand using RDMA.

The INCITS T11 committee defined a frame format and mapping protocol to apply NVMe‑oF to Fibre Channel, completing the first version of the FC‑NVMe standard in August 2017 and submitting it to INCITS for publication.

How FC‑NVMe Works

The Fibre Channel Protocol (FCP) allows upper‑layer protocols such as NVMe, SCSI, and IBM's proprietary FICON to be mapped onto the FC transport, enabling data and command transfer between host computers and peripheral target storage devices or systems.

Compared with SCSI and FICON, NVMe features a simplified register interface and command set, reducing I/O stack CPU overhead, lowering latency, and improving performance. NVMe is designed for fast media like SSDs and memory‑based technologies, whereas SCSI was created for slower HDD/tape storage and FICON for mainframe‑to‑storage connections.
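To make the "simplified command set" point concrete, the sketch below packs an illustrative 64‑byte NVMe submission queue entry for a Read command. The field layout follows the NVMe base specification, but the helper name and the example values (command ID, namespace, LBA) are ours, chosen purely for illustration.

```python
# A minimal sketch (not production code) of the fixed 64-byte NVMe submission
# queue entry, illustrating why the command format is simpler than SCSI CDBs.
# Field offsets follow the NVMe base specification; the example values below
# (command ID, namespace, LBA, PRP address) are made up for illustration.
import struct

NVME_CMD_READ = 0x02  # opcode for the NVM Read command

def build_read_sqe(command_id: int, nsid: int, slba: int, nlb: int, prp1: int) -> bytes:
    """Pack an illustrative 64-byte submission queue entry for a Read."""
    sqe = struct.pack(
        "<BBH"   # opcode, flags, command identifier           (dword 0)
        "I"      # namespace ID                                (dword 1)
        "8x"     # reserved                                    (dwords 2-3)
        "Q"      # metadata pointer                            (dwords 4-5)
        "QQ"     # PRP entry 1, PRP entry 2 (data pointer)     (dwords 6-9)
        "Q"      # starting LBA                                (dwords 10-11)
        "I"      # dword 12: number of logical blocks in bits 15:0 (zero-based)
        "12x",   # dwords 13-15 unused in this example
        NVME_CMD_READ, 0, command_id,
        nsid,
        0,              # no metadata
        prp1, 0,        # single PRP entry, no second page
        slba,
        nlb - 1,        # NLB field is zero-based
    )
    assert len(sqe) == 64
    return sqe

print(build_read_sqe(command_id=1, nsid=1, slba=0x1000, nlb=8, prp1=0x200000).hex())
```

Every NVMe command fits this same fixed 64‑byte format, which is part of why the host‑side I/O path carries less CPU overhead than the variable‑length SCSI CDB path.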

The NVMe transport is an abstraction layer that provides reliable delivery of NVMe commands and data between host and subsystem.

FC‑NVMe maps the streamlined NVMe command set onto basic FCP operations, carrying NVMe commands and data inside Fibre Channel frames. Because Fibre Channel was purpose‑built for storage traffic, the fabric already provides functions such as discovery, management, and end‑to‑end verification.
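The sketch below illustrates the encapsulation idea: the unmodified 64‑byte NVMe submission queue entry travels inside an FCP information unit. The class and field names (connection ID, data length) are illustrative only and do not reproduce the exact IU layout defined by the INCITS T11 FC‑NVMe standard.

```python
# A conceptual sketch only: it shows the *idea* of carrying an unmodified
# 64-byte NVMe submission queue entry (SQE) inside an FCP information unit.
# Field names and sizes here are illustrative, not the exact IU layout
# defined in the INCITS T11 FC-NVMe standard.
from dataclasses import dataclass

@dataclass
class FcNvmeCommandIU:
    connection_id: int   # identifies the NVMe queue pair mapped onto FC (illustrative)
    data_length: int     # bytes of data expected to follow (illustrative)
    sqe: bytes           # the unmodified 64-byte NVMe submission queue entry

    def encode(self) -> bytes:
        assert len(self.sqe) == 64, "NVMe SQE is always 64 bytes"
        header = self.connection_id.to_bytes(8, "big") + self.data_length.to_bytes(4, "big")
        return header + self.sqe

# The host builds the same SQE it would place in a local PCIe submission
# queue, then ships it across the fabric inside an FCP IU.
iu = FcNvmeCommandIU(connection_id=0x1, data_length=4096, sqe=bytes(64))
frame_payload = iu.encode()
```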

The main difference between NVMe‑oF (including the Fibre Channel variant) and native NVMe lies in the transport mechanism. Native NVMe maps requests and responses to host shared memory via the PCIe interface, while NVMe‑oF uses a message‑based model to send requests and responses over a network between host and target storage devices.
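A minimal sketch of that contrast, with hypothetical function names: the native PCIe path writes the command into a shared‑memory queue and rings a doorbell register, while the fabrics path hands the same command to a transport "send" primitive as a capsule.

```python
# A minimal sketch (Python used for illustration only) contrasting the two
# transport models described above. The function and field names are
# hypothetical; real implementations live in kernel drivers.

def submit_local_pcie(sq_memory: bytearray, doorbell: dict, tail: int, sqe: bytes) -> int:
    """Native NVMe: the host writes the SQE into a shared-memory submission
    queue and rings the controller's doorbell register over PCIe."""
    sq_memory[tail * 64:(tail + 1) * 64] = sqe
    new_tail = (tail + 1) % (len(sq_memory) // 64)
    doorbell["sq_tail"] = new_tail          # an MMIO write in a real driver
    return new_tail

def submit_over_fabric(send_message, sqe: bytes, data: bytes = b"") -> None:
    """NVMe-oF: the same SQE travels as a command capsule in a message;
    there is no shared memory between host and remote subsystem."""
    capsule = sqe + data                    # command capsule: SQE + optional in-capsule data
    send_message(capsule)                   # e.g. an FCP IU on FC-NVMe, or an RDMA send

# Local path: a 16-entry submission queue in host memory plus a doorbell.
queue = bytearray(64 * 16)
doorbells = {"sq_tail": 0}
submit_local_pcie(queue, doorbells, tail=0, sqe=bytes(64))

# Fabric path: all the transport needs to provide is a "send" primitive.
sent = []
submit_over_fabric(sent.append, sqe=bytes(64))
```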

NVMe‑oF replaces PCIe to extend the communication distance between NVMe hosts and storage subsystems. The original design goal was to add no more than 10 µs of latency when connecting NVMe hosts and targets over an appropriate network.

Large‑scale block‑flash storage environments are the most likely to adopt NVMe over Fibre Channel. FC‑NVMe provides the same predictability and reliability characteristics as SCSI, and NVMe‑oF traffic can run alongside traditional SCSI traffic on the same FC fabric.

FC‑NVMe defines the FC‑NVMe protocol layer, the NVMe‑oF specification defines the NVMe‑oF protocol layer, and the NVMe specification defines the host software and NVM subsystem protocol layer.

Infrastructure components that must support Fibre Channel NVMe include storage operating systems and network adapter cards. FC storage vendors must ensure their products meet FC‑NVMe requirements. Current HBA vendors supporting FC‑NVMe include Broadcom and Cavium, while Broadcom and Cisco are major FC switch vendors.
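On the host side, Linux exposes FC‑NVMe through the standard nvme-cli tooling once an FC‑NVMe-capable HBA driver (for example lpfc for Broadcom Emulex or qla2xxx for Cavium/QLogic adapters) is loaded. The sketch below wraps an `nvme connect` call in Python for illustration; the WWNN/WWPN addresses and the subsystem NQN are placeholders, and exact flag support depends on your nvme-cli and kernel versions.

```python
# A hedged sketch of host-side enablement on Linux with nvme-cli, wrapped in
# Python for consistency with the other examples. The WWNN/WWPN values and
# the subsystem NQN are placeholders; verify flag support against your
# installed nvme-cli version before relying on this.
import subprocess

def connect_fc_nvme(subsys_nqn: str, target_traddr: str, host_traddr: str) -> None:
    """Issue `nvme connect` over the FC transport (requires root and an
    FC-NVMe capable HBA driver such as lpfc or qla2xxx)."""
    subprocess.run(
        [
            "nvme", "connect",
            "--transport=fc",
            f"--traddr={target_traddr}",      # target port: nn-0x<WWNN>:pn-0x<WWPN>
            f"--host-traddr={host_traddr}",   # initiator port: nn-0x<WWNN>:pn-0x<WWPN>
            f"--nqn={subsys_nqn}",
        ],
        check=True,
    )

# Placeholder identifiers for illustration only.
connect_fc_nvme(
    subsys_nqn="nqn.2014-08.org.example:subsystem1",
    target_traddr="nn-0x20000090fa000000:pn-0x10000090fa000000",
    host_traddr="nn-0x20000090fa000001:pn-0x10000090fa000001",
)
```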

Advantages and Disadvantages of FC‑NVMe

Compared with HDDs or SATA/SAS SSDs that use the SCSI command set, FC‑NVMe offers higher performance and lower latency. A potential drawback is higher cost, although NVMe SSD prices are expected to converge with those of some traditional SSDs.

When compared with Ethernet or InfiniBand based NVMe‑oF alternatives, Fibre Channel is known for lossless data transmission, predictable and consistent performance, and reliability. Large enterprises often choose FC for mission‑critical workloads, but FC requires specialized equipment and storage‑network expertise and can be more expensive than Ethernet alternatives.

Ethernet‑based NVMe products are generally more abundant, and many storage startups focus on Ethernet NVMe, sometimes using proprietary technologies to accelerate market entry.

InfiniBand‑based NVMe attracts high‑performance computing workloads that demand extremely high bandwidth and low latency. InfiniBand, like FC, requires special hardware and offers flow and congestion control and QoS, but lacks automatic discovery services for node addition.

The NVMe‑oF specification supports RDMA transports (though RDMA is not mandatory); mapping options include InfiniBand, RoCE (RDMA over Converged Ethernet), and iWARP (RDMA over TCP/IP). The NVM Express organization also plans to add TCP as a transport option.
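For a quick summary, the snippet below lists the transport bindings discussed in this article as a plain mapping; the entries are descriptive text only, not an API of any real library.

```python
# A small summary of the NVMe-oF transport bindings mentioned above.
# The key names happen to match common transport identifiers, but the
# descriptions are just prose, not an interface of any tool.
NVME_OF_TRANSPORTS = {
    "fc":   "FC-NVMe: NVMe mapped onto Fibre Channel FCP (INCITS T11)",
    "rdma": "RDMA binding: runs over InfiniBand, RoCE (Ethernet), or iWARP (TCP/IP)",
    "tcp":  "Planned TCP binding: NVMe over plain TCP, no RDMA hardware required",
}

for name, description in NVME_OF_TRANSPORTS.items():
    print(f"{name:5s} -> {description}")
```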

Related Reading

Why Brocade believes FC is the best fabric for NVMe over Fabrics

What capabilities does combining NVMe with SCM bring to storage media?

A Detailed History of NVMe over Fabrics Technology


Tags: NVMe, NVMe over Fabrics, Storage Networking, Fibre Channel, FC-NVMe
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
