
Mellanox CX-5 NIC OVS Flow Table Acceleration and SR‑IOV VF Migration Guide

This guide investigates Mellanox CX‑5 NIC support for OVS flow‑table acceleration: the required software versions, the VT‑d and SR‑IOV technologies involved, step‑by‑step host and OVS configuration, test procedures and performance results in VXLAN and VLAN environments, and the feasibility of VF hot‑migration, with example scripts.

360 Tech Engineering


Background: Rapid data‑center growth leads to high vSwitch CPU usage; accelerating the vSwitch is essential.

Key Technologies:

VT‑d (Intel Virtualization Technology for Direct I/O) provides DMA remapping via IOMMU.

SR‑IOV enables a physical function (PF) to expose up to 256 virtual functions (VFs) for direct VM access.

Test Requirements:

Linux kernel >= 4.13‑rc5 (or >= 3.10.0‑860 for RHEL).

Mellanox firmware >= 16.21.0338 for ConnectX‑5.

iproute >= 4.11, Open vSwitch >= 2.8, OpenStack >= Pike.

SR‑IOV and VT‑d enabled in the BIOS, and the IOMMU enabled on the kernel command line (intel_iommu=on).
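The requirement list above can be checked quickly from a shell. A minimal preflight sketch — eth0 is an assumed interface name, and missing tools are reported rather than treated as fatal:

```shell
#!/bin/sh
# Preflight: report versions relevant to the requirements above.
KERNEL=$(uname -r)
echo "kernel: $KERNEL"                      # want >= 4.13-rc5 (>= 3.10.0-860 on RHEL)
grep -o 'intel_iommu=on' /proc/cmdline 2>/dev/null \
    || echo "warning: intel_iommu=on not on kernel cmdline"
if command -v ethtool >/dev/null 2>&1; then
    ethtool -i eth0 | grep '^firmware'      # want ConnectX-5 firmware >= 16.21.0338
fi
if command -v ovs-vsctl >/dev/null 2>&1; then
    ovs-vsctl --version | sed 1q            # want Open vSwitch >= 2.8
fi
```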

Host Configuration Steps:

# Reset the VF count, then create two VFs on the PF (eth0)
echo 0 > /sys/class/net/eth0/device/sriov_numvfs
echo 2 > /sys/class/net/eth0/device/sriov_numvfs
# Assign MAC addresses to the VFs
ip link set eth0 vf 0 mac e4:11:22:33:44:52
ip link set eth0 vf 1 mac e4:11:22:33:44:53
# Unbind the VFs from the mlx5_core driver before changing eswitch modes
echo 0000:04:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:04:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
# Switch the eswitch to switchdev mode and enable TC hardware offload
echo switchdev > /sys/class/net/eth0/compat/devlink/mode
ethtool -K eth0 hw-tc-offload on
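The compat sysfs path above is specific to the Mellanox OFED stack; on upstream kernels the same mode change goes through devlink. A hedged sketch, assuming PF PCI address 0000:04:00.0 from the unbind step above:

```shell
#!/bin/sh
# Switch the eswitch to switchdev mode via devlink (upstream alternative
# to the OFED compat sysfs path used above).
PF=pci/0000:04:00.0
if command -v devlink >/dev/null 2>&1; then
    devlink dev eswitch set "$PF" mode switchdev
    devlink dev eswitch show "$PF"
else
    echo "devlink not available (shipped with recent iproute2)"
fi
```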

OVS Operations:

# Load the OVS kernel module and start OVS
modprobe openvswitch
ovs-ctl start
# Create a bridge and enable hardware offload; skip_sw forces rules into
# hardware only (hw-offload takes effect after an OVS restart)
ovs-vsctl add-br ovs-sriov
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
ovs-vsctl set Open_vSwitch . other_config:tc-policy=skip_sw
# Add the VF representor ports and bring them up
ovs-vsctl add-port ovs-sriov ens3f0_0
ovs-vsctl add-port ovs-sriov ens3f0_1
ip link set ens3f0_0 up
ip link set ens3f0_1 up
# VXLAN example: tunnel port with local/remote VTEP addresses and VNI 98
ovs-vsctl add-port ovs-sriov vxlan0 -- set interface vxlan0 type=vxlan options:local_ip=1.1.1.1 options:remote_ip=1.1.1.2 options:key=98
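Once traffic flows, it is worth confirming that rules actually landed in hardware. A sketch assuming the representor name ens3f0_0 from the setup above (the type=offloaded filter to dump-flows is a feature of recent OVS releases):

```shell
#!/bin/sh
# Confirm that datapath flows were offloaded to the NIC.
REP=ens3f0_0    # representor port added to the bridge above
if command -v ovs-appctl >/dev/null 2>&1; then
    ovs-appctl dpctl/dump-flows type=offloaded   # flows resident in hardware
    tc -s filter show dev "$REP" ingress         # TC rules OVS installed
else
    echo "OVS tools not present on this host"
fi
```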

Test Results:

In a VXLAN setup, hardware offload yields ~15 µs OVS latency, far lower than ~113 µs for virtio.

In a VLAN setup, similar low latency is observed, confirming the benefit of SR‑IOV offload.
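The article does not name its measurement tool. One common way to reproduce latency numbers like these between two VMs is qperf — a sketch reusing the peer address 1.1.1.2 from the VXLAN example; a bare qperf server must be running on the peer first:

```shell
#!/bin/sh
# Latency measurement sketch (peer runs 'qperf' with no arguments).
PEER=1.1.1.2
if command -v qperf >/dev/null 2>&1; then
    qperf "$PEER" tcp_lat udp_lat
else
    echo "qperf not installed"
fi
```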

VF Hot‑Migration Research:

Current software stacks cannot yet support live migration of SR‑IOV VFs. The proposed approach uses the kernel net_failover driver to create a failover device that switches between a primary VF and a standby virtio interface.

# Example libvirt XML for a VF
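The source omits the XML itself. Under the net_failover scheme described above, the guest gets a paired definition: a persistent virtio interface and a transient hostdev VF sharing one MAC, so the guest sees a single failover device. A sketch with illustrative values — the MAC and PCI address reuse the earlier host examples, and the <teaming> element is a newer libvirt feature than the ≥ 5.3 floor listed below:

```xml
<!-- Standby virtio NIC: persists across migration -->
<interface type='bridge'>
  <mac address='e4:11:22:33:44:52'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <alias name='ua-backup0'/>
  <teaming type='persistent'/>
</interface>
<!-- Primary VF: detached before migration, re-attached on the destination -->
<interface type='hostdev' managed='yes'>
  <mac address='e4:11:22:33:44:52'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
  </source>
  <teaming type='transient' persistent='ua-backup0'/>
</interface>
```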

Migration script outline:

# --- Source hypervisor ---
# Bring up the standby virtio (tap) interface so traffic can fail over
virsh domif-setlink $DOMAIN $TAP_IF up
# Remove the VF's stale FDB entry from the PF
bridge fdb del $MAC dev $PF master
# Detach the VF; the guest's net_failover device falls back to virtio
virsh detach-device $DOMAIN $VF_XML
# Release the MAC so the destination host can claim it
ip link set $PF vf $VF_NUM mac 00:00:00:00:00:00
virsh migrate --live $DOMAIN qemu+ssh://$REMOTE_HOST/system
# --- Destination hypervisor ---
# Attach a local VF; net_failover switches the guest back to the VF path
virsh attach-device $DOMAIN $VF_XML
virsh domif-setlink $DOMAIN $TAP_IF down
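The outline assumes several variables are already set. Illustrative values (all hypothetical) that make the script concrete:

```shell
#!/bin/sh
# Hypothetical values for the variables used in the migration outline.
DOMAIN=vm1                       # libvirt domain name
TAP_IF=tap0                      # standby virtio tap device on the host
PF=ens3f0                        # physical function netdev
VF_NUM=0                         # VF index on the PF
MAC=e4:11:22:33:44:52            # guest-visible MAC shared by VF and virtio
VF_XML=/tmp/vf0.xml              # hostdev <interface> definition for the VF
REMOTE_HOST=dst-hv01             # destination hypervisor
echo "migrating $DOMAIN (VF $VF_NUM on $PF, MAC $MAC) to $REMOTE_HOST"
```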

Component version requirements: kernel 4.18‑4.20 or 5.0‑5.11, QEMU stable‑4.2, libvirt ≥ 5.3, and OpenStack Train (for VXLAN support).
