
Understanding Virtio, OVS, vhost-net, vhost-user, and vdpa: Architecture and Performance Analysis

This article explains the principles and performance characteristics of virtio networking, OVS virtual switching, and the related vhost‑net, vhost‑user, and vdpa mechanisms, comparing their architectures, data‑plane offloads, and suitability for cloud‑native environments.

360 Tech Engineering

virtio is a paravirtualized I/O framework and OVS (Open vSwitch) is a software virtual switch; this article introduces their forwarding principles and analyzes their performance.

In traditional data centers, physical NICs and hardware switches handle packet forwarding, while in cloud environments virtual NICs connect to virtual switches, enabling VM‑to‑VM communication and external network access.

Hardware NICs use DMA to move packets into host memory and raise an interrupt for the CPU; virtual NICs emulate this behavior in software: the hypervisor copies data between host memory and VM memory, effectively simulating DMA.

virtio provides a unified mechanism for paravirtualized devices (NIC, disk, etc.), with a frontend driver in the VM and a backend device on the host exchanging data through shared-memory virtqueues, giving efficient data transfer and broad hypervisor support.

OVS is preferred in cloud environments because its OpenFlow pipeline makes new features easy to add, its datapath caches flow decisions so most packets bypass the slow path, and it integrates cleanly with the kernel networking stack.

With vhost‑net, QEMU still emulates the virtio‑net control plane (register access and feature negotiation), while the in‑kernel vhost‑net backend takes over the data plane (packet copying); the two sides signal each other with eventfd‑based kick/call notifications.

Guest transmit flow: the guest driver allocates an skb → writes its buffer address into the vring → kicks the device (the I/O write is trapped by KVM) → KVM signals vhost‑net through the kick eventfd → the vhost worker copies the data out and notifies the guest through the call eventfd.

Guest receive flow: the virtio‑net driver posts empty buffers into the vring → the host NIC DMAs the incoming packet into a host skb and raises an interrupt → a softirq forwards the packet through the OVS bridge to the tap device → vhost‑net copies it into the posted guest buffers and notifies the guest.

vhost‑user moves the data plane into a user‑space OVS‑DPDK process: a Unix domain socket carries the configuration messages and the kick/call file descriptors, and the guest's memory is backed by shared hugepages so the backend can read and write packet buffers directly, avoiding extra copies.

virtio full offload implements the virtio backend directly in NIC hardware and passes the device through with vfio‑pci, allowing the NIC to DMA straight into the memory of the QEMU/OVS‑DPDK process and greatly reducing CPU involvement.

vdpa implements the virtio data plane in hardware while keeping the control plane in host software, using the vhost‑vdpa framework to talk to vendor drivers; it aims to preserve live migration support while offloading packet processing from the CPU.

The performance analysis compares interrupt overhead, zero‑copy capability, hugepage usage, multi‑queue scaling, lock‑free designs, and pipeline models, concluding that vdpa offers the highest performance, followed by vhost‑user, with vhost‑net being the least efficient.

In summary, vhost‑net is the most mature and widely used; vhost‑user provides high performance for telecom‑grade clouds but lacks kernel networking features; and vdpa delivers the best performance, at higher cost and with a hardware dependency.


Written by 360 Tech Engineering

Official tech channel of 360, building the most professional technology aggregation platform for the brand.
