Integrating Kube-OVN with OVS‑DPDK for KubeVirt VM DPDK Network Interfaces
This guide explains how to combine Kube‑OVN with OVS‑DPDK to provide DPDK‑type network interfaces for KubeVirt virtual machines. It covers the prerequisites, NIC driver configuration, node labeling, OVS‑DPDK configuration, Kube‑OVN installation with DPDK support, and VM deployment.
Prerequisites: each node must have a NIC dedicated to DPDK drivers, and hugepages must be enabled.
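The hugepages prerequisite can be checked non-destructively before installation; a quick sketch (reading /proc/meminfo is safe anywhere, and the page count of 1024 in the comments is only an example):

```shell
# Check that hugepages are reserved on the node before installing OVS-DPDK.
# The Deployment later in this guide requests 2 MiB pages (hugepages-2Mi).
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

# To reserve 1024 x 2 MiB pages persistently (run as root), one common option is:
#   echo 'vm.nr_hugepages=1024' > /etc/sysctl.d/99-hugepages.conf
#   sysctl --system
```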
Configure the NIC for DPDK:

```shell
driverctl set-override 0000:00:0b.0 uio_pci_generic
```

Refer to the DPDK documentation for other drivers.
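driverctl works by writing the kernel's per-device `driver_override` sysfs attribute, so the override can be inspected directly; a verification sketch (the PCI ID is the example device from this guide, and the check is guarded so it degrades gracefully on machines without that device):

```shell
# Verify the driverctl override for the example DPDK device.
PCI_DEV="0000:00:0b.0"
if [ -e "/sys/bus/pci/devices/$PCI_DEV/driver_override" ]; then
  # driverctl records the override here; it should read "uio_pci_generic".
  cat "/sys/bus/pci/devices/$PCI_DEV/driver_override"
else
  echo "device $PCI_DEV not present on this machine"
fi
```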
Label OVS‑DPDK nodes so Kube‑OVN can recognize them:

```shell
kubectl label nodes <node-name> ovn.kubernetes.io/ovs_dp_type="userspace"
```

Create the OVS‑DPDK configuration file /opt/ovs-config/ovs-dpdk-config with the following variables:
```shell
ENCAP_IP=192.168.122.193/24
DPDK_DEV=0000:00:0b.0
```

These variables define the tunnel endpoint address and the PCI ID of the DPDK device.
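The steps above can be scripted per node; a sketch that writes the file Kube‑OVN reads (the values are the examples from this guide, so substitute your own tunnel IP and PCI ID; the CONFIG_DIR variable is introduced here only for illustration, the path Kube‑OVN expects is /opt/ovs-config):

```shell
# Write the per-node OVS-DPDK config file consumed by Kube-OVN.
CONFIG_DIR="${CONFIG_DIR:-/opt/ovs-config}"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/ovs-dpdk-config" <<'EOF'
ENCAP_IP=192.168.122.193/24
DPDK_DEV=0000:00:0b.0
EOF

# Show what was written.
cat "$CONFIG_DIR/ovs-dpdk-config"
```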
Install Kube‑OVN with DPDK support:

```shell
wget https://raw.githubusercontent.com/kubeovn/kube-ovn/release-1.10/dist/images/install.sh
bash install.sh --with-hybrid-dpdk
```

Deploy the KVM Device Plugin to create VMs:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubevirt/kubernetes-device-plugins/master/manifests/kvm-ds.yml
```

Create a NetworkAttachmentDefinition for the DPDK network:
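Once the device plugin DaemonSet is running, each node should advertise devices.kubevirt.io/kvm among its allocatable resources; a check sketch, guarded so it only runs where kubectl can actually reach a cluster:

```shell
# Sanity check: look for the KVM device advertised by the device plugin.
if command -v kubectl >/dev/null 2>&1 && kubectl version >/dev/null 2>&1; then
  kubectl describe nodes | grep -i 'devices.kubevirt.io/kvm' || \
    echo "no node advertises devices.kubevirt.io/kvm yet"
else
  echo "kubectl or cluster not available here"
fi
```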
```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-dpdk
  namespace: default
spec:
  config: >-
    {
      "cniVersion": "0.3.0",
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
      "provider": "ovn-dpdk.default.ovn",
      "vhost_user_socket_volume_name": "vhostuser-sockets",
      "vhost_user_socket_name": "sock"
    }
```

Build a VM image using the following Dockerfile:
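After saving the manifest above (the file name ovn-dpdk-nad.yaml is chosen here only for illustration), applying and verifying it might look like the following, again guarded for machines without cluster access:

```shell
# Apply and verify the NetworkAttachmentDefinition (file name is an example).
if command -v kubectl >/dev/null 2>&1 && kubectl version >/dev/null 2>&1; then
  kubectl apply -f ovn-dpdk-nad.yaml
  kubectl get network-attachment-definitions.k8s.cni.cncf.io -n default ovn-dpdk
else
  echo "kubectl or cluster not available here"
fi
```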
```dockerfile
FROM quay.io/kubevirt/virt-launcher:v0.46.1
# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
COPY CentOS-7-x86_64-GenericCloud.qcow2 /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2
```

Define the VM and Deployment resources (excerpt):
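The Deployment below references the image as vm-vhostuser:latest, so the Dockerfile above needs to be built under that tag (the tag itself is taken from the Deployment; how the image reaches your nodes, e.g. a registry push, is up to your environment). A guarded build sketch:

```shell
# Build the VM launcher image referenced by the Deployment (tag from this guide).
# Guarded so it only runs where docker and the qcow2 base image are present.
if command -v docker >/dev/null 2>&1 && [ -f CentOS-7-x86_64-GenericCloud.qcow2 ]; then
  docker build -t vm-vhostuser:latest .
else
  echo "skipping build: docker or base image not available here"
fi
```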
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vm-config
data:
  start.sh: |
    chmod u+w /etc/libvirt/qemu.conf
    echo "hugetlbfs_mount = \"/dev/hugepages\"" >> /etc/libvirt/qemu.conf
    virtlogd &
    libvirtd &
    mkdir /var/lock
    sleep 5
    virsh define /root/vm/vm.xml
    virsh start vm
    tail -f /dev/null
  vm.xml: |
    ... (VM definition with vhostuser interface, hugepages, and resources) ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vm-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vm
  template:
    metadata:
      labels:
        app: vm
      annotations:
        k8s.v1.cni.cncf.io/networks: default/ovn-dpdk
    spec:
      nodeSelector:
        ovn.kubernetes.io/ovs_dp_type: userspace
      containers:
        - name: vm
          image: vm-vhostuser:latest
          command: ["bash", "/root/vm/start.sh"]
          resources:
            limits:
              cpu: "2"
              memory: "8784969729"
              hugepages-2Mi: 2Gi
            requests:
              cpu: 666m
              memory: "4490002433"
          volumeMounts:
            - name: vhostuser-sockets
              mountPath: /var/run/vm
            - name: xml
              mountPath: /root/vm/
            - name: hugepage
              mountPath: /dev/hugepages
            - name: libvirt-runtime
              mountPath: /var/run/libvirt
```

After the VM is created, set its password and access the console:
```shell
virsh set-user-password vm root 12345
virsh console vm
```

Configure the VM network inside the guest:
```shell
ip link set eth0 mtu 1400
ip addr add 10.16.0.96/16 dev eth0
ip route add default via 10.16.0.1
ping 114.114.114.114
```

For more details, refer to the official Kube‑OVN documentation at https://kubeovn.github.io/docs/v1.10.x/.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.