Integrating Cilium with Kube‑OVN via CNI Chaining: Configuration and Deployment Guide
This guide explains how to enhance Kube‑OVN with Cilium’s eBPF‑based networking and security features by using CNI chaining, covering prerequisites, configuration changes, Helm deployment steps, verification commands, and useful reference links for a complete cloud‑native networking solution.
Kube‑OVN can leverage the eBPF‑based network and security capabilities of Cilium through CNI chaining, allowing users to combine Kube‑OVN’s rich network abstractions with Cilium’s advanced monitoring and security features.
Benefits of integrating Cilium:
Richer and more efficient security policies.
Monitoring view powered by Hubble.
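CNI chaining works by running an ordered list of plugins against the same pod interface: each entry in the conflist is invoked in turn, so the plugin order determines who creates the interface and who attaches to it afterwards. A toy sketch of the idea (the file path and trimmed-down conflist below are illustrative only; the full configuration used by this guide appears later):

```shell
# Illustrative only: a trimmed-down chained CNI config. Plugins execute in
# list order, so kube-ovn creates the interface first and cilium-cni
# attaches its eBPF programs last.
cat > /tmp/chain-demo.conflist <<'EOF'
{ "name": "generic-veth", "cniVersion": "0.3.1",
  "plugins": [ {"type": "kube-ovn"}, {"type": "portmap"}, {"type": "cilium-cni"} ] }
EOF

# Print the execution order of the chain, one plugin per line.
grep -o '"type": "[a-z-]*"' /tmp/chain-demo.conflist | cut -d'"' -f4
```

The same ordering requirement is why, later in this guide, the Kube‑OVN conflist is renamed so that the chained configuration containing cilium-cni takes priority.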
Prerequisites
Linux kernel version >= 4.19 (or a compatible kernel) to support full eBPF functionality.
Helm installed on the cluster (see the "Installing Helm" reference).
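A quick way to check the kernel prerequisite on a node is to compare the running kernel against 4.19; a small sketch using sort -V for the version comparison:

```shell
# Compare the running kernel against the 4.19 minimum. sort -V orders
# version strings numerically, so if the lower of the two is "4.19",
# the running kernel is new enough.
required="4.19"
current="$(uname -r | cut -d- -f1)"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "kernel $current meets the >= $required requirement"
else
  echo "kernel $current is too old for full eBPF support"
fi
```

Run this on every node that will host Cilium; a single old kernel in the cluster is enough to break eBPF-dependent features on that node.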
Configure Kube‑OVN
To fully use Cilium’s security capabilities, disable the built‑in networkpolicy feature in Kube‑OVN and adjust the CNI configuration priority.
In the install.sh script, set the following variables:
ENABLE_NP=false
CNI_CONFIG_PRIORITY=10

If Kube‑OVN is already deployed, you can modify the startup arguments of kube-ovn-controller:
args:
- --enable-np=false

Adjust the CNI priority for kube-ovn-cni:
args:
- --cni-conf-name=10-kube-ovn.conflist

Rename the CNI configuration file on each node so that the Cilium configuration is used first:
mv /etc/cni/net.d/01-kube-ovn.conflist /etc/cni/net.d/10-kube-ovn.conflist

Deploy Cilium
Create a chaining.yaml ConfigMap that defines a CNI chain using the generic‑veth mode:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: kube-system
data:
  cni-config: |-
    {
      "name": "generic-veth",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "kube-ovn",
          "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
          "ipam": {
            "type": "kube-ovn",
            "server_socket": "/run/openvswitch/kube-ovn-daemon.sock"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "cilium-cni"
        }
      ]
    }

Apply the ConfigMap:
kubectl apply -f chaining.yaml

Install Cilium with Helm, enabling the generic‑veth chaining mode and pointing to the custom ConfigMap:
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.11.6 \
--namespace kube-system \
--set cni.chainingMode=generic-veth \
--set cni.customConf=true \
--set cni.configMap=cni-configuration \
--set tunnel=disabled \
--set enableIPv4Masquerade=false \
--set enableIdentityMark=false

Verify that Cilium is installed correctly:
# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium             Running: 2
                  cilium-operator    Running: 2
Cluster Pods:     8/11 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.10.5@sha256:...: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.10.5@sha256:...: 2

References
Cilium: https://cilium.io/
CNI Chaining documentation: https://docs.cilium.io/en/stable/gettingstarted/cni-chaining/
Helm installation guide: https://helm.sh/docs/intro/install/
Kube‑OVN official docs: https://kubeovn.github.io/docs/v1.10.x/
For further details, see the Kube‑OVN Chinese documentation and community resources.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.