
Integrating Kubernetes Pods with OpenStack VPC Network Using a Custom CNI and IPVLAN

This article describes how 360 unified Kubernetes pod networking with its OpenStack VPC by developing a custom CNI plugin that uses Neutron elastic ports, IPVLAN L2 mode, and OVS to achieve layer‑2 connectivity between VMs and pods, with detailed implementation steps and command examples.


Background: OpenStack and Kubernetes are widely used cloud solutions; 360 operates both platforms, with virtual machines running on OpenStack VPC networks. The goal is to manage and enrich networking by integrating Kubernetes pods into the OpenStack VPC.

Before the change: Container networking relied on a Cilium + BGP solution in which nodes joined an underlay network, so pod IPs were unrelated to the OpenStack VPC and pods could not communicate with VMs at layer 2. Bare‑metal servers likewise sat only on the underlay.

Proposed solution: Three key questions were addressed – where pod IPs come from, how they are allocated, and how pods communicate. The design replaces the pod IPAM with OpenStack Neutron, requests elastic ports for pods, and uses IPVLAN L2 mode (inspired by Alibaba’s Terway CNI) to attach pods to the VPC overlay, enabling L2 connectivity.
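The contract between such a plugin and the kubelet is the CNI result object returned from an ADD call. As a minimal sketch (illustrative only, not 360's actual hulk‑vpc‑cni code), a helper that assembles a CNI 0.4.0 ADD result from a Neutron port's MAC and fixed IP might look like this, where the interface name and addresses are hypothetical example values:

```python
import json

def build_cni_result(ifname, mac, ip_cidr, gateway):
    """Assemble a CNI 0.4.0 ADD result for a pod interface.

    ifname/mac would come from the IPVLAN sub-interface created for the
    pod; ip_cidr/gateway would come from the Neutron port's fixed IP and
    its subnet's gateway.
    """
    return json.dumps({
        "cniVersion": "0.4.0",
        "interfaces": [{"name": ifname, "mac": mac}],
        "ips": [{
            "version": "4",
            "interface": 0,          # index into the "interfaces" list
            "address": ip_cidr,
            "gateway": gateway,
        }],
        "routes": [{"dst": "0.0.0.0/0"}],
    })
```

A real plugin would print this JSON on stdout after wiring up the sub‑interface, which is how the kubelet learns the pod's VPC address.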

After the change: A custom plugin named hulk‑vpc‑cni calls Neutron to create ports (optionally pre‑allocating 10-20 fixed IPs), then uses OVS commands to create an elastic NIC and an IPVLAN sub‑interface for the pod. If layer‑3 connectivity is needed, a floating IP can be bound to the port. The reference commands below walk through creating ports, attaching them to bare‑metal nodes, setting up an IPVLAN bridge, and launching containers with the allocated IP.
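The pre‑allocation mentioned above can be modeled as a warm pool: the plugin requests a batch of fixed IPs from Neutron up front, hands them to pods from the pool, and refills when it runs low, so pod startup does not block on a Neutron API round trip. A minimal sketch of that bookkeeping (class name, water marks, and the injected allocator are illustrative, not taken from hulk‑vpc‑cni):

```python
class WarmIPPool:
    """Warm pool of pre-allocated fixed IPs for fast pod startup.

    `allocate_batch(n)` stands in for the Neutron call that creates a
    port or adds fixed IPs to one; it is injected here so the pool
    logic can be shown without a live OpenStack API.
    """

    def __init__(self, allocate_batch, low_water=10, high_water=20):
        self.allocate_batch = allocate_batch
        self.low_water = low_water      # refill when free IPs drop below this
        self.high_water = high_water    # refill back up to this many
        self.free = []
        self.in_use = {}                # pod_id -> assigned IP

    def _refill(self):
        if len(self.free) < self.low_water:
            self.free.extend(self.allocate_batch(self.high_water - len(self.free)))

    def acquire(self, pod_id):
        """Hand a pre-allocated IP to a pod, refilling the pool first."""
        self._refill()
        ip = self.free.pop()
        self.in_use[pod_id] = ip
        return ip

    def release(self, pod_id):
        """Return a pod's IP to the pool when the pod is deleted."""
        self.free.append(self.in_use.pop(pod_id))
```

The IP stays bound to its Neutron port between pods, so release and reuse are purely local operations.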

Bare‑metal enhancements: After the upgrade, bare‑metal nodes support VPC networking via automated OVS deployment, allowing container services and OVS SDN to coexist on the same host. Additionally, Ceph‑based cloud disks can be attached through an iSCSI gateway built on SPDK, with the full workflow illustrated in the original article.

Summary: By redesigning the CNI to use Neutron elastic ports, IPVLAN L2, and OVS, pods and VMs now share the same VPC and can communicate at layer 2. The approach unifies networking for Kubernetes and OpenStack but introduces configuration complexity and some performance overhead due to additional overlay components.

Reference commands:

# (1) Create a virtual NIC on a bare‑metal host
openstack port create --network <network-id> --disable-port-security --host <hostname> <port-name>
# Add additional fixed IPs if needed
openstack port set --fixed-ip subnet=<subnet-id>,ip-address=<ip-address> <port-id>
# (2) Attach the virtual NIC to the bare‑metal node
BM_INTERFACE=tapad3da56b-0c
BM_PORT_ID=ad3da56b-0c24-4dde-8ed2-396f8bddbcc5
BM_PORT_MAC=fa:16:3e:62:08:3e

docker exec -it -u root neutron_openvswitch_agent ovs-vsctl --may-exist add-port br-int $BM_INTERFACE \
  -- set Interface $BM_INTERFACE type=internal \
  -- set Interface $BM_INTERFACE external-ids:iface-status=active \
  -- set Interface $BM_INTERFACE external-ids:attached-mac=$BM_PORT_MAC \
  -- set Interface $BM_INTERFACE external-ids:iface-id=$BM_PORT_ID

ip link set dev $BM_INTERFACE address $BM_PORT_MAC
ip link set dev $BM_INTERFACE mtu 1450
ip link set dev $BM_INTERFACE up

# (3) Create an IPVLAN bridge and launch a container
docker network create -d ipvlan --subnet=<subnet-cidr> --gateway=<gateway-ip> \
  -o parent=$BM_INTERFACE -o ipvlan_mode=l2 ipvlan_test

docker run --net=ipvlan_test --ip=<fixed-ip> -id --name ipvlan_pod1 --rm alpine /bin/sh

# (4) Bind a floating IP to the pod if needed
neutron floatingip-associate --fixed-ip-address <fixed-ip> <floating-ip-id> <port-id>
Written by 360 Smart Cloud

Official service account of 360 Smart Cloud, dedicated to building a high-quality, secure, highly available, convenient, and stable one‑stop cloud service platform.