Why Does Containerization Slow Down Your App? A Deep Dive into K8s Networking Performance
This article investigates why moving applications from virtual machines to containers on Kubernetes can degrade performance, presents benchmark comparisons, analyzes the impact of network architecture and soft interrupts, and proposes optimization strategies such as ipvlan and Cilium to restore efficiency.
Background
As more companies adopt cloud‑native architectures, applications shift from monoliths on virtual machines to microservices running in containers orchestrated by Kubernetes. After containerization, many teams observe worse performance compared to the VM baseline.
Stress Test Results
Before Containerization
Using the wrk tool, the application on a VM achieved an average response time (RT) of 1.68 ms and 716 requests/s (QPS), with the CPU already saturated.
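The exact wrk parameters were not given in the original test; a typical invocation for this kind of load test might look like the following (thread count, connection count, duration, and the target URL are all assumptions for illustration):

```shell
# Hypothetical wrk invocation; tune threads (-t), connections (-c),
# and duration (-d) for your environment, and point it at your service.
wrk -t8 -c200 -d60s --latency http://10.0.0.10:8080/api/health
```

The `--latency` flag prints a latency distribution, which is useful for comparing tail behavior before and after containerization, not just the averages quoted here.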
After Containerization
The same test on Kubernetes yielded an average RT of 2.11 ms and 554 requests/s, again with CPU fully utilized.
Performance Comparison
Overall performance dropped after containerization: average RT rose from 1.68 ms to 2.11 ms (about 26 % worse) and QPS fell from 716 to 554 (about 23 % lower).
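These deltas can be recomputed directly from the raw numbers (1.68 ms → 2.11 ms, 716 → 554 requests/s), for example with awk:

```shell
# RT regression: (2.11 - 1.68) / 1.68 ≈ +26 %
awk 'BEGIN { printf "RT +%.0f%%\n", (2.11 - 1.68) / 1.68 * 100 }'

# QPS regression: (716 - 554) / 716 ≈ -23 %
awk 'BEGIN { printf "QPS -%.0f%%\n", (716 - 554) / 716 * 100 }'
```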
Root Cause Analysis
Architecture Differences
The containerized deployment uses Kubernetes with Calico in IPIP mode. Traffic flows through a veth pair between the pod’s network namespace and the host namespace, adding extra processing steps.
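On a Calico node each pod gets one veth pair, and the IPIP overlay shows up as a tunnel device. A quick way to see both on the host (interface names will differ per cluster; `tunl0` only exists when Calico runs in IPIP mode):

```shell
# List veth interfaces on the host; Calico names its pod-side peers cali*.
ip -brief link show type veth

# Inspect the IPIP tunnel device Calico creates for the overlay.
ip -details link show tunl0
```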
Performance Analysis
Soft‑interrupt (softirq) usage on the CPU is noticeably higher after containerization. Further investigation with perf confirms increased softirq activity.
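Softirq growth can be observed with standard tools; a sketch (the `irq:softirq_entry` tracepoint is standard, but `perf record` needs root and a kernel with tracepoints enabled):

```shell
# Per-CPU softirq counters; the NET_RX row is the one that grows
# with veth traffic.
grep -E 'CPU|NET_RX' /proc/softirqs

# Sample softirq entry events system-wide for 10 s, then summarize.
perf record -e irq:softirq_entry -a -- sleep 10
perf report --stdio | head
```

Watching `/proc/softirqs` twice a few seconds apart and diffing the NET_RX column is often enough to confirm the pattern before reaching for perf.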
Soft‑Interrupt Reason
Because containers and the host reside in separate network namespaces, data must traverse the veth pair, invoking the kernel’s soft‑interrupt handling path. The relevant kernel code is shown below:
<code>static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
        ...
        /* hand the packet to the peer device on the other end of the veth pair */
        if (likely(veth_forward_skb(rcv, skb, rq, rcv_xdp)))
        ...
}

static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
                            struct veth_rq *rq, bool xdp)
{
        return __dev_forward_skb(dev, skb) ?: xdp ?
                veth_xdp_rx(rq, skb) :
                netif_rx(skb); /* soft-interrupt handling path */
}

/* Called with irq disabled */
static inline void ____napi_schedule(struct softnet_data *sd,
                                     struct napi_struct *napi)
{
        list_add_tail(&napi->poll_list, &sd->poll_list);
        /* raise the NET_RX soft interrupt */
        __raise_softirq_irqoff(NET_RX_SOFTIRQ);
}
</code>
This chain (veth_xmit → veth_forward_skb → netif_rx → __raise_softirq_irqoff) explains the higher soft‑interrupt count and the associated performance degradation.
Optimization Strategies
The Calico IPIP overlay adds encapsulation overhead on top of the veth‑pair cost described above. Switching to an underlay network can reduce this cost.
ipvlan L2 Mode
ipvlan binds directly to the host’s Ethernet interface, shortening the data path and avoiding the veth pair’s soft‑interrupt hop. The diagram below illustrates containers sending traffic directly via the host’s eth0.
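As a sketch, an L2 ipvlan interface can be created and handed to a namespace by hand (device names and the address are illustrative; in Kubernetes the ipvlan CNI plugin performs these steps for you):

```shell
# Create an ipvlan slave of eth0 in L2 mode (requires root).
ip link add ipvl0 link eth0 type ipvlan mode l2

# Move it into a pod-like namespace and address it from the host subnet.
ip netns add demo
ip link set ipvl0 netns demo
ip netns exec demo ip addr add 192.168.1.50/24 dev ipvl0
ip netns exec demo ip link set ipvl0 up
```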
ipvlan L3 Mode
In L3 mode the host acts as a router, enabling cross‑subnet container communication without the overlay.
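The only change from the L2 sketch above is the mode flag; the host then routes (rather than bridges) traffic for its slaves:

```shell
# L3 mode: the host acts as a router between ipvlan slaves and the network.
ip link add ipvl0 link eth0 type ipvlan mode l3
```

In L3 mode, upstream routers need routes back to the container subnet via the host, which is what makes cross‑subnet communication work without an overlay.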
Cilium
Cilium is a high‑performance CNI plugin that leverages eBPF to streamline packet processing and reduce iptables overhead. Benchmarks show Cilium achieving higher QPS and lower CPU usage than Calico.
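For readers who want to try this, the cilium CLI offers a quick path on a test cluster (commands are from the cilium-cli tool; defaults and chart values change between releases, so consult the docs for your version):

```shell
# Install Cilium into the cluster in the current kubeconfig context.
cilium install

# Wait for the agent and operator to report healthy.
cilium status --wait

# Run the built-in connectivity test suite between nodes.
cilium connectivity test
```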
Conclusion
Containerization brings agility, resource efficiency, and environment consistency, but it also adds network complexity that can degrade performance. By understanding the architectural impact—especially the role of soft interrupts—and adopting underlay networking solutions such as ipvlan or Cilium, teams can mitigate these issues and fully benefit from cloud‑native deployments.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and hope to accompany you throughout your operations career as we grow together.