
Performance Degradation After Containerization: Analysis and Optimization Strategies

This article investigates why applications experience higher latency and lower QPS after moving from virtual machines to Kubernetes containers, traces the root cause to the extra soft interrupts generated by the container networking stack, and proposes optimizations such as ipvlan, macvlan, and Cilium to recover the lost performance.


Background

As more companies adopt cloud‑native architectures, monolithic applications evolve into micro‑services, and deployment shifts from virtual machines to containers orchestrated by Kubernetes. However, after containerization we observed that application performance deteriorates compared with the VM baseline.

Benchmark Results

Before Containerization

Using the wrk tool against the service running on a VM, the average response time (RT) was 1.68 ms and QPS reached 716, with the CPU already saturated.
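The exact benchmark invocation is not given in the article; a typical wrk run of this kind looks like the sketch below, where the thread/connection counts, duration, and endpoint are all illustrative placeholders:

```shell
# Illustrative wrk load test: 4 threads, 100 keep-alive connections, 30 s,
# with per-request latency statistics. The real flags and URL are not stated
# in the original article.
wrk -t4 -c100 -d30s --latency http://10.0.0.10:8080/api/ping
```

wrk reports average/percentile latency (the RT figures above) and "Requests/sec" (the QPS figures), so the same command run on the VM and in the container gives a directly comparable pair of numbers.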

After Containerization

Running the same workload in containers yielded an average RT of 2.11 ms and a QPS of 554, again with the CPU fully utilized.

Performance Comparison

| Metric | Virtual Machine | Container |
| ------ | --------------- | --------- |
| RT     | 1.68 ms         | 2.11 ms   |
| QPS    | 716             | 554       |

Overall performance drop: RT increased by about 26% (1.68 ms → 2.11 ms) and QPS fell by about 23% (716 → 554).

Root Cause Analysis

Architecture Differences

The containerized deployment uses Kubernetes with Calico in IPIP mode. Traffic flows from the pod through a service NodePort, then iptables, and finally through Calico’s virtual interface before reaching the host, adding extra hops.
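Whether a cluster is actually running Calico in IPIP mode can be checked from the IP pool definition and the tunnel device; the pool name below is Calico's default and may differ in your cluster:

```shell
# Inspect the Calico IP pool; "ipipMode: Always" means every pod-to-pod
# packet is IPIP-encapsulated, adding an extra encap/decap hop.
calicoctl get ippool default-ipv4-ippool -o yaml

# The IPIP tunnel device Calico uses is visible on each node.
ip -d link show tunl0
```

Seeing `ipipMode: Always` (rather than `CrossSubnet` or `Never`) confirms the data path described above, with traffic passing through iptables rules and the tunl0 device on every hop.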

Performance Analysis

Soft-interrupt (si) CPU usage rose noticeably after containerization. Tracing hot functions with perf and counting soft-interrupt occurrences showed an increase of about 14%.
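The article does not list the measurement commands; one common way to compare soft-interrupt load before and after (assuming a Linux host with perf installed) is:

```shell
# Per-CPU soft-interrupt counters; sample twice and diff to see NET_RX growth.
grep -E 'NET_RX|NET_TX' /proc/softirqs

# Count softirq entries system-wide for 10 s via the irq:softirq_entry
# tracepoint (this is one way to get the ~14% occurrence comparison).
perf stat -a -e irq:softirq_entry -- sleep 10

# Sample call stacks to find the hot functions behind the softirq time.
perf record -a -g -- sleep 10
perf report --stdio | head -n 30
```

Running the same commands on the VM host and on the Kubernetes node under identical wrk load makes the extra NET_RX activity of the container path directly visible.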

```c
static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
	...
	if (likely(veth_forward_skb(rcv, skb, rq, rcv_xdp)))
	...
}

static int veth_forward_skb(struct net_device *dev, struct sk_buff *skb,
			    struct veth_rq *rq, bool xdp)
{
	return __dev_forward_skb(dev, skb) ?: xdp ?
		veth_xdp_rx(rq, skb) :
		netif_rx(skb); /* hands the packet to soft-interrupt processing */
}

/* Called with irq disabled */
static inline void ____napi_schedule(struct softnet_data *sd,
				     struct napi_struct *napi)
{
	list_add_tail(&napi->poll_list, &sd->poll_list);
	/* raise the NET_RX soft-interrupt */
	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
}
```

The veth transmission chain ends with a soft‑interrupt, explaining the higher interrupt count after containerization.

Soft‑Interrupt Reason

Containers and the host run in separate network namespaces; traffic must cross a veth pair, forcing the packet through the full kernel stack and generating additional soft interrupts.
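The namespace crossing can be reproduced by hand. The sketch below (namespace, interface names, and addresses are made up) builds the same veth topology a CNI plugin creates, with one end inside the "container" namespace and the other on the host; every packet between the two traverses the full kernel stack and triggers a NET_RX soft interrupt. It requires root on a Linux host:

```shell
# A network namespace standing in for the container
ip netns add demo-ns

# A veth pair: packets written to one end pop out of the other,
# passing through netif_rx() and a NET_RX softirq on the way.
ip link add veth-host type veth peer name veth-ctr
ip link set veth-ctr netns demo-ns

# Address the host end and the "container" end
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo-ns ip addr add 10.200.0.2/24 dev veth-ctr
ip netns exec demo-ns ip link set veth-ctr up
ip netns exec demo-ns ip link set lo up

# Host-to-"container" traffic now crosses the veth pair
ping -c1 10.200.0.2

# Clean up
ip netns del demo-ns
```

Watching `grep NET_RX /proc/softirqs` while driving traffic over this pair shows the counters climbing, which is exactly the overhead the benchmark measured.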

Optimization Strategies

Calico’s IPIP overlay introduces overhead. Alternatives such as macvlan/ipvlan or Cilium can shorten the data path and eliminate extra soft‑interrupts.

ipvlan L2 Mode

ipvlan L2 lets containers use the host's Ethernet interface directly, shortening the send path and avoiding the extra veth soft-interrupts.
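Creating an ipvlan L2 slave by hand looks like this (the parent interface `eth0`, namespace name, and address are placeholders; Docker's ipvlan driver and the CNI ipvlan plugin do the equivalent). It requires root:

```shell
# ipvlan slave in L2 mode: shares eth0's MAC and broadcast domain,
# so container traffic exits via the parent NIC without a veth hop.
ip netns add ipvl-ns
ip link add ipvl0 link eth0 type ipvlan mode l2
ip link set ipvl0 netns ipvl-ns
ip netns exec ipvl-ns ip addr add 192.168.1.50/24 dev ipvl0
ip netns exec ipvl-ns ip link set ipvl0 up
```

The container's address sits on the same subnet as the host NIC, which is why the send path is so much shorter than the veth + overlay chain.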

ipvlan L3 Mode

In L3 mode the host acts as a router, enabling cross‑subnet container communication.
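The L3 variant differs only in the mode flag; the host kernel then routes (rather than L2-switches) between slaves, which is what enables cross-subnet communication:

```shell
# Same parent NIC, but packets are routed in the host kernel,
# so the slave can sit on a different subnet from eth0.
ip link add ipvl1 link eth0 type ipvlan mode l3
```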

Cilium

Cilium is an eBPF‑based CNI that bypasses iptables, providing higher performance. Benchmark data shows Cilium achieving better QPS and lower CPU usage than Calico.
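A typical Cilium install goes through Helm; the sketch below is illustrative, and the chart values (notably `kubeProxyReplacement`, which moves service load-balancing from iptables into eBPF) vary by Cilium version, so check the docs for yours:

```shell
# Add the official Cilium chart repository and install into kube-system.
helm repo add cilium https://helm.cilium.io/
helm repo update

# kubeProxyReplacement=true replaces kube-proxy's iptables service path
# with eBPF programs, eliminating the per-packet rule traversal.
helm install cilium cilium/cilium --namespace kube-system \
  --set kubeProxyReplacement=true
```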

Conclusion

Containerization brings agility, resource efficiency, and environment consistency, but also adds network complexity that can degrade performance. By selecting appropriate CNI plugins (ipvlan, macvlan, Cilium) and understanding the impact of soft‑interrupts, teams can mitigate the slowdown and fully benefit from cloud‑native deployments.

Tags: Performance, Kubernetes, Containerization, Networking, Cilium, ipvlan
Written by

Architect's Guide

Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.
