Cloud Native

Investigation of Kubernetes Container Isolation Mechanism and Its Impact

This article investigates a cloud‑vendor Kubernetes isolation feature that inserts iptables DROP rules into a pod’s network namespace. It demonstrates how the feature blocks all traffic, how a livenessProbe then triggers repeated container restarts, and how the business impact varies with replica count; container state survives isolation only when no livenessProbe is configured.

37 Interactive Technology Team

This article documents a practical investigation of the container isolation feature provided by a cloud‑vendor security product for Kubernetes clusters. The feature is designed to quickly isolate a compromised pod by inserting iptables rules that drop all traffic in the pod’s network namespace, preventing lateral movement or node escape.

Purpose: To understand the implementation principle of the isolation function, verify its effectiveness, and assess its impact on production workloads.

Isolation Principle: The product enters the target pod’s network namespace and adds iptables rules that drop all inbound and outbound traffic. The rules live in a custom chain (e.g., test-nips) whose single rule drops every packet.
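The mechanism can be sketched as a short command sequence. This is a reconstruction under stated assumptions, not the vendor’s actual tooling: the chain name test-nips and the PID 31677 come from this article, and the script only prints the commands (a dry run) so it is safe to inspect; remove the echo wrappers to apply the rules for real as root on the node.

```shell
#!/bin/sh
# Dry-run sketch of how the isolation rules are likely installed.
# Assumptions: chain name "test-nips" and PID 31677 are taken from the article.
PID="${PID:-31677}"
NS="nsenter --net=/proc/$PID/ns/net"

isolation_cmds() {
  echo "$NS iptables -N test-nips"              # create the custom chain
  echo "$NS iptables -I INPUT 1 -j test-nips"   # hook it in front of INPUT
  echo "$NS iptables -A test-nips -j DROP"      # drop every packet
}

isolation_cmds   # print the sequence instead of executing it
```

Dropping everything that reaches INPUT also breaks outbound connections, since replies never arrive; this is consistent with the observation that both containers lose all connectivity, and with the single “(1 references)” shown for the custom chain in the listing below.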

Example of the iptables state inside the pod’s network namespace after isolation (the custom test-nips chain has been added; before isolation, no custom rules exist):

~]# nsenter --net=/proc/$pid/ns/net iptables -nL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
test-nips  all  --  0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain test-nips (1 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0

After isolation, the custom chain contains a DROP rule that blocks all traffic.

Test Environment: A Deployment named nginx with two containers per pod: an nginx container (with a livenessProbe) and a centos:7 container that sleeps. The YAML definition is:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        livenessProbe:
          tcpSocket:
            port: 80
      - image: centos:7
        name: centos
        command: ["/bin/sleep"]
        args: ["600000"]

Steps performed:

1. Apply the deployment (kubectl apply -f test-nginx.yaml); both pods start normally.

2. Inspect the shared network namespace (PID 31677) and confirm that no iptables rules exist before isolation.

3. Trigger the isolation feature on one pod. The nginx container, which has a livenessProbe, fails its health checks and is restarted repeatedly.

4. Verify that neither container can reach external domains anymore, confirming the network block.

5. Check the iptables list inside the namespace after isolation; it now shows a DROP all rule.
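Step 3 can be reasoned about with a small model of what a tcpSocket livenessProbe does: attempt a TCP connect, count consecutive failures, and restart the container once the failure threshold (kubelet default: 3) is reached. This is an illustration only; port 59999 is assumed to be closed locally and stands in for a port made unreachable by the DROP rule (a blackholed port would fail by timeout rather than refusal, but the counting logic is the same).

```shell
#!/bin/sh
# Simplified model of a tcpSocket livenessProbe (illustration only).
# Assumption: nothing listens on 127.0.0.1:59999, so every probe fails,
# just as probes fail once the DROP rule blackholes the pod's traffic.
HOST=127.0.0.1
PORT=59999
THRESHOLD=3          # kubelet's default failureThreshold

probe() {
  # The real kubelet simply attempts a TCP connect with a timeout.
  timeout 1 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null
}

failures=0
while [ "$failures" -lt "$THRESHOLD" ]; do
  probe && break
  failures=$((failures + 1))
done

if [ "$failures" -ge "$THRESHOLD" ]; then
  echo "liveness failed $failures times -> kubelet would restart the container"
fi
```

This is why only the nginx container restart-loops: the centos container has no probe, so nothing ever marks it unhealthy.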

Key observations:

- The isolation effectively cuts off all network traffic for the affected pod.

- If a container has a livenessProbe, the container is restarted repeatedly, and any files written inside it are lost on each restart.

- If no livenessProbe is configured, the container is not restarted and retains its filesystem state.

- Business impact depends on replica count: with multiple replicas the service continues; with a single replica it becomes unavailable.
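The replica-count observation boils down to counting the replicas that remain un-isolated. A minimal sketch, using the numbers from the test deployment (2 replicas, 1 pod isolated; both values are assumptions for illustration):

```shell
#!/bin/sh
# Back-of-envelope availability check after isolating pods.
# Assumptions: 2 replicas as in the test deployment, 1 pod isolated.
replicas=2
isolated=1
remaining=$((replicas - isolated))

if [ "$remaining" -gt 0 ]; then
  echo "degraded but available: $remaining/$replicas pods still serving"
else
  echo "unavailable: every replica is isolated"
fi
```

With replicas=1 (or all pods isolated at once), the second branch applies and the service is down until the isolation is lifted.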

Questions & Answers (summarized):

- Effect of isolation: complete network disconnection.

- Isolation mechanism: an iptables DROP rule added in the pod’s network namespace.

- Pod behavior after isolation: network loss and, if a livenessProbe is set, repeated container restarts.

- Will new pods be created? No; the pod count stays the same.

- Business impact: depends on the replica count and on which pods are isolated.

- Can container state be preserved? Only if no livenessProbe is set; otherwise the container restarts and its state is lost.

The article concludes that the isolation feature provides a reliable way to contain compromised workloads, but operators must consider probe configurations and replica strategies to mitigate potential service disruption.

Tags: Testing, Kubernetes, Container Security, iptables, isolation, livenessProbe