How to Deploy a High‑Availability Ingress‑NGINX Controller with Helm
This step‑by‑step guide explains how to use Helm to install a highly available ingress‑nginx controller on Kubernetes, covering version compatibility, replica configuration, pod anti‑affinity, load‑balancer setup with HAProxy, and verification of the deployment.
As micro‑service architectures become mainstream, Kubernetes serves as the foundation for many modern applications, and the Ingress controller manages incoming HTTP/HTTPS traffic. This guide demonstrates how to deploy a highly available ingress‑nginx controller using Helm, simplifying traffic management and improving system robustness.
Supported Versions
Ingress‑NGINX v1.11.2 – Helm chart 4.11.2 – Kubernetes 1.26 – 1.30, Alpine 3.20.0, NGINX 1.25.5
Ingress‑NGINX v1.10.4 – Helm chart 4.10.4 – Kubernetes 1.26 – 1.30, Alpine 3.20.0, NGINX 1.25.5
Older releases (v1.9.x, v1.8.4, etc.) are listed in the upstream supported‑versions table (see References).
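To see which chart versions are currently published before pinning one, you can query the Helm repository (a command sketch; it assumes the ingress-nginx repo has already been added as shown in the install step below):

```shell
# List published chart versions alongside the controller app version;
# choose the pair that matches your Kubernetes release.
helm search repo ingress-nginx/ingress-nginx --versions | head -n 5
```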
High‑Availability Deployment Strategy
Deploy two ingress‑nginx replicas and use pod anti‑affinity to schedule them on separate nodes.
Place a load balancer in front of the replicas so traffic is evenly distributed.
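Once the install below completes, the anti-affinity rule can be confirmed by listing the controller pods together with the nodes they landed on; with two replicas and a required anti-affinity rule, each pod should sit on a different node (sketch, assuming the release is installed into kube-system as in this guide):

```shell
# The NODE column should show two distinct nodes (here k8s-node01 and k8s-node02).
kubectl -n kube-system get pod -l app.kubernetes.io/component=controller -o wide
```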
Deploy ingress‑nginx with Helm
<code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm pull ingress-nginx/ingress-nginx --untar --untardir /etc/kubernetes/addons --version 4.11.2</code>
Create a values file for the controller:
<code>cat <<'EOF' | sudo tee /etc/kubernetes/addons/ingress-nginx-value.yml > /dev/null
controller:
  replicaCount: 2
  kind: Deployment
  image:
    registry: 172.139.20.170:5000
    image: library/controller
    tag: "v1.11.2"
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - controller
          topologyKey: kubernetes.io/hostname
  nodeSelector:
    ingress-nginx: controller
  service:
    type: NodePort
    externalTrafficPolicy: Local
    nodePorts:
      http: "30080"
      https: "30443"
  metrics:
    enabled: true
    port: 10254
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  admissionWebhooks:
    patch:
      enabled: true
      image:
        registry: 172.139.20.170:5000
        image: library/kube-webhook-certgen
        tag: 'v1.4.3'
EOF</code>
Label the nodes and install the chart:
<code>kubectl label node k8s-node01 ingress-nginx=controller
kubectl label node k8s-node02 ingress-nginx=controller
helm install -n kube-system ingress-nginx -f /etc/kubernetes/addons/ingress-nginx-value.yml /etc/kubernetes/addons/ingress-nginx</code>
Load‑Balancing Ingress‑NGINX with HAProxy
Add TCP listeners for the two NodePorts to /etc/haproxy/haproxy.cfg:
<code>listen ingress-http-tcp
    bind *:80
    mode tcp
    server ingress01 172.139.20.175:30080 maxconn 32 check
    server ingress02 172.139.20.75:30080 maxconn 32 check

listen ingress-https-tcp
    bind *:443
    mode tcp
    server ingress01 172.139.20.175:30443 maxconn 32 check
    server ingress02 172.139.20.75:30443 maxconn 32 check</code>
Update docker-compose.yml to expose ports 80 and 443 and set net.ipv4.ip_unprivileged_port_start=0 so the unprivileged container can bind ports below 1024 (requires kernel 4.11 or later):
<code>cat /etc/haproxy/docker-compose.yml
name: haproxy
services:
  haproxy:
    container_name: haproxy
    image: haproxy:2.9-alpine
    restart: always
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    ports:
      - 80:80
      - 443:443
    sysctls:
      - net.ipv4.ip_unprivileged_port_start=0</code>
Restart HAProxy:
<code>sudo docker-compose -f /etc/haproxy/docker-compose.yml restart</code>
Verification
Check the pods:
<code>kubectl -n kube-system get pod -l app.kubernetes.io/instance=ingress-nginx</code>
Both controller pods should be Running, and the service should be reachable through the load balancer.
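For an end-to-end check through HAProxy, request any unmatched host through the load balancer; ingress-nginx answers from its default backend with HTTP 404 when no Ingress rule matches, which still proves the full path works. HAPROXY_HOST below is a placeholder for your HAProxy server's address:

```shell
# A 404 from the controller's default backend (rather than a connection
# error) shows HAProxy -> NodePort -> ingress-nginx is wired up.
curl -s -o /dev/null -w '%{http_code}\n' http://HAPROXY_HOST/
```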
References
https://kubernetes.github.io/ingress-nginx/
https://github.com/kubernetes/ingress-nginx?tab=readme-ov-file#supported-versions-table
Deploying a high‑availability Ingress controller with Helm streamlines configuration and enhances system robustness, helping both development and production environments manage application traffic reliably.
Linux Ops Smart Journey
The operations journey never stops—pursuing excellence endlessly.