Deploying MetalLB Load Balancer on Bare‑Metal Kubernetes Clusters
This article explains why exposing services on a bare‑metal Kubernetes cluster requires a custom load‑balancer solution, introduces MetalLB, details its deployment requirements, explains its Layer 2 and BGP modes, provides step‑by‑step installation commands, and shows how to verify the setup.
When a service is deployed on Kubernetes, exposing it to external users is trivial on cloud platforms (such as Alibaba Cloud, Tencent Cloud, AWS) using the cloud provider’s LoadBalancer service. However, on a self‑built bare‑metal Kubernetes cluster, the default networking does not support load balancing, so alternatives like Ingress, NodePort, or ExternalIPs must be used, each with its own drawbacks.
MetalLB addresses this pain point by providing a LoadBalancer implementation that integrates with standard network devices, allowing external services to run normally on bare‑metal clusters and reducing operational overhead.
Deployment Requirements
A cluster running Kubernetes version 1.13.0 or later that does not already have network load-balancing functionality.
A pool of IPv4 addresses for MetalLB allocation.
If using BGP mode, one or more routers that support BGP.
For Layer 2 mode, nodes must allow traffic on port 7946 (TCP and UDP), which the speakers use to communicate with each other.
The cluster network must be compatible with MetalLB (see compatibility table below).
| Network Type | Compatibility |
| --- | --- |
| Antrea | Yes |
| Calico | Mostly |
| Canal | Yes |
| Cilium | Yes |
| Flannel | Yes |
| Kube-ovn | Yes |
| Kube-router | Mostly |
| Weave Net | Mostly |
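Before configuring MetalLB, it is worth sanity-checking that the planned address pool is large enough for the services you intend to expose. A minimal stand-alone sketch using Python's standard `ipaddress` module (the range is the example pool used later in this article):

```python
import ipaddress

def pool_size(start: str, end: str) -> int:
    """Count the addresses in an inclusive IPv4 range, as MetalLB
    interprets a pool entry like "192.168.214.50-192.168.214.80"."""
    first = ipaddress.IPv4Address(start)
    last = ipaddress.IPv4Address(end)
    return int(last) - int(first) + 1

# The Layer 2 example pool used later in this article:
print(pool_size("192.168.214.50", "192.168.214.80"))  # 31 addresses
```

Each Service of type LoadBalancer consumes one address from the pool, so 31 addresses cover at most 31 exposed services.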
How MetalLB Works
MetalLB consists of two components: a Controller (deployed as a Deployment) and a Speaker (deployed as a DaemonSet on every node). The Controller watches Service objects; when a Service is set to LoadBalancer type, the Controller allocates an IP from the configured pool and manages its lifecycle. The Speaker advertises the allocated IP using either Layer 2 ARP announcements or BGP announcements, depending on the chosen mode. Traffic arriving at a node is first handled by kube‑proxy, which forwards it to the appropriate Pod.
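In Layer 2 mode, only one speaker answers ARP requests for a given service IP, so all speakers must agree on a single winner without a central coordinator. The idea can be illustrated with a toy deterministic hash-based election (this is only an illustration of the concept, not MetalLB's actual election code):

```python
import hashlib

def elect_speaker(nodes: list[str], service_ip: str) -> str:
    """Toy deterministic election: every node hashes (node, service IP)
    identically, so all nodes independently agree on the same winner."""
    return min(
        nodes,
        key=lambda n: hashlib.sha256(f"{n}/{service_ip}".encode()).hexdigest(),
    )

nodes = ["node-a", "node-b", "node-c"]
winner = elect_speaker(nodes, "192.168.214.50")
# Every node computes the same winner; that node's speaker then answers
# ARP for the service IP, and kube-proxy forwards the traffic to a Pod.
```

Because only the elected node receives traffic for a given IP, Layer 2 mode provides failover rather than true load spreading across nodes.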
Installation Steps
Enable strict ARP for kube-proxy if the cluster runs in IPVS mode (required since Kubernetes v1.14.2):

```shell
$ kubectl edit configmap -n kube-system kube-proxy
```

Set strictARP to true in the configuration:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
```
Install the MetalLB components (deployed into the metallb-system namespace by default):

```shell
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml
```
Configure the desired mode.

Layer 2 mode: create an IPAddressPool and an L2Advertisement.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.214.50-192.168.214.80   # IP pool for the LB
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2adver
  namespace: metallb-system
```

BGP mode: define a BGPPeer, an IPAddressPool, and a BGPAdvertisement.

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: sample
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.0/24
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadver
  namespace: metallb-system
```
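When more than one IPAddressPool is defined, a Service can request an address from a specific pool. A hedged sketch, assuming the metallb.universe.tf/address-pool annotation supported by MetalLB v0.13 (the Service and pool names are the examples from this article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  annotations:
    metallb.universe.tf/address-pool: ip-pool   # request an IP from this pool
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
```

Without the annotation, MetalLB picks an address from any pool that has free capacity.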
Validate the installation by creating a sample Deployment and a Service of type LoadBalancer, checking that an external IP is assigned, and accessing the application from a browser.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80
```
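Once the Service reports an EXTERNAL-IP, it should fall inside the configured pool. A small stand-alone check using Python's `ipaddress` module (the addresses are the example values from this article; the assigned IP will vary per cluster):

```python
import ipaddress

def in_pool(ip: str, start: str, end: str) -> bool:
    """True if ip lies in the inclusive range [start, end]."""
    return (int(ipaddress.IPv4Address(start))
            <= int(ipaddress.IPv4Address(ip))
            <= int(ipaddress.IPv4Address(end)))

# An address assigned from the example Layer 2 pool:
print(in_pool("192.168.214.50", "192.168.214.50", "192.168.214.80"))  # True
# An address outside the pool would indicate a misconfiguration:
print(in_pool("192.168.214.90", "192.168.214.50", "192.168.214.80"))  # False
```

If the EXTERNAL-IP column stays in the Pending state instead, check the speaker logs and confirm the IPAddressPool and advertisement resources were applied.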
Project Maturity
MetalLB is currently in beta, but it is already used in production and non-production clusters by many individuals and companies. Judging by the frequency of bug reports, no major issues have surfaced.
Thank you for reading. If you found this article helpful, feel free to share it with your friends or technical groups.
DevOps Operations Practice
We share professional insights on cloud-native, DevOps & operations, Kubernetes, observability & monitoring, and Linux systems.