```
Cloud cluster (AWS/GCP/Azure)
─────────────────────────────────────────────────────────
kubectl expose ... --type=LoadBalancer
        ↓
Cloud provider API                     ← automatically provisions a real external IP
        ↓
Service gets EXTERNAL-IP: 34.102.x.x   ← reachable from anywhere

Local cluster (our setup)
─────────────────────────────────────────────────────────
kubectl expose ... --type=LoadBalancer
        ↓
??? nobody handles this ???
        ↓
Service stays EXTERNAL-IP: <pending>   ← forever, nothing happens

Local cluster + MetalLB
─────────────────────────────────────────────────────────
kubectl expose ... --type=LoadBalancer
        ↓
MetalLB controller                     ← watches for pending LoadBalancer services
        ↓
Service gets EXTERNAL-IP: 192.168.56.200   ← from our lab-pool
        ↓
MetalLB speaker                        ← announces the IP via ARP so your laptop can reach it
```
```yaml
# roles/metallb/tasks/main.yml
- name: Install MetalLB
  become: false
  command: >
    kubectl apply --server-side --force-conflicts
    -f https://raw.githubusercontent.com/metallb/metallb/v{{ metallb_version }}/config/manifests/metallb-native.yaml
  environment:
    KUBECONFIG: /home/vagrant/.kube/config
```
MetalLB's native manifest creates everything it needs in one apply: the
metallb-system namespace, RBAC rules, the Controller Deployment, and the
Speaker DaemonSet.
The version is pinned as `metallb_version` in group_vars/all.yml, currently
0.14.9. Changing it there updates both the install URL and any other
reference in one place.
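For reference, the variable looks roughly like this in group_vars/all.yml (the name `metallb_version` matches the template in the install task above; the exact quoting and comment are assumptions):

```yaml
# group_vars/all.yml
metallb_version: "0.14.9"   # interpolated into the manifest URL in the install task
```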
The Controller watches the cluster for LoadBalancer services with a
pending external IP. When it finds one, it picks an IP from the pool and assigns it
to the service. Think of it as the IP allocator.
```yaml
# roles/metallb/files/ip-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.56.200-192.168.56.250
```
The IPAddressPool is a custom resource (CRD) that tells MetalLB which IPs
it is allowed to hand out. Without this, the Controller doesn't know what range to
allocate from and services stay pending.
The pool deliberately starts at .200: the node VMs sit at
.10/.11/.12, leaving the upper range free for MetalLB.
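If you ever tweak the addresses, it's worth sanity-checking that the pool doesn't collide with the node IPs. A quick check with Python's `ipaddress` module, using this lab's values (the helper is just for illustration):

```python
from ipaddress import IPv4Address

NODE_IPS = ["192.168.56.10", "192.168.56.11", "192.168.56.12"]
POOL = (IPv4Address("192.168.56.200"), IPv4Address("192.168.56.250"))

def in_pool(ip: str) -> bool:
    """True if ip falls inside the MetalLB pool range."""
    return POOL[0] <= IPv4Address(ip) <= POOL[1]

for ip in NODE_IPS:
    assert not in_pool(ip), f"{ip} overlaps the MetalLB pool!"
print("no overlap between node IPs and lab-pool")
```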
```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool   # references the pool above
```
The IPAddressPool defines the IPs. The L2Advertisement defines
how those IPs are announced. Without this resource, IPs are assigned but never
advertised — your laptop still can't route to them.
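In the role, these two custom resources need their own apply step after the install task. A sketch of what that could look like (the wait step matters because MetalLB's admission webhook rejects CRs until the controller is up; the file paths, task names, and the `l2-advertisement.yaml` filename are assumptions):

```yaml
# roles/metallb/tasks/main.yml (continued) — a sketch, not the exact role
- name: Wait for MetalLB controller to be ready
  become: false
  command: >
    kubectl -n metallb-system wait --for=condition=Available
    deployment/controller --timeout=120s
  environment:
    KUBECONFIG: /home/vagrant/.kube/config

- name: Apply IPAddressPool and L2Advertisement
  become: false
  command: kubectl apply -f /vagrant/roles/metallb/files/{{ item }}  # path assumed
  loop:
    - ip-pool.yaml
    - l2-advertisement.yaml
  environment:
    KUBECONFIG: /home/vagrant/.kube/config
```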
When a service is assigned 192.168.56.200, the Speaker on the owning node responds to ARP requests
for that IP. Your laptop broadcasts "who has .200?" and that node replies with its MAC
address. Standard Ethernet, no special router config needed.
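To make "who has .200?" concrete, here is what that ARP request actually contains; a sketch that packs the 28-byte payload by hand with Python's `struct` (the sender MAC and IP are placeholders, not values from this lab):

```python
import struct

def arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    """Build the 28-byte ARP who-has payload a host broadcasts."""
    def ip(s: str) -> bytes:
        return bytes(int(octet) for octet in s.split("."))
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                     # hardware type: Ethernet
        0x0800,                # protocol type: IPv4
        6, 4,                  # MAC length, IP length
        1,                     # opcode 1 = request ("who has?")
        sender_mac, ip(sender_ip),
        b"\x00" * 6,           # target MAC all zeros — that's the question
        ip(target_ip),
    )

pkt = arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.56.1", "192.168.56.200")
print(len(pkt))  # 28 — the Speaker answers with opcode 2 and the node's MAC
```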
```
Your Laptop
   │
   │ curl http://app.lab.local
   ↓   (hosts file resolves to 192.168.56.200)
   │
   │ ARP: "who has 192.168.56.200?"
   ↓   (MetalLB Speaker replies with node MAC)
   ▼
192.168.56.200                 ← MetalLB assigns this to ingress-nginx Service
   │
   │ Traffic arrives at the node running the Speaker
   ↓   kube-proxy DNAT rule
   ▼
NGINX Ingress Controller Pod
   │
   │ Routes by hostname (Host: app.lab.local)
   ▼
Application Pod (Online Boutique / ArgoCD / Grafana)
```
MetalLB handles one thing: getting traffic from your laptop to the cluster edge. Everything after that (hostname routing to the right pod) is NGINX Ingress's job. They're designed to work together — MetalLB gives NGINX one stable IP, NGINX routes everything behind it.
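From the application side, the handoff is just an Ingress rule on the hostname. A hedged example for `app.lab.local` (the backend service name and port are assumptions, not taken from this lab's manifests):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: nginx          # handled by the NGINX Ingress Controller
  rules:
    - host: app.lab.local          # matched against the Host header
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # assumed name of the app's Service
                port:
                  number: 80
```

NGINX sees the Host header, matches this rule, and forwards to the pod; MetalLB never looks past the IP.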