Without Ingress — one IP per service (not scalable)
────────────────────────────────────────────────────────────
192.168.56.200 → ArgoCD
192.168.56.201 → Grafana
192.168.56.202 → Online Boutique
...one MetalLB IP consumed per service

With Ingress — one IP, routing by hostname
────────────────────────────────────────────────────────────
     192.168.56.200 (MetalLB assigns this to NGINX)
                    │
        ┌───────────┼───────────┐
        ▼           ▼           ▼
     argocd      grafana       app
   .lab.local  .lab.local   .lab.local
        ↓           ↓           ↓
   ArgoCD svc  Grafana svc  Boutique svc
A LoadBalancer service gives you direct access to one service on one IP.
An Ingress is a routing layer — it sits in front of many services and
dispatches requests by hostname, so you only consume one LoadBalancer IP for the entire
cluster's external traffic.
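To make the fan-out in the diagram concrete: a single Ingress object can carry several hostname rules at once. A hedged sketch — the resource name and the Grafana service name/port here are illustrative placeholders, not the actual Phase 03/04 manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lab-ingress                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: argocd.lab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
    - host: grafana.lab.local          # same pattern; service name assumed
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 80
```

One combined Ingress or one per application both work: the controller merges every Ingress object it sees into a single NGINX config.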
# roles/nginx-ingress/tasks/main.yml

# 1. Install Helm on the master VM
- name: Install Helm
  shell: curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  args:
    creates: /usr/local/bin/helm    # idempotent — skip if already installed

# 2. Add the ingress-nginx chart repo
- name: Add ingress-nginx Helm repo
  command: helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# 3. Install the chart
- name: Install NGINX Ingress Controller
  command: >
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx
    --namespace ingress-nginx
    --create-namespace
    --version 4.11.3
    --set controller.service.type=LoadBalancer
    --set controller.admissionWebhooks.enabled=false
    --timeout 10m0s
The install script places the binary at /usr/local/bin/helm. The creates: argument makes the task idempotent — it is skipped if the binary already exists.
--create-namespace creates the ingress-nginx namespace if it doesn't exist. Without this flag, the install fails if the namespace is absent.
controller.service.type=LoadBalancer exposes the controller through a LoadBalancer service for the NGINX controller, which MetalLB sees and assigns an IP from the pool. Without this it defaults to NodePort.
--timeout 10m0s gives the release extra time, since the controller can take a while to reach the Running state on slow or cold VMs.
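The two --set flags can equally live in a values file, which is easier to review and keep in Git. A minimal sketch — the file path is my own choice, not part of the playbook:

```yaml
# roles/nginx-ingress/files/values.yaml (hypothetical path)
# Equivalent to the two --set flags above
controller:
  service:
    type: LoadBalancer        # let MetalLB assign an external IP
  admissionWebhooks:
    enabled: false            # don't create the admission webhook pieces
```

Install then becomes helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f values.yaml.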
# After helm install, inside the cluster:
# (admissionWebhooks disabled — only one service created)
NAMESPACE       NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)
ingress-nginx   ingress-nginx-controller   LoadBalancer   10.103.174.240   192.168.56.200   80:31787/TCP,443:31809/TCP

# The controller pod:
NAMESPACE       NAME                                        READY   STATUS
ingress-nginx   ingress-nginx-controller-7bf6f99bc6-tlgm5   1/1     Running
With admissionWebhooks.enabled=false, the ingress-nginx-controller-admission ClusterIP service and its validation job are not created. Ingress objects are still accepted — they just aren't validated server-side before admission.
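Assuming kubectl access on the master, you can confirm this yourself (these checks are mine, not part of the playbook):

```shell
# When webhooks are enabled, the chart creates a ValidatingWebhookConfiguration
# named ingress-nginx-admission; with them disabled it should be absent:
kubectl get validatingwebhookconfigurations

# And the namespace should contain only the controller service:
kubectl get svc -n ingress-nginx
```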
The controller watches Ingress objects cluster-wide and auto-generates its NGINX config from them. No restart is needed when Ingress rules change.
# Example Ingress resource (used in Phase 03 + 04)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: argocd.lab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
You don't configure NGINX directly — you create Ingress objects in
Kubernetes and the controller auto-generates NGINX config from them. Every time you
apply an Ingress resource, NGINX reloads without any restarts.
Each rule matches on the HTTP Host header. When a request arrives for argocd.lab.local, NGINX proxies it to the argocd-server service.
pathType: Prefix with path: / matches every request path (i.e., everything). You can get more specific with exact paths or regex for complex routing.
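One note on the annotation: kubernetes.io/ingress.class is the legacy way to target a controller, and networking.k8s.io/v1 expresses the same intent with the spec.ingressClassName field (the ingress-nginx chart registers an IngressClass named nginx by default). An equivalent form of the resource above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-ingress
spec:
  ingressClassName: nginx     # replaces the kubernetes.io/ingress.class annotation
  rules:
    - host: argocd.lab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
```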
# C:\Windows\System32\drivers\etc\hosts (open as Administrator)

# k8s-observability-platform — lab.local domains
192.168.56.200   argocd.lab.local
192.168.56.200   grafana.lab.local
192.168.56.200   app.lab.local
lab.local is not a real DNS domain — no DNS server on the internet knows
about it. The hosts file is your OS's local DNS override: before asking
any DNS server, your OS checks this file first.
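On macOS or Linux the same override lives in /etc/hosts instead — identical entries, edited with sudo:

```
# /etc/hosts (macOS / Linux) — same mappings as the Windows file
192.168.56.200   argocd.lab.local
192.168.56.200   grafana.lab.local
192.168.56.200   app.lab.local
```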
All three hostnames map to the same address, 192.168.56.200 (or whatever MetalLB assigned). Every request therefore arrives at NGINX, which then routes each to the correct service by hostname.
The examples here assume 192.168.56.200, but check the Ansible debug output or run kubectl get svc -n ingress-nginx to confirm the actual assigned IP before updating your hosts file.
Browser: http://grafana.lab.local
  │
  │ 1. OS checks hosts file
  │    grafana.lab.local → 192.168.56.200
  │
  │ 2. ARP: who has 192.168.56.200?
  │    MetalLB Speaker replies → k8s-master MAC
  ▼
192.168.56.200:80 (ingress-nginx-controller LoadBalancer service)
  │
  │ 3. kube-proxy DNAT → NGINX pod
  ▼
NGINX Ingress Controller
  │
  │ 4. Reads Host header: grafana.lab.local
  │    Matches Ingress rule → grafana service:80
  ▼
Grafana Pod (Phase 04)
# From inside k8s-master (vagrant ssh k8s-master)

# MetalLB components
kubectl get pods -n metallb-system
kubectl get ipaddresspool -n metallb-system
kubectl get l2advertisement -n metallb-system

# NGINX Ingress — confirm LoadBalancer got an IP
kubectl get svc -n ingress-nginx
# Expected output:
# NAME                       TYPE           EXTERNAL-IP      PORT(S)
# ingress-nginx-controller   LoadBalancer   192.168.56.200   80:xxx/TCP,443:xxx/TCP

# From your laptop — test NGINX responds
curl http://192.168.56.200    # should return 404 — no Ingress rules yet
A 404 from NGINX at the IP is actually a success at this stage — it means
NGINX is reachable and running, but no Ingress rules exist yet to route the request
anywhere. Those get created in Phase 03 and 04 when services are deployed.
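Since NGINX only looks at the Host header, you can also exercise the routing without touching the hosts file at all. A quick check from the laptop, once the Phase 03/04 Ingress rules exist:

```shell
# Bare IP: no rule matches, so the NGINX default backend answers 404
curl http://192.168.56.200

# Supply the Host header explicitly: the grafana.lab.local rule matches
curl -H "Host: grafana.lab.local" http://192.168.56.200
```

This is a handy way to tell DNS/hosts-file problems apart from Ingress routing problems.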