Phase 03 · GitOps & App Deployment
Online Boutique — deep dive
ansible/roles/online-boutique/  ·  11 microservices · deployed via ArgoCD
Google's Online Boutique (microservices-demo) is a cloud-native e-commerce app made of 11 microservices written in Go, Python, Java, C#, and Node.js. We use it as a realistic workload to demonstrate GitOps deployment via ArgoCD and as the target for the observability stack in Phase 04.
01 The 11 Microservices
frontend (Go) The only user-facing service. Serves the HTML storefront and talks to all other services via gRPC. Entry point for all browser traffic.
cartservice (C#) Manages the shopping cart state in Redis. Accessed by the frontend when items are added or the cart is viewed.
productcatalogservice (Go) Returns the list of products and individual product details. Reads from a static JSON file — no database.
currencyservice (Node.js) Converts prices between currencies. Called by the frontend to display prices in the user's local currency.
paymentservice (Node.js) Simulates credit card payment processing. Always succeeds — no real transactions.
shippingservice (Go) Returns shipping cost quotes and simulates shipment tracking. Called during checkout.
emailservice (Python) Sends order confirmation emails. In this demo it just logs them — no real SMTP.
checkoutservice (Go) Orchestrates the checkout flow — calls payment, shipping, email, and cart services in sequence.
recommendationservice (Python) Returns product recommendations based on items in the cart. Uses simple list filtering.
adservice (Java) Returns contextual text ads based on the current product category. Runs on the JVM.
redis-cart Redis instance used exclusively by cartservice for cart state storage. Separate from any other Redis in the cluster.
02 The Istio Problem — Why directory.include
Google's microservices-demo  release/  folder contains:
──────────────────────────────────────────────────────────
  kubernetes-manifests.yaml   ← plain k8s: Deployments, Services
  istio-manifests.yaml        ← Istio: VirtualService, Gateway, ServiceEntry
  kubernetes-manifests-with-istio.yaml  ← combined version

Without directory.include — ArgoCD applies EVERYTHING:
  ✓ Deployments, Services        → applied fine
  ✗ VirtualService               → SyncFailed: CRD not installed
  ✗ networking.istio.io/Gateway  → SyncFailed: CRD not installed
  ✗ ServiceEntry                 → SyncFailed: CRD not installed

With directory.include: "kubernetes-manifests.yaml":
  ✓ Deployments, Services        → applied fine
  ✓ istio-manifests.yaml         → SKIPPED (not in include pattern)
  → All 11 services deploy cleanly

We don't run Istio (a full service mesh) in this cluster — it would add significant resource overhead on local VMs. The directory.include filter in the ArgoCD Application tells ArgoCD to only process files matching the pattern, silently ignoring everything else in the release/ directory.
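The filter lives in the Application's source.directory block. A minimal sketch of what the Application manifest can look like — the apiVersion, kind, and field names are real ArgoCD v1alpha1 fields, but the repo URL, target revision, and sync policy shown here are assumptions, not necessarily what this role uses:

```yaml
# Sketch of an ArgoCD Application using directory.include.
# repoURL / targetRevision / syncPolicy are illustrative assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: online-boutique
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/GoogleCloudPlatform/microservices-demo.git
    targetRevision: main
    path: release
    directory:
      include: kubernetes-manifests.yaml   # only this file is processed;
                                           # istio-manifests.yaml is silently skipped
  destination:
    server: https://kubernetes.default.svc
    namespace: online-boutique
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true   # ArgoCD creates the online-boutique namespace
```

directory.include also accepts a glob (e.g. "*.yaml") or a brace-list of files, but a single exact filename is the simplest way to exclude the Istio manifests here.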

03 Ingress — app.lab.local → frontend
# roles/online-boutique/files/boutique-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name:      boutique-ingress
  namespace: online-boutique
spec:
  ingressClassName: nginx
  rules:
    - host: app.lab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
  • Why not managed by ArgoCD? — the Ingress is applied separately by Ansible (not via the ArgoCD Application) because it references the lab-specific app.lab.local hostname, which is not part of the upstream Google manifests.
  • frontend service:80 — only the frontend service needs to be exposed. All other services communicate internally via gRPC and are not publicly accessible. Traffic flows: browser → NGINX → frontend → internal gRPC calls.
  • Timing — the Ansible role waits for the online-boutique namespace (created by ArgoCD) to exist before applying this Ingress; applying it into a namespace that doesn't exist yet would fail.
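That wait-then-apply sequence can be sketched as two Ansible tasks. The kubernetes.core module names and parameters below are real; the task names, retry counts, and file path are illustrative assumptions about how this role is written:

```yaml
# Sketch of the role's ordering logic (retries/delay are assumptions).
- name: Wait for the online-boutique namespace (created by ArgoCD)
  kubernetes.core.k8s_info:
    kind: Namespace
    name: online-boutique
  register: ns
  until: ns.resources | length > 0
  retries: 30
  delay: 10

- name: Apply the lab-specific Ingress
  kubernetes.core.k8s:
    state: present
    src: "{{ role_path }}/files/boutique-ingress.yaml"
```

The until/retries loop polls the API server every 10 seconds, so the play tolerates ArgoCD taking a while to perform its first sync.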
04 Full Traffic Flow
Browser: http://app.lab.local/cart
    │
    │  hosts file: app.lab.local → 192.168.56.200
    │
    ▼
MetalLB: 192.168.56.200  (NGINX Ingress LoadBalancer IP)
    │
    │  NGINX matches Host: app.lab.local → boutique-ingress
    │
    ▼
frontend pod  (Go HTTP server on :8080)
    │
    │  gRPC calls to internal services:
    ├── productcatalogservice:3550
    ├── cartservice:7070
    ├── currencyservice:7000
    ├── recommendationservice:8080
    ├── adservice:9555
    └── checkoutservice:5050  (fans out to payment, shipping, email)
    │
    ▼
HTML response back to browser
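The hosts-file step at the top of this flow is a single line on the laptop running the browser. A sketch, assuming both lab hostnames resolve to the same NGINX Ingress LoadBalancer IP:

```
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows)
192.168.56.200   app.lab.local   argocd.lab.local
```

NGINX then routes by the Host header, so both names can share one IP: app.lab.local matches boutique-ingress, argocd.lab.local matches the ArgoCD ingress.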
05 Verify Phase 03 is Working
# All 11 services Running?
kubectl get pods -n online-boutique

# Ingress wired up?
kubectl get ingress -n online-boutique
# NAME               HOSTS           ADDRESS          PORTS
# boutique-ingress   app.lab.local   192.168.56.200   80

# ArgoCD sync status
kubectl get application online-boutique -n argocd
# SYNC STATUS=Synced  HEALTH STATUS=Healthy

# Test from your laptop
# Browser → http://app.lab.local  (Online Boutique storefront)
# Browser → http://argocd.lab.local  (ArgoCD dashboard)

Phase 03 is complete when all pods show Running, the ArgoCD Application shows Synced / Healthy, and you can browse the Online Boutique storefront at http://app.lab.local. This running workload becomes the observability target in Phase 04 — Prometheus will scrape its metrics and Grafana will display them.