GitOps Module — Presentation 3 of 4

Bootstrapping the GitOps Workflow

From Empty Repo to Production-Ready Environment Management

Civica Training Program

Where We Are Now

ArgoCD is installed and running. Now the real work begins.

The Challenge

You have ArgoCD running on AKS. You've connected a Git repo. But that repo is empty. Your team lead asks: "How do we structure this so it works for 3 environments, 12 microservices, and 4 teams? And we need secrets management."

Phase 1

Repo structure with Kustomize

Phase 2

App of Apps bootstrap

Phase 3

Secrets, TLS, and environments

GitOps Repository Structure

A well-organised repo is the foundation of successful GitOps.

gitops-config/
├── argocd/              # ArgoCD meta-configuration
│   ├── apps.yaml           # App of Apps root
│   ├── projects.yaml       # ArgoCD Projects
│   └── repo-config.yaml    # Repository credentials
│
├── infra/               # Cluster infrastructure
│   ├── cert-manager/
│   ├── external-secrets/
│   ├── ingress-nginx/
│   ├── monitoring/
│   └── sealed-secrets/
│
├── apps/                # Application deployments
│   ├── payment-service/
│   │   ├── base/
│   │   └── overlays/
│   │       ├── dev/
│   │       ├── staging/
│   │       └── prod/
│   └── order-service/
│       ├── base/
│       └── overlays/
│
└── cluster/             # Cluster essentials
    ├── namespaces/
    ├── rbac/
    └── resource-quotas/

The Three Layers of GitOps Configuration

Each layer has different change velocity and ownership.

Layer 1: Cluster Essentials

Changes rarely. Owned by platform team.

  • Namespaces
  • RBAC (ClusterRoles, Bindings)
  • ResourceQuotas and LimitRanges
  • NetworkPolicies

Change frequency: monthly

Layer 2: Infrastructure

Changes occasionally. Owned by platform team.

  • cert-manager
  • Ingress controller
  • External Secrets Operator
  • Monitoring stack

Change frequency: weekly

Layer 3: Applications

Changes frequently. Owned by dev teams.

  • Deployments & Services
  • Ingress rules
  • ConfigMaps
  • HorizontalPodAutoscalers

Change frequency: daily

Kustomize: Managing Environments

Kustomize lets you define a base configuration and overlay environment-specific changes.

Base (shared configuration)

# apps/payment-service/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: payment
          image: myacr.azurecr.io/payment:latest
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"

Base kustomization.yaml

# apps/payment-service/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - hpa.yaml

commonLabels:
  app.kubernetes.io/name: payment-service
  app.kubernetes.io/managed-by: argocd

Kustomize Overlays: Per-Environment Config

Overlays patch the base for each environment.

Dev Overlay

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev
images:
  - name: myacr.azurecr.io/payment
    newTag: dev-abc123
patches:
  - path: replicas-patch.yaml

Staging Overlay

# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: staging
images:
  - name: myacr.azurecr.io/payment
    newTag: v1.2.0-rc1
patches:
  - path: replicas-patch.yaml

Prod Overlay

# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: prod
images:
  - name: myacr.azurecr.io/payment
    newTag: v1.1.5
patches:
  - path: replicas-patch.yaml
  - path: resources-patch.yaml

Kustomize Patches: Environment Differences

What changes between environments? Replicas, resources, config values.

Replicas Patch (prod)

# overlays/prod/replicas-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 3

Resources Patch (prod)

# overlays/prod/resources-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  template:
    spec:
      containers:
        - name: payment
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1000m"
              memory: "1Gi"

Dev: 1 replica, minimal resources. Staging: 2 replicas, moderate. Prod: 3+ replicas, full resources.
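The dev and staging overlays also reference a replicas-patch.yaml that isn't shown on this slide. Assuming the same strategic-merge style as the prod patch, the staging version would be:

```yaml
# overlays/staging/replicas-patch.yaml (assumed content, matching "Staging: 2 replicas")
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 2
```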

Knowledge Check #1

Repository structure and Kustomize.

Q1: In a Kustomize-based GitOps repo, where do environment-specific changes live?

Answer: B) In overlay directories (overlays/dev, overlays/prod, etc.). Overlays reference the base and apply patches for replicas, resource limits, image tags, and other per-environment differences.

Q2: Which layer of GitOps configuration changes most frequently?

Answer: C) Applications (deployments, services). Application configurations change daily (new image tags, config updates), while infrastructure changes weekly and cluster essentials change monthly.

Q3: What does kustomize build overlays/prod produce?

Answer: B) Complete Kubernetes manifests with base + prod patches applied. kustomize build resolves the overlay's resources (here, the base) and applies that overlay's patches and transformations, producing fully rendered YAML that can be applied to a cluster.

Bootstrapping Cluster Essentials

Before deploying apps, the cluster needs its foundation.

Namespaces

# cluster/namespaces/namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    environment: dev
    team: platform
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    environment: prod

RBAC and Resource Quotas

Controlling who can do what, and how much they can consume.

RBAC for Dev Teams

# cluster/rbac/dev-team-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-access
  namespace: dev
subjects:
  - kind: Group
    name: "ad-group-id-for-devs"  # Azure AD group object ID
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io

Resource Quotas

# cluster/resource-quotas/dev-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "50"
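Layer 1 also listed LimitRanges, which are not shown above. A companion to the quota might look like this (the default values are illustrative):

```yaml
# cluster/resource-quotas/dev-limitrange.yaml (illustrative defaults)
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-defaults
  namespace: dev
spec:
  limits:
    - type: Container
      default:            # applied when a container declares no limits
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:     # applied when a container declares no requests
        cpu: "100m"
        memory: "128Mi"
```

Without a LimitRange, pods that omit requests/limits bypass the quota's accounting; with one, every container gets sane defaults automatically.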

The App of Apps Pattern

The most powerful ArgoCD pattern: one Application that manages all other Applications.

Root App (App of Apps)
├── Cluster Essentials App  →  Namespaces, RBAC, Quotas
├── Infrastructure App      →  cert-manager, ingress, monitoring
└── Applications App        →  payment-svc, order-svc, ...

You only manually create the root App. ArgoCD creates everything else from Git.

Defining the Root App of Apps

The one Application you bootstrap manually. Everything else is GitOps from here.

# argocd/apps.yaml - The root Application
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: [email protected]:civica/gitops-config
    targetRevision: main
    path: argocd/applications  # Directory containing child App manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

The argocd/applications/ directory contains Application manifests for each child app (cluster essentials, infra, each microservice).

Child Application Definitions

Each child app points to a specific path in the GitOps repo.

Infrastructure App

# argocd/applications/infra.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-infra
  namespace: argocd
spec:
  project: infrastructure
  source:
    repoURL: [email protected]:civica/gitops-config
    targetRevision: main
    path: infra
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true

Payment Service (Prod)

# argocd/applications/payment-prod.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: payments-team
  source:
    repoURL: [email protected]:civica/gitops-config
    targetRevision: main
    path: apps/payment-service/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: prod
  # Note: no syncPolicy block; prod syncs are triggered manually after PR approval

ApplicationSets: Scaling App of Apps

When you have many apps across many environments, ApplicationSets automate the boilerplate.

# Generate one ArgoCD Application per environment per service
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: all-apps
  namespace: argocd
spec:
  generators:
    - matrix:
        generators:
          - list:
              elements:
                - env: dev
                - env: staging
                - env: prod
          - git:
              repoURL: [email protected]:civica/gitops-config
              revision: main
              directories:
                # Match apps/payment-service, apps/order-service, ...
                # (matching "apps/*/base" would make path.basename = "base")
                - path: "apps/*"
  template:
    metadata:
      name: "{{path.basename}}-{{env}}"
    spec:
      project: default
      source:
        repoURL: [email protected]:civica/gitops-config
        targetRevision: main
        path: "apps/{{path.basename}}/overlays/{{env}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{env}}"
Knowledge Check #2

App of Apps and bootstrapping.

Q1: In the App of Apps pattern, how many Applications do you create manually?

Answer: B) Just one (the root App). You manually apply only the root Application. It points to a directory containing child Application manifests, which ArgoCD then creates and manages automatically.

Q2: What is the purpose of an ApplicationSet?

Answer: B) To template and generate multiple Applications from a pattern. ApplicationSets use generators (list, git, matrix) to automatically create ArgoCD Applications for combinations of services and environments.

Q3: What goes in the root App of Apps' source path?

Answer: C) ArgoCD Application manifests (child apps). The root App points to a directory of Application YAML files. ArgoCD applies them, creating child applications that each manage their own set of Kubernetes resources.

The Secret Problem

GitOps says "everything in Git." But secrets can't be stored in Git as plain text.

The Dilemma

Your app needs a database password. In traditional deployments, you might kubectl create secret manually. But GitOps demands everything is declared in Git. How do you put a secret in a public-ish repository?

DO NOT Do This

# NEVER commit plain secrets to Git!
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
data:
  password: cGFzc3dvcmQxMjM=
  # base64 is NOT encryption!
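It is easy to demonstrate why this is dangerous: base64 decodes with one command, no key required.

```shell
# base64 is an encoding, not encryption: anyone with read access to the repo
# can recover the plaintext value instantly.
echo 'cGFzc3dvcmQxMjM=' | base64 -d
# prints: password123
```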

Solutions

  • Sealed Secrets — Encrypt secrets for Git storage
  • External Secrets — Reference secrets from Azure Key Vault
  • SOPS — Mozilla's encrypted files
  • Vault — HashiCorp Vault integration

Sealed Secrets: Encrypt for Git

Bitnami Sealed Secrets lets you encrypt secrets that only the cluster can decrypt.

How It Works

  1. Install Sealed Secrets controller in cluster
  2. Controller generates a public/private key pair
  3. You encrypt secrets locally with the public key using kubeseal
  4. Commit the encrypted SealedSecret to Git
  5. Controller decrypts it with the private key and creates a regular Secret

Workflow

# Install kubeseal CLI
brew install kubeseal

# Create a regular secret YAML
kubectl create secret generic db-creds \
  --from-literal=password=s3cret \
  --dry-run=client -o yaml > secret.yaml

# Seal it (encrypt with cluster's public key)
kubeseal --format yaml \
  < secret.yaml > sealed-secret.yaml

# Commit sealed-secret.yaml to Git
# (safe to store — it's encrypted!)

SealedSecret Manifest

What the encrypted secret looks like in Git.

# Safe to commit to Git!
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: prod
spec:
  encryptedData:
    password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq8B2gHbD...
    username: AgCtr8Hj23KDIW+PtxYZ7A0fN12cREFw7C4jLmT...
  template:
    metadata:
      name: db-credentials
      namespace: prod
    type: Opaque

Key Points

  • Only the Sealed Secrets controller (in-cluster) can decrypt this
  • Encrypted data is per-namespace and per-name by default (anti-tampering)
  • If you lose the controller's private key, you lose all secrets — back it up!

External Secrets with Azure Key Vault

For enterprise teams, referencing secrets from Azure Key Vault is often preferred.

Architecture

Azure Key Vault
↓ reads secrets
External Secrets Operator
↓ creates
Kubernetes Secret

ExternalSecret Resource

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: azure-keyvault
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: prod-db-password

Setting Up the Azure Key Vault SecretStore

Connecting the External Secrets Operator to Azure Key Vault.

# ClusterSecretStore for Azure Key Vault (using Workload Identity)
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: azure-keyvault
spec:
  provider:
    azurekv:
      authType: WorkloadIdentity
      vaultUrl: "https://civica-prod-kv.vault.azure.net"
      serviceAccountRef:
        name: external-secrets-sa
        namespace: external-secrets

Why External Secrets for Azure?

  • Workload Identity: No credentials to manage — the operator authenticates via Azure AD Workload Identity (federated service account tokens)
  • Auto-rotation: Secrets refresh on a configurable interval
  • Centralised management: Secrets managed in Key Vault, referenced in Git
  • Audit trail: Key Vault logging shows who accessed what

Sealed Secrets vs External Secrets

Choosing the right approach for your team.

Feature            | Sealed Secrets                | External Secrets (Azure KV)
-------------------|-------------------------------|-------------------------------------------
Secrets stored in  | Git (encrypted)               | Azure Key Vault
Encryption         | Asymmetric (per-cluster key)  | Azure-managed encryption
Rotation           | Re-seal and commit            | Automatic (refreshInterval)
Dependencies       | Sealed Secrets controller     | ESO + Azure Key Vault + Workload Identity
Complexity         | Lower                         | Higher (but more enterprise-ready)
Multi-cluster      | Different key per cluster     | Same Key Vault, different access policies
Best for           | Small teams, simple setups    | Enterprise, Azure-native teams

Recommendation: Use External Secrets with Azure Key Vault for production. Sealed Secrets for dev/testing.

Knowledge Check #3

Secrets management in GitOps.

Q1: Why can't you commit regular Kubernetes Secrets to Git?

Answer: B) base64 encoding is not encryption — secrets would be exposed. Kubernetes Secrets use base64, which is trivially decodable. Anyone with repo access could read the secret values.

Q2: How does Sealed Secrets protect secret data?

Answer: B) By encrypting with a public key that only the in-cluster controller can decrypt. Sealed Secrets uses asymmetric encryption. You encrypt with the public key locally, and only the controller's private key (in the cluster) can decrypt it.

Q3: What is the advantage of External Secrets Operator with Azure Key Vault?

Answer: B) Automatic secret rotation and centralised management in Azure. External Secrets can auto-refresh on a schedule, and Azure Key Vault provides centralised secret management, audit logging, and Azure AD-based access control.

cert-manager: Automated TLS Certificates

Every production service needs TLS. cert-manager automates certificate lifecycle.

ClusterIssuer for Let's Encrypt

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx

Using in Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payment-service
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - payment.civica.com
      secretName: payment-tls
  rules:
    - host: payment.civica.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payment-service
                port:
                  number: 80
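Under the hood, the cluster-issuer annotation makes cert-manager's ingress-shim create a Certificate resource roughly equivalent to:

```yaml
# Generated automatically by cert-manager from the Ingress annotation (sketch)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: payment-tls
spec:
  secretName: payment-tls       # where the signed cert + key are stored
  dnsNames:
    - payment.civica.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    group: cert-manager.io
```

cert-manager then handles the ACME challenge, stores the certificate in the payment-tls Secret, and renews it before expiry.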

Setting Up Environments: Dev, Staging, Prod

A complete picture of how environments differ in the GitOps model.

Aspect      | Dev                      | Staging                 | Prod
------------|--------------------------|-------------------------|---------------------------
Sync Policy | Auto-sync, auto-prune    | Auto-sync, manual prune | Manual sync, manual prune
Replicas    | 1                        | 2                       | 3+
Resources   | Minimal                  | Moderate                | Full
Image Tag   | dev-* / latest commit    | Release candidate       | Stable release tag
Secrets     | Sealed Secrets           | External Secrets        | External Secrets
TLS         | Self-signed / staging LE | Let's Encrypt staging   | Let's Encrypt production
Monitoring  | Basic                    | Full stack              | Full stack + alerting
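The Sync Policy row maps onto Application syncPolicy blocks like these (fragments; assumes the Application manifests shown earlier):

```yaml
# dev: fully automated, including deletion of removed resources
syncPolicy:
  automated:
    prune: true
    selfHeal: true
---
# staging: auto-sync, but removed resources are pruned only on manual action
syncPolicy:
  automated:
    prune: false
    selfHeal: true
---
# prod: omit syncPolicy.automated entirely; every sync is triggered manually
```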

Sync Waves: Ordering Deployments

Some resources must be created before others. Sync waves control the order.

The Problem

Your app needs its namespace, RBAC, and secrets to exist before the Deployment can be created. With auto-sync, everything happens at once — causing failures.

# Wave 0 (first): Create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Wave 1: Create secrets and configmaps
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-app-secrets
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
# Wave 2: Deploy the application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "2"

Sync Hooks: Pre and Post Actions

Run Jobs before or after a sync operation.

Pre-Sync Hook (DB Migration)

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: myacr.azurecr.io/migrate:v1
          command: ["./migrate", "up"]
      restartPolicy: Never

Hook Types

PreSync  | Before sync (DB migrations)
Sync     | During sync (same as normal resources)
PostSync | After sync (smoke tests)
SyncFail | On sync failure (notifications)

Delete Policies

HookSucceeded      | Delete after success
HookFailed         | Delete after failure
BeforeHookCreation | Delete before re-creating
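A PostSync smoke test follows the same shape as the migration Job. The image and endpoint below are hypothetical placeholders:

```yaml
# Runs after a successful sync; deleted once it succeeds
apiVersion: batch/v1
kind: Job
metadata:
  name: smoke-test
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 1
  template:
    spec:
      containers:
        - name: smoke
          image: curlimages/curl:8.7.1   # hypothetical test image
          command: ["curl", "-fsS", "http://payment-service.prod.svc/healthz"]
      restartPolicy: Never
```

If the probe fails, the sync is reported as failed in ArgoCD, so a broken release is visible immediately.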

Environment Promotion Strategy

How a change flows from dev to production.

CI builds image: myacr.azurecr.io/payment:v1.3.0
CI updates dev overlay: newTag: v1.3.0
ArgoCD auto-syncs to dev cluster
↓ QA passes
PR: Update staging overlay: newTag: v1.3.0
↓ Approved & merged
ArgoCD syncs to staging
↓ Staging verified
PR: Update prod overlay: newTag: v1.3.0
↓ Approved, merged, manual sync
Production deployed!
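The "CI updates dev overlay" step can be sketched as below. This is a self-contained demo with a hypothetical tag; a real pipeline would run inside the gitops-config checkout and commit/push the change (many teams use `kustomize edit set image` instead of sed):

```shell
# Recreate a tiny dev overlay so the snippet runs anywhere
mkdir -p apps/payment-service/overlays/dev
cat > apps/payment-service/overlays/dev/kustomization.yaml <<'EOF'
images:
  - name: myacr.azurecr.io/payment
    newTag: dev-abc123
EOF

# Bump the tag to the freshly built image (hypothetical commit SHA).
# Note: GNU sed syntax; macOS sed needs -i ''.
NEW_TAG=dev-def456
sed -i "s/newTag: .*/newTag: ${NEW_TAG}/" \
  apps/payment-service/overlays/dev/kustomization.yaml

# The overlay now points at the new image; ArgoCD picks it up on the next sync
grep newTag apps/payment-service/overlays/dev/kustomization.yaml
```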

ArgoCD Projects for Team Isolation

Projects define boundaries: which repos, clusters, and namespaces a team can access.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: payments-team
  namespace: argocd
spec:
  description: "Payments team applications"

  # Only allow these source repos
  sourceRepos:
    - "[email protected]:civica/gitops-config"

  # Only deploy to these namespaces
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "payments-*"
    - server: https://kubernetes.default.svc
      namespace: "prod"  # needed by payment-service-prod, which deploys to prod

  # Only allow these resource types
  clusterResourceWhitelist: []  # No cluster-scoped resources
  namespaceResourceWhitelist:
    - group: "apps"
      kind: "Deployment"
    - group: ""
      kind: "Service"
    - group: ""
      kind: "ConfigMap"
    - group: "networking.k8s.io"
      kind: "Ingress"
    - group: "autoscaling"
      kind: "HorizontalPodAutoscaler"

Knowledge Check #4

Sync waves, hooks, and environments.

Q1: What do sync waves control in ArgoCD?

Answer: B) The order in which resources are applied during sync. Sync waves use the annotation argocd.argoproj.io/sync-wave with numeric values. Lower waves are synced first (e.g., namespace wave 0, secrets wave 1, deployment wave 2).

Q2: What is a common use case for a PreSync hook?

Answer: B) Running database migrations before deploying new code. PreSync hooks run Jobs before the main sync. This ensures the database schema is updated before the new application version tries to use it.

Q3: What should the sync policy be for a production environment?

Answer: B) Manual sync with approval-gated PRs. Production should use manual sync to add a human gate. Combined with PR approvals in Git, this provides two layers of review before changes reach production.

The Complete Bootstrap Sequence

From empty cluster to fully operational GitOps.

  1. Install ArgoCD (Helm, from Presentation 2)
  2. Create GitOps repo with structure: argocd/, infra/, apps/, cluster/
  3. Define ArgoCD Projects for team isolation
  4. Create root App of Apps pointing to argocd/applications/
  5. Add cluster essentials: namespaces, RBAC, quotas
  6. Add infrastructure: cert-manager, ingress, ESO, sealed-secrets
  7. Add application bases with Kustomize
  8. Create overlays for dev, staging, prod
  9. Configure secrets via Sealed Secrets or External Secrets
  10. Set up promotion workflow: dev auto-sync, prod manual

Result

  • Single kubectl apply of root app bootstraps everything
  • All changes go through Git PRs
  • Drift is auto-detected and corrected
  • Secrets are encrypted or externally referenced
  • TLS certificates auto-renew
  • Teams are isolated by ArgoCD Projects

Module 3 Summary

Key takeaways from bootstrapping the GitOps workflow.

Structure

Three layers: cluster essentials, infrastructure, applications. Kustomize base + overlays per environment.

App of Apps

One root Application manages all others. ApplicationSets scale this for many services and environments.

Secrets

Sealed Secrets for simplicity. External Secrets + Azure Key Vault for enterprise production.

Ordering

Sync waves for resource ordering. Hooks for pre/post-sync actions like DB migrations.

Environments

Dev: auto-sync. Staging: auto-sync. Prod: manual sync with PR approval gates.

Isolation

ArgoCD Projects restrict teams to specific repos, namespaces, and resource types.

Questions & Discussion

Bootstrapping the GitOps Workflow

Next up: Azure Integration & Operations
