From Empty Repo to Production-Ready Environment Management
Civica Training Program
ArgoCD is installed and running. Now the real work begins.
You have ArgoCD running on AKS. You've connected a Git repo. But that repo is empty. Your team lead asks: "How do we structure this so it works for 3 environments, 12 microservices, and 4 teams? And we need secrets management."
Repo structure with Kustomize
App of Apps bootstrap
Secrets, TLS, and environments
A well-organised repo is the foundation of successful GitOps.
```
gitops-config/
├── argocd/                  # ArgoCD meta-configuration
│   ├── apps.yaml            # App of Apps root
│   ├── projects.yaml        # ArgoCD Projects
│   └── repo-config.yaml     # Repository credentials
│
├── infra/                   # Cluster infrastructure
│   ├── cert-manager/
│   ├── external-secrets/
│   ├── ingress-nginx/
│   ├── monitoring/
│   └── sealed-secrets/
│
├── apps/                    # Application deployments
│   ├── payment-service/
│   │   ├── base/
│   │   └── overlays/
│   │       ├── dev/
│   │       ├── staging/
│   │       └── prod/
│   └── order-service/
│       ├── base/
│       └── overlays/
│
└── cluster/                 # Cluster essentials
    ├── namespaces/
    ├── rbac/
    └── resource-quotas/
```
Each layer has a different change velocity and ownership:

- **cluster/**: changes rarely. Owned by the platform team. Change frequency: monthly.
- **infra/**: changes occasionally. Owned by the platform team. Change frequency: weekly.
- **apps/**: changes frequently. Owned by dev teams. Change frequency: daily.
Kustomize lets you define a base configuration and overlay environment-specific changes.
```yaml
# apps/payment-service/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: payment
          image: myacr.azurecr.io/payment:latest
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
```
```yaml
# apps/payment-service/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - hpa.yaml
commonLabels:
  app.kubernetes.io/name: payment-service
  app.kubernetes.io/managed-by: argocd
```
Overlays patch the base for each environment.
```yaml
# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev
images:
  - name: myacr.azurecr.io/payment
    newTag: dev-abc123
patches:
  - path: replicas-patch.yaml
```
```yaml
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: staging
images:
  - name: myacr.azurecr.io/payment
    newTag: v1.2.0-rc1
patches:
  - path: replicas-patch.yaml
```
```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: prod
images:
  - name: myacr.azurecr.io/payment
    newTag: v1.1.5
patches:
  - path: replicas-patch.yaml
  - path: resources-patch.yaml
```
What changes between environments? Replicas, resources, config values.
```yaml
# overlays/prod/replicas-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  replicas: 3
```
```yaml
# overlays/prod/resources-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  template:
    spec:
      containers:
        - name: payment
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1000m"
              memory: "1Gi"
```
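Conceptually, applying an overlay patch is a recursive merge of the patch onto the base. Kustomize's strategic merge is more sophisticated (it understands list merge keys, for example), but the core idea can be sketched in a few lines:

```python
# Conceptual sketch only: Kustomize's strategic-merge patching is richer
# than this (it knows how to merge lists by key), but the essence is a
# recursive dict merge where the patch's values win.
def merge(base: dict, patch: dict) -> dict:
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # recurse into nested maps
        else:
            out[key] = value                   # patch value wins
    return out

base = {"spec": {"replicas": 1, "template": {"spec": {"containers": []}}}}
prod_patch = {"spec": {"replicas": 3}}

rendered = merge(base, prod_patch)
print(rendered["spec"]["replicas"])  # 3; the rest of the base is untouched
```

This is why overlay patches only need to state what differs: everything not mentioned in the patch falls through from the base.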
Dev: 1 replica, minimal resources. Staging: 2 replicas, moderate. Prod: 3+ replicas, full resources.
Repository structure and Kustomize.
Quick check: what does `kustomize build overlays/prod` produce? (The fully rendered manifests: the base with all prod patches applied.)

Before deploying apps, the cluster needs its foundation.
```yaml
# cluster/namespaces/namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    environment: dev
    team: platform
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    environment: prod
```
Controlling who can do what, and how much they can consume.
```yaml
# cluster/rbac/dev-team-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-access
  namespace: dev
subjects:
  - kind: Group
    name: "ad-group-id-for-devs"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```
```yaml
# cluster/resource-quotas/dev-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "50"
```
The most powerful ArgoCD pattern: one Application that manages all other Applications.
You only manually create the root App. ArgoCD creates everything else from Git.
The one Application you bootstrap manually. Everything else is GitOps from here.
```yaml
# argocd/apps.yaml - The root Application
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: [email protected]:civica/gitops-config
    targetRevision: main
    path: argocd/applications  # Directory containing child App manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
The argocd/applications/ directory contains Application manifests for each child app (cluster essentials, infra, each microservice).
Each child app points to a specific path in the GitOps repo.
```yaml
# argocd/applications/infra.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-infra
  namespace: argocd
spec:
  project: infrastructure
  source:
    repoURL: [email protected]:civica/gitops-config
    targetRevision: main
    path: infra
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
```
```yaml
# argocd/applications/payment-prod.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service-prod
  namespace: argocd
spec:
  project: payments-team
  source:
    repoURL: [email protected]:civica/gitops-config
    targetRevision: main
    path: apps/payment-service/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: prod
  # No automated syncPolicy: prod syncs are triggered manually
```
When you have many apps across many environments, ApplicationSets automate the boilerplate.
```yaml
# Generate one ArgoCD Application per environment per service
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: all-apps
  namespace: argocd
spec:
  generators:
    - matrix:
        generators:
          - list:
              elements:
                - env: dev
                - env: staging
                - env: prod
          - git:
              repoURL: [email protected]:civica/gitops-config
              revision: main
              directories:
                - path: "apps/*"  # matches each service directory
  template:
    metadata:
      name: "{{path.basename}}-{{env}}"  # e.g. payment-service-dev
    spec:
      project: default
      source:
        repoURL: [email protected]:civica/gitops-config
        targetRevision: main
        path: "apps/{{path.basename}}/overlays/{{env}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{env}}"
```
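The matrix generator takes the cartesian product of its child generators, so every service is paired with every environment. A quick sketch of the Application names it would generate (the service names are illustrative, standing in for what the git directories generator would discover):

```python
from itertools import product

# Illustrative: in the real ApplicationSet these come from the
# git directories generator scanning apps/*
services = ["payment-service", "order-service"]
envs = ["dev", "staging", "prod"]

# The matrix generator is a cartesian product: one Application per pair
apps = [f"{svc}-{env}" for svc, env in product(services, envs)]
print(apps)
# ['payment-service-dev', 'payment-service-staging', 'payment-service-prod',
#  'order-service-dev', 'order-service-staging', 'order-service-prod']
```

Adding a new service directory or a new environment element grows the product automatically; no new Application manifests need to be written by hand.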
App of Apps and bootstrapping.
GitOps says "everything in Git." But secrets can't be stored in Git as plain text.
Your app needs a database password. In traditional deployments, you might kubectl create secret manually. But GitOps demands everything is declared in Git. How do you put a secret in a public-ish repository?
```yaml
# NEVER commit plain secrets to Git!
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
data:
  password: cGFzc3dvcmQxMjM=  # base64 is NOT encryption!
```
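To drive the point home: anyone with read access to the repo can recover the plaintext in one line, because base64 is an encoding, not encryption.

```python
import base64

# The value from the Secret manifest above
encoded = "cGFzc3dvcmQxMjM="

# No key needed: base64 just re-encodes bytes, it hides nothing
plaintext = base64.b64decode(encoded).decode()
print(plaintext)  # password123
```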
Bitnami Sealed Secrets lets you encrypt secrets that only the cluster can decrypt.
The workflow: create a secret locally, encrypt it with `kubeseal`, and commit only the resulting SealedSecret to Git.

```shell
# Install kubeseal CLI
brew install kubeseal

# Create a regular secret YAML (do not commit this file!)
kubectl create secret generic db-creds \
  --from-literal=password=s3cret \
  --dry-run=client -o yaml > secret.yaml

# Seal it (encrypt with the cluster's public key)
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# Commit sealed-secret.yaml to Git
# (safe to store: it's encrypted!)
```
What the encrypted secret looks like in Git.
```yaml
# Safe to commit to Git!
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: prod
spec:
  encryptedData:
    password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq8B2gHbD...
    username: AgCtr8Hj23KDIW+PtxYZ7A0fN12cREFw7C4jLmT...
  template:
    metadata:
      name: db-credentials
      namespace: prod
    type: Opaque
```
For enterprise teams, referencing secrets from Azure Key Vault is often preferred.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: azure-keyvault
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: prod-db-password
```
Connecting the External Secrets Operator to Azure Key Vault.
```yaml
# ClusterSecretStore for Azure Key Vault (using Workload Identity)
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: azure-keyvault
spec:
  provider:
    azurekv:
      authType: WorkloadIdentity
      vaultUrl: "https://civica-prod-kv.vault.azure.net"
      serviceAccountRef:
        name: external-secrets-sa
        namespace: external-secrets
```
Choosing the right approach for your team.
| Feature | Sealed Secrets | External Secrets (Azure KV) |
|---|---|---|
| Secrets stored in | Git (encrypted) | Azure Key Vault |
| Encryption | Asymmetric (per-cluster key) | Azure-managed encryption |
| Rotation | Re-seal and commit | Automatic (refreshInterval) |
| Dependencies | Sealed Secrets controller | ESO + Azure Key Vault + Workload Identity |
| Complexity | Lower | Higher (but more enterprise-ready) |
| Multi-cluster | Different key per cluster | Same Key Vault, different access policies |
| Best for | Small teams, simple setups | Enterprise, Azure-native teams |
Recommendation: Use External Secrets with Azure Key Vault for production. Sealed Secrets for dev/testing.
Secrets management in GitOps.
Every production service needs TLS. cert-manager automates certificate lifecycle.
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payment-service
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - payment.civica.com
      secretName: payment-tls
  rules:
    - host: payment.civica.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payment-service
                port:
                  number: 80
```
A complete picture of how environments differ in the GitOps model.
| Aspect | Dev | Staging | Prod |
|---|---|---|---|
| Sync Policy | Auto-sync, auto-prune | Auto-sync, manual prune | Manual sync, manual prune |
| Replicas | 1 | 2 | 3+ |
| Resources | Minimal | Moderate | Full |
| Image Tag | dev-* / latest commit | Release candidate | Stable release tag |
| Secrets | Sealed Secrets | External Secrets | External Secrets |
| TLS | Self-signed / staging LE | Let's Encrypt staging | Let's Encrypt production |
| Monitoring | Basic | Full stack | Full stack + alerting |
Some resources must be created before others. Sync waves control the order.
Your app needs its namespace, RBAC, and secrets to exist before the Deployment can be created. With auto-sync, everything happens at once — causing failures.
```yaml
# Wave 0 (first): Create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Wave 1: Create secrets and configmaps
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
# Wave 2: Deploy the application
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
```
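ArgoCD sorts resources by wave before applying them, then waits for each wave to become healthy before starting the next. The ordering logic can be sketched roughly like this (a simplification: ArgoCD also orders by kind and name within a wave, and treats a missing annotation as wave 0):

```python
# Simplified sketch of sync-wave ordering: resources are applied in
# ascending wave order; a missing annotation defaults to wave 0.
WAVE_ANNOTATION = "argocd.argoproj.io/sync-wave"

resources = [
    {"kind": "Deployment", "name": "my-app",
     "annotations": {WAVE_ANNOTATION: "2"}},
    {"kind": "Namespace", "name": "my-app",
     "annotations": {WAVE_ANNOTATION: "0"}},
    {"kind": "SealedSecret", "name": "db-creds",
     "annotations": {WAVE_ANNOTATION: "1"}},
    {"kind": "Service", "name": "my-app", "annotations": {}},  # default: 0
]

def wave(resource: dict) -> int:
    return int(resource["annotations"].get(WAVE_ANNOTATION, "0"))

for r in sorted(resources, key=wave):
    print(f"wave {wave(r)}: {r['kind']}/{r['name']}")
```

Negative waves are also allowed, which is handy for forcing prerequisites like CRDs ahead of everything else.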
Run Jobs before or after a sync operation.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: myacr.azurecr.io/migrate:v1
          command: ["./migrate", "up"]
      restartPolicy: Never
```
| Hook | When it runs |
|---|---|
| PreSync | Before sync (DB migrations) |
| Sync | During sync (same as normal resources) |
| PostSync | After sync (smoke tests) |
| SyncFail | On sync failure (notifications) |

| Delete policy | Behavior |
|---|---|
| HookSucceeded | Delete after success |
| HookFailed | Delete after failure |
| BeforeHookCreation | Delete before re-creating |
How a change flows from dev to production.
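In GitOps, promotion is just a Git commit that changes an overlay's image tag, followed by a PR and merge. A minimal sketch of the tag rewrite (the regex approach is illustrative; in CI you would more likely run `kustomize edit set image` or `yq` against the overlay directory):

```python
import re

# Illustrative overlay content: promote staging's release candidate to prod
prod_kustomization = """\
resources:
  - ../../base
namespace: prod
images:
  - name: myacr.azurecr.io/payment
    newTag: v1.1.5
"""

def promote(kustomization: str, new_tag: str) -> str:
    # Rewrite the newTag value; committing and merging the change is what
    # actually triggers ArgoCD to roll out the new version
    return re.sub(r"newTag: \S+", f"newTag: {new_tag}", kustomization)

print(promote(prod_kustomization, "v1.2.0"))
```

The same commit serves as the audit trail and the rollback point: reverting it restores the previous tag.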
Projects define boundaries: which repos, clusters, and namespaces a team can access.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: payments-team
  namespace: argocd
spec:
  description: "Payments team applications"
  # Only allow these source repos
  sourceRepos:
    - "[email protected]:civica/gitops-config"
  # Only deploy to these namespaces
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "payments-*"
  # Only allow these resource types
  clusterResourceWhitelist: []  # No cluster-scoped resources
  namespaceResourceWhitelist:
    - group: "apps"
      kind: "Deployment"
    - group: ""
      kind: "Service"
    - group: ""
      kind: "ConfigMap"
```
Sync waves, hooks, and environments.
Quick check: how do you control resource ordering during a sync? With the `argocd.argoproj.io/sync-wave` annotation and numeric values. Lower waves are synced first (e.g., namespace in wave 0, secrets in wave 1, deployment in wave 2).

From empty cluster to fully operational GitOps.
A single manual `kubectl apply` of the root App bootstraps everything; ArgoCD creates the rest from Git.

Key takeaways from bootstrapping the GitOps workflow.
Three layers: cluster essentials, infrastructure, applications. Kustomize base + overlays per environment.
One root Application manages all others. ApplicationSets scale this for many services and environments.
Sealed Secrets for simplicity. External Secrets + Azure Key Vault for enterprise production.
Sync waves for resource ordering. Hooks for pre/post-sync actions like DB migrations.
Dev: auto-sync. Staging: auto-sync. Prod: manual sync with PR approval gates.
ArgoCD Projects restrict teams to specific repos, namespaces, and resource types.
Bootstrapping the GitOps Workflow
Next up: Azure Integration & Operations