From Empty Cluster to GitOps-Ready Platform
Civica Training Program
You understand GitOps principles. Now it's time to get your hands dirty.
You have a fresh AKS cluster. Your team lead says: "Get ArgoCD running by end of day. We need it production-ready — HA, secured, accessible to the team." Let's do this step by step.
Prerequisites & installation method
Configure, secure, and expose ArgoCD
Connect Git repos & verify
What you need before installing ArgoCD.
kubectl — Kubernetes CLI
helm v3+ — Package manager
az — Azure CLI (authenticated)
argocd — ArgoCD CLI (install later)

# Confirm cluster access
az aks get-credentials --resource-group myRG --name myAKS
kubectl get nodes
kubectl cluster-info

# Confirm Helm
helm version
Two primary approaches to installing ArgoCD on Kubernetes.
kubectl create namespace argocd
kubectl apply -n argocd -f \
  https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
helm repo add argo \
  https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  -f values.yaml
Understanding what gets deployed.
In HA mode, the server and controller run with multiple replicas for fault tolerance.
Essential values.yaml settings for a production-ready ArgoCD.
# values-production.yaml
global:
  image:
    tag: "v2.10.0"  # Pin to a specific version
server:
  replicas: 2
  resources:
    requests:
      cpu: "250m"
      memory: "256Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
controller:
  replicas: 1  # Only 1 active (leader election)
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
repoServer:
  replicas: 2
  resources:
    requests:
      cpu: "250m"
      memory: "256Mi"
Making ArgoCD resilient to node failures.
redis-ha:
  enabled: true
  haproxy:
    enabled: true
controller:
  env:
    - name: ARGOCD_CONTROLLER_REPLICAS
      value: "2"
server:
  replicas: 2
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 5
server:
  pdb:
    enabled: true
    minAvailable: 1
repoServer:
  pdb:
    enabled: true
    minAvailable: 1
PDBs ensure at least one pod stays running during node drains or upgrades.
Tip: Use topologySpreadConstraints to spread pods across availability zones on AKS.
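As a sketch of that tip: the argo-cd Helm chart exposes a topologySpreadConstraints list per component, and the label selector below assumes the chart's default labels.

```yaml
# values-production.yaml (sketch) — spread server pods across AKS availability zones
server:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway   # use DoNotSchedule for a hard requirement
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: argocd-server
```

The same block can be repeated under repoServer to protect the repo-server replicas as well.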
Installation basics.
Step-by-step commands to deploy ArgoCD.
# Step 1: Add the Argo Helm repository
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Step 2: Create the namespace
kubectl create namespace argocd

# Step 3: Install with production values
helm install argocd argo/argo-cd \
  --namespace argocd \
  --version 6.0.0 \
  -f values-production.yaml

# Step 4: Verify the installation
kubectl get pods -n argocd
kubectl get svc -n argocd

# Expected: All pods Running, services created
NAME                               READY   STATUS
argocd-application-controller-0    1/1     Running
argocd-dex-server-xxx              1/1     Running
argocd-redis-xxx                   1/1     Running
argocd-repo-server-xxx             1/1     Running
argocd-server-xxx                  1/1     Running
Making the ArgoCD UI and API accessible to your team.
kubectl port-forward \
  svc/argocd-server \
  -n argocd 8080:443
Quick for development. Not suitable for team access.
server:
  service:
    type: LoadBalancer
Simple. Gets an Azure public IP. Add NSG rules for security.
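One way to limit exposure alongside NSG rules is the Service's built-in source-range allowlist, which Azure enforces on the load balancer. A sketch — the CIDR is a placeholder:

```yaml
server:
  service:
    type: LoadBalancer
    loadBalancerSourceRanges:   # Azure applies these as load-balancer access rules
      - 203.0.113.0/24          # example: office/VPN CIDR only
```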
server:
  ingress:
    enabled: true
    hostname: argocd.company.com
Production-grade. TLS termination, path routing. Recommended.
Using NGINX Ingress Controller with TLS.
# values-production.yaml (continued)
server:
  ingress:
    enabled: true
    ingressClassName: nginx
    hostname: argocd.civica.internal
    tls: true
    annotations:
      # Option A: pass TLS straight through to ArgoCD (no --insecure needed)
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      # Option B: terminate TLS at the ingress with cert-manager
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
  # Important: ArgoCD needs the --insecure flag if TLS terminates at the ingress
  extraArgs:
    - --insecure  # Only for Option B (TLS termination at ingress level)
If using Azure Application Gateway Ingress Controller (AGIC), replace ingressClassName: nginx with ingressClassName: azure-application-gateway and configure the appropriate annotations.
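A minimal sketch of that AGIC variant, reusing the hostname from above — the annotation is the AGIC backend-protocol setting, since ArgoCD serves HTTPS by default:

```yaml
server:
  ingress:
    enabled: true
    ingressClassName: azure-application-gateway
    hostname: argocd.civica.internal
    annotations:
      appgw.ingress.kubernetes.io/backend-protocol: "https"
```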
Getting into ArgoCD for the first time.
# The initial password is stored in a Kubernetes secret
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

# Login via CLI
argocd login argocd.civica.internal \
  --username admin \
  --password <initial-password>

# IMPORTANT: Change the password immediately!
argocd account update-password
# Once SSO or local accounts are in place, disable the built-in admin:
configs:
  cm:
    admin.enabled: "false"
Installing and configuring the ArgoCD command-line interface.
# macOS
brew install argocd

# Linux
curl -sSL -o argocd \
  https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/

# Windows (Chocolatey)
choco install argocd-cli
# Login
argocd login argocd.civica.internal

# List applications
argocd app list

# Get app details
argocd app get my-app

# Sync an application
argocd app sync my-app

# View app diff
argocd app diff my-app

# View cluster info
argocd cluster list
How all the pieces fit together in your cluster.
Exposing and accessing ArgoCD.
ArgoCD needs access to the Git repo containing your Kubernetes manifests.
# Add a private repo with HTTPS
argocd repo add \
  https://github.com/civica/gitops-config \
  --username git \
  --password <PAT-token>

# Add with SSH
argocd repo add \
  [email protected]:civica/gitops-config.git \
  --ssh-private-key-path ~/.ssh/id_ed25519

# Verify
argocd repo list
The most secure method for Git repository access.
# Generate an Ed25519 key pair (no passphrase for automation)
ssh-keygen -t ed25519 -C "argocd-deploy-key" -f argocd-key -N ""

# argocd-key     = private key (stays in ArgoCD)
# argocd-key.pub = public key (goes to Git provider)

# Add private key to ArgoCD
argocd repo add [email protected]:civica/gitops-config.git \
  --ssh-private-key-path ./argocd-key

# Add public key to GitHub/Azure DevOps as a Deploy Key (read-only)
Using personal access tokens or app credentials for HTTPS repos.
# Create a fine-grained PAT with:
# - Repository access: read-only
# - Contents: read
argocd repo add \
  https://github.com/civica/gitops-config \
  --username git \
  --password ghp_xxxYourTokenHere
GitHub uses "git" as the username with PAT as password.
# Create a PAT in Azure DevOps with:
# - Code: Read
argocd repo add \
  https://dev.azure.com/civica/project/_git/gitops-config \
  --username azure \
  --password <PAT>
Azure DevOps PATs can be scoped to specific organisations and projects.
Warning: HTTPS tokens can expire. Set up rotation procedures or use SSH deploy keys for long-lived access. Store credentials as Kubernetes secrets, never in values.yaml.
Understanding where repo credentials live in the cluster.
ArgoCD stores repository credentials as Kubernetes Secrets in the argocd namespace. Each repository connection creates a secret with specific labels.
# View repo credentials (secret names)
kubectl get secrets -n argocd \
  -l argocd.argoproj.io/secret-type=repository

# Or use declarative config:
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: [email protected]:civica/gitops-config.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
Credential Templates: Use secret-type: repo-creds to define credentials once for all repos under a URL prefix (e.g., all repos in github.com/civica/).
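A credential-template secret might look like this — the secret name and PAT are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: civica-github-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds   # note: repo-creds, not repository
stringData:
  type: git
  url: https://github.com/civica   # URL prefix — matches every repo under it
  username: git
  password: <PAT>
```

Repositories added later that fall under this prefix inherit the credentials automatically.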
Now let's deploy something! Defining an Application resource.
# my-first-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Understanding every important field in the Application resource.
| Field | Purpose |
|---|---|
| spec.project | ArgoCD Project for RBAC scoping. default allows everything. |
| spec.source.repoURL | Git repository URL (must be registered in ArgoCD). |
| spec.source.targetRevision | Branch, tag, or commit SHA to track. HEAD = latest on default branch. |
| spec.source.path | Directory within the repo containing manifests. |
| spec.destination.server | Cluster API URL. https://kubernetes.default.svc = same cluster. |
| spec.destination.namespace | Target namespace for deployment. |
| syncPolicy.automated | Enable auto-sync. Without this, sync is manual. |
| syncPolicy.automated.prune | Delete resources removed from Git. |
| syncPolicy.automated.selfHeal | Revert manual changes (drift correction). |
Deploying the application and checking its status.
# Apply the Application manifest
kubectl apply -f my-first-app.yaml

# Or create via ArgoCD CLI
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace guestbook \
  --sync-policy automated \
  --auto-prune \
  --self-heal
# Check application status
argocd app get guestbook

# Expected output:
Name:           guestbook
Server:         https://kubernetes.default.svc
Namespace:      guestbook
URL:            https://argocd.civica.internal/applications/guestbook
Sync Status:    Synced
Health Status:  Healthy

# Verify in cluster
kubectl get all -n guestbook
Git repositories and ArgoCD applications.
What does syncPolicy.automated.selfHeal: true do? What does spec.source.targetRevision: HEAD mean?

Fine-tuning how ArgoCD deploys your applications.
| Sync policy field | Effect |
|---|---|
| automated | Auto-sync on Git change |
| automated.prune | Delete removed resources |
| automated.selfHeal | Fix drift automatically |
| automated.allowEmpty | Allow syncing to empty state |

| Sync option | Effect |
|---|---|
| CreateNamespace=true | Auto-create target namespace |
| PruneLast=true | Delete resources after all others are healthy |
| ApplyOutOfSyncOnly=true | Only apply changed resources |
| Validate=false | Skip schema validation |
| Replace=true | Use replace instead of apply |
Caution: For production, consider manual sync (no automated) with required PR approvals. Auto-sync is great for dev/staging but adds risk in production without proper safeguards.
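A production-leaning sketch of that policy: omit the automated block entirely, so a sync happens only when someone runs argocd app sync (or clicks Sync) after the PR is approved.

```yaml
spec:
  syncPolicy:
    # no "automated" block → manual sync only
    syncOptions:
      - CreateNamespace=true
```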
ArgoCD isn't limited to plain YAML — it understands Helm charts and Kustomize overlays natively.
spec:
  source:
    repoURL: https://charts.bitnami.com/bitnami
    chart: nginx
    targetRevision: 15.0.0
    helm:
      values: |
        replicaCount: 3
        service:
          type: ClusterIP
Point directly at a Helm chart repository.
spec: source: repoURL: [email protected]:civica/gitops-config targetRevision: main path: apps/my-app/overlays/dev kustomize: images: - myacr.azurecr.io/my-app:v1.2.3
ArgoCD detects kustomization.yaml and runs kustomize build.
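A minimal kustomization.yaml for such an overlay might look like this — the paths and image name follow the example above but are assumptions about the repo layout:

```yaml
# apps/my-app/overlays/dev/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # shared manifests for all environments
images:
  - name: myacr.azurecr.io/my-app
    newTag: v1.2.3      # the tag the Application's kustomize.images override replaces
```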
Combining multiple sources in a single ArgoCD Application (ArgoCD 2.6+).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  sources:  # Note: "sources" (plural)
    - repoURL: https://charts.bitnami.com/bitnami
      chart: nginx
      targetRevision: 15.0.0
      helm:
        valueFiles:
          - $values/apps/nginx/values-prod.yaml  # From second source
    - repoURL: [email protected]:civica/gitops-config
      targetRevision: main
      ref: values  # Referenced as $values above
  destination:
    server: https://kubernetes.default.svc
    namespace: nginx
This pattern lets you use a Helm chart from a public registry while keeping your custom values in your own Git repo.
Keeping ArgoCD up to date safely.
# Check current version
argocd version

# Update Helm repo
helm repo update

# Review changes
helm diff upgrade argocd argo/argo-cd \
  --namespace argocd \
  -f values-production.yaml \
  --version 6.1.0

# Apply upgrade
helm upgrade argocd argo/argo-cd \
  --namespace argocd \
  -f values-production.yaml \
  --version 6.1.0
Install the helm diff plugin to preview changes before upgrading.

Choosing the right installation scope.
# Default installation (cluster-scoped)
helm install argocd argo/argo-cd \
  --namespace argocd
configs:
  params:
    application.namespaces: "team-a-*"
Applications and advanced configuration.
What does syncPolicy.automated.prune: true do? How does ArgoCD deploy a Helm chart? It runs helm template in the repo-server to render the chart into plain Kubernetes manifests, then applies those manifests. It does not use helm install or Helm releases.

Common problems and how to fix them.
# Check pod events
kubectl describe pod -n argocd \
  argocd-server-xxx

# Check logs
kubectl logs -n argocd \
  argocd-server-xxx

# Common causes:
# - Insufficient resources
# - Image pull errors (ACR auth)
# - PVC not binding (Redis HA)
# Test from repo-server
kubectl exec -n argocd \
  deploy/argocd-repo-server -- \
  git ls-remote <repo-url>

# Common causes:
# - Wrong SSH key / PAT
# - Network policy blocking egress
# - SSH host key not trusted
# - Firewall blocking port 22/443
What we accomplished today.
ArgoCD via Helm with production-ready configuration, HA, and resource limits.
ArgoCD UI via Ingress with TLS. Set up initial login and CLI access.
Git repository with SSH deploy keys. Created and verified first Application.
Next: Bootstrapping the GitOps Workflow — App of Apps, Kustomize overlays, sealed secrets, and environment management.
Installing ArgoCD on AKS
Next up: Bootstrapping the GitOps Workflow