Trust but verify. Then push the boundaries.
Verification runs after a Stage is promoted to confirm the application is healthy. Only verified Freight can proceed to the next Stage.
Kargo uses Argo Rollouts AnalysisTemplate CRDs for verification.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: http-health-check
  namespace: my-web-app
spec:
  metrics:
  - name: webcheck
    interval: 10s
    count: 3
    successCondition: result == "200"
    failureLimit: 1
    provider:
      web:
        url: http://my-web-app.dev.svc.cluster.local/healthz
        method: GET
        jsonPath: "{$.statusCode}"
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
  namespace: my-web-app
spec:
  metrics:
  - name: error-rate
    interval: 30s
    count: 5
    successCondition: result[0] < 0.05
    failureLimit: 2
    provider:
      prometheus:
        address: http://prometheus.monitoring.svc:9090
        query: |
          sum(rate(http_requests_total{
            service="my-web-app",
            status=~"5.."
          }[5m])) /
          sum(rate(http_requests_total{
            service="my-web-app"
          }[5m]))
Fails if the 5xx error rate exceeds 5% over the measurement window.
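To make the successCondition arithmetic concrete, here is a standalone sketch with made-up request rates (these numbers are illustrative, not output from the query above):

```shell
# Hypothetical sample: 12 req/s returning 5xx out of 300 req/s total
errors=12
total=300

# error rate = 12 / 300 = 0.04, below the 0.05 threshold, so the check passes
awk -v e="$errors" -v t="$total" \
  'BEGIN { if (e / t < 0.05) print "pass"; else print "fail" }'
```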
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: my-web-app
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: my-warehouse
    sources:
      direct: true
  promotionTemplate:
    steps:
    # ... git-clone, git-update, git-commit, git-push ...
  verification:
    analysisTemplates:
    - name: http-health-check
    - name: error-rate-check
    analysisRunMetadata:
      labels:
        app: my-web-app
        stage: dev
1. What CRD does Kargo use for verification?
2. When does verification run?
3. What happens if verification fails?
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: smoke-test
  namespace: my-web-app
spec:
  metrics:
  - name: smoke-test
    provider:
      job:
        spec:
          template:
            spec:
              containers:
              - name: test
                image: myacr.azurecr.io/smoke-tests:latest
                command: ["/bin/sh", "-c"]
                args:
                - |
                  curl -f http://my-web-app.dev.svc/api/health &&
                  echo "Tests passed"
              restartPolicy: Never
          backoffLimit: 1
Sometimes you need to re-run verification: a flaky test, perhaps, or a transient infrastructure issue.
kargo verify --project my-web-app --stage dev

Beyond the basics: parallel stages, control flow, multi-warehouse, and more.
Not all pipelines are linear. You might want to promote to multiple stages in parallel after dev verification.
# staging-us: requests from dev
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: staging-us
  namespace: my-web-app
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: my-warehouse
    sources:
      stages: [dev]
  # ... promotion template for US region ...
---
# staging-eu: ALSO requests from dev (parallel!)
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: staging-eu
  namespace: my-web-app
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: my-warehouse
    sources:
      stages: [dev]
  # ... promotion template for EU region ...
Production can require Freight to be verified in all parallel staging environments before it becomes eligible.
# prod-us: requires both staging-us AND staging-eu
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: prod-us
  namespace: my-web-app
spec:
  requestedFreight:
  - origin:
      kind: Warehouse
      name: my-warehouse
    sources:
      stages:
      - staging-us
      - staging-eu # Must be verified in BOTH
Sometimes your application images and infrastructure config change at different cadences.
# Two Warehouses
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: app-images
spec:
  subscriptions:
  - image:
      repoURL: myacr.azurecr.io/my-web-app
      imageSelectionStrategy: SemVer
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: infra-config
spec:
  subscriptions:
  - git:
      repoURL: https://github.com/my-org/infra-config.git
      branch: main
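A Stage can then request Freight from both Warehouses at once, with each entry in requestedFreight tracked against its own origin. A sketch reusing the names above (the direct sources setting is illustrative):

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: my-web-app
spec:
  requestedFreight:
  # New application images
  - origin:
      kind: Warehouse
      name: app-images
    sources:
      direct: true
  # Infrastructure config, promoted on its own cadence
  - origin:
      kind: Warehouse
      name: infra-config
    sources:
      direct: true
```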
1. How do you create parallel stages in Kargo?
2. How can a downstream Stage require Freight to pass through ALL parallel stages?
3. When would you use multiple Warehouses?
Promotion steps can include conditional logic using the if field to skip steps based on context or previous step outputs.
promotionTemplate:
  steps:
  - uses: git-clone
    config:
      repoURL: https://github.com/my-org/k8s-config.git
      checkout:
      - branch: main
        path: ./src
  - uses: git-update
    config:
      path: ./src
      updates:
      - file: apps/my-web-app/${{ ctx.stage }}/values.yaml
        key: image.tag
        value: ${{ freight.images["myacr.azurecr.io/my-web-app"].tag }}
  # Only run argocd-update for prod
  - uses: argocd-update
    if: ctx.stage == "prod"
    config:
      apps:
      - name: my-web-app-prod
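Conditions can also reference earlier step outputs, for example skipping the ArgoCD update when nothing was actually pushed. A sketch only; the alias/outputs wiring and exact expression syntax should be checked against your Kargo version:

```yaml
- uses: git-push
  as: push          # alias so later steps can reference this step's outputs
  config:
    path: ./src
- uses: argocd-update
  if: ${{ outputs.push.commit != "" }}  # skip if no new commit was pushed
  config:
    apps:
    - name: my-web-app-${{ ctx.stage }}
```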
Kargo makes rollback straightforward because Freight is immutable and historical.
kargo promote --stage prod --freight previous-good-freight

# Freight history for prod:
#   brave-dolphin (v1.6.0) - broken!
#   jolly-penguin (v1.5.0) - was working

# Promote the old, working Freight to prod
kargo promote --project my-web-app \
  --stage prod \
  --freight jolly-penguin
# Kargo runs promotion steps:
# - Clones Git repo
# - Updates values.yaml to image.tag: "1.5.0"
# - Commits and pushes
# - ArgoCD syncs back to v1.5.0
Each Stage typically has a corresponding ArgoCD Application watching the same Git path/branch.
Kargo can check ArgoCD Application health as part of the promotion to ensure the sync succeeded.
Both Kargo and ArgoCD may need access to the same Git repos. Coordinate credentials management.
Ensure ArgoCD project RBAC and Kargo project RBAC are aligned for consistent access control.
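On the credentials point: Kargo discovers repository credentials from Secrets in the project namespace labeled with kargo.akuity.io/cred-type. A sketch; the Secret name, username, and token are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-creds
  namespace: my-web-app
  labels:
    kargo.akuity.io/cred-type: git   # other types: image, helm
stringData:
  repoURL: https://github.com/my-org/k8s-config.git
  username: ci-bot                   # placeholder
  password: <git-access-token>       # placeholder; use a PAT or app token
```

ArgoCD keeps its own repo Secrets in the argocd namespace, so the same credentials may need to be registered in both places.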
# ArgoCD Application for the dev Stage
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-web-app-dev
  namespace: argocd
  annotations:
    # Link to Kargo for UI integration
    kargo.akuity.io/authorized-stage: my-web-app:dev
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/k8s-config.git
    targetRevision: main
    path: apps/my-web-app/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: my-web-app-dev
  syncPolicy:
    automated:
      selfHeal: true
1. How do you roll back a Stage to a previous version in Kargo?
2. What annotation links an ArgoCD Application to a Kargo Stage?
3. What makes rollback easy in Kargo?
Use kubectl get analysisrun -n my-web-app for detailed metric results.

# Kargo controller logs
kubectl logs -n kargo deploy/kargo-controller -f
# Warehouse status
kubectl describe warehouse my-warehouse -n my-web-app
# Stage status and current Freight
kubectl describe stage dev -n my-web-app
# Promotion details
kubectl describe promotion promote-xyz -n my-web-app
# Verification (AnalysisRun) results
kubectl get analysisrun -n my-web-app
kubectl describe analysisrun <name> -n my-web-app
# Freight details
kargo get freight --project my-web-app -o yaml
Begin with a linear dev → staging → prod pipeline. Add complexity only when needed.
Add verification to every Stage, even dev. Catch issues early. Use health checks and smoke tests.
Semantic versioning gives you the most control. Use constraints to limit major version jumps.
Restrict who can promote to production. Use Kargo RBAC roles for approvals.
Set up alerts for Freight stuck in non-verified state. Track promotion success rates.
Name your Stages and Warehouses clearly. Use the Kargo UI to visualize and share pipeline status.
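The version-constraint advice above maps onto the Warehouse image subscription. A sketch; the constraint value is illustrative, and the field name should be confirmed against your Kargo version:

```yaml
subscriptions:
- image:
    repoURL: myacr.azurecr.io/my-web-app
    imageSelectionStrategy: SemVer
    semverConstraint: ^1.0.0  # track 1.x.x releases; never auto-jump to 2.x
```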
You now have the tools to build safe, automated, auditable promotion pipelines.
https://docs.kargo.io
https://github.com/akuity/kargo
https://argoproj.github.io/argo-rollouts
https://akuity.io (managed ArgoCD + Kargo)

Thank you!