Event-Driven Architecture on Kubernetes
React to things as they happen; don't poll for changes.
Knative Eventing provides the infrastructure for these patterns.
A system for producing, routing, and consuming events on Kubernetes.
CloudEvents is a CNCF graduated specification for describing events:
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service",
  "id": "abc-123-def",
  "time": "2025-01-15T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "orderId": "12345",
    "customerId": "cust-789",
    "total": 99.99
  }
}
Required attributes: specversion, type, source, id
CloudEvents can be transported as HTTP headers (binary mode) or JSON body (structured mode):
# Binary content mode (preferred -- headers carry metadata)
POST /events HTTP/1.1
Content-Type: application/json
ce-specversion: 1.0
ce-type: com.example.order.created
ce-source: /orders/service
ce-id: abc-123-def
ce-time: 2025-01-15T12:00:00Z

{"orderId": "12345", "customerId": "cust-789", "total": 99.99}
Binary mode is preferred -- it lets intermediaries route events without parsing the body.
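To make the attribute-to-header mapping concrete, here is a minimal pure-Python sketch of binary-mode rendering. This is an illustration, not the real CloudEvents SDK; the function name `to_binary_mode` is invented here, and the mapping is simplified (no escaping or extension handling).

```python
import json

def to_binary_mode(event: dict) -> tuple[dict, bytes]:
    """Render a CloudEvent (dict form) as binary-mode HTTP headers + body.

    Simplified sketch of the CloudEvents HTTP binding: every attribute
    except `data` becomes a `ce-` header, `datacontenttype` maps to the
    plain Content-Type header, and the body is just the data payload.
    """
    headers = {}
    for key, value in event.items():
        if key == "data":
            continue  # data travels in the body, not in headers
        if key == "datacontenttype":
            headers["Content-Type"] = value
        else:
            headers[f"ce-{key}"] = str(value)
    body = json.dumps(event.get("data", {})).encode()
    return headers, body

event = {
    "specversion": "1.0",
    "type": "com.example.order.created",
    "source": "/orders/service",
    "id": "abc-123-def",
    "datacontenttype": "application/json",
    "data": {"orderId": "12345", "total": 99.99},
}
headers, body = to_binary_mode(event)
print(headers["ce-type"])  # com.example.order.created
```

An intermediary that only needs `headers["ce-type"]` for routing never has to decode `body`, which is exactly why binary mode is cheaper for brokers.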
1. What specification does Knative Eventing use for event format?
2. Which CloudEvents attributes are required?
3. Why is binary content mode preferred for CloudEvents over HTTP?
Sources         Broker         Triggers          Sinks

+---------+    +----------+    +------------+    +---------+
| Ping    |--->|          |--->| Filter:    |--->| Service |
| Source  |    |  Broker  |    | type=order |    |    A    |
+---------+    |  (event  |    +------------+    +---------+
               |   hub)   |
+---------+    |          |    +------------+    +---------+
| API     |--->|          |--->| Filter:    |--->| Service |
| Server  |    |          |    | type=alert |    |    B    |
| Source  |    +----------+    +------------+    +---------+
+---------+
Sources produce events, Brokers collect them, Triggers filter and route them to Sinks.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: production
  annotations:
    eventing.knative.dev/broker.class: MTChannelBasedBroker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-br-defaults
    namespace: knative-eventing
The MTChannelBasedBroker is the default. For production, consider Kafka-backed brokers.
| Broker Type | Backing | Use Case |
|---|---|---|
| MTChannelBasedBroker | In-Memory Channel | Development, testing |
| MTChannelBasedBroker | Kafka Channel | Production with ordering |
| Kafka Broker | Native Apache Kafka | High-throughput production |
| RabbitMQ Broker | RabbitMQ | Complex routing needs |
Warning: InMemoryChannel loses events on restart. Never use for production workloads.
Triggers filter events from a Broker and deliver matching events to a subscriber:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-processor
  namespace: production
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
      source: /orders/service
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor
A Trigger's filter matches on exact values of CloudEvents attributes (type, source, and extensions). Multiple Triggers on the same Broker can match the same event, fanning it out to several subscribers.
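The exact-match semantics of a Trigger's `filter.attributes` block can be sketched in a few lines of Python. This is a conceptual model only, not Knative's actual filtering code:

```python
def trigger_matches(filter_attrs: dict, event_attrs: dict) -> bool:
    """Exact-match semantics of a Trigger's filter.attributes block:
    every listed attribute must be present on the event with exactly
    the same value; attributes not listed in the filter are ignored."""
    return all(event_attrs.get(k) == v for k, v in filter_attrs.items())

event = {
    "specversion": "1.0",
    "type": "com.example.order.created",
    "source": "/orders/service",
    "id": "abc-123-def",
}

print(trigger_matches({"type": "com.example.order.created"}, event))  # True
print(trigger_matches({"type": "com.example.alert"}, event))          # False
```

Note there is no wildcard or prefix matching in this basic filter; an event either matches every listed attribute exactly or is not delivered.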
# Trigger 1: Process payment
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: payment-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref: { apiVersion: serving.knative.dev/v1, kind: Service, name: payment-service }
---
# Trigger 2: Update inventory
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: inventory-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref: { apiVersion: serving.knative.dev/v1, kind: Service, name: inventory-service }
Sources are the producers that feed events into your system:
| Source | What It Does |
|---|---|
| PingSource | Sends events on a cron schedule (like a timer) |
| ApiServerSource | Watches Kubernetes API events (pod created, etc.) |
| KafkaSource | Consumes messages from Kafka topics |
| GitHubSource | Receives GitHub webhooks |
| Custom Sources | Build your own with SinkBinding or ContainerSource |
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: health-check-ping
  namespace: production
spec:
  schedule: "*/5 * * * *"  # Every 5 minutes
  contentType: "application/json"
  data: '{"check": "health", "service": "my-api"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: health-checker
Event type: dev.knative.sources.ping

1. What is the role of a Broker in Knative Eventing?
2. Can a single event from a Broker be delivered to multiple subscribers?
3. What type of CloudEvent does PingSource produce?
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: pod-watcher
  namespace: production
spec:
  serviceAccountName: pod-watcher-sa
  mode: Resource  # Send the full resource, or "Reference" for just a ref
  resources:
    - apiVersion: v1
      kind: Pod
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
Event types: dev.knative.apiserver.resource.add (or .update, .delete)
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: order-events
  namespace: production
spec:
  consumerGroup: knative-order-consumer
  bootstrapServers:
    - my-kafka-cluster:9092
  topics:
    - orders
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  net:
    sasl:
      enable: true
      type:
        secretKeyRef:
          name: kafka-secret
          key: sasl-type
      user:
        secretKeyRef:
          name: kafka-secret
          key: username
      password:
        secretKeyRef:
          name: kafka-secret
          key: password
An alternative to Broker/Trigger for direct point-to-point event routing:
Source ---> Channel ---> Subscription ---> Service A
               |
               +-------> Subscription ---> Service B
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: order-channel
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel  # Or KafkaChannel for production
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: order-sub
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: order-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor
Rule of thumb: Start with Broker/Trigger. Use Channel/Subscription when building pipelines.
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: order-pipeline
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: validate-order  # Step 1: Validate
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: enrich-order  # Step 2: Enrich with data
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: store-order  # Step 3: Store
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-confirmation  # Final: Send confirmation
Event In
|
v
[Step 1: validate-order]
| (response becomes input to next step)
v
[Step 2: enrich-order]
| (response becomes input to next step)
v
[Step 3: store-order]
| (response goes to reply)
v
[Reply: order-confirmation]
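The pipeline above can be modeled as simple function composition. This is a pure-Python sketch of the semantics only (the step bodies and the `customerTier` field are made up for illustration); in a real Sequence each step is an HTTP service receiving and returning CloudEvents:

```python
# Sequence semantics: each step's response becomes the next step's
# input, and the final response is delivered to `reply`.
def validate_order(event):
    assert "orderId" in event          # Step 1: Validate
    return {**event, "valid": True}

def enrich_order(event):
    return {**event, "customerTier": "gold"}  # Step 2: hypothetical enrichment

def store_order(event):
    return {**event, "stored": True}   # Step 3: Store

def run_sequence(steps, reply, event):
    for step in steps:
        event = step(event)            # response feeds the next step
    return reply(event)                # last response goes to the reply sink

result = run_sequence(
    [validate_order, enrich_order, store_order],
    reply=lambda e: {"confirmation": e["orderId"]},
    event={"orderId": "12345"},
)
print(result)  # {'confirmation': '12345'}
```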
Process an event through multiple branches simultaneously:
apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
  name: order-fanout
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  branches:
    - filter:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: is-high-value  # Filter function
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: premium-handler  # Handle high-value orders
    - filter:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: is-international
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: customs-handler  # Handle international orders
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-aggregator
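As a conceptual model (not Knative's implementation), Parallel semantics look like this in Python. Each branch has a filter function and a subscriber; every branch whose filter accepts the event gets its own copy, and the results flow to `reply`. The threshold and handler names are invented for the sketch:

```python
# Parallel semantics: each branch independently filters a copy of the
# event; branch outputs are forwarded to the `reply` sink.
def run_parallel(branches, reply, event):
    results = []
    for filter_fn, subscriber in branches:
        if filter_fn(event):                     # filter is a yes/no Service
            results.append(subscriber(dict(event)))  # each branch gets a copy
    return reply(results)

is_high_value = lambda e: e["total"] > 500       # hypothetical threshold
is_international = lambda e: e["country"] != "US"

branches = [
    (is_high_value, lambda e: {"handler": "premium"}),
    (is_international, lambda e: {"handler": "customs"}),
]

out = run_parallel(branches, reply=lambda results: results,
                   event={"total": 900, "country": "DE"})
print(out)  # [{'handler': 'premium'}, {'handler': 'customs'}]
```

The key contrast with Sequence: branches do not feed each other; each sees the original event, so a domestic low-value order would simply match zero branches.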
1. In a Knative Sequence, what does each step receive as input?
2. What is the difference between a Sequence and a Parallel?
3. When should you prefer Channel/Subscription over Broker/Trigger?
# On a Broker
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dead-letter-handler
    retry: 3
    backoffPolicy: exponential
    backoffDelay: "PT2S"  # 2 seconds initial delay
backoffPolicy can be linear or exponential.
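A sketch of the retry schedule these settings imply. Assumption: for `exponential` each attempt waits roughly backoffDelay * 2^n (n = 0, 1, 2, ...), and for `linear` backoffDelay * (n + 1); verify the exact formula against the delivery spec of your Knative version before relying on it:

```python
def retry_delays(retry: int, backoff_delay_s: float, policy: str) -> list:
    """Delays (in seconds) before each retry attempt.

    Assumed formulas (check your Knative version's delivery spec):
      exponential: backoff_delay_s * 2^n   for n = 0 .. retry-1
      linear:      backoff_delay_s * (n+1) for n = 0 .. retry-1
    """
    if policy == "exponential":
        return [backoff_delay_s * 2**n for n in range(retry)]
    return [backoff_delay_s * (n + 1) for n in range(retry)]

# retry: 3, backoffDelay: "PT2S" (2 seconds), exponential
print(retry_delays(3, 2.0, "exponential"))  # [2.0, 4.0, 8.0]
print(retry_delays(3, 2.0, "linear"))       # [2.0, 4.0, 6.0]
```

After the last retry fails, the event goes to the deadLetterSink rather than being dropped.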
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: payment-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: payment-service
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: payment-error-handler
    retry: 5
    backoffPolicy: exponential
    backoffDelay: "PT1S"
Trigger-level delivery overrides Broker-level delivery settings.
SinkBinding injects sink information into any Kubernetes workload:
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: my-app-binding
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: my-legacy-app
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
This injects a K_SINK environment variable into your app's pods.
# In your app, just POST CloudEvents to $K_SINK
curl -X POST $K_SINK \
  -H "Content-Type: application/json" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: com.example.my-event" \
  -H "Ce-Source: /my-app" \
  -H "Ce-Id: $(uuidgen)" \
  -d '{"message": "hello"}'
Send an event and get a response event back. The subscriber's HTTP response becomes a new CloudEvent routed back to the Broker.
Trigger with reply (spec fragment):

spec:
  subscriber:
    ref: { apiVersion: serving.knative.dev/v1, kind: Service, name: enricher }
  # The subscriber's HTTP response goes back to the Broker as a new event
Store all state changes as events. Replay events to rebuild state. Combine with Kafka for durable event log.
Source -> KafkaChannel (permanent log) -> Trigger -> Projector (builds read model)
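Rebuilding state from the log is a fold over the events. A minimal sketch of the Projector role, assuming hypothetical order/payment event types modeled on the examples in this module:

```python
# Replay a durable event log (e.g. retained by a KafkaChannel) to
# rebuild a read model from scratch -- the "Projector" role.
def project(state: dict, event: dict) -> dict:
    """Fold one event into the order read model."""
    order = state.setdefault(event["orderId"], {"status": "new"})
    if event["type"] == "com.example.order.created":
        order["total"] = event["total"]
    elif event["type"] == "com.example.payment.completed":
        order["status"] = "paid"
    return state

log = [  # the append-only event log
    {"type": "com.example.order.created", "orderId": "12345", "total": 99.99},
    {"type": "com.example.payment.completed", "orderId": "12345"},
]

state = {}
for event in log:
    state = project(state, event)

print(state["12345"])  # {'status': 'paid', 'total': 99.99}
```

Because the log is the source of truth, you can change the projection logic and replay from the beginning to get a new read model, without touching the producers.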
          [Order API]
               |
        order.created event
               |
               v
           [Broker]
          /    |    \
         v     v     v
  [Payment] [Inventory] [Email]
   Service    Service    Service
      |          |          |
      v          v          v
   payment.  inventory.  notification.
   completed updated     sent
      |          |          |
      +----------+----------+
                 |
                 v
             [Broker]  (events flow back)
                 |
                 v
      [Order Status Updater]
# Check broker status
kubectl get broker default -o yaml

# Check trigger status
kubectl get triggers -o wide

# Check sources
kubectl get pingsource,apiserversource,kafkasource

# View events flowing through (deploy an event display service)
kn service create event-display \
  --image gcr.io/knative-releases/knative.dev/eventing/cmd/event_display

# Create a trigger to catch all events
kn trigger create debug-trigger \
  --broker default \
  --sink ksvc:event-display

# Watch the logs
kubectl logs -l serving.knative.dev/service=event-display -f
1. What environment variable does SinkBinding inject into pods?
2. What happens to an event that fails delivery after all retries are exhausted?
3. How can you debug events flowing through a Broker?
Event type naming convention: com.company.domain.action

In the final module, we cover Advanced Knative and Operations:
Module 4: Advanced Knative and Operations