Module 3 of 4

Knative Eventing

Event-Driven Architecture on Kubernetes

React to things as they happen; don't poll for changes.

Why Event-Driven Architecture?

"Instead of constantly asking 'has anything changed?', what if your systems could just tell each other when something happens?"

Polling (Bad)

  • Wastes resources
  • Adds latency
  • Tight coupling
  • Doesn't scale well

Events (Good)

  • React instantly
  • Loose coupling
  • Scales naturally
  • Easy to add consumers

Event-Driven Patterns

Common event-driven patterns include fan-out (one event, many independent consumers), pipelines (sequential processing steps), request/reply, and event sourcing. Knative Eventing provides the infrastructure for all of these patterns.

What is Knative Eventing?

A system for producing, routing, and consuming events on Kubernetes.

CloudEvents: The Common Language

CloudEvents is a CNCF graduated specification for describing events:

{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service",
  "id": "abc-123-def",
  "time": "2025-01-15T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "orderId": "12345",
    "customerId": "cust-789",
    "total": 99.99
  }
}
  

Required attributes: specversion, type, source, id
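A minimal sketch of checking those required attributes in Python (a hypothetical helper for illustration, not part of any CloudEvents SDK):

```python
# The four required CloudEvents attributes (spec v1.0)
REQUIRED = {"specversion", "type", "source", "id"}

def missing_attributes(event: dict) -> list:
    """Return required CloudEvents attributes absent from `event`, sorted."""
    return sorted(REQUIRED - event.keys())

# A complete event reports nothing missing
print(missing_attributes({"specversion": "1.0", "type": "com.example.order.created",
                          "source": "/orders/service", "id": "abc-123-def"}))  # []
```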

CloudEvents over HTTP

CloudEvents can be transported as HTTP headers (binary mode) or JSON body (structured mode):

# Binary content mode (preferred -- headers carry metadata)
POST /events HTTP/1.1
Content-Type: application/json
ce-specversion: 1.0
ce-type: com.example.order.created
ce-source: /orders/service
ce-id: abc-123-def
ce-time: 2025-01-15T12:00:00Z

{"orderId": "12345", "customerId": "cust-789", "total": 99.99}
  

Binary mode is preferred -- it lets intermediaries route events without parsing the body.
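For contrast, the same event in structured mode, where the whole CloudEvent -- attributes and data together -- travels as the JSON body with the media type defined by the CloudEvents HTTP binding:

```http
POST /events HTTP/1.1
Content-Type: application/cloudevents+json

{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service",
  "id": "abc-123-def",
  "data": {"orderId": "12345", "customerId": "cust-789", "total": 99.99}
}
```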

Knowledge Check

1. What specification does Knative Eventing use for event format?

A) Apache Avro
B) CloudEvents (CNCF specification)
C) JSON Schema
Correct: B. CloudEvents is a CNCF graduated specification that provides a standard way to describe events across different systems.

2. Which CloudEvents attributes are required?

A) type, source, data, time
B) specversion, type, source, id
C) id, type, data, datacontenttype
Correct: B. The four required attributes are specversion, type, source, and id. Other attributes like time and data are optional.

3. Why is binary content mode preferred for CloudEvents over HTTP?

A) It uses less bandwidth
B) Intermediaries can route events without parsing the body
C) It supports more data types
Correct: B. In binary mode, metadata is in HTTP headers, allowing brokers and triggers to filter and route events without deserializing the body.

Knative Eventing Architecture

  Sources          Broker            Triggers        Sinks
+---------+    +----------+    +------------+    +--------+
| Ping    |--->|          |--->| Filter:    |--->| Service|
| Source  |    |  Broker  |    | type=order |    | A      |
+---------+    |  (event  |    +------------+    +--------+
               |   hub)   |
+---------+    |          |    +------------+    +--------+
| API     |--->|          |--->| Filter:    |--->| Service|
| Server  |    |          |    | type=alert |    | B      |
| Source  |    +----------+    +------------+    +--------+
+---------+
    

Sources produce events, Brokers collect them, Triggers filter and route them to Sinks.

Brokers: The Event Hub

"Think of a Broker like a post office sorting facility. All mail (events) arrives at the central hub, and sorting rules (Triggers) decide which mailbox (service) each piece goes to."
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: production
  annotations:
    eventing.knative.dev/broker.class: MTChannelBasedBroker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: config-br-defaults
    namespace: knative-eventing
  

The MTChannelBasedBroker is the default. For production, consider Kafka-backed brokers.
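To test a Broker by hand, you can POST a CloudEvent straight to its ingress. This sketch assumes the default in-cluster URL pattern (namespace `production`, Broker `default`); check `kubectl get broker default` for the actual address:

```http
POST /production/default HTTP/1.1
Host: broker-ingress.knative-eventing.svc.cluster.local
Content-Type: application/json
ce-specversion: 1.0
ce-type: com.example.order.created
ce-source: /orders/service
ce-id: abc-123-def

{"orderId": "12345", "customerId": "cust-789", "total": 99.99}
```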

Broker Implementations

Broker Type            Backing               Use Case
MTChannelBasedBroker   In-Memory Channel     Development, testing
MTChannelBasedBroker   Kafka Channel         Production with ordering
Kafka Broker           Native Apache Kafka   High-throughput production
RabbitMQ Broker        RabbitMQ              Complex routing needs

Warning: InMemoryChannel loses events on restart. Never use for production workloads.
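One way to move off InMemoryChannel is to change the default backing channel for channel-based Brokers. A sketch, assuming the Kafka channel implementation is installed in the cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-default-channel
  namespace: knative-eventing
data:
  # Template used for channels created on behalf of Brokers
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1
```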

Triggers: Subscribing to Events

Triggers filter events from a Broker and deliver matching events to a subscriber:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-processor
  namespace: production
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
      source: /orders/service
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor
  

Multiple Triggers, One Broker

"One event can trigger multiple independent actions. An order.created event might trigger payment processing, inventory update, and email notification -- all independently."
# Trigger 1: Process payment
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: payment-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref: { kind: Service, name: payment-service }

# Trigger 2: Update inventory
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: inventory-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref: { kind: Service, name: inventory-service }
  

Sources: Where Events Come From

Sources are the producers that feed events into your system:

Source            What It Does
PingSource        Sends events on a cron schedule (like a timer)
ApiServerSource   Watches Kubernetes API events (pod created, etc.)
KafkaSource       Consumes messages from Kafka topics
GitHubSource      Receives GitHub webhooks
Custom Sources    Build your own with SinkBinding or ContainerSource
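As a sketch of a custom source: ContainerSource runs any container image and injects the sink's address into it as K_SINK. The image name here is hypothetical:

```yaml
apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: my-custom-source
spec:
  template:
    spec:
      containers:
        - name: producer
          # Hypothetical image that POSTs CloudEvents to $K_SINK
          image: registry.example.com/my-event-producer:latest
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```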

PingSource: Cron-Based Events

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: health-check-ping
  namespace: production
spec:
  schedule: "*/5 * * * *"     # Every 5 minutes
  contentType: "application/json"
  data: '{"check": "health", "service": "my-api"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: health-checker
  

Knowledge Check

1. What is the role of a Broker in Knative Eventing?

A) It creates events from external sources
B) It receives events and distributes them to matching Triggers
C) It stores events permanently for replay
Correct: B. A Broker acts as an event hub, receiving events from any source and distributing them to Triggers whose filters match the event attributes.

2. Can a single event from a Broker be delivered to multiple subscribers?

A) Yes, if multiple Triggers match the event's attributes
B) No, each event is delivered to exactly one subscriber
C) Only if the event specifies multiple targets
Correct: A. Multiple Triggers can match the same event, causing it to be delivered to all matching subscribers independently (fan-out).

3. What type of CloudEvent does PingSource produce?

A) com.knative.ping
B) dev.knative.sources.ping
C) knative.eventing.ping.v1
Correct: B. PingSource emits CloudEvents with type dev.knative.sources.ping.

ApiServerSource: Kubernetes Events

apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: pod-watcher
  namespace: production
spec:
  serviceAccountName: pod-watcher-sa
  mode: Resource          # Send full resource, or "Reference" for just ref
  resources:
    - apiVersion: v1
      kind: Pod
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  

KafkaSource: Consuming from Kafka

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: order-events
  namespace: production
spec:
  consumerGroup: knative-order-consumer
  bootstrapServers:
    - my-kafka-cluster:9092
  topics:
    - orders
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  net:
    sasl:
      enable: true
      type:
        secretKeyRef:
          name: kafka-secret
          key: sasl-type
      password:
        secretKeyRef:
          name: kafka-secret
          key: password
      user:
        secretKeyRef:
          name: kafka-secret
          key: username
  

Channels and Subscriptions

An alternative to Broker/Trigger for direct point-to-point event routing:

  Source ---> Channel ---> Subscription ---> Service A
                  |
                  +----> Subscription ---> Service B
    
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: order-channel
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel    # Or KafkaChannel for production
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: order-sub
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: order-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor
  

When to Use Which?

Broker/Trigger

  • Content-based filtering
  • Multiple event types
  • Dynamic subscribers
  • Complex routing logic
  • Most common pattern

Channel/Subscription

  • Simple point-to-point
  • All events go to all subscribers
  • Pipeline building blocks
  • No filtering needed
  • Building Sequences/Parallels

Rule of thumb: Start with Broker/Trigger. Use Channel/Subscription when building pipelines.

Sequences: Event Pipelines

"Process an event through multiple steps in order: validate, enrich, transform, then store."
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: order-pipeline
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: validate-order        # Step 1: Validate
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: enrich-order           # Step 2: Enrich with data
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: store-order            # Step 3: Store
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-confirmation       # Final: Send confirmation
  

How Sequences Work

  Event In
     |
     v
  [Step 1: validate-order]
     | (response becomes input to next step)
     v
  [Step 2: enrich-order]
     | (response becomes input to next step)
     v
  [Step 3: store-order]
     | (response goes to reply)
     v
  [Reply: order-confirmation]
    

Parallel: Fan-Out Processing

Process an event through multiple branches simultaneously:

apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
  name: order-fanout
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  branches:
    - filter:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: is-high-value       # Filter function
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: premium-handler     # Handle high-value orders
    - filter:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: is-international
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: customs-handler     # Handle international orders
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-aggregator
  

Knowledge Check

1. In a Knative Sequence, what does each step receive as input?

A) The original event that entered the sequence
B) The response (output) from the previous step
C) A copy of all previous steps' outputs
Correct: B. Each step in a Sequence receives the HTTP response from the previous step as its input CloudEvent, forming a processing pipeline.

2. What is the difference between a Sequence and a Parallel?

A) Sequences are faster than Parallels
B) Sequences process steps in order; Parallels process branches simultaneously
C) Sequences use Brokers; Parallels use Channels
Correct: B. Sequences chain steps sequentially (output of one is input to next). Parallels fan-out to multiple branches that process concurrently.

3. When should you prefer Channel/Subscription over Broker/Trigger?

A) When you need content-based filtering
B) When you have many different event types
C) When building pipelines (Sequences/Parallels) or simple point-to-point routing
Correct: C. Channel/Subscription is the building block for Sequences and Parallels, and works well for simple all-events-to-all-subscribers routing without filtering.

Dead Letter Sinks

"In the real world, deliveries fail. A dead letter sink catches undeliverable events so they don't vanish into the void."
# On a Broker
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dead-letter-handler
    retry: 3
    backoffPolicy: exponential
    backoffDelay: "PT2S"        # 2 seconds initial delay
  

Per-Trigger Dead Letter Configuration

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: payment-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: payment-service
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: payment-error-handler
    retry: 5
    backoffPolicy: exponential
    backoffDelay: "PT1S"
  

Trigger-level delivery overrides Broker-level delivery settings.
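As a worked example of the exponential policy: Knative's delivery spec describes the wait before retry n as backoffDelay × 2^n, so the Trigger above would pace its five retries roughly like this (illustrative calculation only):

```python
from datetime import timedelta

def backoff_schedule(retries: int, initial: timedelta) -> list:
    """Delay before each retry under the exponential policy: initial * 2**n."""
    return [initial * (2 ** n) for n in range(retries)]

# retry: 5, backoffDelay: "PT1S" -> waits of 1s, 2s, 4s, 8s, 16s
print([d.total_seconds() for d in backoff_schedule(5, timedelta(seconds=1))])
```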

SinkBinding: Make Any App an Event Source

SinkBinding injects sink information into any Kubernetes workload:

apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: my-app-binding
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: my-legacy-app
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
  

This injects K_SINK environment variable into your app's pods.

# In your app, just POST CloudEvents to $K_SINK
curl -X POST $K_SINK \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: com.example.my-event" \
  -H "Ce-Source: /my-app" \
  -H "Ce-Id: $(uuidgen)" \
  -d '{"message": "hello"}'
  

Event Flow Patterns

Request / Reply

Send an event and get a response event back. The subscriber's HTTP response becomes a new CloudEvent routed back to the Broker.

Trigger with reply (the subscriber's response is re-ingested by the Broker):
apiVersion: eventing.knative.dev/v1
kind: Trigger
spec:
  broker: default
  subscriber:
    ref: { kind: Service, name: enricher }
  # The subscriber's HTTP response is posted
  # back to the Broker as a new event

Event Sourcing

Store all state changes as events. Replay events to rebuild state. Combine with Kafka for durable event log.

Source -> KafkaChannel
  (durable event log)
  -> Subscription -> Projector
  (builds read model)
      

Real-World Example: E-Commerce

  [Order API]
       |
  order.created event
       |
       v
    [Broker]
    /   |    \
   v    v     v
[Payment] [Inventory] [Email]
 Service   Service     Service
   |          |           |
   v          v           v
payment.   inventory.  notification.
completed  updated     sent
   |          |           |
   v          v           v
    [Broker] (events flow back)
       |
       v
  [Order Status Updater]
    

Debugging Eventing

# Check broker status
kubectl get broker default -o yaml

# Check trigger status
kubectl get triggers -o wide

# Check sources
kubectl get pingsource,apiserversource,kafkasource

# View events flowing through (deploy an event display service)
kn service create event-display \
  --image gcr.io/knative-releases/knative.dev/eventing/cmd/event_display

# Create a trigger to catch all events
kn trigger create debug-trigger \
  --broker default \
  --sink ksvc:event-display

# Watch the logs
kubectl logs -l serving.knative.dev/service=event-display -f
  

Knowledge Check

1. What environment variable does SinkBinding inject into pods?

A) KNATIVE_SINK_URL
B) K_SINK
C) EVENT_SINK
Correct: B. SinkBinding injects the K_SINK environment variable containing the URL to send CloudEvents to.

2. What happens to an event that fails delivery after all retries are exhausted?

A) It is retried indefinitely
B) It is silently dropped
C) It is sent to the dead letter sink (if configured), otherwise dropped
Correct: C. If a dead letter sink is configured, failed events are sent there. Without one, events are dropped after retries are exhausted.

3. How can you debug events flowing through a Broker?

A) Deploy an event-display service with a catch-all Trigger and view its logs
B) Check the Broker's internal event log
C) Use kubectl get events to see CloudEvents
Correct: A. Deploy the event_display image as a Knative Service, create a Trigger with no filter (catches all events), and watch the logs.

Eventing Best Practices

  • Use a Kafka-backed Broker for production; InMemoryChannel loses events on restart
  • Configure dead letter sinks so failed deliveries don't vanish silently
  • Start with Broker/Trigger; use Channel/Subscription when building pipelines
  • Filter with Trigger attributes rather than in application code

What's Coming Next

In the final module, we cover Advanced Knative and Operations:

Module 4: Advanced Knative and Operations

Key Takeaways

  • CloudEvents gives every event a standard envelope; binary mode keeps metadata in headers
  • Brokers collect events; Triggers filter and route them, with fan-out built in
  • Sequences chain steps in order; Parallels process branches simultaneously
  • Retries and dead letter sinks keep failed deliveries from being lost
