Civica Training Program

Cluster Architecture, Installation & Configuration

Understanding every component, from control plane to network fabric

Presentation 2 of 8  |  CKA Domain: 25% of Exam

Your Mission Begins...

Your team lead walks over to your desk: "We need a production Kubernetes cluster for the new payments platform. Three environments — dev, staging, production. It needs RBAC, proper networking, and we need to be able to upgrade it without downtime. The deadline is next month."

You nod confidently. But inside, questions are racing: How does the control plane actually work? What happens if etcd goes down? How do pods on different nodes talk to each other? How do you control who can do what?

This presentation answers every one of those questions. By the end, you'll understand the architecture deeply enough to build, secure, and maintain a production cluster.

Roadmap

Today's Journey

We'll explore the cluster layer by layer, from the control plane brain to the network fabric that connects everything.

Part 1: The Control Plane

  • API Server — the front door
  • etcd — the source of truth
  • Scheduler — the matchmaker
  • Controller Manager — the enforcer

Part 2: Worker Nodes

  • kubelet — the node agent
  • kube-proxy — the network plumber
  • Container runtime — the engine
  • Cluster bootstrapping with kubeadm

Part 3: Networking & Security

  • Pod-to-pod networking
  • CNI plugins (Azure CNI focus)
  • RBAC — who can do what
  • Service Accounts

Part 4: Operations

  • Cluster upgrades and AKS upgrades
  • etcd backup and restore
  • Node management (cordon, drain)
  • Admission controllers & network policies

Architecture

The Cluster as a City

To understand cluster architecture, imagine a well-run city. Each component has a role, and the city only works when they all collaborate.

+------------------------------------------------------------------+
|                       THE KUBERNETES CITY                        |
|                                                                  |
|  CITY HALL (Control Plane)         DISTRICTS (Worker Nodes)      |
|  +----------------------------+    +-------------------------+   |
|  | Reception    = API Server  |    | Site Managers = kubelet |   |
|  | Filing Room  = etcd        |    | Post Office   = kube-   |   |
|  | Traffic Ctrl = Scheduler   |    |                 proxy   |   |
|  | Inspectors   = Controllers |    | Factories = Container   |   |
|  +----------------------------+    |             Runtime     |   |
|                                    | Workers   = Pods        |   |
|                                    +-------------------------+   |
+------------------------------------------------------------------+

City Hall (Control Plane)

API Server is reception — every request goes through it. etcd is the filing room — every record is stored here. Scheduler is the traffic controller — directs new workloads to the right district. Controllers are building inspectors — constantly checking that reality matches the blueprints.

Districts (Worker Nodes)

kubelet is the site manager — takes orders from City Hall and ensures buildings (pods) are constructed correctly. kube-proxy is the postal service — ensures messages (network traffic) reach the right address. Container Runtime is the factory floor where actual construction happens.

Control Plane

API Server: The Front Door to Everything

Every single interaction with the cluster — from kubectl to the kubelet — flows through the API Server. Let's trace a request.

When you run kubectl apply -f deployment.yaml, here's what happens behind the scenes:

kubectl ---> Authentication ---> Authorization ----> Admission ---> etcd
             (Who are you?)      (Can you do this?)  (Is it valid?) (Store it)

The Request Pipeline

  1. Authentication: Verifies identity via certificates, tokens, or OIDC (Azure AD)
  2. Authorization: Checks RBAC policies — does this user have permission?
  3. Admission Controllers: Validates and potentially mutates the request (e.g., inject defaults, enforce policies)
  4. Persistence: Writes the object to etcd
  5. Notification: Informs watchers (Scheduler, Controllers) that something changed
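The five steps above can be sketched as a single pipeline function. This is a minimal illustration only, not the real kube-apiserver code; the token map, RBAC tuples, and the injected default label are all invented for the example:

```python
# Simplified sketch of the API Server request pipeline (illustrative only;
# the real kube-apiserver is far more involved).

def handle_request(request, users, rbac_rules, store):
    # 1. Authentication: who are you?
    user = users.get(request["token"])
    if user is None:
        return "401 Unauthorized"
    # 2. Authorization: does an RBAC rule allow this verb on this resource?
    if (user, request["verb"], request["resource"]) not in rbac_rules:
        return "403 Forbidden"
    # 3. Admission: validate/mutate the object (here: inject a default label)
    obj = dict(request["object"])
    obj.setdefault("labels", {}).setdefault("app", obj["name"])
    # 4. Persistence: write the object to the backing store (etcd in a real cluster)
    store[obj["name"]] = obj
    # 5. Notification: watchers (Scheduler, Controllers) would be informed here
    return "201 Created"

store = {}
users = {"tok-jane": "jane"}
rbac = {("jane", "create", "pods")}
print(handle_request(
    {"token": "tok-jane", "verb": "create", "resource": "pods",
     "object": {"name": "web"}},
    users, rbac, store))  # 201 Created
```

Note how the cheap rejections happen first: an unauthenticated request never reaches authorization, and an unauthorized one never reaches admission or etcd.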

Key Characteristics

  • RESTful API — every resource has a REST endpoint
  • Only component that talks to etcd directly
  • Horizontally scalable — run multiple instances behind a load balancer
  • Exposes port 6443 (HTTPS) by default
  • Watch mechanism — components subscribe to changes instead of polling
# Check API Server health
kubectl get --raw /healthz
kubectl get --raw /readyz
Critical Component

etcd: The Source of Truth

If the API Server is the front door, etcd is the vault behind it. Lose etcd, lose everything.

Imagine your company's filing cabinet — every employee record, every contract, every policy document. Now imagine that filing cabinet catches fire and there's no backup. That's what losing etcd without a backup feels like. Every pod definition, every secret, every config map — gone.

What etcd Stores

  • All cluster state (pods, services, deployments, secrets, configmaps)
  • RBAC policies and role bindings
  • Service account tokens
  • Lease information for leader election
  • Custom Resource Definitions (CRDs)

Key Properties

  • Distributed — runs as a cluster (3 or 5 members for HA)
  • Consistent — uses Raft consensus algorithm
  • Key-value store — keys are hierarchical paths
  • Default port: 2379 (client), 2380 (peer)
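The arithmetic behind the "3 or 5 members" advice is easy to verify. A quick sketch (not etcd code, just the quorum math):

```python
# Raft quorum: a write commits only after a majority of members acknowledge it.

def quorum(members: int) -> int:
    """Smallest majority of the member count."""
    return members // 2 + 1

def failures_tolerated(members: int) -> int:
    """How many members can fail while the cluster still has quorum."""
    return members - quorum(members)

for n in (1, 2, 3, 4, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {failures_tolerated(n)} failure(s)")
```

Running this shows why even member counts are wasteful: 4 members tolerate only 1 failure, the same as 3, while 5 tolerate 2.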

etcd Backup (CKA Exam Topic!)

# Snapshot backup
ETCDCTL_API=3 etcdctl snapshot save \
  /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status \
  /tmp/etcd-backup.db --write-out=table
CKA Exam: etcd backup and restore is a common exam task. Practice the commands until they're second nature. Always use ETCDCTL_API=3.
Checkpoint

Quiz 1: Control Plane Components

Let's test our understanding of the control plane before moving on to the worker side.

1. Which control plane component is the ONLY one that communicates directly with etcd?

a) kube-scheduler
b) kube-apiserver
c) kube-controller-manager
d) kubelet
Answer: B. The API Server is the only component that reads from and writes to etcd. All other components (scheduler, controller manager, kubelet) interact with the cluster state through the API Server. This centralizes access control and auditing.

2. In the API Server request pipeline, what is the correct order of processing?

a) Authorization -> Authentication -> Admission -> Persistence
b) Admission -> Authentication -> Authorization -> Persistence
c) Authentication -> Admission -> Authorization -> Persistence
d) Authentication -> Authorization -> Admission -> Persistence
Answer: D. The order is: Authentication (who are you?) -> Authorization (are you allowed?) -> Admission Controllers (is the request valid/modified?) -> Persistence (store in etcd). This order ensures unauthenticated requests are rejected earliest.

3. Why does etcd typically run with an odd number of members (3 or 5)?

a) Even numbers cause network conflicts
b) It's a Kubernetes hard requirement
c) Raft consensus requires a majority (quorum) to function, and odd numbers tolerate more failures per total nodes
d) etcd doesn't support even numbers of members
Answer: C. Raft consensus requires a majority (quorum) to commit writes. With 3 members, you can lose 1; with 5, you can lose 2. An even number (4 members) tolerates the same failures as one fewer odd number (3 members), so the extra member adds cost without improving fault tolerance.
Control Plane

The Scheduler: How Pods Find Their Home

Now that we understand API Server and etcd, let's meet the matchmaker that pairs pods with nodes.

Think of the Scheduler as a wedding planner for pods and nodes. A new pod arrives saying "I need 2 CPUs, 4GB RAM, and I'd prefer to be in zone-a." The Scheduler scans all available nodes, scores them on compatibility, and makes the match. No pod gets placed without the Scheduler's approval.

The Scheduling Algorithm

  1. Watch: Detects unscheduled pods (no node assigned)
  2. Filter: Eliminates unfit nodes
    • Not enough CPU/memory? Filtered out
    • Node has a taint the pod can't tolerate? Filtered out
    • nodeSelector doesn't match? Filtered out
  3. Score: Ranks remaining nodes (0-100)
    • Spread pods evenly across nodes
    • Prefer nodes with the image already cached
    • Respect affinity preferences
  4. Bind: Assigns pod to the highest-scoring node
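The watch-filter-score-bind loop can be reduced to a toy function. This is an illustration of the shape of the algorithm, not the real scheduler (which runs many more plugins); the node and pod fields are invented:

```python
# Toy version of the filter-then-score scheduling loop (illustrative).

def schedule(pod, nodes):
    # Filter: drop nodes that cannot fit the pod's resource requests
    feasible = [n for n in nodes
                if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]
    if not feasible:
        return None  # no fit: the pod stays Pending

    # Score: prefer nodes with more free CPU, with a bonus for a cached image
    def score(n):
        return n["free_cpu"] + (10 if pod["image"] in n["images"] else 0)

    # Bind: assign the pod to the highest-scoring node
    return max(feasible, key=score)["name"]

nodes = [
    {"name": "node-1", "free_cpu": 2, "free_mem": 4, "images": []},
    {"name": "node-2", "free_cpu": 1, "free_mem": 8, "images": ["api:1.0"]},
]
print(schedule({"cpu": 1, "mem": 2, "image": "api:1.0"}, nodes))  # node-2
```

Here node-2 wins despite having less free CPU because the image-cached bonus outweighs it, which mirrors how real scoring plugins trade off against each other.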

Scheduling Constraints

# nodeSelector — simple label matching
spec:
  nodeSelector:
    disktype: ssd

# nodeAffinity — expressive rules
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - uksouth-1

# Tolerations — allow scheduling on tainted nodes
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
Control Plane

Controller Manager: The Tireless Worker

The Scheduler places pods. But who ensures the cluster stays in its desired state? Meet the Controller Manager.

Imagine a building inspector who never sleeps. Every second, they walk through the city checking: "Are there really 3 replicas of the payment service running? Is that node still healthy? Does this service still have endpoints?" If anything is out of alignment, they immediately take corrective action.

The Reconciliation Loop

while true:
    actual  = observe_current_state()
    desired = read_desired_state()
    if actual != desired:
        take_action(actual, desired)
    sleep(brief)

This pattern — observe, compare, act — is the heart of Kubernetes. Every controller follows it.
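As a runnable sketch, here is one reconciliation pass in the style of the ReplicaSet controller. This is purely illustrative; real controllers work through the API Server, handle errors, and re-queue:

```python
# One observe -> compare -> act pass of a ReplicaSet-style controller (sketch).

def reconcile(desired_replicas, running_pods):
    """Bring the actual pod list in line with the desired count."""
    if len(running_pods) < desired_replicas:
        # Scale up: create the missing pods
        for i in range(len(running_pods), desired_replicas):
            running_pods.append(f"pod-{i}")
    elif len(running_pods) > desired_replicas:
        # Scale down: delete the surplus pods
        del running_pods[desired_replicas:]
    return running_pods

pods = ["pod-0"]
pods = reconcile(3, pods)   # scale up
print(pods)                 # ['pod-0', 'pod-1', 'pod-2']
pods = reconcile(2, pods)   # scale down
print(pods)                 # ['pod-0', 'pod-1']
```

The key property: the function is idempotent. Run it again with no drift and nothing changes, which is exactly what lets controllers loop forever safely.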

Built-in Controllers

  • ReplicaSet Controller: Ensures N replicas are running
  • Deployment Controller: Manages rolling updates and rollbacks
  • Node Controller: Monitors node health, marks unhealthy nodes
  • Job Controller: Tracks batch jobs to completion
  • Endpoint Controller: Populates Service endpoints
  • ServiceAccount Controller: Creates default accounts for namespaces
  • Namespace Controller: Cleans up when namespaces are deleted
Azure Integration

Cloud Controller Manager

Running Kubernetes on Azure (or any cloud) requires a bridge between K8s abstractions and cloud-specific resources. That's the Cloud Controller Manager.

What It Does

When you create a Kubernetes Service of type LoadBalancer, K8s doesn't know how to create an Azure Load Balancer. The Cloud Controller Manager does. It translates K8s intentions into cloud API calls.

Cloud-Specific Controllers

  • Node Controller: Detects when a cloud VM is deleted and removes the node from K8s
  • Route Controller: Sets up cloud network routes for pod-to-pod traffic
  • Service Controller: Creates/deletes cloud load balancers when K8s Services change

Azure AKS Integration

  • Azure Load Balancers for Service type LoadBalancer
  • Azure Managed Disks for PersistentVolumes
  • Azure Virtual Network integration via Azure CNI
  • Azure AD for authentication and RBAC
  • Azure Container Registry (ACR) for image pull
  • Azure Monitor for metrics and logging

In AKS, the Cloud Controller Manager runs as part of the managed control plane — you don't manage it directly, but understanding it helps you debug networking and storage issues.

Worker Node

kubelet: The Node Agent

We've covered the brain. Now let's look at the hands and feet — starting with the kubelet, which runs on every worker node.

The kubelet is like a site manager on a construction project. City Hall (control plane) sends down blueprints (PodSpecs), and the kubelet makes sure the buildings (containers) get built, stay standing, and reports any problems back to HQ.

Key Responsibilities

  • Registers the node with the API Server
  • Watches for PodSpecs assigned to its node
  • Tells the container runtime to pull images and start containers
  • Runs liveness & readiness probes to check container health
  • Reports node and pod status back to the API Server
  • Mounts volumes into pods

Important Details

  • Runs as a systemd service, not a pod (it must run even if the container runtime fails)
  • Communicates with the API Server over HTTPS
  • Uses the CRI (Container Runtime Interface) to be runtime-agnostic
  • Config file: /var/lib/kubelet/config.yaml
  • Certificate: /var/lib/kubelet/pki/
# Check kubelet status
systemctl status kubelet

# View kubelet logs
journalctl -u kubelet -f
Worker Node

kube-proxy: Network Rules on Every Node

Pods need to talk to each other and to the outside world. kube-proxy makes that possible.

When you create a Kubernetes Service, you get a stable virtual IP (ClusterIP). But pods behind that service come and go. How does traffic find the right pod? kube-proxy programs network rules on each node that redirect traffic from the Service IP to one of the healthy pods behind it.

Proxy Modes

  • iptables mode (default): Programs iptables rules for packet redirection. Fast, but rules grow linearly with services.
  • IPVS mode: Uses Linux kernel IPVS for load balancing. Better performance at scale (thousands of services).
  • userspace mode: Legacy, rarely used. Proxies traffic in user space — slow.

How It Works

Client -> Service ClusterIP:port
              |
   kube-proxy iptables rules
              |
   DNAT to Pod IP:targetPort
              |
  Actual pod receives traffic

Key Facts

  • Runs on every node (usually as a DaemonSet)
  • Watches the API Server for Service and Endpoint changes
  • Does NOT handle pod-to-pod traffic (that's the CNI's job)
  • Handles ClusterIP, NodePort, and LoadBalancer service types
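What those rules accomplish can be sketched in a few lines: traffic addressed to a Service's ClusterIP is rewritten to one healthy backend. The addresses below are invented for illustration, and real iptables rules do the equivalent in the kernel, not in user code:

```python
import random

# Sketch of kube-proxy's effect: DNAT from a Service address to one backend pod.

endpoints = {
    "10.0.0.10:80": [          # Service ClusterIP:port
        "10.244.1.5:8080",     # healthy backend pods (Pod IP:targetPort)
        "10.244.2.7:8080",
    ],
}

def dnat(service_addr):
    backends = endpoints.get(service_addr)
    if not backends:
        return None  # no endpoints: connection refused in a real cluster
    # iptables mode also picks a backend with random statistic matching
    return random.choice(backends)

print(dnat("10.0.0.10:80"))  # one of the two backend addresses
```

When pods behind the Service come and go, only the `endpoints` data changes; clients keep using the stable Service address, which is the whole point of the abstraction.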
Worker Node

Container Runtime: The Engine Room

The kubelet knows what containers to run. The container runtime actually runs them.

The CRI Standard

Kubernetes uses the Container Runtime Interface (CRI) — a plugin API that lets kubelet work with any compliant runtime. This decoupling was a game-changer.

  • containerd: Default in K8s 1.24+, used by AKS and GKE. Lightweight, production-proven.
  • CRI-O: Built specifically for K8s by Red Hat. Used by OpenShift.
  • Docker (dockershim): Removed in K8s 1.24. Docker itself now uses containerd under the hood.

A common misconception: "Kubernetes removed Docker support!" Not exactly. Docker images still work fine. What was removed was the dockershim — a compatibility layer. Docker itself uses containerd internally, so K8s now talks to containerd directly, cutting out the middleman.

The Relationship

Before K8s 1.24: kubelet -> dockershim -> Docker -> containerd
After K8s 1.24:  kubelet -> CRI -> containerd (direct!)

Simpler, faster, and fewer moving parts. Your Docker images are 100% compatible.

Checkpoint

Quiz 2: Node Components

We've covered both sides of the cluster — control plane and worker nodes. Time for a check-in.

1. Why does the kubelet run as a systemd service rather than a Kubernetes pod?

a) It must be running to start pods — it can't depend on the container runtime being managed by itself
b) It's a security requirement mandated by the CKA exam
c) Pods can only run application workloads, not system components
d) systemd is faster than container runtimes
Answer: A. The kubelet is responsible for starting and managing pods on a node. If it ran as a pod itself, there would be a chicken-and-egg problem — nothing could start the kubelet pod. It runs as a systemd service so it's available even if the container runtime has issues.

2. What is the default container runtime in Kubernetes 1.24 and later?

a) Docker
b) rkt (Rocket)
c) containerd
d) CRI-O
Answer: C. Starting with K8s 1.24, dockershim was removed. containerd became the default and most widely used CRI-compliant runtime. It's used by AKS, GKE, and most K8s distributions. CRI-O is also supported but primarily used by OpenShift.

3. What does kube-proxy primarily manage?

a) Direct pod-to-pod communication across nodes
b) Network rules that route traffic from Kubernetes Services to backend pods
c) TLS certificates for encrypted communication
d) DNS resolution within the cluster
Answer: B. kube-proxy maintains network rules (iptables or IPVS) on each node that enable Service abstraction. When traffic hits a Service ClusterIP, kube-proxy's rules redirect it to one of the healthy backend pods. Pod-to-pod networking is handled by the CNI plugin, not kube-proxy.
Installation

kubeadm: Bootstrapping a Cluster

Now that we know every component, let's see how they all come together during cluster creation.

kubeadm is the official tool for creating a K8s cluster from scratch. Think of it as the city planner who lays out roads, builds City Hall, and prepares the districts before anyone moves in.

The Bootstrap Process

  1. kubeadm init on the first node (creates control plane)
  2. Generates certificates and kubeconfig files
  3. Deploys etcd, API Server, Scheduler, Controller Manager as static pods
  4. Installs CoreDNS and kube-proxy
  5. Outputs a join command with a token
  6. kubeadm join on worker nodes to join the cluster

Quick Start Commands

# On the control plane node:
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.10

# Set up kubeconfig
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf \
  $HOME/.kube/config

# Install a CNI plugin (e.g., Calico)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# On each worker node:
sudo kubeadm join 192.168.1.10:6443 \
  --token abc123.xyz789 \
  --discovery-token-ca-cert-hash \
  sha256:<hash>

CKA Exam: You may be asked to bootstrap a cluster or add nodes. Know these commands.

Networking

Cluster Networking: How Pods Communicate

With components in place, let's tackle one of the trickiest topics in K8s — networking. How does a pod on Node A talk to a pod on Node B?

Kubernetes makes a bold promise: every pod gets its own IP address, and every pod can communicate with every other pod without NAT. This is the "flat network" model. No matter which node a pod is on, it can reach any other pod by IP.

The K8s Network Model Rules

  1. Pod-to-Pod: Every pod can reach every other pod by IP (no NAT)
  2. Node-to-Pod: Every node can reach every pod by IP (no NAT)
  3. Pod-sees-self: A pod sees its own IP as others see it

Types of Communication

  • Container-to-Container (same pod): localhost
  • Pod-to-Pod (same node): Bridge network
  • Pod-to-Pod (different nodes): Overlay or routed network via CNI
  • Pod-to-Service: Via kube-proxy rules (ClusterIP)
  • External-to-Service: Via NodePort, LoadBalancer, or Ingress

DNS in Kubernetes

CoreDNS runs as a cluster addon and provides DNS resolution:

# Service DNS format:
<service>.<namespace>.svc.cluster.local

# Examples:
payment-api.production.svc.cluster.local
redis.cache.svc.cluster.local

# Pod DNS (if enabled):
10-244-1-5.default.pod.cluster.local

Pods in the same namespace can use short names: just payment-api instead of the full FQDN.
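The FQDN pattern is worth committing to muscle memory. A tiny helper makes the pieces explicit (assumes the default `cluster.local` cluster domain; the helper itself is just for illustration):

```python
# Build a Service FQDN from the <service>.<namespace>.svc.<domain> pattern.

def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("payment-api", "production"))
# payment-api.production.svc.cluster.local
print(service_fqdn("redis", "cache"))
# redis.cache.svc.cluster.local
```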

Azure Focus

CNI Plugins: Making the Network Work

K8s defines the networking rules, but a CNI plugin actually implements them. Think of it as the difference between traffic laws and the actual roads.

Popular CNI Plugins

  • Azure CNI: Assigns Azure VNet IPs directly to pods. First-class Azure citizen.
  • Calico: BGP-based routing + network policies. Very popular for on-prem.
  • Flannel: Simple VXLAN overlay. Easy to set up, limited features.
  • Cilium: eBPF-based. High performance, advanced observability.
  • Weave Net: Mesh overlay. Easy setup, encrypts by default.

Azure CNI Deep Dive

With Azure CNI, pods get real Azure VNet IP addresses. This means pods can communicate directly with other Azure resources (VMs, databases) without any translation. The pod is a first-class network citizen in your Azure VNet.

Azure CNI vs kubenet

  • Azure CNI: Pod IPs from VNet subnet. Better performance, more IPs consumed.
  • kubenet: Pod IPs from separate range, NAT for cross-node. Simpler, uses fewer IPs.
  • Azure CNI Overlay: Newer option. Pod IPs from overlay, node IPs from VNet. Best of both worlds.
Security

RBAC: Who Can Do What

A production cluster without RBAC is like an office building where every employee has the master key. Let's fix that.

Imagine three teams sharing a cluster: Payments, Inventory, and Analytics. Without RBAC, a junior developer on the Analytics team could accidentally kubectl delete namespace payments and take down the entire payment system. RBAC prevents this by defining exactly who can do what, and where.

RBAC Building Blocks

  • Role: Defines permissions within a single namespace (e.g., "can list pods in namespace dev")
  • ClusterRole: Defines permissions cluster-wide (e.g., "can list nodes")
  • RoleBinding: Grants a Role to a user/group/service account within a namespace
  • ClusterRoleBinding: Grants a ClusterRole to a user/group cluster-wide

The pattern: Define WHAT actions are allowed (Role), then WHO gets those permissions (Binding).

The Permission Model

  • Subjects: Users, Groups, ServiceAccounts
  • Verbs: get, list, watch, create, update, patch, delete
  • Resources: pods, services, deployments, secrets, etc.
  • Namespaces: Scope of the permission

RBAC is additive only — there are no "deny" rules. If no rule grants permission, it's denied by default.
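The additive model fits in a few lines of code. This sketch is not how the API Server evaluates RBAC internally; it just demonstrates the "some rule must allow it, otherwise deny" semantics with invented binding data:

```python
# Additive-only authorization in miniature: no deny rules exist; access is
# granted only if at least one binding matches, and denied by default.

def can_i(bindings, subject, verb, resource, namespace):
    for b in bindings:
        if (subject in b["subjects"]
                and b["namespace"] in (namespace, "*")
                and verb in b["verbs"]
                and resource in b["resources"]):
            return True
    return False  # default deny

bindings = [{
    "subjects": ["jane"], "namespace": "dev",
    "verbs": ["get", "list", "watch"], "resources": ["pods"],
}]
print(can_i(bindings, "jane", "list", "pods", "dev"))    # True
print(can_i(bindings, "jane", "delete", "pods", "dev"))  # False
```

This mirrors what `kubectl auth can-i` reports: absence of a matching rule is a "no", and there is no rule you can write to turn a "yes" back into a "no".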

Security

RBAC in Practice: YAML Examples

Let's make RBAC concrete with real YAML that you'll write in production and on the CKA exam.

Role + RoleBinding (Namespace-scoped)

# Role: can read pods in "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Bind the role to user "jane"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

ClusterRole + ClusterRoleBinding

# ClusterRole: can read nodes (cluster-wide)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
---
# Bind to group "ops-team" cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes-global
subjects:
- kind: Group
  name: ops-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io

Verify Permissions

kubectl auth can-i list pods --namespace dev --as jane
# yes
kubectl auth can-i delete pods --namespace dev --as jane
# no
Security

Service Accounts: Identity for Pods

RBAC controls who can do what. But pods need identities too — that's where Service Accounts come in.

Think about it: when your application pod needs to list pods in its namespace (for service discovery) or read a secret, it needs to authenticate to the API Server. But pods aren't humans — they can't log in. Service Accounts give pods an identity that RBAC can authorize.

Key Facts

  • Every namespace gets a default service account
  • Pods use the default SA unless you specify otherwise
  • K8s 1.24+ uses bound tokens (time-limited, audience-scoped) instead of static secret tokens
  • Best practice: create dedicated SAs with minimal permissions per workload

Create and Use a Service Account

# Create a service account
kubectl create serviceaccount app-sa -n dev

# Use it in a pod
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: dev
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: true
  containers:
  - name: my-app
    image: my-app:1.0
# Bind a role to the service account
kubectl create rolebinding app-sa-binding \
  --role=pod-reader \
  --serviceaccount=dev:app-sa \
  -n dev
Checkpoint

Quiz 3: Networking and RBAC

We've covered networking and security. These are crucial for the CKA exam — let's make sure they're solid.

1. In the Kubernetes network model, which statement is TRUE?

a) Pods on different nodes must use NAT to communicate
b) Each container gets its own unique IP address
c) Pods can only communicate within their own namespace
d) Every pod gets its own IP and can reach any other pod without NAT
Answer: D. The K8s network model requires that every pod gets a unique IP and all pods can communicate with all other pods without NAT (the "flat network" model). IPs are per-pod, not per-container. Containers within a pod share the same IP and communicate via localhost. Network Policies can restrict traffic, but the base model allows all pod-to-pod communication.

2. What is the difference between a Role and a ClusterRole?

a) A Role is scoped to a single namespace; a ClusterRole applies cluster-wide or can be bound per-namespace
b) A Role is for users; a ClusterRole is for service accounts
c) A Role allows read-only access; a ClusterRole allows write access
d) There is no difference; they are interchangeable
Answer: A. A Role defines permissions within a specific namespace. A ClusterRole defines permissions that can apply cluster-wide (via ClusterRoleBinding) or can be reused across namespaces (via RoleBinding referencing a ClusterRole). ClusterRoles are also needed for non-namespaced resources like nodes.

3. With Azure CNI, where do pod IP addresses come from?

a) A separate overlay network managed by Kubernetes
b) Docker's default bridge network
c) The Azure Virtual Network (VNet) subnet directly
d) An internal DHCP server running in the cluster
Answer: C. Azure CNI assigns pods IP addresses directly from the Azure VNet subnet. This means pods are first-class network citizens in your Azure environment and can communicate directly with other Azure resources (VMs, databases, services) without NAT or proxying.
Access Management

Kubeconfig: Managing Cluster Access

You've set up RBAC in the cluster. But how does kubectl know which cluster to talk to and how to authenticate? That's the kubeconfig file.

The Three Pillars

  • Clusters: Server URL and certificate authority for each cluster
  • Users: Credentials (certificates, tokens, or auth plugins)
  • Contexts: Binds a cluster + user + namespace together
# Kubeconfig structure
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://myaks.uksouth.azmk8s.io:443
    certificate-authority-data: LS0t...
  name: myAKSCluster
users:
- name: clusterAdmin
  user:
    client-certificate-data: LS0t...
    client-key-data: LS0t...
contexts:
- context:
    cluster: myAKSCluster
    user: clusterAdmin
    namespace: production
  name: aks-prod
current-context: aks-prod

Essential Commands

# View current config
kubectl config view

# List contexts
kubectl config get-contexts

# Switch context
kubectl config use-context aks-prod

# Set default namespace for context
kubectl config set-context --current \
  --namespace=production

# Merge kubeconfig files
export KUBECONFIG=~/.kube/config:~/.kube/aks-config
kubectl config view --flatten > merged-config

Default location: ~/.kube/config. Override with KUBECONFIG env var or --kubeconfig flag.

Security: Kubeconfig files contain credentials. Never commit them to git. Treat them like passwords.
Operations

Cluster Upgrades: The Careful Dance

Kubernetes releases a new minor version roughly every 4 months. Upgrading a production cluster without downtime requires a methodical approach.

Think of upgrading a cluster like renovating a hotel while guests are still staying in it. You can't shut everything down at once. You upgrade one floor at a time, moving guests around as needed. The control plane goes first, then worker nodes one by one.

kubeadm Upgrade Process

  1. Upgrade kubeadm on the control plane node
  2. kubeadm upgrade plan — shows available versions
  3. kubeadm upgrade apply v1.29.0 — upgrades control plane
  4. Drain each worker node (one at a time)
  5. Upgrade kubelet & kubectl on the worker
  6. Uncordon the worker node
  7. Repeat for each worker

Upgrade Commands (CKA Exam!)

# On control plane:
sudo apt-get update
sudo apt-get install -y kubeadm=1.29.0-*
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.29.0

# Upgrade kubelet on control plane:
sudo apt-get install -y \
  kubelet=1.29.0-* kubectl=1.29.0-*
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# On each worker node:
kubectl drain node-1 --ignore-daemonsets
# (SSH to node-1, upgrade kubeadm, kubelet)
sudo kubeadm upgrade node
kubectl uncordon node-1

Rule: You can only upgrade one minor version at a time (1.28 -> 1.29, not 1.27 -> 1.29).
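That skew rule can be expressed as a quick check. The helper name and logic are my own illustration (patch-level upgrades within the same minor are also fine; only skipping a minor is forbidden):

```python
# Check the one-minor-version-at-a-time upgrade rule (illustrative helper).

def minor_skew_ok(current: str, target: str) -> bool:
    """True if target is the same or the next minor version of current."""
    cur_major, cur_minor = map(int, current.split(".")[:2])
    tgt_major, tgt_minor = map(int, target.split(".")[:2])
    return tgt_major == cur_major and 0 <= tgt_minor - cur_minor <= 1

print(minor_skew_ok("1.28.4", "1.29.0"))  # True:  one minor step
print(minor_skew_ok("1.27.3", "1.29.0"))  # False: skips 1.28
print(minor_skew_ok("1.29.0", "1.29.1"))  # True:  patch upgrade
```

To go from 1.27 to 1.29 you must upgrade twice: 1.27 -> 1.28, then 1.28 -> 1.29, repeating the full control-plane-then-workers dance each time.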

Azure Focus

AKS Upgrades: The Managed Way

Now that we understand the manual upgrade process, let's see how AKS simplifies it dramatically.

With AKS, Microsoft handles the control plane upgrade for you. For worker nodes, AKS uses a surge upgrade strategy: it adds extra nodes to the pool, cordons and drains old nodes, and rolls forward. You get the upgrade with minimal disruption and much less manual work.

AKS Upgrade Strategy

  • Surge nodes: Temporarily adds nodes during upgrade (configurable: 1 extra, 33%, etc.)
  • Max unavailable: Controls how many nodes can be upgrading simultaneously
  • Pod Disruption Budgets: Respected during drain to protect your apps
  • Auto-upgrade channels: none, patch, stable, rapid, node-image

AKS Upgrade Commands

# Check available versions
az aks get-upgrades \
  --resource-group myRG \
  --name myCluster \
  --output table

# Upgrade cluster
az aks upgrade \
  --resource-group myRG \
  --name myCluster \
  --kubernetes-version 1.29.0

# Enable auto-upgrade
az aks update \
  --resource-group myRG \
  --name myCluster \
  --auto-upgrade-channel stable

# Upgrade just node images (OS patches)
az aks nodepool upgrade \
  --resource-group myRG \
  --cluster-name myCluster \
  --name nodepool1 \
  --node-image-only
Disaster Recovery

etcd Backup & Restore: Your Safety Net

Remember our city analogy? etcd is the filing room. Let's talk about fire drills and disaster recovery.

It's 2 AM. An admin accidentally deletes the production namespace with all its deployments, services, and secrets. Without an etcd backup, the only recovery is to recreate everything from scratch — if you even remember what was running. With a recent backup, you restore the cluster to its pre-disaster state in minutes.

Backup

# Create a snapshot
ETCDCTL_API=3 etcdctl snapshot save \
  /backup/etcd-$(date +%Y%m%d).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status \
  /backup/etcd-20240115.db \
  --write-out=table

Schedule regular backups with a cron job. In AKS, etcd backups are managed by Azure.

Restore

# Stop the API Server (if using static pods)
# Move the manifest temporarily
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/

# Restore the snapshot
ETCDCTL_API=3 etcdctl snapshot restore \
  /backup/etcd-20240115.db \
  --data-dir=/var/lib/etcd-restored

# Update etcd config to use new data dir
# Edit: /etc/kubernetes/manifests/etcd.yaml
# Change --data-dir=/var/lib/etcd-restored

# Move API Server manifest back
sudo mv /tmp/kube-apiserver.yaml \
  /etc/kubernetes/manifests/
CKA Exam: etcd backup/restore is a high-frequency exam task. Practice the full cycle.
Reliability

High Availability: Multi-Master Clusters

A single control plane node is a single point of failure. For production, you need high availability.

                Load Balancer (:6443)
                        |
        +---------------+---------------+
        |               |               |
    Master-1        Master-2        Master-3
   (API, etcd)     (API, etcd)     (API, etcd)
        |               |               |
        +---------------+---------------+
                        |
        +---------------+---------------+
        |               |               |
    Worker-1        Worker-2        Worker-3

HA Requirements

  • 3+ control plane nodes (odd number for etcd quorum)
  • Load balancer in front of API Servers
  • Shared or replicated etcd

Two HA Topologies

Stacked etcd

etcd runs on the same nodes as the control plane components. Simpler to set up, but a node failure loses both a control plane member and an etcd member.

Used by: AKS, most managed services

External etcd

etcd runs on its own dedicated nodes, separate from the control plane. More resilient (etcd and control plane failures are independent) but more complex and costly.

Used by: Large enterprise, high-compliance environments

AKS handles all of this for you: the managed control plane is highly available, with a 99.95% uptime SLA when the paid SLA tier is enabled.

Operations

Node Management: Cordon, Drain, and Taint

Sometimes you need to take a node out of service — for upgrades, maintenance, or decommissioning. Here's how to do it gracefully.

Cordon

Mark a node as unschedulable. Existing pods keep running, but no new pods will be placed here.

kubectl cordon node-2

# Status shows:
# SchedulingDisabled

Like putting a "No Vacancy" sign on a hotel floor.

Drain

Evict all pods from a node. Pods are rescheduled to other nodes. The node is also cordoned.

kubectl drain node-2 \
  --ignore-daemonsets \
  --delete-emptydir-data

# Pods gracefully terminated
# and rescheduled elsewhere

Like evacuating a hotel floor for renovation.

Uncordon

Mark the node as schedulable again. New pods can be placed here once more.

kubectl uncordon node-2

# Status: Ready
# New pods can now be
# scheduled here

Like reopening the hotel floor after renovation.

CKA Exam: Drain and uncordon are common tasks in upgrade questions. Always use --ignore-daemonsets to avoid errors from DaemonSet-managed system pods.

Operations

Resource Management: Requests and Limits

Without resource controls, one runaway pod could consume all the CPU on a node and starve every other workload. Let's prevent that.

Requests vs Limits

  • Request: The minimum resources a pod needs. The Scheduler uses this to find a suitable node.
  • Limit: The maximum resources a pod can use. The kubelet enforces this — exceeding the memory limit causes an OOMKill, while CPU usage beyond the limit is throttled.
spec:
  containers:
  - name: my-app
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"    # 0.25 CPU core
      limits:
        memory: "512Mi"
        cpu: "500m"    # 0.5 CPU core

Namespace-Level Controls

ResourceQuota

Limits total resources a namespace can consume: max CPU, memory, number of pods, etc.

LimitRange

Sets default requests/limits for pods that don't specify them, and enforces min/max per container.
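As a sketch, both controls applied to one namespace might look like this (the namespace name and all values are assumptions for this example):

```yaml
# Illustrative ResourceQuota + LimitRange for a "dev" namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # sum of all pod CPU requests
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # max pod count in the namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-defaults
  namespace: dev
spec:
  limits:
  - type: Container
    default:                 # applied as limits when a pod omits them
      cpu: 500m
      memory: 512Mi
    defaultRequest:          # applied as requests when a pod omits them
      cpu: 250m
      memory: 256Mi
```

With the LimitRange in place, pods that omit resources still count sensibly against the quota instead of being rejected or unbounded.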

QoS Classes

  • Guaranteed: requests == limits (highest priority, last to be evicted)
  • Burstable: requests < limits (medium priority)
  • BestEffort: no requests/limits set (first to be evicted under pressure)
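For contrast with the Burstable example above, a pod whose requests equal its limits for every container lands in the Guaranteed class. A minimal sketch (name and image are illustrative):

```yaml
# Guaranteed QoS: requests == limits for all containers
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "512Mi"
        cpu: "500m"
```

You can confirm the assigned class with `kubectl get pod guaranteed-demo -o jsonpath='{.status.qosClass}'`.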
Checkpoint

Quiz 4: Operations

Final quiz! Let's test our knowledge of day-2 operations — upgrades, backups, and node management.

1. During a cluster upgrade, what is the correct order?

a) Worker nodes first, then control plane
b) Control plane first, then worker nodes one at a time
c) All nodes simultaneously for consistency
d) etcd first, then workers, then control plane
Answer: B. Always upgrade the control plane first, then worker nodes one at a time. The control plane must be at the new version before the workers, because kubelets running a newer version than the API Server are not supported. Upgrading one node at a time preserves availability.

2. What does kubectl drain node-1 do?

a) Deletes the node from the cluster permanently
b) Stops the kubelet service on the node
c) Only marks the node as unschedulable
d) Cordons the node and gracefully evicts all pods so they reschedule on other nodes
Answer: D. kubectl drain both cordons the node (marks it unschedulable) AND evicts all pods. The pods are gracefully terminated and the controllers (e.g., ReplicaSet) reschedule them on other available nodes. Use --ignore-daemonsets since DaemonSet pods can't be evicted.

3. A pod has requests: 256Mi memory and limits: 512Mi memory. What QoS class is it?

a) Burstable
b) Guaranteed
c) BestEffort
d) Standard
Answer: A. A pod is Burstable when requests and limits are set but not equal. Guaranteed requires requests == limits for all containers. BestEffort means no requests or limits are set at all. There is no "Standard" QoS class in Kubernetes.
Security

Admission Controllers: The Gatekeepers

Remember the API Server pipeline? Admission Controllers are the last checkpoint before a request is persisted. They can validate or even modify requests.

Two Types

Validating

Checks if a request meets certain criteria and rejects it if not. Example: "Reject pods without resource limits."

Mutating

Modifies the request before it's persisted. Example: "Automatically inject a sidecar container into every pod."

Mutating runs first, then validating. This ensures modifications are also validated.

Built-in Admission Controllers

  • NamespaceLifecycle: Prevents creating objects in terminating namespaces
  • LimitRanger: Enforces LimitRange defaults
  • ResourceQuota: Enforces namespace quotas
  • ServiceAccount: Auto-mounts SA tokens
  • DefaultStorageClass: Assigns default storage class to PVCs
  • MutatingAdmissionWebhook: Calls external webhooks to mutate requests
  • ValidatingAdmissionWebhook: Calls external webhooks to validate requests

Tools like OPA Gatekeeper and Kyverno use webhook-based admission controllers to enforce custom policies.
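To make the webhook mechanism concrete, here is a sketch of a ValidatingWebhookConfiguration that sends every pod CREATE to an external policy service. The service name, namespace, and path are assumptions for this example:

```yaml
# Illustrative webhook registration: the API Server POSTs an
# AdmissionReview to the named service for every pod creation
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-resource-limits
webhooks:
- name: limits.policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail        # reject requests if the webhook is unreachable
  clientConfig:
    service:
      name: policy-webhook
      namespace: policy-system
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```

Note the failurePolicy trade-off: Fail is safer for security policies, but a broken webhook can then block all pod creation cluster-wide.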

Security

Network Policies: Firewall Rules for Pods

By default, all pods can talk to all other pods. In production, that's a security risk. Network Policies let you control traffic flow.

Imagine your cluster runs a web frontend, an API, and a database. Should the frontend be able to connect directly to the database? No! Only the API should talk to the database. Network Policies enforce these boundaries.

Key Concepts

  • Applied to pods via label selectors
  • Control both Ingress (incoming) and Egress (outgoing) traffic
  • Require a CNI that supports them (Calico, Cilium, Azure CNI with Network Policy)
  • Are additive — if any policy selects a pod, only explicitly allowed traffic is permitted

Example: Allow Only API to DB

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 5432

This policy says: "For pods labeled app=database, only allow incoming TCP traffic on port 5432 from pods labeled app=api." All other incoming traffic to the database is denied.
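In practice, an allow policy like this is usually paired with a default-deny rule in the same namespace, so any pod not covered by an explicit allow gets no traffic at all. A minimal sketch:

```yaml
# Deny all ingress to every pod in the namespace; specific allow
# policies then permit only the traffic you intend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}        # empty selector = all pods in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so nothing is allowed
```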

Certification

CKA Exam Tips: Cluster Architecture Domain (25%)

This domain is worth a quarter of your exam score. Here's what to focus on and how to practice.

High-Frequency Exam Tasks

  1. etcd backup and restore — Practice the full cycle with ETCDCTL_API=3
  2. Cluster upgrades — Know the kubeadm upgrade workflow
  3. RBAC creation — Create Roles, ClusterRoles, and bindings quickly
  4. Node drain and cordon — Know the flags
  5. kubeconfig management — Switch contexts, set namespaces
  6. Troubleshoot kubelet — Check service status, read logs
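For the RBAC task, it helps to know the manifest shape cold. A sketch granting a user read access to pods in one namespace (the user, role name, and namespace are illustrative):

```yaml
# Illustrative Role + RoleBinding: let user "jane" read pods in "dev".
# Generate skeletons fast on the exam with:
#   kubectl create role pod-reader --verb=get,list,watch \
#     --resource=pods -n dev --dry-run=client -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]          # "" = the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Verify quickly with `kubectl auth can-i list pods -n dev --as jane`.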

Speed Tips

  • Use kubectl create with --dry-run=client -o yaml to generate YAML quickly
  • Set up aliases at the start: alias k=kubectl
  • Use kubectl explain <resource> instead of the docs when possible
  • Bookmark the official K8s docs pages for RBAC, etcd, kubeadm
  • Practice with killer.sh (included with exam purchase)
Time management: You have 2 hours for ~17 tasks. That's about 7 minutes per task. Skip hard questions and come back. Partial credit is possible.
Summary

Key Takeaways

Let's return to our mission. Your team lead asked you to set up a production cluster. Now you know how.

Architecture Mastery

  1. API Server is the single entry point; every request passes through auth, authz, and admission
  2. etcd stores all state — back it up or risk losing everything
  3. Scheduler places pods via filter + score
  4. Controllers run reconciliation loops to maintain desired state
  5. kubelet manages pods on each node; kube-proxy manages network rules
  6. CNI plugins (Azure CNI) implement the flat network model

Operations Confidence

  1. RBAC controls access with Roles + Bindings (additive only, deny by default)
  2. Upgrades go control plane first, then workers one at a time
  3. Drain/cordon/uncordon for safe node maintenance
  4. Resource requests/limits prevent noisy neighbors
  5. Network Policies restrict pod-to-pod traffic
  6. AKS automates the hard parts — use it

Next session: Workloads and Scheduling — Deployments, StatefulSets, Jobs, CronJobs, and the art of keeping your applications running exactly how you want them.

Interactive

Questions & Discussion

We've covered cluster architecture from top to bottom. What would you like to explore further?

Architecture

Control plane, worker nodes, networking, CNI

Security

RBAC, service accounts, admission controllers, network policies

Operations

Upgrades, etcd backup, node management, resource limits

Next: Workloads & Scheduling
