Introduction to Kubernetes
From containers to orchestration — the full story
Presentation 1 of 8 | CKA + CKAD Certification Track
You just joined a company that deploys 50 microservices across a dozen servers. The deployment process? A shared wiki page with 47 steps, half of them outdated. Last Friday, someone deployed the payment service to the wrong server, and the whole checkout flow went down for two hours.
Your team lead looks at you and says: "We need to fix this. I've heard about this thing called Kubernetes..."
By the end of this presentation, you'll understand exactly why Kubernetes is the answer to this problem — and how it works under the hood.
We'll follow the journey from "why do we need this?" to "how does it all fit together?"
Think about the last time you deployed software manually. You SSH into a server, pull the latest code, restart the service, and hope nothing breaks. Now multiply that by 50 services, across 12 servers, with 5 developers all deploying at different times.
To understand Kubernetes, we first need to understand what it orchestrates. Let's compare the two main approaches to isolating applications.
| Aspect | Virtual Machine | Container |
|---|---|---|
| What it is | Full OS running on a hypervisor | Isolated process sharing the host OS kernel |
| Boot time | Minutes | Seconds (or less) |
| Size | Gigabytes (full OS image) | Megabytes (just app + dependencies) |
| Resource overhead | High — each VM runs its own kernel | Low — shares the host kernel |
| Isolation | Strong — hardware-level | Good — process-level via namespaces/cgroups |
| Density | ~10-20 per host | ~100s per host |
| Portability | Less portable (hypervisor-dependent) | Highly portable (runs anywhere with a container runtime) |
Containers won because they gave us VM-like isolation at a fraction of the cost. But they created a new problem: how do you manage hundreds of them?
Now that we see why containers matter, let's talk about the tool that made them mainstream.
In 2013, Solomon Hykes demoed Docker at a PyCon lightning talk. In just 5 minutes, he showed how any application could be packaged into a lightweight, portable container. The audience went wild. Within a year, every major tech company was adopting Docker.
```dockerfile
# A simple Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
This file guarantees the app runs the same way everywhere — your laptop, CI server, staging, production. The "works on my machine" problem is solved.
Let's make sure we have the foundations solid before moving on to Kubernetes.
Now that we understand containers, let's meet the system that manages them at scale.
By 2014, companies were running thousands of containers. Docker was great at running one container, but who decides which server runs which container? What happens when a server crashes? How do you scale from 3 containers to 300? Someone needed to be the conductor of this orchestra.
The name "Kubernetes" comes from the Greek word for "helmsman" or "pilot" — the person who steers the ship. The abbreviation "K8s" replaces the 8 letters between K and s.
Every superhero has an origin story. Kubernetes is no different.
By open-sourcing Kubernetes, Google did something strategic and generous at the same time. They gave the world a platform that:
Fun fact: Google has said it launches over 2 billion containers per week using Borg. Kubernetes brings that expertise to everyone.
Let's look at the numbers that tell the story of K8s adoption.
You don't have to build a cluster from scratch. Every major cloud provider offers a managed Kubernetes service.
Azure Kubernetes Service
Elastic Kubernetes Service
Google Kubernetes Engine
Why managed? Running your own K8s control plane is like owning a restaurant and also being the plumber. Managed services handle the infrastructure so you can focus on deploying your applications.
Let's connect each benefit back to the deployment nightmare we started with.
A container crashes at 3 AM? K8s detects it and automatically restarts it. No pager alerts, no manual intervention. Remember our story — the payment service going down? K8s would have restarted it in seconds.
Black Friday traffic spike? K8s scales your pods from 3 to 30 based on CPU/memory usage. When traffic drops, it scales back down. You pay only for what you use.
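That "3 to 30 based on CPU" behavior is configured declaratively. A minimal HorizontalPodAutoscaler sketch (the Deployment name `my-app` is illustrative):

```yaml
# Illustrative HPA (autoscaling/v2): scale the my-app Deployment
# between 3 and 30 replicas, targeting 70% average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 30
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

When average CPU across the pods exceeds the target, the HPA adds replicas; when it drops, the HPA scales back down.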
Deploy a new version while the old one is still serving traffic. If the new version fails health checks, K8s automatically rolls back. Zero-downtime deployments become the default, not the exception.
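The rollout behavior is tunable per Deployment. A sketch of the relevant spec fragment, assuming 3 replicas:

```yaml
# Illustrative Deployment fragment: replace pods one at a time,
# never dropping below the desired replica count during a rollout
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all 3 replicas serving throughout
      maxSurge: 1         # allow one extra pod above the desired count
```

If a rollout goes bad, `kubectl rollout undo deployment/my-app` reverts to the previous revision.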
Containers need to find each other. K8s provides built-in DNS so services can communicate by name, not IP address. No more hardcoding server addresses in config files.
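That name-based discovery comes from the Service object. A minimal sketch (names and ports are illustrative):

```yaml
# Illustrative Service: other pods can now reach this app at
# http://my-app (or my-app.<namespace>.svc.cluster.local)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app      # routes to pods carrying this label
  ports:
  - port: 80         # the Service's port
    targetPort: 80   # the container's port
```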
Great progress! Let's test what we've learned about Kubernetes before diving into its architecture.
Now that we know what K8s does, let's open the hood and see how it's built. Think of a K8s cluster as a team with clear roles.
Makes all the decisions: where to place pods, how to respond to failures, what the desired state should be. You talk to the control plane; it talks to the workers.
Actually run your application containers. Each worker node has an agent (kubelet) that takes orders from the control plane and reports back on status.
Let's meet each component. Think of the control plane as the management team of a company.
The Front Door. Every request — from kubectl, the dashboard, or other components — goes through the API Server. It validates requests, authenticates users, and is the only component that talks to etcd.
The Memory. A distributed key-value store that holds all cluster state. Every pod, service, config, and secret is stored here. If etcd is lost and there's no backup, the entire cluster state is gone.
The Matchmaker. When a new pod needs a home, the Scheduler evaluates available nodes based on resources, affinity rules, and constraints, then assigns the pod to the best-fit node.
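The hints the Scheduler works with live in the pod spec itself. A sketch, where the `disktype: ssd` node label is a hypothetical example:

```yaml
# Illustrative pod spec fragment: what the Scheduler evaluates
spec:
  containers:
  - name: my-app
    image: nginx:1.25
    resources:
      requests:            # only nodes with this much free capacity qualify
        cpu: "250m"
        memory: "64Mi"
  nodeSelector:
    disktype: ssd          # hypothetical label: only matching nodes are eligible
```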
The Enforcer. Runs control loops that continuously compare the actual state with the desired state. If you said "I want 3 replicas" and one dies, the controller creates a new one.
The control plane makes decisions, but worker nodes do the actual work. Each worker has three key components.
The Node Agent. Runs on every worker node. It receives pod specifications from the API Server and ensures the containers described in those specs are running and healthy.
Think of kubelet as a site manager who takes blueprints from HQ and makes sure the building gets built correctly.
The Network Plumber. Maintains network rules on each node that allow pods to communicate with each other and with the outside world. Handles load balancing for Services.
Without kube-proxy, your pods would be isolated islands with no way to reach each other.
The Engine. The software that actually runs containers. K8s supports any runtime that implements the Container Runtime Interface (CRI).
containerd is the default. Docker (dockershim) was removed in K8s 1.24. CRI-O is another popular option.
This is the single most important concept in Kubernetes. It changes how you think about infrastructure.
"Step 1: SSH into server-3. Step 2: Pull the latest image. Step 3: Stop the old container. Step 4: Start the new one. Step 5: Check if it's healthy..."
You tell the system HOW to do every step. If something fails halfway through, you're left in a broken state.
"I want 3 replicas of my-app:v2, each with 256MB RAM, exposed on port 80." K8s figures out the rest.
You tell the system WHAT you want. K8s continuously works to make reality match your declaration.
```yaml
# You declare this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  ...
```

K8s does all of this:
- Finds nodes with enough resources
- Pulls the container image
- Starts 3 pods across nodes
- Sets up networking
- Monitors health continuously
- Restarts any pod that fails
- Maintains 3 replicas forever
We've covered the cluster architecture. Let's make sure the key components are clear.
Now that we understand the cluster architecture, let's zoom in on the most fundamental object you'll work with.
Imagine you're moving to a new apartment. A container is like a single box with all your kitchen stuff. A Pod is the apartment unit itself — it can hold one or more boxes (containers) that need to share the same space (network, storage).
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
You've defined a pod. But which node will it land on? Here's how the scheduling dance works.
Most pods have one container. But sometimes, your app needs a helper. That's where multi-container patterns shine.
Imagine a web server that needs its logs shipped to a central logging system. You could build log shipping into the app, or you could add a sidecar container that reads the log files and forwards them. The app stays simple; the sidecar handles the cross-cutting concern.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging
spec:
  containers:
  - name: web-server
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: fluentd:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}
```
Now that we understand the theory, let's talk about how you'll actually run Kubernetes in practice.
Local tools (minikube, kind) — perfect for learning and testing
Managed cloud services (AKS, EKS, GKE) — what you'll use at work
For this training, we'll use AKS for cloud exercises and minikube/kind for local practice.
Since Civica uses Azure, let's take a closer look at AKS — the managed Kubernetes service we'll use throughout this training.
```bash
# Create a resource group
az group create \
  --name myResourceGroup \
  --location uksouth

# Create an AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys

# Get credentials
az aks get-credentials \
  --resource-group myResourceGroup \
  --name myAKSCluster

# Verify connection
kubectl get nodes
```
Every interaction with a Kubernetes cluster starts with kubectl (pronounced "kube-control" or "kube-cuddle" — the community disagrees).
```
kubectl [command] [resource] [name] [flags]
```
```bash
# View cluster info
kubectl cluster-info
kubectl get nodes

# Work with pods
kubectl get pods
kubectl get pods -o wide
kubectl describe pod my-app
kubectl logs my-app

# Create resources
kubectl apply -f deployment.yaml
kubectl create namespace dev

# Delete resources
kubectl delete pod my-app
kubectl delete -f deployment.yaml
```
```bash
# Enable shell completion and a short alias
source <(kubectl completion bash)
alias k=kubectl
```

CKA/CKAD exam tip: You'll spend the entire exam in a terminal with kubectl. Speed with this tool directly translates to exam performance. Practice until the commands are muscle memory.
Almost done! Let's check our understanding of pods and the tools we'll use daily.
kubectl describe shows comprehensive details about a resource, including metadata, spec, status, conditions, and events. Events are especially useful for debugging (e.g., image pull errors, scheduling failures). Note: there is no kubectl inspect command.

K8s isn't the only orchestrator out there. Let's understand why it won and when alternatives make sense.
| Feature | Kubernetes | Docker Swarm | HashiCorp Nomad |
|---|---|---|---|
| Complexity | High learning curve | Simple, quick setup | Moderate |
| Scalability | Thousands of nodes | Hundreds of nodes | Thousands of nodes |
| Community | Massive (CNCF ecosystem) | Declining | Growing |
| Workloads | Containers only | Containers only | Containers, VMs, bare-metal |
| Auto-scaling | Built-in (HPA, VPA, Cluster) | Manual | External (Autoscaler plugin) |
| Service mesh | Istio, Linkerd, etc. | Limited | Consul Connect |
| Cloud support | All major clouds (AKS, EKS, GKE) | Limited | Self-managed primarily |
| Best for | Large-scale microservices | Small, simple setups | Multi-workload environments |
Docker Swarm was simpler but lost the market. Nomad is excellent for mixed workloads. For most organizations in 2024+, Kubernetes is the industry standard.
One of the goals of this training is to prepare you for the CKA exam. Here's what you're signing up for.
Key insight: The CKA is a hands-on exam. You won't answer multiple-choice questions — you'll fix broken clusters, create deployments, and troubleshoot real issues in a live environment. Practice, practice, practice.
While CKA focuses on cluster administration, CKAD is about building and deploying applications on Kubernetes.
If you're a DevOps/Platform engineer, start with CKA. If you're an application developer, start with CKAD. Both are valuable, and there's significant overlap. This training covers content for both.
Today was just the beginning. Here's the full path from Kubernetes novice to certified professional.
Next session: We'll dive deep into cluster architecture — understanding exactly how each control plane component works, how to bootstrap a cluster with kubeadm, RBAC, networking, and more. If today was "what is K8s," next time is "how does K8s really work."
Let's bring our story full circle. Remember our developer with 50 microservices and a 47-step wiki page?
Our developer now has a plan: containerize each microservice with Docker, deploy them to an AKS cluster, let Kubernetes handle scheduling, scaling, and recovery. That 47-step wiki? Replaced by a set of YAML manifests and kubectl apply.
The deployment that used to take a nerve-wracking afternoon now takes 30 seconds and happens 10 times a day.
We've covered a lot of ground today. What questions do you have about containers, Kubernetes, or the certification path?
Containers, K8s architecture, pods, declarative model
kubectl, AKS, minikube, local setup
CKA, CKAD, study strategy, exam tips
Presentation 1 of 8 | Next: Cluster Architecture, Installation & Configuration