Kubernetes Training — Presentation 7 of 8

Configuration, Security
& RBAC

Your app is deployed and running. But who can access it? And how does it get its configuration? Let's secure and configure your cluster properly.

CKAD Domain: Application Environment, Configuration & Security — 25%
The Story So Far

You've Built the House. Now Lock the Doors.

You know how to deploy pods, services, and ingresses. Your application is live. But right now, every developer has admin access, your database password is hardcoded in your YAML, and any container can consume unlimited resources.

"We had an intern run kubectl delete namespace production on a Friday afternoon. That's when we learned about RBAC."

— A real story from the Kubernetes community

Today we fix all of that.

Overview

Our Journey Today

Part 1: Configuration

  1. ConfigMaps — the settings panel
  2. Secrets — the vault for sensitive data
  3. Azure Key Vault CSI — enterprise secrets
  4. Resource Requirements — container budgets

Part 2: Security & Access

  1. SecurityContext — least privilege
  2. Pod Security Standards — the three tiers
  3. RBAC — who can do what, where
  4. Service Accounts — machine identity
  5. Azure AD + AKS RBAC
ConfigMaps

ConfigMaps: The Settings Panel for Your Apps

Think of a ConfigMap like the settings panel on your phone. You don't rebuild the phone to change the ringtone — you just change the setting. ConfigMaps let you do the same for containers.

What They Store

  • Environment variables (DB_HOST, LOG_LEVEL)
  • Configuration files (nginx.conf, app.properties)
  • Command-line arguments
  • Any non-sensitive key-value data

Key Properties

  • Namespace-scoped (not shared across namespaces)
  • Max size: 1 MiB
  • Can be marked immutable (v1.21+)
  • Changes propagate to mounted volumes (not env vars)
ConfigMaps

Creating ConfigMaps — Four Ways

# From literal key-value pairs
kubectl create configmap app-settings \
  --from-literal=DB_HOST=postgres.db.svc \
  --from-literal=LOG_LEVEL=info

# From a file (key = filename, value = file content)
kubectl create configmap nginx-conf --from-file=nginx.conf

# From an env file
kubectl create configmap env-config --from-env-file=.env

# From YAML (declarative, GitOps-friendly)
kubectl apply -f configmap.yaml
💡
Exam Tip: The --from-literal and --from-file imperative commands are big time-savers in the CKA/CKAD exams. Practice typing them from memory.
ConfigMaps

Declarative ConfigMap YAML

The ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  DB_HOST: "postgres.db.svc"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
immutable: true  # optional

Consuming as Env Vars

spec:
  containers:
  - name: app
    envFrom:
    - configMapRef:
        name: app-settings
    # Or specific keys:
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-settings
          key: DB_HOST
⚠️
Immutable ConfigMaps cannot be updated after creation — you must delete and recreate them. This protects against accidental changes and improves cluster performance (kubelet stops watching them).
ConfigMaps

Mounting ConfigMaps as Volumes

When your app reads config from a file (like nginx.conf), mount the ConfigMap as a volume. Bonus: volume-mounted ConfigMaps auto-update when the ConfigMap changes (within ~60s).

spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: config-vol
      mountPath: /etc/nginx/conf.d
      readOnly: true
  volumes:
  - name: config-vol
    configMap:
      name: nginx-conf
      items:            # optional: select specific keys
      - key: nginx.conf
        path: default.conf
💡
Key distinction: Env vars from ConfigMaps do NOT auto-update. Only volume mounts do. This is a common exam question.
Secrets

Secrets: The Vault Where Sensitive Data Lives

Secrets are structurally similar to ConfigMaps but designed for sensitive data: passwords, tokens, TLS certificates. But here's the uncomfortable truth...

What Secrets Provide

  • Base64-encoded storage (NOT encryption!)
  • Separate from pod spec (access control)
  • tmpfs mounting (memory, not disk)
  • RBAC-based access restriction
  • Encryption at rest (when configured)

The Hard Truth

  • Base64 is encoding, NOT encryption
  • Anyone with get secret RBAC can read them
  • Stored in etcd — unencrypted by default
  • Visible in pod spec via kubectl get pod -o yaml

Secrets are only as secure as your cluster's RBAC and etcd encryption.
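To see why base64 offers zero protection, run this in any shell (the password is a made-up example):

```shell
# Base64 is a reversible encoding — no key, no secret, anyone can undo it.
secret='S3cureP@ss!'                       # example value only
encoded=$(printf '%s' "$secret" | base64)
echo "$encoded"                            # UzNjdXJlUEBzcyE=
printf '%s' "$encoded" | base64 -d; echo   # S3cureP@ss!
```

This is exactly what anyone with get secret RBAC can do to the data fields of kubectl get secret db-creds -o yaml.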

Secrets

Secret Types & Creation

Type                                | Use Case                            | CLI Shortcut
Opaque                              | Generic key-value secrets (default) | kubectl create secret generic
kubernetes.io/tls                   | TLS cert + key pairs                | kubectl create secret tls
kubernetes.io/dockerconfigjson      | Container registry credentials      | kubectl create secret docker-registry
kubernetes.io/basic-auth            | Username + password                 | Manual YAML
kubernetes.io/service-account-token | Auto-generated SA tokens            | Automatic
# Create a generic secret imperatively
kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password=S3cureP@ss!

# Create TLS secret
kubectl create secret tls app-tls \
  --cert=tls.crt --key=tls.key
Secrets

Consuming Secrets in Pods

As Environment Variables

spec:
  containers:
  - name: app
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-creds
          key: password

As Volume Mounts

spec:
  containers:
  - name: app
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: db-creds
🔒
Best Practice: Prefer volume mounts over env vars for secrets. Env vars can leak into logs, crash dumps, and child processes. Volume-mounted secrets are files with restricted permissions (mode 0644 by default, configurable via defaultMode).
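A sketch of tightening those file permissions with defaultMode (octal notation; the 0400 value is an illustrative choice):

```yaml
volumes:
- name: secret-vol
  secret:
    secretName: db-creds
    defaultMode: 0400   # owner read-only, instead of the 0644 default
```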
Knowledge Check

Quiz: Configuration Fundamentals

Q1: What is the maximum size of a ConfigMap in Kubernetes?

256 KiB
1 MiB
5 MiB
No limit
Correct: 1 MiB. ConfigMaps are stored in etcd which has a per-object size limit of 1 MiB. For larger configs, consider mounting a volume from a PersistentVolumeClaim.

Q2: If you update a ConfigMap, which consumption method will see the change automatically?

Environment variables (envFrom)
Volume mounts
Both env vars and volumes
Neither — pods must restart
Correct: Volume mounts. Mounted ConfigMaps auto-update within the kubelet sync period (~60 seconds). Env vars are set at pod startup and never refreshed.

Q3: Kubernetes Secrets are encrypted by default in etcd. True or false?

True — Secrets are always encrypted at rest
False — Secrets are only base64-encoded by default
True — but only in managed clusters like AKS
False — Secrets are stored in plaintext
Correct: False. By default, Secrets are stored as base64-encoded data in etcd without encryption. You must enable EncryptionConfiguration to encrypt Secrets at rest. AKS does enable encryption at rest by default for etcd.
Azure Integration

Azure Key Vault CSI: Enterprise Secrets Management

Kubernetes Secrets have limitations. For production, you want a dedicated secrets manager with auditing, rotation, and centralized control. Azure Key Vault fills that role.

How It Works

  1. Secrets stored in Azure Key Vault (encrypted, audited)
  2. CSI driver mounts them as files in your pods
  3. Optionally syncs to Kubernetes Secret objects
  4. Uses Workload Identity for authentication

Why Use It?

  • Centralized secret management across services
  • Audit trail — who accessed what, when
  • Automatic rotation support
  • No secrets stored in Git or etcd
  • HSM-backed encryption
Azure Integration

SecretProviderClass Configuration

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "<workload-identity-client-id>"
    keyvaultName: "my-keyvault"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
  secretObjects:             # Optional: sync to K8s Secret
  - secretName: db-creds-synced
    type: Opaque
    data:
    - objectName: db-password
      key: password
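A SecretProviderClass is inert until a pod mounts it through the CSI volume driver. A minimal pod-side sketch (the volume name and mount path are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: myapp:v2
    volumeMounts:
    - name: kv-secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: kv-secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: azure-kv-secrets
```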
Resources

Resource Requirements: Setting Budgets for Containers

Imagine a shared office with no rules about desk space. One team takes over three floors while another has no room to work. That's what happens in a cluster without resource limits.

Requests (Minimum Guarantee)

  • The scheduler uses requests to find a node
  • Your container is guaranteed this amount
  • If you request 256Mi, you'll always have 256Mi
  • Under-requesting = overcommitted nodes, risk of eviction or OOM kills

Limits (Maximum Ceiling)

  • Hard cap on resource usage
  • CPU: throttled when hitting the limit
  • Memory: OOMKilled if exceeding the limit
  • Limits far above requests = overcommit and noisy-neighbor risk
⚠️
Golden Rule: Always set requests. Set limits for memory (to prevent OOM). CPU limits are debated — throttling can cause latency issues. Many teams skip CPU limits in favor of good requests.
Resources

Resource Specs & QoS Classes

Pod Resource Specification

spec:
  containers:
  - name: api
    image: myapp:v2
    resources:
      requests:
        cpu: "250m"     # 0.25 cores
        memory: "128Mi"
      limits:
        cpu: "500m"     # 0.5 cores
        memory: "256Mi"

QoS Classes (Automatic)

Class      | Condition
Guaranteed | requests = limits for all containers
Burstable  | At least one request or limit set
BestEffort | No requests or limits at all

When a node runs out of memory, Kubernetes evicts BestEffort first, then Burstable, and Guaranteed last.
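The decision logic above can be sketched as a small shell function (a simplification assuming a single container; real Kubernetes evaluates cpu and memory across every container):

```shell
# Returns the QoS class for one container, given its requests and limits.
# An empty string means "not set". Simplified sketch of the real rules.
qos_class() {
  req="$1"; lim="$2"
  if [ -z "$req" ] && [ -z "$lim" ]; then
    echo "BestEffort"                 # nothing set at all
  elif [ -n "$lim" ] && { [ -z "$req" ] || [ "$req" = "$lim" ]; }; then
    echo "Guaranteed"                 # requests default to limits when unset
  else
    echo "Burstable"                  # something set, but requests != limits
  fi
}

qos_class "" ""                                        # BestEffort
qos_class "cpu=250m,mem=128Mi" "cpu=250m,mem=128Mi"    # Guaranteed
qos_class "cpu=250m,mem=128Mi" "cpu=500m,mem=256Mi"    # Burstable
```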

Resources

LimitRanges & ResourceQuotas

What if developers forget to set resource requests? Cluster admins use LimitRanges and ResourceQuotas as guardrails.

LimitRange (per-pod defaults)

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"

ResourceQuota (namespace totals)

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "10"
    requests.memory: "20Gi"
    limits.cpu: "20"
    limits.memory: "40Gi"
    pods: "50"
💡
Key difference: LimitRange sets per-container/pod defaults and maximums. ResourceQuota sets total limits for the entire namespace. Use both together.
Security

SecurityContext: Running with Least Privilege

By default, containers run as root. That's like giving every employee a master key to every room in the building. SecurityContext lets you lock down exactly what a container can do.

Pod-Level Settings

  • runAsUser — UID to run processes as
  • runAsGroup — primary GID
  • fsGroup — GID for mounted volumes
  • runAsNonRoot — reject if image runs as root
  • sysctls — kernel parameters

Container-Level Settings

  • allowPrivilegeEscalation — block gaining privileges (setuid binaries, etc.)
  • readOnlyRootFilesystem — immutable FS
  • capabilities — add/drop Linux capabilities
  • privileged — full host access (avoid!)
  • seccompProfile — syscall filtering
Security

SecurityContext in Practice

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:         # Pod-level
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: myapp:v2
    securityContext:       # Container-level (overrides pod)
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
🔒
Hardening checklist: (1) Run as non-root, (2) Drop ALL capabilities, (3) Read-only root FS, (4) No privilege escalation, (5) Use seccomp profile. This blocks most container escape attacks.
Security

Pod Security Standards: The Three Tiers

Pod Security Standards (PSS) replaced PodSecurityPolicy (deprecated in v1.21, removed in v1.25). They define three profiles applied at the namespace level via labels.

Privileged

Unrestricted. No checks. Use for system workloads like kube-system.

  • Allows everything
  • Host networking, privileged containers
  • Only for trusted infrastructure pods

Baseline

Prevents known privilege escalations. Good starting point for most workloads.

  • Blocks hostNetwork, hostPID, hostIPC
  • Blocks privileged containers
  • Allows running as root (use Restricted to block)

Restricted

Hardened. Follows current best practices. Target this for production.

  • Must run as non-root
  • Must drop ALL capabilities
  • Must use seccomp RuntimeDefault
  • No privilege escalation
Security

Applying Pod Security Standards

Apply standards via namespace labels. Three enforcement modes let you roll out gradually:

Mode    | Behavior                      | Use Case
enforce | Reject pods that violate      | Production namespaces
audit   | Allow but log violations      | Monitoring compliance
warn    | Allow but show user warnings  | Developer experience
# Apply restricted standard to a namespace
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
💡
Rollout strategy: Start with warn + audit on existing namespaces to see what would break, then enable enforce once workloads comply.
Knowledge Check

Quiz: Security Fundamentals

Q1: Which SecurityContext setting prevents a container from gaining more privileges than its parent process?

runAsNonRoot: true
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
privileged: false
Correct: allowPrivilegeEscalation: false. This prevents setuid binaries and other mechanisms from granting more privileges. runAsNonRoot prevents running as root but doesn't stop escalation from non-root.

Q2: Which Pod Security Standard level requires containers to run as non-root and drop ALL capabilities?

Privileged
Baseline
Restricted
Hardened (not a real level)
Correct: Restricted. This is the most strict profile. It requires runAsNonRoot, dropping ALL capabilities, seccomp RuntimeDefault, and no privilege escalation. Baseline only blocks known escalation vectors like hostNetwork.

Q3: When a node runs out of memory, which QoS class is evicted first?

BestEffort
Burstable
Guaranteed
They are evicted randomly
Correct: BestEffort. Eviction order is BestEffort first (no resource guarantees), then Burstable (partial guarantees), and Guaranteed last (full resource guarantees).
RBAC

The Story of the Accidental Deletion

"A junior developer on our team was asked to clean up some test pods. They ran kubectl delete pods --all but forgot they were still in the production namespace context. Every running pod was terminated. The load balancer had nothing to route to. Our application was down for 12 minutes during peak hours."

The fix wasn't to blame the developer. It was to ensure they never had permission to delete pods in production in the first place. That's what RBAC does.

RBAC answers three questions: who (the subject) can do what (the verb) on which resources?

RBAC

RBAC: The Four Building Blocks

Role (namespace-scoped)

Defines what actions are allowed on which resources within a single namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]

ClusterRole (cluster-wide)

Same as Role but applies across all namespaces or to cluster-scoped resources (nodes, PVs).

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
RBAC

RoleBinding & ClusterRoleBinding

Roles define permissions. Bindings attach those permissions to users, groups, or service accounts. Think of it as: Role = "what you can do", Binding = "who gets to do it".

RoleBinding (namespace-scoped)

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-dev
  namespace: dev
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-binding
subjects:
- kind: Group
  name: platform-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
RBAC

RBAC Verbs & Common Patterns

Verb             | HTTP Method      | Description
get              | GET (single)     | Read a specific resource
list             | GET (collection) | List all resources of a type
watch            | GET (streaming)  | Watch for changes
create           | POST             | Create new resources
update           | PUT              | Replace entire resource
patch            | PATCH            | Modify specific fields
delete           | DELETE           | Remove a resource
deletecollection | DELETE           | Delete all resources of a type
💡
Exam shortcut: kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev creates roles imperatively. Much faster than writing YAML during the exam.
RBAC

Testing & Debugging RBAC

Before applying RBAC in production, always test. Kubernetes provides kubectl auth can-i to check permissions without trial and error.

# Can I create deployments in the dev namespace?
kubectl auth can-i create deployments --namespace dev
# yes

# Can user jane list pods in production?
kubectl auth can-i list pods --namespace production --as [email protected]
# no

# What can I do in this namespace? (full list)
kubectl auth can-i --list --namespace dev

# Can a service account create secrets?
kubectl auth can-i create secrets \
  --as=system:serviceaccount:dev:my-sa -n dev
💡
Debugging tip: If permissions aren't working, check: (1) Is the binding in the right namespace? (2) Is the subject name spelled correctly? (3) Is the apiGroup correct? Use kubectl get rolebindings -n dev -o yaml to inspect.
RBAC

Preventing the Disaster: RBAC for Our Junior Dev

Let's go back to our story. Here's how we'd set up RBAC so that the junior developer can work freely in dev but can only view (never delete) in production:

Dev Namespace: Full Access

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: junior-dev-full
  namespace: dev
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit  # built-in role
  apiGroup: rbac.authorization.k8s.io

Prod Namespace: View Only

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: junior-dev-readonly
  namespace: production
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view  # built-in role
  apiGroup: rbac.authorization.k8s.io
💡
Built-in ClusterRoles: cluster-admin (full access), admin (namespace admin), edit (create/delete most resources), view (read-only). Reuse these instead of creating custom roles.
Knowledge Check

Quiz: RBAC

Q1: Which command checks if a specific user can perform an action?

kubectl rbac check
kubectl auth can-i --as=username
kubectl get permissions --user=username
kubectl describe user username
Correct: kubectl auth can-i --as=username. The --as flag impersonates another user to check their permissions. You need impersonation privileges to use this.

Q2: What is the difference between a Role and a ClusterRole?

Roles use YAML, ClusterRoles use JSON
Roles are for pods, ClusterRoles are for nodes
Roles are namespace-scoped, ClusterRoles are cluster-wide
ClusterRoles have more verbs available
Correct: Roles are namespace-scoped (apply to one namespace), while ClusterRoles are cluster-wide and can also grant access to cluster-scoped resources like nodes, PersistentVolumes, and namespaces themselves.

Q3: Can a RoleBinding reference a ClusterRole?

Yes — it grants the ClusterRole's permissions only within the RoleBinding's namespace
No — RoleBindings can only reference Roles
Yes — it grants cluster-wide permissions
Only if the ClusterRole has a namespace annotation
Correct: Yes, a RoleBinding can reference a ClusterRole, but the permissions are scoped to the RoleBinding's namespace. This is a common pattern to reuse ClusterRoles (like "view" or "edit") across multiple namespaces without duplicating Role definitions.
Service Accounts

Service Accounts: Machine Identity in Kubernetes

Users authenticate as humans. But what about your applications? When your pod needs to talk to the Kubernetes API (e.g., to list pods or create jobs), it uses a Service Account — an identity for machines.

Key Facts

  • Every namespace has a default Service Account
  • Every pod gets a SA token auto-mounted (v1.24+: bound tokens)
  • Tokens are projected volumes with short expiry
  • RBAC applies to SAs just like users

Best Practices

  • Create dedicated SAs per application
  • Disable auto-mounting when not needed
  • Grant minimum required RBAC permissions
  • Use Workload Identity for cloud API access
# Create a service account
kubectl create serviceaccount my-app-sa -n dev

# Disable auto-mount in the pod spec (YAML):
spec:
  serviceAccountName: my-app-sa
  automountServiceAccountToken: false
Service Accounts

Binding RBAC to Service Accounts

A real-world scenario: your monitoring app needs to list pods across all namespaces, but should not be able to create or delete anything.

# 1. Create the ServiceAccount
kubectl create serviceaccount monitor-sa -n monitoring

# 2. Create a ClusterRole with read-only pod access
kubectl create clusterrole pod-reader \
  --verb=get,list,watch --resource=pods

# 3. Bind the ClusterRole to the ServiceAccount
kubectl create clusterrolebinding monitor-pods \
  --clusterrole=pod-reader \
  --serviceaccount=monitoring:monitor-sa

# 4. Verify
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:monitoring:monitor-sa
# yes

kubectl auth can-i delete pods -n production \
  --as=system:serviceaccount:monitoring:monitor-sa
# no
Azure Integration

Azure AD + AKS RBAC Integration

In AKS, you can integrate Microsoft Entra ID (Azure AD) with Kubernetes RBAC for enterprise identity management. Users authenticate with their corporate credentials.

How It Works

  1. User runs az aks get-credentials
  2. kubectl triggers Entra ID login
  3. Token is sent to API server
  4. API server validates with Entra ID
  5. K8s RBAC rules applied to the authenticated identity

AKS-Managed RBAC

AKS provides built-in Azure roles:

  • Azure Kubernetes Service RBAC Admin
  • Azure Kubernetes Service RBAC Cluster Admin
  • Azure Kubernetes Service RBAC Reader
  • Azure Kubernetes Service RBAC Writer

These map to K8s RBAC without needing RoleBindings.

Security

Network Policies: The Firewall for Pods

By default, every pod can talk to every other pod. Network Policies let you restrict traffic — like firewall rules for your cluster network.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    - podSelector:
        matchLabels: { app: frontend }
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels: { app: database }
    ports:
    - port: 5432
⚠️
Important: Network Policies require a CNI that supports them (Calico, Cilium, Azure CNI). The default kubenet in AKS does NOT enforce Network Policies.
Security

Admission Controllers & Image Security

Admission controllers are gatekeepers that intercept API requests before they are persisted. They can validate, mutate, or reject requests.

Key Admission Controllers

  • NamespaceLifecycle — prevents creating objects in terminating namespaces
  • LimitRanger — applies default resource limits
  • ResourceQuota — enforces namespace quotas
  • PodSecurity — enforces Pod Security Standards
  • MutatingAdmissionWebhook — custom logic
  • ValidatingAdmissionWebhook — custom validation

Image Security Best Practices

  • Always use specific image tags (never :latest)
  • Use image digest pinning for critical workloads
  • Scan images with Trivy, Snyk, or Azure Defender
  • Use private registries with pull secrets
  • Sign images with Notary / Cosign
  • Use distroless or minimal base images
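For instance, digest pinning replaces a mutable tag with an immutable content hash (the digest is a placeholder — take it from your registry):

```yaml
containers:
- name: web
  # Tag — mutable; "1.27" can be re-pushed to point at new content:
  #   image: nginx:1.27
  # Digest — immutable; resolves to exactly one image manifest:
  image: nginx@sha256:<digest-from-your-registry>
```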
Security

Production Security Checklist

Cluster Level

  • Enable RBAC (default in modern K8s)
  • Enable encryption at rest for etcd
  • Use Network Policies
  • Enable audit logging
  • Keep Kubernetes up to date
  • Restrict API server access (private endpoints)

Workload Level

  • Run as non-root, drop ALL capabilities
  • Read-only root filesystem
  • Set resource requests and limits
  • Use dedicated Service Accounts
  • Scan images, use digests
  • Apply Pod Security Standards (Restricted)
💡
CKA/CKAD Exam: Security topics appear across both exams. Know how to create Roles/RoleBindings imperatively, apply SecurityContext to pods, and troubleshoot permission denied errors.
Workshop

Hands-On Lab: Secure a Namespace

Practice the concepts from this session with this end-to-end exercise:

Exercise Steps

  1. Create a namespace called secure-app
  2. Create a ConfigMap with app settings (DB_HOST, LOG_LEVEL)
  3. Create a Secret with database credentials
  4. Apply Pod Security Standards (Restricted) to the namespace
  5. Create a ServiceAccount app-sa
  6. Create a Role that allows get/list pods and secrets
  7. Bind the Role to the ServiceAccount
  8. Deploy a pod that uses the ConfigMap, Secret, ServiceAccount, and a hardened SecurityContext
  9. Verify RBAC with kubectl auth can-i
  10. Set resource requests and limits on the pod
# Quick start:
kubectl create namespace secure-app
kubectl label namespace secure-app pod-security.kubernetes.io/enforce=restricted
kubectl create configmap app-config --from-literal=DB_HOST=db.svc -n secure-app
kubectl create secret generic db-creds --from-literal=password=secret -n secure-app
kubectl create serviceaccount app-sa -n secure-app
Final Knowledge Check

Quiz: Configuration, Security & RBAC

Q1: Which resource enforces a total CPU/memory budget across all pods in a namespace?

LimitRange
ResourceQuota
PodDisruptionBudget
HorizontalPodAutoscaler
Correct: ResourceQuota. It sets hard limits on total resource consumption within a namespace. LimitRange sets per-container/pod defaults and maximums, not namespace-wide totals.

Q2: A pod using Azure Key Vault CSI driver accesses secrets via what mechanism?

Kubernetes Secret objects only
Volume mount (with optional sync to K8s Secrets)
Environment variables injected by the CSI driver
Azure SDK calls from within the container
Correct: Volume mount. The CSI driver mounts secrets from Key Vault as files in the pod's filesystem. Optionally, it can also sync those to Kubernetes Secret objects for use as env vars.

Q3: Which built-in ClusterRole grants read-only access to most resources but NOT secrets?

view
edit
admin
reader
Correct: view. The built-in "view" ClusterRole grants read-only access to most namespaced resources but explicitly excludes Secrets to prevent accidental exposure of sensitive data. The "edit" and "admin" roles do include Secret access.
Summary

What We Learned Today

Configuration

  • ConfigMaps for non-sensitive config
  • Secrets for sensitive data (base64, not encrypted by default)
  • Azure Key Vault CSI for production secrets
  • Resource requests/limits and QoS classes

Security

  • SecurityContext: run as non-root, drop capabilities
  • Pod Security Standards: Privileged, Baseline, Restricted
  • Network Policies as cluster firewalls
  • Image scanning and admission controllers

Access Control

  • RBAC: Role + RoleBinding = permissions
  • ClusterRole for cluster-wide access
  • Service Accounts for machine identity
  • Azure AD integration for enterprise auth
Up Next

Next: Observability, Troubleshooting & Exam Prep

Your cluster is configured and secured. But what happens when things go wrong at 3 AM? In our final session, we'll master logging, monitoring, debugging, and prepare for the CKA/CKAD exams.

Presentation 8 of 8