Your app is running. Now, how does anyone find it?
CKA 20% · CKAD 20%
Civica Kubernetes Training
The Story
The Discovery Problem
"Your app is running in Pods, but how does anyone find it? Pods are ephemeral -- they come and go. Their IP addresses change every time they restart. Imagine giving out your phone number, but it changes every day. Nobody could ever reach you."
The big idea: Services give your app a stable address -- like a phone number that never changes, even when you move houses. Today we will learn how Kubernetes solves the discovery and connectivity problem.
By the end of this module, you will understand how traffic flows from the outside world all the way to your Pods, and how to control every step of that journey.
Overview
What We Will Cover
Services
Why Services exist
ClusterIP, NodePort, LoadBalancer
Service discovery & DNS
Headless & ExternalName
Ingress & Routing
Ingress objects & controllers
Host & path-based routing
TLS termination
NGINX & Azure AGIC
Network Security
Kubernetes network model
Network Policies
Azure CNI & Calico
Troubleshooting
Fundamentals
What Is a Service?
The Problem: Pods are ephemeral. Their IPs change on restart. If your frontend Pod hardcodes the IP of a backend Pod, it breaks every time that backend restarts.
The Solution: A Service is an abstraction that provides a stable virtual IP (ClusterIP) and DNS name. It load-balances traffic to a set of Pods selected by label selectors.
Analogy: Think of a Service as a company's main phone number. Employees (Pods) come and go, but the main number stays the same. The phone system (kube-proxy) routes calls to whoever is currently at their desk.
Endpoints -- Kubernetes automatically maintains a list of healthy Pod IPs that match the selector
kube-proxy -- Runs on every node, programs iptables/IPVS rules to route Service IP traffic to Pod IPs
Virtual IP -- The Service ClusterIP does not correspond to any real interface. It exists only in iptables rules
Key insight: When a Pod dies and a new one starts, the Endpoints controller updates the list. kube-proxy updates the routing rules. No client changes needed.
# The Endpoints object is auto-managed
$ kubectl get endpoints my-service
NAME ENDPOINTS
my-service   10.244.1.5:8080,10.244.2.3:8080,10.244.3.7:8080
Traffic Flow
Client → ClusterIP → iptables → Pod
Service Types
Four Types of Services
Kubernetes offers four Service types. The first three build on one another -- think of them as escalating levels of exposure -- while ExternalName stands apart as a pure DNS alias.
ClusterIP
Internal-only. Default type. Only reachable from within the cluster.
The internal hotline
NodePort
Exposes on a static port on every node's IP. Range: 30000-32767.
Opening a door outside
LoadBalancer
Provisions an external cloud LB. The production standard for public traffic.
The receptionist
ExternalName
Maps to a DNS CNAME. No proxying. Points to an external service.
A redirect sign
ClusterIP
ClusterIP -- The Internal Hotline
"Think of ClusterIP as your company's internal phone extension. Employees can call each other, but nobody outside the building can dial in."
Default Service type (if you omit type:)
Reachable only within the cluster
Perfect for internal microservice communication
Gets a stable virtual IP from the Service CIDR range
Use case: Your frontend Pods need to talk to a backend API. The backend runs as a ClusterIP Service -- internal, stable, discoverable.
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP       # default
  selector:
    app: backend
  ports:
  - port: 80            # Service port
    targetPort: 8080    # Pod port
    protocol: TCP
# From any Pod in the cluster:
$ curl http://backend-api.default.svc.cluster.local
{"status": "ok"}
NodePort
NodePort -- Opening a Door to the Outside
"If ClusterIP is the internal extension, NodePort is like installing a public payphone on the outside wall of your building. Anyone who knows the building's address and the phone number can call in."
Opens a specific port (30000-32767) on every node
Traffic to <NodeIP>:<NodePort> is forwarded to the Service
Automatically creates a ClusterIP too
Not ideal for production (no single entry point)
Limitation: Clients need to know a node's IP. If a node goes down, that entry point is lost. NodePort is mostly used for development or as a building block for LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080     # optional: auto-assigned if omitted
NodePort Traffic Flow
External User → Node:30080 → Service → Pod
LoadBalancer
LoadBalancer -- The Receptionist
"Imagine a receptionist at the front desk of a large office building. Visitors arrive and say 'I need to see someone in Marketing.' The receptionist routes them to the right person -- without visitors needing to know specific room numbers."
The standard way to expose services to the internet
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
# After creation:
$ kubectl get svc web-public
NAME TYPE EXTERNAL-IP
web-public LoadBalancer 20.50.100.25
Azure
LoadBalancer on Azure (AKS)
When you create a LoadBalancer Service on AKS, Azure automatically provisions an Azure Load Balancer with a public IP.
Standard LB -- default in AKS; supports availability zones
Internal LB -- use annotation for private IP only
Health probes auto-configured for the NodePort
Integrated with Azure NSGs for security
Cost note: Each LoadBalancer Service creates its own Azure LB. For multiple services, use Ingress instead to share a single LB.
# Internal LoadBalancer on Azure
apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
  - port: 443
    targetPort: 8443
Service Types
Service Type Comparison
Type
Scope
External Access
Use Case
Azure Behaviour
ClusterIP
Cluster-internal
No
Microservice-to-microservice
No cloud resource created
NodePort
Node IPs
Yes (node IP + port)
Dev/test, or building block
Opens port on VMSS instances
LoadBalancer
Cloud LB
Yes (public or internal IP)
Production external access
Azure Standard LB provisioned
ExternalName
DNS CNAME
N/A
Alias external services
Pure DNS, no LB
Remember: ClusterIP, NodePort and LoadBalancer are layered. LoadBalancer creates a NodePort, which creates a ClusterIP -- like Russian nesting dolls. ExternalName is the exception: pure DNS, no layers.
Quiz Time
Check Your Understanding: Service Types
1. Which Service type is the default when you omit the type field?
A) ClusterIP
B) NodePort
C) LoadBalancer
D) ExternalName
ClusterIP is the default Service type. If you do not specify a type, Kubernetes creates a ClusterIP Service that is only reachable from within the cluster.
2. What port range does NodePort use by default?
A) 1-1024
B) 8000-9000
C) 30000-32767
D) 49152-65535
The default NodePort range is 30000-32767. This can be configured with the --service-node-port-range flag on the API server, but 30000-32767 is the default.
3. On Azure AKS, what annotation creates an internal (private) LoadBalancer?
A) service.beta.kubernetes.io/azure-internal: "true"
B) service.beta.kubernetes.io/azure-load-balancer-internal: "true"
C) cloud.azure.com/load-balancer-type: "internal"
D) kubernetes.io/service-type: "internal-lb"
The correct annotation is service.beta.kubernetes.io/azure-load-balancer-internal: "true". This tells the Azure cloud provider to create an internal LB with a private IP instead of a public one.
DNS
Service Discovery & DNS -- The Phone Book
"Every cluster has a phone book. When a Service is created, an entry is automatically added. Any Pod can look up any Service by name -- no hardcoded IPs needed."
CoreDNS runs as a Deployment in kube-system
Every Service gets a DNS A record automatically
Format: <service>.<namespace>.svc.cluster.local
Within the same namespace, just use the service name
# Full DNS name
backend-api.production.svc.cluster.local
# Within same namespace (short name)
backend-api
# Cross-namespace
backend-api.production
# SRV records for ports
_http._tcp.backend-api.production.svc.cluster.local
DNS
DNS Resolution in Practice
Every Pod's /etc/resolv.conf is automatically configured to use CoreDNS as its nameserver.
Search domains are set so short names resolve correctly
Same-namespace lookups: my-svc resolves
Cross-namespace: my-svc.other-ns resolves
External DNS: falls through to upstream nameservers
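The resolv.conf those bullets describe typically looks like this sketch (the nameserver IP and search domains vary by cluster and namespace):

```
# Typical /etc/resolv.conf inside a Pod in the "default" namespace
nameserver 10.96.0.10       # CoreDNS Service ClusterIP (cluster-specific)
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5             # short names are tried against the search domains first
```

The search list is why `my-svc` resolves unqualified: the resolver appends `default.svc.cluster.local` before trying the name as-is.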
Debugging tip: Use kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup my-service to test DNS resolution from inside the cluster.
# Quick DNS test
$ kubectl exec -it debug-pod -- nslookup backend-api
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: backend-api.default.svc.cluster.local
Address: 10.96.142.37
Discovery
Environment Variables vs DNS
Kubernetes offers two mechanisms for service discovery. DNS is preferred in nearly all cases.
Environment Variables (Legacy)
Set automatically when a Pod starts
{SVC_NAME}_SERVICE_HOST
{SVC_NAME}_SERVICE_PORT
Only sees Services that exist before the Pod starts
Does not update if Service changes
DNS (Recommended)
Always up-to-date
Works across namespaces
Supports SRV records for port discovery
Services created after the Pod are still resolvable
Human-readable names
Gotcha: If you have hundreds of Services, environment variable injection can slow down Pod startup. Stick with DNS.
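The variable-naming convention can be sketched in plain shell: Kubernetes upper-cases the Service name and replaces hyphens with underscores (the service name below is just an illustration):

```shell
svc="backend-api"
# Upper-case the name and swap '-' for '_', as Kubernetes does when injecting variables
var_prefix=$(echo "$svc" | tr 'a-z-' 'A-Z_')
echo "${var_prefix}_SERVICE_HOST"   # → BACKEND_API_SERVICE_HOST
```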
Advanced
Headless Services -- When You Need to Talk to Specific Pods
"Sometimes you do not want a load balancer in the middle. You want to know exactly who you are talking to -- like calling someone's direct line instead of going through the operator."
Set clusterIP: None
DNS returns the IPs of individual Pods instead of a single VIP
No load balancing by kube-proxy
Client decides which Pod to connect to
Use cases: StatefulSets (databases like PostgreSQL, MongoDB), service meshes, any workload needing direct Pod-to-Pod communication with stable DNS per Pod.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None       # Headless!
  selector:
    app: postgres
  ports:
  - port: 5432
# DNS returns all Pod IPs
$ nslookup postgres
Name: postgres.default.svc.cluster.local
Address: 10.244.1.10
Address: 10.244.2.11
Address: 10.244.3.12
# StatefulSet pods get individual DNS:
# postgres-0.postgres.default.svc.cluster.local
# postgres-1.postgres.default.svc.cluster.local
Advanced
Multi-Port & Named Ports
Services can expose multiple ports. This is common for apps that serve HTTP on one port and metrics on another.
Each port must have a unique name when multiple ports are defined
Named ports in Pods let you change the actual port number without updating the Service
SRV records use port names for discovery
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: web-port        # named port
  - name: metrics
    port: 9090
    targetPort: metrics-port
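For those named targetPorts to resolve, the Pod's containers must declare ports with matching names. A matching Pod template fragment might look like this sketch (the image and port numbers are illustrative):

```yaml
# Pod template fragment -- the port *names* must match the Service's targetPort values
spec:
  containers:
  - name: web
    image: nginx:1.27           # illustrative image
    ports:
    - name: web-port            # referenced by the Service as targetPort: web-port
      containerPort: 8080
    - name: metrics-port        # referenced as targetPort: metrics-port
      containerPort: 9090
```

Changing a containerPort number here requires no Service update, because the Service refers to the port by name.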
ExternalName
ExternalName -- The Redirect Sign
"ExternalName is like a redirect sign on a door that says 'We moved to Building B.' It simply points you somewhere else."
Maps a Service name to a DNS CNAME record
No proxying, no ClusterIP, no selectors
Useful for pointing to external databases or APIs
Allows you to migrate services without changing application code
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.prod.example.com
# From inside the cluster:
$ nslookup my-database
# Returns CNAME: db.prod.example.com
# App connects to "my-database" --
# later you can switch to an in-cluster
# DB without changing app config
Quiz Time
Check Your Understanding: DNS & Discovery
1. What is the full DNS name format for a Service called "api" in namespace "prod"?
A) api.prod.cluster.local
B) api.prod.svc.cluster.local
C) api.svc.prod.cluster.local
D) prod.api.svc.cluster.local
The format is <service>.<namespace>.svc.cluster.local. So the answer is api.prod.svc.cluster.local.
2. What does a Headless Service (clusterIP: None) return when queried via DNS?
A) The Service's virtual IP
B) The IP addresses of individual Pods
C) A CNAME record pointing to the cluster DNS
D) No DNS record is created
A Headless Service returns A records for each Pod's IP instead of a single ClusterIP. This lets clients connect directly to individual Pods.
3. Which component provides DNS resolution in a Kubernetes cluster?
A) kube-proxy
B) kubelet
C) CoreDNS
D) etcd
CoreDNS is the default DNS provider in Kubernetes (since v1.13). It runs as a Deployment in the kube-system namespace and resolves Service names to IPs.
Transition
From Services to Ingress
"We've learned how to expose a single Service. But what if you have 20 services? Creating 20 LoadBalancers means 20 public IPs, 20 cloud LBs, and 20 bills. There has to be a better way..."
The Problem: One LB per Service
LB $$$   LB $$$   LB $$$   LB $$$
↓ vs ↓
The Solution: One Ingress, many routes
Ingress Controller (single LB)
↓        ↓        ↓        ↓
Svc A    Svc B    Svc C    Svc D
Ingress
Ingress -- The Smart Router at the Front Door
"Think of Ingress as a smart receptionist who reads the visitor's badge and routes them to the right department -- all through a single front door."
Ingress is a Kubernetes resource that defines HTTP/HTTPS routing rules
Ingress Controller is the actual reverse proxy that implements those rules
Route by hostname and/or URL path
Handles TLS termination
One load balancer, many services
Key distinction: The Ingress resource is just a config. Without an Ingress Controller running, nothing happens.
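A minimal Ingress sketch showing host-based routing with TLS termination (the hostname, Secret, and Service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx        # selects which controller implements this resource
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-tls         # Secret of type kubernetes.io/tls holding cert + key
  rules:
  - host: shop.example.com       # route by hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```

Without a controller registered for the `nginx` IngressClass, this object sits inert -- the resource is only configuration.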
Ingress Routing
Internet Traffic
↓
Ingress Controller
/api/*  → api-svc
/web/*  → web-svc
/docs/* → docs-svc
Ingress
Ingress Resource -- Path-Based Routing
Path-based routing sends different URL paths to different backend Services. This is the most common Ingress pattern.
pathType: Prefix -- matches URL path prefixes
pathType: Exact -- matches exact path only
pathType: ImplementationSpecific -- up to the controller
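A sketch of path-based routing using the pathTypes above (service names and paths are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-routing
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix         # matches /api, /api/v1, ...
        backend:
          service:
            name: api-svc
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```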
Azure
AGIC -- Azure Application Gateway Ingress Controller
AGIC turns Azure Application Gateway into the Ingress for your AKS cluster.
L7 Load Balancing -- HTTP/HTTPS aware routing
WAF v2 -- Protects against OWASP Top 10
SSL offloading -- Integrate with Azure Key Vault for certs
Autoscaling -- App Gateway scales independently
Health probes -- Auto-configured from readiness probes
When to choose AGIC over NGINX: When you need WAF, Azure-native integration, or when compliance requires traffic to stay within Azure networking.
AGIC Architecture
Internet → Azure App Gateway (WAF) → AKS Cluster → Pod A / Pod B
AGIC Pod watches Ingress resources and configures App Gateway via Azure API
Quiz Time
Check Your Understanding: Ingress
1. What is the relationship between an Ingress resource and an Ingress Controller?
A) They are the same thing
B) The Ingress Controller defines rules; the Ingress resource executes them
C) The Ingress resource defines rules; the Ingress Controller implements them
D) The Ingress Controller is only needed for TLS termination
The Ingress resource is a Kubernetes object that defines routing rules. The Ingress Controller is the actual reverse proxy (like NGINX or AGIC) that reads those rules and configures itself to implement them.
2. Which field in an Ingress spec selects which controller should handle it?
A) controllerName
B) ingressClassName
C) annotations.ingress-type
D) spec.controller
The ingressClassName field (introduced in Kubernetes v1.18) specifies which IngressClass / Ingress Controller should handle the Ingress resource. Before this field, the deprecated annotation kubernetes.io/ingress.class was used.
3. What is a key advantage of Azure AGIC over NGINX Ingress Controller?
A) AGIC is faster at routing HTTP traffic
B) AGIC provides built-in WAF (Web Application Firewall) capabilities
C) AGIC supports path-based routing while NGINX does not
D) AGIC does not require any Pods in the cluster
Azure Application Gateway includes WAF v2, which protects against OWASP Top 10 vulnerabilities. NGINX can also have WAF via ModSecurity, but it is not built-in to the free version. AGIC still runs a Pod in the cluster to watch Ingress resources.
Transition
Now Let's Lock the Doors
"We have learned how to route traffic into and around the cluster. But right now, every Pod can talk to every other Pod. Imagine an apartment building where every door is unlocked and every tenant can walk into anyone's apartment. That is the Kubernetes default."
The default: Kubernetes allows all Pod-to-Pod communication by default. There is no network isolation unless you explicitly create Network Policies.
Network Policies are the firewall rules between your apartments. Let's learn how to use them.
Network Model
The Kubernetes Network Model
Kubernetes has three fundamental networking requirements:
Every Pod gets its own IP address -- no NAT between Pods
Pods on any node can communicate with Pods on any other node without NAT
Agents on a node (e.g., kubelet) can communicate with all Pods on that node
This is a flat network. Every Pod can reach every other Pod by IP. The CNI plugin (Calico, Azure CNI, Cilium) implements this model.
Flat Pod Network
Node 1
Pod 10.244.1.5
Pod 10.244.1.6
Node 2
Pod 10.244.2.3
Pod 10.244.2.4
All Pods can reach each other directly by IP
Azure
Azure CNI -- How AKS Implements Networking
Azure CNI (Default)
Every Pod gets an IP from the Azure VNet subnet
Pods are directly reachable from VNet-peered networks
No overlay network -- flat, native Azure networking
Requires careful IP address planning
Best for production; integrates with Azure NSGs, UDRs
Azure CNI Overlay
Pods get IPs from a private overlay (e.g., 10.244.0.0/16)
Only Node IPs come from the VNet
Scales to larger clusters without IP exhaustion
Slight latency overhead from encapsulation
Good for dev/test or large-scale clusters
Planning tip: With Azure CNI, each node reserves IPs for its max Pod count (default 30). A 10-node cluster needs 300+ Pod IPs. Plan your subnets accordingly.
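The arithmetic behind that planning tip can be sketched in shell (node count and max-Pods value are example inputs; surge nodes and other reserved addresses add more in practice):

```shell
nodes=10
max_pods=30   # AKS default max Pods per node with Azure CNI
# Each node consumes one VNet IP itself, plus one per potential Pod
echo $(( nodes * (max_pods + 1) ))   # → 310 subnet IPs to reserve, minimum
```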
Network Policies
Network Policies -- Firewall Rules Between Apartments
"Network Policies are like firewall rules for your Pods. By default, all doors are unlocked. Once you create a Network Policy, you start locking doors and only allowing specific visitors through."
Namespace-scoped resources
Select Pods using podSelector
Control ingress (incoming) and egress (outgoing) traffic
Additive -- if any policy allows the traffic, it is allowed
Requires a CNI that supports it (Calico, Cilium, Azure NPM)
Important: Creating a Network Policy that selects a Pod automatically denies all traffic not explicitly allowed for that Pod (on the direction specified -- ingress/egress).
Pods within the same namespace can communicate, but cross-namespace traffic is blocked.
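That behaviour can be expressed with a policy like this sketch (an empty podSelector selects every Pod in the namespace; a bare podSelector in the from clause matches only Pods in the same namespace):

```yaml
# Allow ingress only from Pods in the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
spec:
  podSelector: {}          # applies to all Pods in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # any Pod in the *same* namespace
```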
Network Policies
Egress Policies & Namespace Selectors
Controlling Outbound Traffic
Egress policies control where Pods can send traffic. Essential for preventing data exfiltration.
Block outbound internet access from sensitive Pods
Restrict database Pods to only accept connections from app Pods
Allow DNS traffic (port 53) to CoreDNS -- otherwise DNS resolution breaks!
Common mistake: If you create an egress policy but forget to allow DNS (UDP port 53 to kube-system), all DNS resolution in the affected Pods will fail.
# Allow egress to specific namespace + DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - port: 53
      protocol: UDP
  # Allow to database namespace
  - to:
    - namespaceSelector:
        matchLabels:
          tier: database
Azure
Network Policies on AKS
Azure NPM (Network Policy Manager)
Azure-native, uses IPTables/IPSets
Enabled at cluster creation time
Supports standard Kubernetes NetworkPolicy API
No extra installation needed
# Create AKS with Azure NPM
az aks create --network-policy azure ...
Calico on AKS
More mature, richer feature set
Supports GlobalNetworkPolicy (cluster-wide)
Host-level policies and DNS-based policies
Better performance at scale (eBPF mode)
# Create AKS with Calico
az aks create --network-policy calico ...
Choose at creation: The network policy plugin must be selected when creating the AKS cluster. It cannot be changed later without recreating the cluster.
Quiz Time
Check Your Understanding: Network Policies
1. What happens when you create a Network Policy selecting a Pod with policyType Ingress but no ingress rules?
A) All ingress traffic is allowed
B) All ingress traffic is denied
C) Only traffic from the same namespace is allowed
D) The policy has no effect
Once a Network Policy selects a Pod for a given direction (ingress/egress), all traffic in that direction is denied unless explicitly allowed by a rule. An empty ingress rules list means "deny all ingress."
2. Why must you explicitly allow DNS (UDP port 53) in egress policies?
A) DNS is blocked by default in Kubernetes
B) CoreDNS requires explicit authorization
C) Egress policies deny all outbound traffic not explicitly allowed, including DNS queries
D) DNS runs on TCP only and needs a special rule
When you create an egress Network Policy, all outbound traffic is denied by default. Since DNS resolution requires outbound UDP traffic to port 53, you must explicitly allow it, or all name resolution in affected Pods will fail.
3. Which of these is true about Network Policy support on AKS?
A) Network Policies work without any plugin on AKS
B) You can switch between Azure NPM and Calico at any time
C) The network policy plugin must be chosen at cluster creation time
D) AKS only supports Calico for network policies
On AKS, the network policy plugin (Azure NPM or Calico) must be selected at cluster creation time using --network-policy azure or --network-policy calico. It cannot be changed after the cluster is created.
Troubleshooting
Networking Troubleshooting Toolkit
Diagnose Service Issues
# Check Service and Endpoints
$ kubectl get svc,endpoints my-svc
# No endpoints? Check selector matches
$ kubectl get pods --show-labels
# Test from inside the cluster
$ kubectl run test --rm -it --image=busybox \
-- wget -qO- http://my-svc:80
# Check kube-proxy logs
$ kubectl logs -n kube-system -l component=kube-proxy
Diagnose DNS Issues
# DNS resolution test
$ kubectl run dns-test --rm -it --image=busybox \
-- nslookup my-svc.default.svc.cluster.local
# Check CoreDNS is running
$ kubectl get pods -n kube-system -l k8s-app=kube-dns
# Check CoreDNS logs
$ kubectl logs -n kube-system -l k8s-app=kube-dns
# Check Pod's resolv.conf
$ kubectl exec my-pod -- cat /etc/resolv.conf
Diagnose Ingress & Policy Issues
Ingress returning 404? Verify the Service exists and the path/pathType are correct
Cross-namespace traffic blocked? The Network Policy probably only allows same-namespace traffic -- add a namespaceSelector to it
Best Practices
Networking Best Practices
Services & Routing
Use ClusterIP for internal services (do not over-expose)
Use Ingress instead of multiple LoadBalancers to save cost
Always set readiness probes -- Services only route to ready Pods
Use named ports for flexibility
Prefer DNS over environment variables for discovery
Security
Start with a deny-all Network Policy per namespace
Explicitly allow only needed traffic (zero-trust)
Always allow DNS egress to kube-system
Use TLS on all Ingress resources
On Azure, choose Calico for richer Network Policy features
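The deny-all starting point from the first Security bullet can be sketched as a single manifest, applied once per namespace:

```yaml
# Deny all ingress and egress for every Pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # selects every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress/egress rules listed -> nothing is allowed
```

From here, add explicit allow policies (including DNS egress to kube-system) until only the traffic you intend flows.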
Hands-On
Workshop: Build a Networking Stack
Exercise 1: Services
Deploy an nginx Pod with label app: web
Create a ClusterIP Service for it
Test DNS resolution from a busybox Pod
Change to NodePort and access from outside
Exercise 2: Ingress
Deploy two different apps (web + api)
Create an Ingress with path-based routing
Test that /api goes to one Service and / goes to another
Add TLS termination
Exercise 3: Network Policies
Create a deny-all Network Policy
Verify Pods can no longer communicate
Add an allow rule for specific labels
Verify only allowed traffic flows
Exercise 4: Troubleshooting
Break a Service by changing labels
Use kubectl get endpoints to diagnose
Fix the selector and verify
Test DNS resolution end-to-end
Recap
Module 5 Summary
Services
Stable VIP + DNS for Pods
ClusterIP (internal), NodePort, LoadBalancer
Label selectors bind Services to Pods
Headless for direct Pod access
Ingress
L7 routing (host + path)
One LB, many services
TLS termination
NGINX or AGIC on Azure
Network Policies
Default: all traffic allowed
Deny-all + explicit allow = zero trust
Remember DNS egress rules
Azure: choose NPM or Calico at creation
Key takeaway: Services give your Pods a stable identity. Ingress gives your cluster a smart front door. Network Policies lock the doors between apartments. Together, they form the complete networking story.
Preview
Coming Up Next
In Module 6: Storage & Volumes, we will tackle the next big challenge:
"Container dies. Data gone. That is the problem. Let's solve it..."
Volumes
emptyDir, hostPath, and the lifecycle of Pod storage
PV & PVC
Persistent Volumes, claims, and StorageClasses on Azure
Config & Secrets
ConfigMaps, Secrets, and the CSI driver architecture
Final Quiz
Module 5 -- Final Check
1. A Service has endpoints but Pods are not receiving traffic. What should you check first?
A) The Pod's CPU limits
B) The Pod's readiness probe -- it may be failing
C) The node's disk space
D) The cluster's etcd health
Services only route traffic to Pods that pass their readiness probe. If the probe is failing, the Pod is removed from the Endpoints list and does not receive traffic, even though it is running.
2. You want one public IP to route traffic to 5 different internal Services. What should you use?
A) 5 LoadBalancer Services
B) 5 NodePort Services
C) An Ingress resource with an Ingress Controller
D) A Headless Service with 5 ports
An Ingress resource combined with an Ingress Controller (like NGINX or AGIC) uses a single LoadBalancer/public IP and routes traffic to multiple backend Services based on host or path rules.
3. Which of these accurately describes how LoadBalancer type builds on other Service types?
A) LoadBalancer is independent; it does not create ClusterIP or NodePort
B) LoadBalancer creates a NodePort, which creates a ClusterIP
C) LoadBalancer only creates a ClusterIP, not a NodePort
D) LoadBalancer replaces both ClusterIP and NodePort
Service types are layered. LoadBalancer builds on NodePort, which builds on ClusterIP. A LoadBalancer Service will have all three: a ClusterIP, a NodePort on every node, and an external cloud load balancer.
End of Module
Thank You!
Module 5: Services & Networking -- Complete
Questions? Let's discuss before moving to Module 6.