Kubernetes Essentials·Lesson 2 of 5

Pods & Deployments

Pods and Deployments are the two most important Kubernetes resources. A Pod runs your container. A Deployment manages your pods — handling replication, updates, and self-healing.

What Is a Pod?

A Pod is the smallest deployable unit in Kubernetes. It wraps one or more containers that:

  • Share the same network namespace (same IP address and ports)
  • Share the same storage volumes
  • Are scheduled together on the same node

Most pods run a single container. Multi-container pods are used for sidecars (logging agents, proxies, etc.).

Creating a Pod

Imperative:

kubectl run my-nginx --image=nginx:1.25 --port=80

Declarative (recommended):

Create pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: nginx
    environment: dev
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "128Mi"
          cpu: "250m"

kubectl apply -f pod.yaml

Pod Lifecycle

A pod goes through these phases:

Phase        Description
Pending      Pod accepted; waiting to be scheduled or for images to pull
Running      At least one container is running
Succeeded    All containers exited successfully (exit code 0)
Failed       All containers terminated, and at least one exited with an error
Unknown      Pod state cannot be determined (typically the node is unreachable)

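
Which terminal phase a pod can reach depends on its restartPolicy: with the default Always, exited containers are simply restarted and the pod stays Running, while Never or OnFailure lets it end in Succeeded or Failed. A minimal sketch (the busybox image and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: Never   # allow the pod to reach Succeeded/Failed instead of restarting
  containers:
    - name: task
      image: busybox:1.36
      command: ["sh", "-c", "echo done"]   # exits 0, so the pod phase becomes Succeeded
```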
# Check pod status
kubectl get pods

# Watch pods in real time
kubectl get pods -w

# Get detailed pod information
kubectl describe pod my-nginx

# View pod logs
kubectl logs my-nginx

# Follow logs
kubectl logs -f my-nginx

# View logs from a previous container instance (after crash)
kubectl logs my-nginx --previous

Multi-Container Pods

A pod with a sidecar container for logging:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
    - name: app
      image: my-app:1.0
      ports:
        - containerPort: 3000
      volumeMounts:
        - name: logs
          mountPath: /var/log/app

    - name: log-shipper
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true

  volumes:
    - name: logs
      emptyDir: {}

Both containers share the logs volume. The app writes logs, and the sidecar ships them to a logging service.
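
Since Kubernetes 1.28 (beta in 1.29), sidecars can also be declared as init containers with restartPolicy: Always, which guarantees the sidecar starts before the app container and keeps running alongside it. A sketch of the same logging pod in that style:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-native-sidecar
spec:
  initContainers:
    - name: log-shipper
      image: fluent/fluent-bit:latest
      restartPolicy: Always      # marks this init container as a sidecar
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  containers:
    - name: app
      image: my-app:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```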

What Is a Deployment?

A Deployment manages a set of identical pods (called replicas). It ensures the desired number of pods are always running and handles updates gracefully.

You should almost never create pods directly. A bare pod is not recreated if its node fails; a Deployment replaces it automatically.

Creating a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: my-web-app:1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: PORT
              value: "3000"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"

kubectl apply -f deployment.yaml

Deployment Management

# View deployments
kubectl get deployments

# View the pods managed by a deployment
kubectl get pods -l app=web-app

# Scale a deployment
kubectl scale deployment web-app --replicas=5

# View deployment details
kubectl describe deployment web-app

# View rollout status
kubectl rollout status deployment web-app

# View rollout history
kubectl rollout history deployment web-app

Rolling Updates

When you update a deployment's image, Kubernetes performs a rolling update — gradually replacing old pods with new ones to avoid downtime:

# Update the image
kubectl set image deployment/web-app web-app=my-web-app:2.0.0

# Watch the rollout
kubectl rollout status deployment web-app

Or update the YAML and reapply:

spec:
  template:
    spec:
      containers:
        - name: web-app
          image: my-web-app:2.0.0  # Updated version

kubectl apply -f deployment.yaml

Update Strategy

Control how updates are performed:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Max pods above the desired count during an update
      maxUnavailable: 0   # Max pods that can be unavailable during an update

Setting                               Effect
maxSurge: 1, maxUnavailable: 0        Zero downtime — always have full capacity
maxSurge: 0, maxUnavailable: 1        No extra resources — replace one at a time
maxSurge: 25%, maxUnavailable: 25%    Balanced — update in batches
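
When the old and new versions cannot run side by side (for example, they would conflict over a database schema or an exclusive lock), the alternative strategy terminates all old pods before starting any new ones, at the cost of downtime:

```yaml
spec:
  strategy:
    type: Recreate   # all old pods are killed before any new pod starts
```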

Rollbacks

If an update goes wrong, roll back to the previous version:

# Roll back to the previous revision
kubectl rollout undo deployment web-app

# Roll back to a specific revision
kubectl rollout history deployment web-app
kubectl rollout undo deployment web-app --to-revision=2
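
By default, the CHANGE-CAUSE column in rollout history shows <none>, which makes revisions hard to tell apart when choosing a rollback target. One way to record a cause is the kubernetes.io/change-cause annotation on the Deployment:

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: "Update web-app to 2.0.0"
```

Update the annotation alongside each image change so every revision in the history carries a human-readable description.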

Health Checks (Probes)

Probes tell Kubernetes whether your containers are healthy:

spec:
  containers:
    - name: web-app
      image: my-web-app:1.0.0
      ports:
        - containerPort: 3000

      # Is the container alive? Restart if not.
      livenessProbe:
        httpGet:
          path: /health
          port: 3000
        initialDelaySeconds: 10
        periodSeconds: 15
        failureThreshold: 3

      # Is the container ready to receive traffic?
      readinessProbe:
        httpGet:
          path: /ready
          port: 3000
        initialDelaySeconds: 5
        periodSeconds: 10

      # Has the container started? (for slow-starting apps)
      startupProbe:
        httpGet:
          path: /health
          port: 3000
        failureThreshold: 30
        periodSeconds: 10

Probe        Purpose                      Failure Action
Liveness     Is the container alive?      Restart the container
Readiness    Can it serve traffic?        Remove the pod from Service endpoints
Startup      Has it finished starting?    Kill and restart once the threshold is exceeded
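
HTTP is not the only probe mechanism — Kubernetes can also run a command inside the container (exec) or check that a TCP port accepts connections (tcpSocket). The file path below is illustrative:

```yaml
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # healthy as long as the file exists
  periodSeconds: 15

readinessProbe:
  tcpSocket:
    port: 3000                         # ready once the port accepts connections
  periodSeconds: 10
```

exec probes suit apps without an HTTP endpoint; tcpSocket probes suit plain TCP services such as databases.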

Resource Requests and Limits

Always define resource boundaries:

resources:
  requests:
    memory: "128Mi"   # Minimum guaranteed memory
    cpu: "100m"        # Minimum guaranteed CPU (100 millicores = 0.1 core)
  limits:
    memory: "256Mi"    # Maximum allowed memory
    cpu: "500m"        # Maximum allowed CPU

  • Requests — the scheduler uses these to place pods on nodes with enough free resources
  • Limits — the hard cap. A container that exceeds its memory limit is killed (OOMKilled); one that exceeds its CPU limit is throttled, not killed
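
The relationship between requests and limits also determines the pod's Quality of Service class, which decides eviction order under node memory pressure. Setting requests equal to limits for every container yields the Guaranteed class, evicted last:

```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "500m"
  limits:
    memory: "256Mi"   # equal to the request -> Guaranteed QoS
    cpu: "500m"
```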

Labels and Selectors

Labels are key-value pairs attached to resources. Selectors filter resources by labels:

metadata:
  labels:
    app: web-app
    version: "2.0"
    environment: production
    team: backend

# Filter pods by label
kubectl get pods -l app=web-app
kubectl get pods -l environment=production
kubectl get pods -l "app=web-app,version=2.0"

# Show labels on pods
kubectl get pods --show-labels
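
Beyond simple equality matching, selectors in apps/v1 resources such as Deployments also support set-based expressions via matchExpressions, which can express "in", "not in", and "exists" filters that matchLabels cannot (the label values here are illustrative):

```yaml
selector:
  matchExpressions:
    - key: environment
      operator: In
      values: ["production", "staging"]
    - key: team
      operator: Exists   # matches pods that have a team label, regardless of value
```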

Namespaces

Organize resources into logical groups:

# List namespaces
kubectl get namespaces

# Create a namespace
kubectl create namespace staging

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n staging

# View pods in a namespace
kubectl get pods -n staging

# Set a default namespace for your context
kubectl config set-context --current --namespace=staging
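
Namespaces can also be created declaratively, which keeps them in version control alongside the workloads they contain:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
```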

Summary

Pods run containers and Deployments manage pods. You learned how to create both, perform rolling updates with zero downtime, roll back failed releases, configure health checks for self-healing, set resource limits, and organize workloads with labels and namespaces. In the next lesson, you will learn about Services and networking.