Images and containers are the two core concepts in Docker. An image is a snapshot of an application and its environment. A container is a live, running instance of that image.
Image Layers
Docker images are built in layers. Each layer represents a change — installing a package, copying files, or setting a configuration. Layers are cached and shared between images, which saves disk space and speeds up builds.
┌─────────────────────────┐
│ Layer 4: COPY app.js │ (your code)
├─────────────────────────┤
│ Layer 3: RUN npm install│ (dependencies)
├─────────────────────────┤
│ Layer 2: RUN apk update │ (system packages)
├─────────────────────────┤
│ Layer 1: node:20-alpine │ (base image)
└─────────────────────────┘
When you change your application code, only Layer 4 is rebuilt. Layers 1-3 are pulled from cache.
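A Dockerfile that produces this stack might look like the sketch below (hypothetical; note that Alpine-based images use apk rather than apt for system packages, and in practice package.json is copied in before npm install so the dependency layer can be cached independently of your code):

```dockerfile
# Hypothetical Dockerfile matching the layer diagram above
FROM node:20-alpine        # Layer 1: base image
RUN apk update             # Layer 2: system packages (Alpine uses apk)
COPY package.json ./       # copied before npm install so that...
RUN npm install            # Layer 3: ...dependencies stay cached
COPY app.js ./             # Layer 4: your code, rebuilt most often
```

Ordering matters: instructions whose inputs rarely change go first, and the files that change on every commit go last, so a code change invalidates only the final layer.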
Listing and Inspecting Images
# List all local images
docker images
# Show image details
docker inspect nginx
# Show the layer history of an image
docker history nginx
# Show image size
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
Image Tags
Tags identify specific versions of an image. The format is repository:tag:
# Pull a specific version
docker pull node:20-alpine
# Pull the latest version (default if no tag specified)
docker pull node:latest
# Pull a specific digest (immutable)
docker pull node@sha256:abc123...
Common tag conventions:
| Tag | Meaning |
|---|---|
| latest | Most recent version (can change) |
| 20 | Major version |
| 20.11 | Minor version |
| 20.11.1 | Patch version |
| alpine | Built on Alpine Linux (smaller) |
| slim | Minimal Debian-based image |
| bookworm | Built on Debian Bookworm |
Always use a specific tag in production. The latest tag is a moving target.
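The default-tag rule can be made explicit with a small sketch (a hypothetical helper, not part of Docker; it ignores digest references and registry hosts with ports, which also contain : or @):

```shell
#!/bin/sh
# Hypothetical helper: mimic Docker's default-tag rule -- an image
# reference with no tag is treated as ":latest".
# (Simplified: ignores digests and registry hosts with ports.)
ref_with_tag() {
  case "$1" in
    *:*) printf '%s\n' "$1" ;;         # tag already present
    *)   printf '%s\n' "$1:latest" ;;  # no tag: default to latest
  esac
}

ref_with_tag node             # -> node:latest
ref_with_tag node:20-alpine   # -> node:20-alpine
```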
Container Lifecycle
A container goes through these states:
Created → Running → Paused → Running → Stopped → Removed
# Create a container without starting it
docker create --name my-app nginx
# Start the container
docker start my-app
# Pause the container (freeze processes)
docker pause my-app
# Unpause
docker unpause my-app
# Stop the container (sends SIGTERM, then SIGKILL after 10s)
docker stop my-app
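That grace period is the window your application has to shut down cleanly. A minimal sketch of a SIGTERM handler in plain POSIX sh (no Docker required; the script signals itself after one second to simulate docker stop):

```shell
#!/bin/sh
# Sketch: handle the SIGTERM that `docker stop` sends before the
# SIGKILL deadline. Self-signals after 1s to simulate the stop.
running=1
cleanup() {
  echo "cleanup: flushing state"
  running=0                    # let the main loop exit on its own
}
trap cleanup TERM

echo "app started"
( sleep 1; kill -TERM $$ ) &   # stand-in for `docker stop`
while [ "$running" -eq 1 ]; do
  sleep 1                      # the app's main loop
done
echo "stopped cleanly"
```

Processes that ignore SIGTERM (or run as PID 1 without a handler, since PID 1 gets no default signal dispositions) hit the SIGKILL deadline instead and lose any in-flight work.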
# Remove the container
docker rm my-app
# Force remove a running container
docker rm -f my-app
Running Containers
The docker run command combines create and start:
# Run in the foreground (attached)
docker run nginx
# Run in the background (detached)
docker run -d nginx
# Run and remove automatically when stopped
docker run --rm nginx
# Run with an interactive terminal
docker run -it ubuntu bash
# Run with a custom name
docker run -d --name web-server nginx
# Run with environment variables
docker run -d \
-e DATABASE_URL="postgres://localhost/mydb" \
-e NODE_ENV="production" \
my-app
# Run with port mapping
docker run -d -p 3000:3000 my-app
# Run with resource limits
docker run -d \
--memory="512m" \
--cpus="1.0" \
my-app
Port Mapping
Containers run in their own isolated network namespace. To reach a service inside a container from the host, map a host port to a container port:
# Map host port 8080 to container port 80
docker run -d -p 8080:80 nginx
# Map multiple ports
docker run -d -p 3000:3000 -p 9229:9229 node-app
# Map to a specific interface
docker run -d -p 127.0.0.1:8080:80 nginx
# Map to a random host port
docker run -d -p 80 nginx
docker port <container-id> # See the assigned port
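The order of the two numbers in -p is a common stumbling block. The hypothetical helper below just splits the HOST:CONTAINER shorthand to make the direction explicit (simplified: it ignores the IP-prefixed and single-port forms shown above):

```shell
#!/bin/sh
# Hypothetical parser for the -p HOST:CONTAINER shorthand:
# host port comes first, container port second.
parse_port_flag() {
  host_port=${1%%:*}        # text before the first ':'
  container_port=${1##*:}   # text after the last ':'
  echo "host $host_port -> container $container_port"
}

parse_port_flag 8080:80     # -> host 8080 -> container 80
```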
Environment Variables
Environment variables configure the application inside the container:
# Pass individual variables
docker run -d \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=secret \
-e POSTGRES_DB=myapp \
postgres:16
# Load from a file
docker run -d --env-file .env my-app
Example .env file:
DATABASE_URL=postgres://admin:secret@db:5432/myapp
REDIS_URL=redis://cache:6379
NODE_ENV=production
SECRET_KEY=my-secret-key-123
Container Inspection and Debugging
# View container details (IP address, mounts, config)
docker inspect my-app
# View running processes inside a container
docker top my-app
# View resource usage (CPU, memory, network)
docker stats
# View real-time stats for a specific container
docker stats my-app
# View logs
docker logs my-app
# Follow logs in real time
docker logs -f my-app
# Show only the last 100 lines
docker logs --tail 100 my-app
# Show logs with timestamps
docker logs -t my-app
Executing Commands in Running Containers
# Open a bash shell
docker exec -it my-app bash
# Open a sh shell (if bash is not available, e.g., Alpine)
docker exec -it my-app sh
# Run a single command
docker exec my-app cat /etc/hostname
# Run as root user
docker exec -u root my-app whoami
# Check environment variables
docker exec my-app env
Copying Files Between Host and Container
# Copy a file from host to container
docker cp config.json my-app:/app/config.json
# Copy a file from container to host
docker cp my-app:/var/log/app.log ./app.log
# Copy an entire directory
docker cp ./dist my-app:/usr/share/nginx/html/
Committing Container Changes
You can save a modified container as a new image (though Dockerfiles are preferred for reproducibility):
# Make changes inside a container
docker exec -it my-app bash
# ... install packages, modify files ...
# exit
# Save the container as a new image
docker commit my-app my-custom-image:v1
# Verify
docker images | grep my-custom-image
Image Management
# Tag an image
docker tag my-app:latest my-app:v1.0.0
# Remove an image
docker rmi nginx
# Remove all images not used by any container
docker image prune -a
# Save an image to a tar file
docker save -o my-app.tar my-app:v1.0.0
# Load an image from a tar file
docker load -i my-app.tar
# Export a container's filesystem
docker export my-app > my-app-fs.tar
Practical Example: Running a Development Database
A common use case is running a database for local development:
# Start a PostgreSQL container
docker run -d \
--name dev-db \
-e POSTGRES_USER=dev \
-e POSTGRES_PASSWORD=devpass \
-e POSTGRES_DB=myapp_dev \
-p 5432:5432 \
postgres:16
# Connect to it
docker exec -it dev-db psql -U dev -d myapp_dev
# Check if it is running
docker ps | grep dev-db
# View logs if something goes wrong
docker logs dev-db
# Stop and remove when done
docker stop dev-db
docker rm dev-db
Summary
You now understand how images are built from layers, how containers are created and managed, and how to interact with running containers through ports, environment variables, and shell access. In the next lesson, you will learn to build your own custom images with Dockerfiles.