Docker Done Right: Containers for Production

February 18, 2026

Docker has become the standard for containerizing applications, but writing efficient, secure, and maintainable Docker configurations takes practice. Here are the best practices every developer should follow.

Dockerfile Best Practices

Use Official Base Images

Always start from official, well-maintained images:

# Bad - unverified image
FROM random-user/node:latest

# Good - official image with specific version
FROM node:20-alpine

Pin Image Versions

Never use latest in production — it breaks reproducibility:

# Bad - unpredictable
FROM node:latest
FROM python:latest

# Good - pinned versions
FROM node:20.11-alpine3.19
FROM python:3.12-slim-bookworm
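
For stricter immutability you can pin by digest as well as tag, so the image cannot change even if the tag is re-pointed. The digest below is a placeholder; look up the real one with docker images --digests or from your registry:

```dockerfile
# Tag for readability, digest for immutability (digest shown is a placeholder)
FROM node:20.11-alpine3.19@sha256:<digest>
```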

Use Alpine or Slim Variants

Smaller base images mean faster builds, a smaller attack surface, and less storage. One caveat: Alpine uses musl libc, which can break packages with native dependencies; fall back to a slim variant if that bites:

# Full image: ~1GB
FROM node:20

# Slim image: ~200MB
FROM node:20-slim

# Alpine image: ~50MB
FROM node:20-alpine

Order Layers by Change Frequency

Docker caches each layer and reuses it until an earlier layer changes, so put the instructions that change least often first:

FROM node:20-alpine

# Rarely changes - cached
WORKDIR /app

# Changes when dependencies change
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile

# Changes most often - last
COPY . .
RUN pnpm build

CMD ["node", "dist/server.js"]

Use Multi-Stage Builds

Keep your final image lean by separating build and runtime stages:

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm build

# Stage 2: Production
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

EXPOSE 3000
CMD ["node", "dist/server.js"]

For Go or Rust, you can use scratch or distroless for even smaller images:

# Go multi-stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
CMD ["/server"]

Combine RUN Commands

Each RUN creates a layer. Combine related commands to reduce image size:

# Bad - 3 layers, apt cache stored in layer 1
RUN apt-get update
RUN apt-get install -y curl git
RUN rm -rf /var/lib/apt/lists/*

# Good - 1 layer, clean in same layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl git && \
    rm -rf /var/lib/apt/lists/*
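
If the chain of && gets unwieldy, BuildKit heredocs keep everything in one layer while staying readable (requires BuildKit, the default builder in recent Docker):

```dockerfile
# Still one layer; set -e aborts the build on the first failing command
RUN <<EOF
set -e
apt-get update
apt-get install -y --no-install-recommends curl git
rm -rf /var/lib/apt/lists/*
EOF
```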

Use .dockerignore

Exclude unnecessary files from the build context:

# .dockerignore
node_modules
.git
.gitignore
.env
.env.*
*.md
.next
dist
coverage
.vscode
.idea
docker-compose*.yml
Dockerfile*

Security

Don't Run as Root

FROM node:20-alpine

WORKDIR /app
COPY --chown=node:node . .
RUN corepack enable && pnpm install --frozen-lockfile

# Switch to non-root user
USER node

CMD ["node", "server.js"]
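
The node images ship a built-in node user; on debian-slim or other bases without one, create it yourself (the app user and group names here are illustrative):

```dockerfile
# Create an unprivileged system user and drop to it
RUN groupadd --system app && useradd --system --gid app --create-home app
USER app
```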

Don't Store Secrets in Images

# Bad - secret baked into image
ENV API_KEY=sk-secret-key-123
COPY .env .

# Good - pass at runtime
# docker run -e API_KEY=sk-secret-key-123 myapp
# or use Docker secrets
CMD ["node", "server.js"]

For build-time secrets, use --mount=type=secret:

RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm install

# Then build with:
docker build --secret id=npmrc,src=.npmrc .

Scan Images for Vulnerabilities

# Docker Scout (included with Docker Desktop)
docker scout cves myapp:latest

# Or use Trivy
trivy image myapp:latest

Use Read-Only File Systems

# docker-compose.yml
services:
  app:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
      - /var/run

Docker Compose

Use Compose for Local Development

# docker-compose.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules  # Prevent overwriting node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

Separate Dev and Prod Compose Files

# docker-compose.yml (base)
services:
  app:
    image: myapp:latest
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp

# docker-compose.dev.yml (dev overrides)
services:
  app:
    build: .
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development

# docker-compose.prod.yml (prod overrides)
services:
  app:
    restart: always
    environment:
      - NODE_ENV=production

# Development
docker compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Use Health Checks

HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

# Or in docker-compose.yml
services:
  app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
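
Note that curl is missing from many slim, alpine, and distroless images. A dependency-free alternative uses Node's built-in fetch (Node 18+), assuming the same /health endpoint:

```dockerfile
# No curl needed - exits 0 on HTTP 2xx, 1 otherwise
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD node -e "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"
```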

Image Optimization

Check Image Size

# See image sizes
docker images

# See layer breakdown
docker history myapp:latest

# Detailed size analysis with dive
dive myapp:latest

Use COPY Over ADD

ADD has extra features (URL downloading, tar extraction) you rarely need:

# Bad - unnecessary magic
ADD . /app
ADD https://example.com/file.tar.gz /tmp/

# Good - explicit and predictable
COPY . /app
RUN curl -L https://example.com/file.tar.gz | tar xz -C /tmp/

Set Proper Labels

LABEL org.opencontainers.image.title="My App"
LABEL org.opencontainers.image.description="Production web server"
LABEL org.opencontainers.image.version="1.0.0"
LABEL org.opencontainers.image.source="https://github.com/user/repo"
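
Values like the git revision are usually injected at build time; a sketch using a build arg (GIT_SHA is an illustrative name):

```dockerfile
# Passed in with: docker build --build-arg GIT_SHA=$(git rev-parse HEAD) .
ARG GIT_SHA=unknown
LABEL org.opencontainers.image.revision=$GIT_SHA
```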

Networking

Use Custom Networks

services:
  app:
    networks:
      - frontend
      - backend

  db:
    networks:
      - backend  # Not accessible from frontend

  nginx:
    networks:
      - frontend

networks:
  frontend:
  backend:

Don't Expose Unnecessary Ports

# Bad - publishes the port on the host
services:
  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"

# Good - only reachable within the Docker network
# (expose is mostly documentation; same-network containers can connect anyway)
services:
  db:
    image: postgres:16-alpine
    expose:
      - "5432"
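
When you do need host access, say for a local database GUI, binding to the loopback interface keeps the port off external interfaces:

```yaml
services:
  db:
    image: postgres:16-alpine
    ports:
      - "127.0.0.1:5432:5432"  # Reachable from this host only
```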

Logging

Log to stdout/stderr

# Workaround - if a tool insists on writing to a log file, symlink it to stdout
RUN ln -sf /dev/stdout /var/log/app.log

# Good - app logs to stdout directly
CMD ["node", "server.js"]

// In your app, just use console
console.log('Server started on port 3000');
console.error('Connection failed:', error);
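
Plain console works, but emitting one JSON object per line makes logs much easier for drivers and aggregators to parse. A minimal sketch (field names are illustrative):

```javascript
// Build one JSON log line; Docker's log driver collects it from stdout
function logLine(level, msg, extra = {}) {
  return JSON.stringify({ ts: new Date().toISOString(), level, msg, ...extra });
}

function log(level, msg, extra) {
  process.stdout.write(logLine(level, msg, extra) + '\n');
}

log('info', 'Server started', { port: 3000 });
log('error', 'Connection failed', { code: 'ECONNREFUSED' });
```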

Configure Log Drivers

services:
  app:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Production Tips

Use Restart Policies

services:
  app:
    restart: unless-stopped  # Restart unless manually stopped

  worker:
    restart: on-failure      # Only restart on failure
    deploy:                  # restart_policy is honored in Swarm mode
      restart_policy:
        condition: on-failure
        max_attempts: 3

Set Resource Limits

services:
  app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M

Handle Signals for Graceful Shutdown

# Use exec form for CMD so signals are forwarded
CMD ["node", "server.js"]

# Bad - shell form wraps in /bin/sh, signals not forwarded
CMD node server.js

// Handle graceful shutdown in your app
process.on('SIGTERM', async () => {
  console.log('SIGTERM received, shutting down gracefully');
  await server.close();
  await db.disconnect();
  process.exit(0);
});

Use BuildKit

# BuildKit is the default builder since Docker Engine 23.0;
# on older versions, enable it explicitly for faster builds and better caching
DOCKER_BUILDKIT=1 docker build -t myapp .

# Or set globally in /etc/docker/daemon.json
{
  "features": { "buildkit": true }
}
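
BuildKit also enables cache mounts, which persist a package manager's download cache across builds without baking it into a layer (/root/.npm is npm's default cache path):

```dockerfile
# Cache survives between builds but is not stored in the image
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```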

Quick Reference

| Practice | Why |
| --- | --- |
| Pin image versions | Reproducible builds |
| Use Alpine/slim variants | Smaller images, less attack surface |
| Multi-stage builds | Lean production images |
| Order layers by change frequency | Better cache utilization |
| .dockerignore | Faster builds, no sensitive files leaked |
| Run as non-root | Security: principle of least privilege |
| No secrets in images | Prevent credential leaks |
| Health checks | Automatic recovery from failures |
| Custom networks | Isolate services |
| Log to stdout | Works with Docker log drivers |
| Resource limits | Prevent runaway containers |
| Graceful shutdown | No dropped requests during deploys |

Summary

Docker best practices boil down to:

  1. Keep images small — Alpine, multi-stage builds, .dockerignore
  2. Be secure — non-root user, no secrets in images, scan for vulnerabilities
  3. Be reproducible — pin versions, use lockfiles, frozen installs
  4. Be production-ready — health checks, resource limits, graceful shutdown, restart policies
  5. Layer smartly — order by change frequency, combine RUN commands