Advanced · 25 min read

Docker for Node.js

Containerize your Node.js API with Docker — Dockerfiles, images, containers, and docker-compose for development.

What is Docker?

You have built a Node.js API on your laptop. It works perfectly — the right Node.js version is installed, all npm packages are available, environment variables are set, and the database is running. You push your code to GitHub and your teammate clones it. It does not work. They have a different Node.js version, a missing system dependency, a different OS, or a conflicting port. "It works on my machine" is one of the most frustrating phrases in software development.

Docker solves this problem by packaging your application and ALL of its dependencies into a container — a lightweight, isolated, portable environment that runs identically everywhere. Your laptop, your teammate's laptop, a CI server, a production server in the cloud — the container is the same. If it works in the container, it works everywhere.

A container is NOT a virtual machine (VM). VMs virtualize an entire operating system — each VM runs its own kernel, its own OS, its own file system. This means a VM can be gigabytes in size and take minutes to start. Containers, on the other hand, share the host operating system's kernel. They only package the application and its dependencies, not an entire OS. This makes containers extremely lightweight (megabytes, not gigabytes) and fast to start (milliseconds, not minutes).

The two core Docker concepts are images and containers. Think of it like object-oriented programming:

  • Docker Image = A blueprint (like a class). It defines what the environment looks like: which OS base, which packages are installed, which files are copied, which command runs on startup. Images are built from a Dockerfile and are immutable — once built, they never change.
  • Docker Container = A running instance (like an object). You create a container from an image, and it runs as an isolated process. You can create multiple containers from the same image, just like creating multiple objects from the same class.

Docker images are stored in registries — Docker Hub (the default public registry, like npm for containers), Amazon ECR, Google Artifact Registry, or GitHub Container Registry. You push your images to a registry and pull them on any machine you want to run them on. This is how deployment works: build the image in CI, push to a registry, pull and run on the production server.
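
As a sketch, that deployment flow maps to four commands (the image name and tag are illustrative):

```bash
# Build → push → pull → run; each step runs where it applies (laptop, CI, server)
docker build -t myuser/myapp:1.0 .     # build the image (in CI or locally)
docker push myuser/myapp:1.0           # upload to Docker Hub, the default registry
docker pull myuser/myapp:1.0           # on the production server
docker run -d -p 3000:3000 myuser/myapp:1.0
```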

Dockerfile for Node.js

A Dockerfile is a text file with step-by-step instructions for building a Docker image. Each instruction creates a layer in the image, and Docker caches layers intelligently — if a layer has not changed, Docker reuses the cached version instead of rebuilding it. This makes subsequent builds much faster.

Here are the key Dockerfile instructions for Node.js applications:

FROM node:20-alpine — The base image. Every Dockerfile starts with FROM. node:20-alpine is the official Node.js 20 image based on Alpine Linux. Alpine is a minimal Linux distribution (~5 MB) that produces much smaller images than the default Debian-based Node.js images (~350 MB vs ~1 GB). Always use Alpine unless you need specific system libraries that Alpine does not include.

WORKDIR /app — Sets the working directory inside the container. All subsequent commands run from this directory. Like cd /app, but it also creates the directory if it does not exist.

COPY package*.json ./ — Copies package.json and package-lock.json into the container. We copy these FIRST, before the application code, because of Docker's layer caching. The npm ci layer (next step) only rebuilds when package files change, not when your application code changes.

RUN npm ci — Installs dependencies. npm ci is preferred over npm install in Docker because it does a clean install from package-lock.json (faster, more reproducible). This is the most time-consuming step, so caching it by copying package files first saves significant build time.

COPY . . — Copies the rest of your application code into the container. This layer changes every time your code changes, but the npm ci layer above it is cached.

EXPOSE 3000 — Documents which port the application listens on. This does NOT actually publish the port — it is metadata. You still need -p 3000:3000 when running the container.

CMD ["node", "server.js"] — The command that runs when the container starts. Use the exec form (array syntax) instead of the shell form (CMD node server.js) so that Node.js receives OS signals (SIGTERM, SIGINT) correctly for graceful shutdown.

.dockerignore — Like .gitignore, but for Docker. Create a .dockerignore file to exclude files from the build context: node_modules, .git, .env, *.md, tests/, .vscode/. This speeds up builds and prevents accidentally including sensitive files in your image.

Production-Ready Dockerfile with Multi-Stage Build

dockerfile
# ── Stage 1: Build ────────────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app

# Copy package files first (layer caching optimization)
COPY package*.json ./

# Clean install dependencies
RUN npm ci

# Copy source code
COPY . .

# If using TypeScript: compile to JavaScript
# RUN npm run build

# Remove dev dependencies for a smaller production image
# (npm 8+: --omit=dev replaces the deprecated --production flag)
RUN npm prune --omit=dev


# ── Stage 2: Production ───────────────────────────────
FROM node:20-alpine
WORKDIR /app

# Copy only production artifacts from builder stage
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/server.js ./server.js
# COPY --from=builder /app/dist ./dist  # If using TypeScript

# Set production environment
ENV NODE_ENV=production

# Document the port
EXPOSE 3000

# Run as non-root user for security
# The 'node' user is built into the official Node.js image
USER node

# Start the application
CMD ["node", "server.js"]


# ── .dockerignore file (create separately) ────────────
# node_modules
# .git
# .gitignore
# .env
# .env.*
# Dockerfile
# docker-compose.yml
# *.md
# tests/
# coverage/
# .vscode/
# .DS_Store

Docker Commands

Once you have a Dockerfile, you use Docker CLI commands to build images, run containers, and manage the lifecycle of your application.

Building an image:

bash
docker build -t myapp:1.0 .

-t myapp:1.0 tags the image with a name and version. The . at the end is the build context — the directory whose contents Docker may copy into the image, and where it looks for the Dockerfile by default. Always tag your images with version numbers in production (not just latest).

Running a container:

bash
docker run -p 3000:3000 --name my-api myapp:1.0

-p 3000:3000 maps port 3000 on your host to port 3000 in the container. --name my-api gives the container a human-readable name. Add -d to run in the background (detached mode).

Environment variables:

bash
docker run -e PORT=3000 -e DATABASE_URL=mongodb://... myapp:1.0

Or use an env file: docker run --env-file .env myapp:1.0. Never bake secrets into your Docker image — pass them at runtime via environment variables.
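
On the application side, it helps to fail fast when a required variable was not passed in at runtime. A small sketch (the helper name is illustrative):

```javascript
// Illustrative startup check: crash immediately if a required environment
// variable is missing, instead of failing later inside a request handler.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```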

Volumes (persist data):

bash
docker run -v "$(pwd)/data:/app/data" myapp:1.0

Mounts the host's data directory into the container at /app/data (plain docker run needs an absolute host path, hence $(pwd)). Container file systems are ephemeral — when a container is removed, its data is gone. Volumes persist data across container restarts and removals. Essential for databases.
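
Besides bind mounts like the one above, Docker supports named volumes, which Docker manages itself and which survive docker system prune unless you pass --volumes. A sketch (the volume name is illustrative):

```bash
docker volume create app-data                 # create a Docker-managed volume
docker run -v app-data:/app/data myapp:1.0    # mount it by name instead of by path
docker volume ls                              # list all volumes
docker volume rm app-data                     # delete the volume and its data
```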

Listing and managing containers:

bash
docker ps                    # List running containers
docker ps -a                 # List ALL containers (including stopped)
docker stop my-api           # Stop a container gracefully
docker rm my-api             # Remove a stopped container
docker logs my-api           # View container logs
docker logs -f my-api        # Follow logs in real-time
docker exec -it my-api sh    # Open a shell inside a running container

Image management:

bash
docker images                # List all local images
docker rmi myapp:1.0         # Remove an image
docker system prune          # Remove stopped containers, unused networks, dangling images
docker system prune --volumes  # Also remove unused volumes (careful: deletes data)

Important tip: Docker images and containers can consume significant disk space over time. Run docker system prune regularly to clean up. In CI/CD pipelines, always clean up after yourself.

Docker Compose

A real backend application is rarely just one container. You have your Node.js API, a MongoDB database, a Redis cache, maybe an Nginx reverse proxy. Running each with separate docker run commands is tedious and error-prone. Docker Compose lets you define and run multi-container applications with a single YAML file.

docker-compose.yml describes your entire application stack: which services to run, how they connect, which ports to expose, which volumes to mount, and which environment variables to set. One command starts everything: docker-compose up. One command stops everything: docker-compose down.

Docker Compose creates an isolated network for your services. Containers can reach each other by their service name — your Node.js app connects to MongoDB at mongodb://mongo:27017 (where mongo is the service name), not localhost. This is service discovery built into Docker Compose.
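
On the Node.js side, this just means reading the Compose-provided connection string from the environment, with a localhost fallback for running outside Docker. A sketch (the helper name is illustrative):

```javascript
// Inside Docker Compose, DATABASE_URL points at the service name ("mongo"),
// which Compose's internal DNS resolves to the MongoDB container.
// Outside Compose, fall back to a local MongoDB instance.
function resolveDatabaseUrl(env) {
  return env.DATABASE_URL || 'mongodb://localhost:27017/myapp';
}
```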

Key docker-compose features for development:

  • Volumes for hot-reload: Mount your source code as a volume so changes on your host are immediately reflected in the container. Combined with nodemon, you get automatic server restarts on code changes.
  • depends_on: Specify that your API depends on MongoDB and Redis. Docker Compose starts dependencies first, and with condition: service_healthy it waits until they pass their health checks.
  • Health checks: Define health check commands so Docker Compose knows when a service is actually ready, not just started.
  • Environment files: Load environment variables from .env files automatically.
  • Profiles: Define profiles for different setups (dev, test, production) so you can selectively start services.
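
Profiles are the one feature in this list not shown in the full Compose file later in this section, so here is a sketch (the service choice is illustrative):

```yaml
services:
  mongo-express:                  # Web admin UI for MongoDB (illustrative choice)
    image: mongo-express
    ports:
      - '8081:8081'
    profiles:
      - tools                     # Started only with: docker-compose --profile tools up
```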

Docker Compose is perfect for local development — spin up your entire stack with one command, work on your code, tear it down when you are done. For production, you typically use Docker Compose for simple deployments on a single server, or graduate to Kubernetes for multi-server orchestration. Many teams start with Docker Compose in production and only move to Kubernetes when they need to scale across multiple servers.

Common workflow: docker-compose up -d (start in background), docker-compose logs -f api (follow API logs), docker-compose exec api sh (shell into API container), docker-compose down -v (stop and remove volumes).

Complete docker-compose.yml

yaml
# docker-compose.yml — Full-stack Node.js development environment
version: '3.8'                        # Optional: Compose v2 ignores this field

services:
  # ── Node.js API ──────────────────────────────────────
  api:
    build: .                          # Build from Dockerfile in current dir
    ports:
      - '3000:3000'                   # Map host:container ports
    environment:
      - NODE_ENV=development
      - PORT=3000
      - DATABASE_URL=mongodb://mongo:27017/myapp
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=dev-secret-change-in-production
    volumes:
      - .:/app                        # Mount source code for hot-reload
      - /app/node_modules             # Prevent overwriting container's node_modules
    depends_on:
      mongo:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: npx nodemon server.js    # Auto-restart on file changes
    restart: unless-stopped

  # ── MongoDB Database ─────────────────────────────────
  mongo:
    image: mongo:7                    # Official MongoDB 7 image
    ports:
      - '27017:27017'                 # Expose for local DB tools
    volumes:
      - mongo-data:/data/db           # Persist data across restarts
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongosh --quiet
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # ── Redis Cache ──────────────────────────────────────
  redis:
    image: redis:7-alpine             # Official Redis 7 on Alpine
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

# ── Named Volumes ────────────────────────────────────
volumes:
  mongo-data:                         # Persists MongoDB data
  redis-data:                         # Persists Redis data

# Usage:
#   docker-compose up -d        Start all services in background
#   docker-compose logs -f api  Follow API logs
#   docker-compose down         Stop all services
#   docker-compose down -v      Stop and delete volumes (reset data)

