Deploy your Node.js API to production — environment setup, process managers, reverse proxies, and cloud platforms.
Development and production are fundamentally different environments with different priorities. In development, you optimize for speed of iteration — hot-reload, verbose logging, detailed error messages, no security restrictions. In production, you optimize for reliability, security, and performance — every decision has real consequences for real users.
Here is a comprehensive production readiness checklist for your Node.js API:
Environment Configuration:
- Set NODE_ENV=production — this is not just a convention. Express disables verbose error pages, template engines enable caching, and many libraries optimize their behavior in production mode. Performance can improve by 3-10x just by setting this variable.
- Never commit your .env file to git. In production, set environment variables through your hosting provider's dashboard, Docker environment, or a secrets manager (AWS Secrets Manager, HashiCorp Vault).
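One useful habit with environment variables: validate that everything required is present at boot, and crash immediately if not. A minimal sketch in plain Node.js; the variable names below (DATABASE_URL, JWT_SECRET) are placeholders for whatever your app actually needs:

```javascript
// Fail fast at boot if configuration is incomplete. The variable names
// below (DATABASE_URL, JWT_SECRET) are placeholders for your app's needs.
const REQUIRED_VARS = ['NODE_ENV', 'DATABASE_URL', 'JWT_SECRET'];

function validateEnv(env) {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Crashing at startup beats failing mysteriously on the first request.
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return { isProduction: env.NODE_ENV === 'production' };
}

// At startup, before listening: const config = validateEnv(process.env);
```

Crashing at boot with a clear message is far easier to debug than a runtime failure minutes later.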
Security:
- Use the helmet middleware — it prevents XSS, clickjacking, MIME sniffing, and other common attacks.
- Lock down CORS — never use origin: '*' in production. Whitelist only your frontend domains.
- Add rate limiting with express-rate-limit — prevent abuse, brute-force attacks, and DDoS.
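To make the CORS point concrete, here is a sketch of an origin whitelist check in plain JavaScript; the cors package can delegate to a check like this through the callback form of its origin option. The domains are placeholders:

```javascript
// An explicit whitelist instead of origin: '*'. Domains are placeholders.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://admin.example.com',
]);

function isOriginAllowed(origin) {
  // Anything not explicitly listed is rejected, including '*' itself.
  return ALLOWED_ORIGINS.has(origin);
}
```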
Logging & Monitoring:
- Replace console.log with structured JSON logging (Winston, Pino). Structured logs can be parsed by log aggregation services (Datadog, LogDNA, Papertrail) for searching, filtering, and alerting.
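The difference is easy to see in miniature. This is not a replacement for Winston or Pino, just the core idea they implement: emit one JSON object per log line so machines can parse it:

```javascript
// One JSON object per line: the core idea behind Winston/Pino output.
function logLine(level, message, fields = {}) {
  return JSON.stringify({
    level,
    message,
    time: new Date().toISOString(),
    ...fields, // arbitrary searchable context: userId, route, latency...
  });
}

console.log(logLine('info', 'request completed', { route: '/api/users', status: 200 }));
```

A log aggregator can now filter on fields like route or status instead of grepping free-form strings.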
Database:

Build Step:
- Compile your TypeScript ahead of time (tsc or tsup) before deploying. Never run ts-node in production — the runtime compilation overhead is unnecessary.

Node.js runs as a single process. If that process crashes (uncaught exception, out-of-memory error, unhandled promise rejection), your entire API goes down. Every connected user gets an error. Every in-flight request is lost. And the process stays down until someone manually restarts it — or until a process manager does it automatically.
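A process manager should still do the restarting, but registering last-resort handlers lets you log the failure and exit cleanly instead of dying silently. A sketch; the logger and exit policy are up to you:

```javascript
// Last-resort handlers: log, then exit so the process manager can start
// a fresh process. Continuing after an uncaught exception risks running
// in a corrupted state, so exiting is the safer default.
function registerCrashHandlers(logger = console) {
  process.on('uncaughtException', (err) => {
    logger.error('uncaught exception, shutting down', err);
    process.exit(1);
  });
  process.on('unhandledRejection', (reason) => {
    logger.error('unhandled promise rejection, shutting down', reason);
    process.exit(1);
  });
}

// registerCrashHandlers() would be called once, early in server startup.
```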
PM2 (Process Manager 2) is the most popular process manager for Node.js in production. It keeps your app alive, manages multiple instances, and provides monitoring and logging. Install it globally: npm install -g pm2.
Auto-restart on crash: PM2 watches your Node.js process. If it crashes for any reason, PM2 automatically restarts it — typically within a second. Your users might experience a brief blip, but the API comes back without human intervention.
pm2 start server.js # Start with PM2
pm2 start server.js -i max # Cluster mode: one instance per CPU core

Cluster mode is one of PM2's most powerful features. Node.js is single-threaded — a single process can only use one CPU core. If your server has 4 cores, 75% of your CPU is idle. Cluster mode starts one instance of your app per CPU core and load-balances incoming requests across all instances. This multiplies your throughput roughly in proportion to the number of cores.
When using cluster mode, be aware that in-memory data (caches, sessions stored in variables) is NOT shared between instances. Each instance has its own memory space. Use Redis for any data that needs to be shared across instances.
Essential PM2 commands:
pm2 status # Dashboard showing all apps, CPU, memory
pm2 logs # View logs from all instances
pm2 logs --lines 100 # Last 100 lines
pm2 restart all # Restart all apps
pm2 reload all # Zero-downtime restart (cluster mode)
pm2 stop all # Stop all apps
pm2 delete all # Remove all apps from PM2
pm2 monit # Real-time CPU/memory monitor

Ecosystem file — For complex configurations, use ecosystem.config.js:
module.exports = {
  apps: [{
    name: 'my-api',
    script: 'server.js',
    instances: 'max',
    exec_mode: 'cluster',
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000,
    },
    max_memory_restart: '500M', // Restart if memory exceeds 500MB
    log_date_format: 'YYYY-MM-DD HH:mm:ss',
  }]
};

Start with: pm2 start ecosystem.config.js --env production
PM2 startup: Run pm2 startup to configure PM2 to start automatically when the server boots. Then pm2 save to save the current process list. If the server reboots (after a kernel update, power outage, etc.), PM2 starts automatically and restarts all your apps.
You should never expose your Node.js process directly to the internet. Node.js is designed to handle application logic, not to be a front-line web server. Instead, put Nginx (pronounced "engine-X") in front of your Node.js app as a reverse proxy — it receives all incoming requests from the internet and forwards them to your Node.js process.
Why use Nginx as a reverse proxy?
SSL/TLS Termination (HTTPS): Nginx handles the SSL handshake and encryption/decryption. Your Node.js app receives plain HTTP traffic from Nginx on a local network connection. This offloads the computationally expensive TLS work from Node.js and simplifies certificate management. Nginx integrates with Let's Encrypt (via Certbot) for free, auto-renewing SSL certificates.
Load Balancing: If you run multiple Node.js instances (via PM2 cluster mode or separate servers), Nginx distributes incoming requests across them. Algorithms include round-robin (default), least connections, and IP hash (sticky sessions).
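Round-robin is simple enough to sketch in a few lines. This is the selection logic only, not a working proxy; the backend addresses are placeholders:

```javascript
// Selection logic only: cycle through upstreams, wrapping after the last.
function makeRoundRobin(upstreams) {
  let next = 0;
  return function pick() {
    const target = upstreams[next];
    next = (next + 1) % upstreams.length;
    return target;
  };
}

const pick = makeRoundRobin(['127.0.0.1:3000', '127.0.0.1:3001', '127.0.0.1:3002']);
// Successive pick() calls return 3000, 3001, 3002, then 3000 again.
```

Least-connections and IP-hash differ only in the selection rule: pick the backend with the fewest open connections, or hash the client IP so the same client always lands on the same backend.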
Static File Serving: Nginx serves static files (images, CSS, JavaScript bundles) significantly faster than Node.js. Nginx is written in C and optimized for file serving — it uses zero-copy mechanisms (sendfile) that bypass the application entirely. Let Nginx serve your static files and forward only API requests to Node.js.
Request Buffering: Nginx buffers incoming requests before forwarding them to Node.js. This protects your app from slow clients — a client on a slow connection uploads a file byte by byte, but Nginx buffers the entire request and sends it to Node.js all at once. Without this, Node.js would hold the connection open for the entire slow upload, tying up a socket and memory for each slow client.
Gzip Compression: Nginx compresses responses (JSON, HTML, CSS, JS) before sending them to clients, typically shrinking text payloads by 60-80%. While Node.js can do this too (via the compression middleware), Nginx does it more efficiently.
Rate Limiting and Security: Nginx can enforce rate limits, block suspicious IPs, and handle basic DDoS protection at the infrastructure level before requests ever reach your application.
Basic Nginx configuration for a Node.js reverse proxy:
server {
  listen 80;
  server_name myapp.com;

  location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_cache_bypass $http_upgrade;
  }
}

The proxy_set_header lines are important — they pass the original client information (IP address, host) to Node.js, which would otherwise only see Nginx's local IP. The Upgrade and Connection headers enable WebSocket support through the proxy.
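On the Node.js side, the client address then has to be read from those headers rather than from the socket. A sketch of that lookup (Express users can instead set app.set('trust proxy', 1) and read req.ip); clientIpFromHeaders is a name invented here:

```javascript
// clientIpFromHeaders is a name invented for this sketch.
function clientIpFromHeaders(headers, socketAddress) {
  // Set by `proxy_set_header X-Real-IP $remote_addr;` in the Nginx config.
  if (headers['x-real-ip']) return headers['x-real-ip'];
  // If your proxy sets X-Forwarded-For, it may contain a chain:
  // "client, proxy1, proxy2" -- the left-most entry is the original client.
  const xff = headers['x-forwarded-for'];
  if (xff) return xff.split(',')[0].trim();
  return socketAddress; // no proxy involved: the socket address is the client
}
```

Only trust these headers when requests can reach Node.js exclusively through your proxy; a client talking to Node.js directly can forge them.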
There are four main categories of cloud deployment, each with different trade-offs between ease-of-use, control, cost, and scalability.
PaaS (Platform as a Service) — Easiest

Platforms like Railway, Render, Fly.io, and the classic Heroku handle everything: you push your code (or a Docker image), and they build, deploy, scale, and manage your app. They provide managed databases, automatic SSL, environment variable management, and CI/CD integration. The deployment experience is magical: connect your GitHub repo, push to main, and your app is live in minutes.
Pros: fastest path to production, zero infrastructure management, built-in CI/CD. Cons: limited customization, can get expensive at scale, vendor lock-in. Best for: startups, MVPs, small-to-medium projects, solo developers.
Railway and Render offer generous free tiers and are the current favorites in the Node.js community. Railway is particularly developer-friendly with instant deployments and a clean UI.
VPS (Virtual Private Server) — More Control

Services like DigitalOcean, Linode, Hetzner, and Vultr give you a Linux server that you manage yourself. You SSH in, install Node.js, Nginx, PM2, set up firewall rules, SSL certificates, and deploy your code. You have full control over the entire stack.
Pros: cheaper at scale, full customization, no vendor lock-in. Cons: you manage everything (security patches, updates, monitoring), requires Linux/DevOps knowledge. Best for: experienced developers who want full control and cost optimization.
A typical VPS deployment: provision a server, install Node.js + PM2 + Nginx, set up a firewall (ufw), configure SSL with Certbot/Let's Encrypt, clone your repo, install dependencies, start with PM2, configure Nginx as a reverse proxy.
Serverless (Event-Driven)

AWS Lambda, Vercel serverless functions, Cloudflare Workers, and Google Cloud Functions run your code in response to individual requests. There is no server to manage — your function spins up when a request arrives and shuts down when it is done.
Pros: auto-scales to zero (no cost when idle), auto-scales to infinity (handles traffic spikes), no server management. Cons: cold starts (first request can be slow), limited execution time (typically 10-30 seconds), stateless (no persistent connections), can get expensive at high throughput. Best for: APIs with variable traffic, webhooks, background tasks.
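The programming model looks like this, shown as an AWS Lambda-style handler for an API Gateway proxy event. The event and response shapes are the platform's standard ones; the body logic is invented for illustration:

```javascript
// An AWS Lambda-style handler for an API Gateway proxy event. Each
// invocation is stateless; nothing in local scope survives between calls.
async function handler(event) {
  const name = (event.queryStringParameters || {}).name || 'world';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
}

// On Lambda you would export it: module.exports = { handler };
```

Note what is missing compared to an Express app: no listen(), no process manager, no Nginx. The platform handles routing, scaling, and process lifecycle.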
Containers (Docker-Based)

AWS ECS/Fargate, Google Cloud Run, Azure Container Apps, and Fly.io run your Docker containers. You build an image, push it to a container registry, and the platform runs and scales it.
Pros: portable (same Docker image runs anywhere), auto-scaling, good balance of control and convenience. Cons: requires Docker knowledge, more complex than PaaS. Best for: teams with Docker experience, microservice architectures.
CI/CD stands for Continuous Integration / Continuous Deployment — the practice of automatically testing, building, and deploying your code every time you push a change. Instead of manually running tests, building a Docker image, and deploying to a server, a CI/CD pipeline automates the entire process. Push to main, and your code is live in minutes.
Continuous Integration (CI) is the first half: every push triggers an automated pipeline that runs your linter (ESLint), type checker (TypeScript), unit tests, and integration tests. If any step fails, the pipeline fails and blocks the deployment. This catches bugs before they reach production.
Continuous Deployment (CD) is the second half: if all CI checks pass, the pipeline automatically deploys the new code to production. Some teams use Continuous Delivery instead — the pipeline prepares the deployment but requires a manual approval click before going live.
The most popular CI/CD tool for open-source and small teams is GitHub Actions — it is built into GitHub, free for public repos, and has generous free minutes for private repos. Other options include GitLab CI (built into GitLab), CircleCI, and Jenkins (self-hosted).
A typical GitHub Actions workflow for a Node.js API:
name: CI/CD Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm run lint
      - run: npm test
  deploy:
    needs: test # Only runs if test job passes
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp .
      - run: docker push registry.example.com/myapp
      # Platform-specific deploy step

The golden rule of CI/CD: if tests fail, the deployment is blocked. No exceptions. This is your safety net against deploying broken code. Write tests for your critical paths (auth flow, payment flow, core business logic), and trust the pipeline to catch regressions.
Best practices:
- Cache node_modules to speed up npm ci.

Why should you use a reverse proxy like Nginx in front of Node.js?