Deploy your full-stack application to production with AWS, Docker, and automated CI/CD pipelines
Deployment is the process of making your application available to users on the internet. Modern cloud platforms provide scalable infrastructure, automated deployments, and monitoring tools.
Deploy Next.js frontend with zero configuration
Host backend, database, and file storage
Containerize applications for consistent environments
Deploy your Task Manager to production with automated CI/CD, environment management, monitoring, and scaling capabilities.
Docker is a platform that packages your application and all its dependencies into containers. Containers ensure your app runs the same way everywhere - on your laptop, your teammate's computer, and in production.
Think of it like this: A container is like a shipping container for your code. Just as shipping containers can be moved between ships, trucks, and trains without unpacking, Docker containers can run on any system without modification.
Download and install Docker Desktop from docker.com
# Verify installation
docker --version
docker-compose --version
In your backend project root, create Dockerfile:
# Use official Node.js image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Generate Prisma Client
RUN npx prisma generate
# Expose port
EXPOSE 3001
# Start application
CMD ["npm", "start"]
Create a .dockerignore file to exclude unnecessary files from the Docker image:
node_modules
npm-debug.log
.env
.git
.gitignore
README.md
.DS_Store
# Build image
docker build -t task-manager-backend .
# Run container
docker run -p 3001:3001 --env-file .env task-manager-backend
# View running containers
docker ps
# Stop container
docker stop <container-id>
docker build - Creates an image from Dockerfile
docker run - Starts a container from an image
-p 3001:3001 - Maps port 3001 from container to host
--env-file - Loads environment variables from file
docker ps - Lists running containers
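Two more commands are worth knowing when a container misbehaves. A quick sketch; as with `docker stop` above, `<container-id>` comes from the output of `docker ps`:

```shell
# Tail the container's stdout/stderr (your app's console output)
docker logs -f <container-id>

# Open an interactive shell inside the running container
docker exec -it <container-id> sh
```

`docker logs` is usually the fastest way to see why a container exited immediately after starting.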
Docker Compose lets you define and run multi-container applications. Instead of running separate commands for your backend, database, and other services, you define everything in one file.
In your project root, create docker-compose.yml:
version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: task-manager-db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: task_manager
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Backend API
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: task-manager-backend
    environment:
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/task_manager
      PORT: 3001
      NODE_ENV: production
    ports:
      - "3001:3001"
    depends_on:
      postgres:
        condition: service_healthy
    command: sh -c "npx prisma migrate deploy && npm start"

volumes:
  postgres_data:
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop all services
docker-compose down
# Rebuild and restart
docker-compose up -d --build
Vercel is the company behind Next.js and provides the best deployment experience for Next.js apps. It offers automatic deployments, preview URLs for pull requests, and global CDN distribution.
Update your next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
  env: {
    NEXT_PUBLIC_API_URL: process.env.NEXT_PUBLIC_API_URL,
  },
  // Produce a standalone build for self-hosted deployments
  output: 'standalone',
}

module.exports = nextConfig
Create .env.local for development:
NEXT_PUBLIC_API_URL=http://localhost:3001/api
# Install Vercel CLI
npm i -g vercel
# Login to Vercel
vercel login
# Deploy
vercel
# Deploy to production
vercel --prod
In Vercel dashboard → Settings → Environment Variables:
NEXT_PUBLIC_API_URL=https://your-backend-url.com/api
In Vercel dashboard → Settings → Domains, add your custom domain.
Every push to your main branch triggers a production deployment. Pull requests get preview URLs automatically. No manual deployment needed!
Amazon EC2 (Elastic Compute Cloud) provides virtual servers in the cloud. You get full control over the server environment, making it perfect for hosting Node.js backends.
# Make key file secure
chmod 400 your-key.pem
# Connect via SSH
ssh -i your-key.pem ec2-user@your-ec2-public-ip
# Update system
sudo yum update -y
# Install Node.js 18
curl -fsSL https://rpm.nodesource.com/setup_18.x | sudo bash -
sudo yum install -y nodejs
# Install Git
sudo yum install -y git
# Install PM2 (process manager)
sudo npm install -g pm2
# Verify installations
node --version
npm --version
git --version
# Clone your repository
git clone https://github.com/yourusername/task-manager-backend.git
cd task-manager-backend
# Install dependencies
npm ci --only=production
# Create .env file
nano .env
# Add your environment variables and save (Ctrl+X, Y, Enter)
# Generate Prisma Client
npx prisma generate
# Run migrations
npx prisma migrate deploy
# Start with PM2
pm2 start npm --name "task-manager-api" -- start
# Save PM2 configuration
pm2 save
# Setup PM2 to start on system boot
pm2 startup
# Run the command that PM2 outputs
# Install Nginx
sudo yum install -y nginx
# Create Nginx configuration
sudo nano /etc/nginx/conf.d/api.conf
Add this configuration:
server {
    listen 80;
    server_name your-domain.com;

    location /api {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
# Start Nginx
sudo systemctl start nginx
sudo systemctl enable nginx
# Test configuration
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
Amazon RDS (Relational Database Service) is a managed database service. AWS handles backups, updates, scaling, and high availability, so you can focus on your application.
Allow your EC2 instance to connect to RDS:
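One way to do this is to add an inbound rule to the RDS security group that allows PostgreSQL traffic from the EC2 instance's security group. A sketch using the AWS CLI; both group IDs below are placeholders for your own (you can also do this in the AWS Console under EC2 → Security Groups):

```shell
# Allow inbound PostgreSQL (port 5432) from the EC2 security group.
# Replace sg-RDS-GROUP-ID and sg-EC2-GROUP-ID with your actual group IDs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-RDS-GROUP-ID \
  --protocol tcp \
  --port 5432 \
  --source-group sg-EC2-GROUP-ID
```

Referencing the EC2 security group (rather than an IP address) means the rule keeps working even if the instance's IP changes.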
Get the RDS endpoint from AWS Console and update your .env:
DATABASE_URL="postgresql://postgres:your-password@your-rds-endpoint.rds.amazonaws.com:5432/postgres"
# SSH into your EC2 instance
ssh -i your-key.pem ec2-user@your-ec2-ip
# Navigate to your app
cd task-manager-backend
# Update .env with RDS connection string
nano .env
# Run migrations
npx prisma migrate deploy
# Seed database (optional)
npx prisma db seed
# Restart application
pm2 restart task-manager-api
RDS automatically backs up your database. Some additional best practices:
• Use strong passwords and rotate them regularly
• Enable encryption at rest for sensitive data
• Set up CloudWatch alarms for CPU and storage
• Use read replicas for high-traffic applications
• Keep PostgreSQL version updated
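Before relying on the new connection string, it can help to confirm RDS is actually reachable from the EC2 instance. A minimal check, assuming the `psql` client is installed and `DATABASE_URL` is set in your shell:

```shell
# Runs a trivial query; if the connection, credentials, and security
# group are all correct, this prints a single row containing 1
psql "$DATABASE_URL" -c "SELECT 1;"
```

If this hangs or times out, the security group rule is the usual suspect.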
CI/CD (Continuous Integration/Continuous Deployment) automates testing and deployment. Every time you push code, it automatically runs tests and deploys to production if tests pass.
Create .github/workflows/deploy.yml:
name: Deploy to Production

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    # Postgres service container so the test job's DATABASE_URL
    # actually has a database to connect to
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db

  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to EC2
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ec2-user
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd /home/ec2-user/task-manager-backend
            git pull origin main
            npm ci --only=production
            npx prisma generate
            npx prisma migrate deploy
            pm2 restart task-manager-api
In your GitHub repository → Settings → Secrets and variables → Actions, add the two secrets the workflow references: EC2_HOST (your EC2 public IP or hostname) and EC2_SSH_KEY (the contents of your private key file).
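If you prefer the terminal, the GitHub CLI can set these secrets too. A sketch, assuming `gh` is installed and authenticated, with placeholder values:

```shell
# Store the EC2 host as a repository secret
gh secret set EC2_HOST --body "your-ec2-public-ip"

# Store the SSH private key, read from your local key file
gh secret set EC2_SSH_KEY < ~/.ssh/your-key.pem
```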
Create scripts/deploy.sh in your backend:
#!/bin/bash
echo "🚀 Starting deployment..."
# Pull latest code
git pull origin main
# Install dependencies
npm ci --only=production
# Generate Prisma Client
npx prisma generate
# Run database migrations
npx prisma migrate deploy
# Restart application
pm2 restart task-manager-api
echo "✅ Deployment complete!"
# Make script executable
chmod +x scripts/deploy.sh
# Make a change and push
git add .
git commit -m "feat: add new feature"
git push origin main
# GitHub Actions will automatically:
# 1. Run tests
# 2. Deploy to EC2 if tests pass
# 3. Show status in GitHub UI
HTTPS encrypts data between your users and your server. It's essential for security, SEO, and user trust. Modern browsers show warnings for non-HTTPS sites.
Purchase a domain from a domain registrar of your choice.
In your domain registrar's DNS settings, add an A record:
Type: A
Name: api (or @ for root domain)
Value: Your EC2 public IP
TTL: 3600
SSH into your EC2 instance and install Certbot:
# Install Certbot
sudo yum install -y certbot python3-certbot-nginx
# Get SSL certificate
sudo certbot --nginx -d api.yourdomain.com
# Follow prompts:
# - Enter email address
# - Agree to terms
# - Choose to redirect HTTP to HTTPS (recommended)
# Test auto-renewal
sudo certbot renew --dry-run
Certbot automatically updates Nginx, but verify the configuration:
sudo nano /etc/nginx/conf.d/api.conf
It should look like this:
server {
    listen 80;
    server_name api.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;

    location /api {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
In Vercel dashboard, update environment variable:
NEXT_PUBLIC_API_URL=https://api.yourdomain.com/api
Let's Encrypt certificates expire after 90 days. Certbot automatically renews them: a renewal task (a cron job or systemd timer, depending on your system) is set up during installation and runs twice daily.
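If you ever want to check when a certificate actually expires, `openssl` can read the date directly. The sketch below generates a throwaway self-signed certificate purely to demonstrate the command; on your server you would point it at the live Let's Encrypt files instead:

```shell
# Create a throwaway self-signed cert valid for 90 days (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=api.yourdomain.com" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Print the expiry date; on the server, use
# /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem instead
openssl x509 -enddate -noout -in /tmp/demo-cert.pem
```

The second command prints a line starting with `notAfter=` followed by the expiry date.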
Monitoring helps you detect issues before users report them. Track performance, errors, and resource usage to maintain a healthy application.
PM2 provides built-in monitoring:
# View application status
pm2 status
# Monitor in real-time
pm2 monit
# View logs
pm2 logs task-manager-api
# View last 100 lines
pm2 logs task-manager-api --lines 100
# Clear logs
pm2 flush
Install CloudWatch agent on EC2:
# Download CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
# Install
sudo rpm -U ./amazon-cloudwatch-agent.rpm
# Configure (follow prompts)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# Start agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
-a fetch-config \
-m ec2 \
-s \
-c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
Add structured logging to your application with Winston:
npm install winston
Create src/lib/logger.ts:
import winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({
      filename: 'logs/error.log',
      level: 'error'
    }),
    new winston.transports.File({
      filename: 'logs/combined.log'
    }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple(),
  }));
}

export default logger;
Use in your application:
import logger from './lib/logger';
// Log info
logger.info('User logged in', { userId: user.id });
// Log errors
logger.error('Database connection failed', { error: err.message });
// Log warnings
logger.warn('High memory usage detected');
Create CloudWatch alarms in AWS Console for key metrics such as CPU utilization and free storage space.
You've successfully deployed a full-stack application to production! Your app is now running on AWS with automated deployments, SSL encryption, and monitoring.