Module 4: Cloud & Deployment

Deploy your full-stack application to production with Vercel, AWS, Docker, and automated CI/CD pipelines

📚 Understanding Cloud Deployment

Deployment is the process of making your application available to users on the internet. Modern cloud platforms provide scalable infrastructure, automated deployments, and monitoring tools.

Deployment Stack

  • Vercel - Deploy the Next.js frontend with zero configuration
  • AWS - Host the backend, database, and file storage
  • Docker - Containerize applications for consistent environments

🎯 What We'll Build

Deploy your Task Manager to production with automated CI/CD, environment management, monitoring, and scaling capabilities.

Deployment Features:

  • Frontend deployment on Vercel
  • Backend deployment on AWS EC2/ECS
  • PostgreSQL on AWS RDS
  • File storage with AWS S3
  • CI/CD with GitHub Actions
  • Custom domain and SSL certificates

What You'll Learn:

  • Docker containerization
  • AWS services (EC2, RDS, S3)
  • Environment variables
  • CI/CD pipelines
  • Domain configuration
  • SSL/HTTPS setup
  • Monitoring and logging
  • Auto-scaling

Lesson 1: Docker Fundamentals

📖 What is Docker?

Docker is a platform that packages your application and all its dependencies into containers. Containers ensure your app runs the same way everywhere - on your laptop, your teammate's computer, and in production.

Think of it like this: A container is like a shipping container for your code. Just as shipping containers can be moved between ships, trucks, and trains without unpacking, Docker containers can run on any system without modification.

Step 1: Install Docker

Download and install Docker Desktop from docker.com

# Verify installation
docker --version
docker-compose --version

Step 2: Create Dockerfile for Backend

In your backend project root, create Dockerfile:

# Use official Node.js image
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Copy application code
COPY . .

# Generate Prisma Client
RUN npx prisma generate

# Expose port
EXPOSE 3001

# Start application
CMD ["npm", "start"]

Step 3: Create .dockerignore

Exclude unnecessary files from the Docker image:

node_modules
npm-debug.log
.env
.git
.gitignore
README.md
.DS_Store

Step 4: Build and Run Docker Image

# Build image
docker build -t task-manager-backend .

# Run container
docker run -p 3001:3001 --env-file .env task-manager-backend

# View running containers
docker ps

# Stop container
docker stop <container-id>

💡 Docker Commands Explained:

• docker build - Creates an image from the Dockerfile

• docker run - Starts a container from an image

• -p 3001:3001 - Maps port 3001 inside the container to port 3001 on the host

• --env-file - Loads environment variables from a file

• docker ps - Lists running containers

Lesson 2: Docker Compose for Multi-Container Apps

📖 What is Docker Compose?

Docker Compose lets you define and run multi-container applications. Instead of running separate commands for your backend, database, and other services, you define everything in one file.

Step 1: Create docker-compose.yml

In your project root, create docker-compose.yml:

version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: task-manager-db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: task_manager
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Backend API
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: task-manager-backend
    environment:
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/task_manager
      PORT: 3001
      NODE_ENV: production
    ports:
      - "3001:3001"
    depends_on:
      postgres:
        condition: service_healthy
    command: sh -c "npx prisma migrate deploy && npm start"

volumes:
  postgres_data:

Step 2: Run with Docker Compose

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop all services
docker-compose down

# Rebuild and restart
docker-compose up -d --build

🎯 What You Learned:

  • ✓ Creating Dockerfiles for Node.js apps
  • ✓ Multi-container orchestration with Docker Compose
  • ✓ Container networking and dependencies
  • ✓ Volume management for data persistence

Lesson 3: Deploy Frontend to Vercel

📖 Why Vercel?

Vercel is the company behind Next.js and provides the best deployment experience for Next.js apps. It offers automatic deployments, preview URLs for pull requests, and global CDN distribution.

Step 1: Prepare Your Frontend

Update your next.config.js:

/** @type {import('next').NextConfig} */
const nextConfig = {
  env: {
    NEXT_PUBLIC_API_URL: process.env.NEXT_PUBLIC_API_URL,
  },
  // Produce a self-contained standalone build (mainly useful for Docker/self-hosting)
  output: 'standalone',
}

module.exports = nextConfig

Step 2: Create Environment Variables

Create .env.local for development:

NEXT_PUBLIC_API_URL=http://localhost:3001/api
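
For reference, here is a minimal sketch of how the frontend might consume this variable. The apiFetch helper and its file path are illustrative, not part of the course code:

// lib/api.ts - hypothetical helper, adjust paths to your project
const API_URL = process.env.NEXT_PUBLIC_API_URL ?? 'http://localhost:3001/api';

export async function apiFetch<T>(path: string, init?: RequestInit): Promise<T> {
  // NEXT_PUBLIC_* variables are inlined at build time, so this works in the
  // browser and during server-side rendering alike
  const res = await fetch(`${API_URL}${path}`, init);
  if (!res.ok) {
    throw new Error(`API request failed: ${res.status} ${res.statusText}`);
  }
  return res.json() as Promise<T>;
}

Components can then call, for example, apiFetch('/tasks') without hardcoding the backend URL.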

Step 3: Deploy to Vercel

Option 1: Vercel CLI

# Install Vercel CLI
npm i -g vercel

# Login to Vercel
vercel login

# Deploy
vercel

# Deploy to production
vercel --prod

Option 2: GitHub Integration (Recommended)

  1. Push your code to GitHub
  2. Go to vercel.com and sign in with GitHub
  3. Click "New Project" and import your repository
  4. Vercel auto-detects Next.js and configures everything
  5. Add environment variables in Vercel dashboard
  6. Click "Deploy"

Step 4: Configure Environment Variables in Vercel

In Vercel dashboard → Settings → Environment Variables:

NEXT_PUBLIC_API_URL=https://your-backend-url.com/api

Step 5: Custom Domain (Optional)

In Vercel dashboard → Settings → Domains:

  1. Add your custom domain (e.g., taskmanager.com)
  2. Update DNS records at your domain registrar
  3. Vercel automatically provisions SSL certificate

🚀 Automatic Deployments:

Every push to your main branch triggers a production deployment. Pull requests get preview URLs automatically. No manual deployment needed!

Lesson 4: Deploy Backend to AWS EC2

📖 AWS EC2 Overview

Amazon EC2 (Elastic Compute Cloud) provides virtual servers in the cloud. You get full control over the server environment, making it perfect for hosting Node.js backends.

Step 1: Launch EC2 Instance

  1. Sign in to AWS Console → EC2 → Launch Instance
  2. Choose Amazon Linux 2023 AMI (free tier eligible)
  3. Select t2.micro instance type (free tier)
  4. Create or select a key pair for SSH access
  5. Configure security group:
    • SSH (port 22) - Your IP only
    • HTTP (port 80) - Anywhere
    • HTTPS (port 443) - Anywhere
    • Custom TCP (port 3001) - Anywhere (for API)
  6. Launch instance

Step 2: Connect to EC2 Instance

# Make key file secure
chmod 400 your-key.pem

# Connect via SSH
ssh -i your-key.pem ec2-user@your-ec2-public-ip

Step 3: Install Node.js and Dependencies

# Update system
sudo yum update -y

# Install Node.js 18
curl -fsSL https://rpm.nodesource.com/setup_18.x | sudo bash -
sudo yum install -y nodejs

# Install Git
sudo yum install -y git

# Install PM2 (process manager)
sudo npm install -g pm2

# Verify installations
node --version
npm --version
git --version

Step 4: Deploy Your Application

# Clone your repository
git clone https://github.com/yourusername/task-manager-backend.git
cd task-manager-backend

# Install dependencies
npm ci --only=production

# Create .env file
nano .env
# Add your environment variables and save (Ctrl+X, Y, Enter)

# Generate Prisma Client
npx prisma generate

# Run migrations
npx prisma migrate deploy

# Start with PM2
pm2 start npm --name "task-manager-api" -- start

# Save PM2 configuration
pm2 save

# Setup PM2 to start on system boot
pm2 startup
# Run the command that PM2 outputs
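
Optionally, handle the signal PM2 sends on restart so in-flight requests and the database connection close cleanly. This is a sketch under assumptions (an http.Server instance and the Prisma client from earlier modules; the file name and registerShutdown helper are illustrative):

// src/shutdown.ts - illustrative graceful-shutdown hook
import type { Server } from 'http';
import { PrismaClient } from '@prisma/client';

export function registerShutdown(server: Server, prisma: PrismaClient): void {
  const shutdown = () => {
    // Stop accepting new connections, then release the DB connection
    server.close(async () => {
      await prisma.$disconnect();
      process.exit(0);
    });
  };

  // PM2 sends SIGINT on `pm2 restart` / `pm2 stop` before force-killing
  process.on('SIGINT', shutdown);
  process.on('SIGTERM', shutdown);
}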

Step 5: Configure Nginx as Reverse Proxy

# Install Nginx
sudo yum install -y nginx

# Create Nginx configuration
sudo nano /etc/nginx/conf.d/api.conf

Add this configuration:

server {
    listen 80;
    server_name your-domain.com;

    location /api {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Start Nginx
sudo systemctl start nginx
sudo systemctl enable nginx

# Test configuration
sudo nginx -t

# Reload Nginx
sudo systemctl reload nginx

🎯 What You Learned:

  • ✓ Launching and configuring EC2 instances
  • ✓ SSH access and server management
  • ✓ Process management with PM2
  • ✓ Reverse proxy with Nginx

Lesson 5: Database with AWS RDS

📖 Why AWS RDS?

Amazon RDS (Relational Database Service) is a managed database service. AWS handles backups, updates, scaling, and high availability, so you can focus on your application.

Step 1: Create RDS PostgreSQL Instance

  1. AWS Console → RDS → Create database
  2. Choose PostgreSQL
  3. Select Free tier template
  4. Configure:
    • DB instance identifier: task-manager-db
    • Master username: postgres
    • Master password: (create strong password)
    • DB instance class: db.t3.micro (free tier)
    • Storage: 20 GB
  5. Connectivity:
    • VPC: Default VPC
    • Public access: Yes (for development)
    • VPC security group: Create new
  6. Create database (takes 5-10 minutes)

Step 2: Configure Security Group

Allow your EC2 instance to connect to RDS:

  1. RDS → Databases → Select your database
  2. Click on VPC security group
  3. Edit inbound rules → Add rule:
    • Type: PostgreSQL
    • Port: 5432
    • Source: Security group of your EC2 instance

Step 3: Update Application Configuration

Get the RDS endpoint from AWS Console and update your .env:

DATABASE_URL="postgresql://postgres:your-password@your-rds-endpoint.rds.amazonaws.com:5432/postgres"
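
Before restarting the API, it can help to confirm the new connection string actually reaches RDS. A minimal sketch, assuming the Prisma client from earlier modules (the file name and function are illustrative):

// src/lib/db-check.ts - illustrative startup connectivity check
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

export async function assertDatabaseConnection(): Promise<void> {
  try {
    // A trivial query proves the endpoint, credentials, and security group
    // rules are correct before the server starts accepting traffic
    await prisma.$queryRaw`SELECT 1`;
    console.log('Database connection OK');
  } catch (err) {
    console.error('Could not reach the database:', err);
    process.exit(1);
  }
}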

Step 4: Run Migrations on RDS

# SSH into your EC2 instance
ssh -i your-key.pem ec2-user@your-ec2-ip

# Navigate to your app
cd task-manager-backend

# Update .env with RDS connection string
nano .env

# Run migrations
npx prisma migrate deploy

# Seed database (optional)
npx prisma db seed

# Restart application
pm2 restart task-manager-api

Step 5: Enable Automated Backups

RDS automatically backs up your database:

  • Backup retention: 7 days (configurable)
  • Automated snapshots: Daily during maintenance window
  • Point-in-time recovery: Restore to any second in retention period

💡 RDS Best Practices:

• Use strong passwords and rotate them regularly

• Enable encryption at rest for sensitive data

• Set up CloudWatch alarms for CPU and storage

• Use read replicas for high-traffic applications

• Keep PostgreSQL version updated

Lesson 6: CI/CD with GitHub Actions

📖 What is CI/CD?

CI/CD (Continuous Integration/Continuous Deployment) automates testing and deployment. Every time you push code, it automatically runs tests and deploys to production if tests pass.

Step 1: Create GitHub Actions Workflow

Create .github/workflows/deploy.yml:

name: Deploy to Production

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest

    # Test database for the steps below (DATABASE_URL points at localhost:5432)
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
    - uses: actions/checkout@v3
    
    - name: Setup Node.js
      uses: actions/setup-node@v3
      with:
        node-version: '18'
        cache: 'npm'
    
    - name: Install dependencies
      run: npm ci
    
    - name: Run linter
      run: npm run lint
    
    - name: Run tests
      run: npm test
      env:
        DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db

  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Deploy to EC2
      uses: appleboy/ssh-action@master
      with:
        host: ${{ secrets.EC2_HOST }}
        username: ec2-user
        key: ${{ secrets.EC2_SSH_KEY }}
        script: |
          cd /home/ec2-user/task-manager-backend
          git pull origin main
          npm ci --only=production
          npx prisma generate
          npx prisma migrate deploy
          pm2 restart task-manager-api

Step 2: Add GitHub Secrets

In your GitHub repository → Settings → Secrets and variables → Actions:

  • EC2_HOST: Your EC2 public IP or domain
  • EC2_SSH_KEY: Contents of your .pem key file

Step 3: Add Deployment Script

Create scripts/deploy.sh in your backend. The workflow above already runs the same commands over SSH; keeping them in a script is handy for manual deploys from the server:

#!/bin/bash

echo "🚀 Starting deployment..."

# Pull latest code
git pull origin main

# Install dependencies
npm ci --only=production

# Generate Prisma Client
npx prisma generate

# Run database migrations
npx prisma migrate deploy

# Restart application
pm2 restart task-manager-api

echo "✅ Deployment complete!"
# Make script executable
chmod +x scripts/deploy.sh

Step 4: Test Your Pipeline

# Make a change and push
git add .
git commit -m "feat: add new feature"
git push origin main

# GitHub Actions will automatically:
# 1. Run tests
# 2. Deploy to EC2 if tests pass
# 3. Show status in GitHub UI

🎯 CI/CD Benefits:

  • ✓ Automated testing prevents bugs in production
  • ✓ Faster deployment (seconds instead of minutes)
  • ✓ Consistent deployment process
  • ✓ Easy rollback if something goes wrong
  • ✓ Deployment history in GitHub

Lesson 7: SSL/HTTPS and Domain Setup

📖 Why HTTPS?

HTTPS encrypts data between your users and your server. It's essential for security, SEO, and user trust. Modern browsers show warnings for non-HTTPS sites.

Step 1: Get a Domain Name

Purchase a domain from registrars like:

  • Namecheap
  • GoDaddy
  • Squarespace Domains (formerly Google Domains)
  • AWS Route 53

Step 2: Point Domain to EC2

In your domain registrar's DNS settings, add an A record:

Type: A
Name: api (or @ for root domain)
Value: Your EC2 public IP
TTL: 3600

Step 3: Install Certbot for SSL

SSH into your EC2 instance and install Certbot. (On Amazon Linux 2023 the package below may not be available from the default repositories; if yum cannot find it, follow the pip-based install in the official Certbot documentation.)

# Install Certbot
sudo yum install -y certbot python3-certbot-nginx

# Get SSL certificate
sudo certbot --nginx -d api.yourdomain.com

# Follow prompts:
# - Enter email address
# - Agree to terms
# - Choose to redirect HTTP to HTTPS (recommended)

# Test auto-renewal
sudo certbot renew --dry-run

Step 4: Update Nginx Configuration

Certbot automatically updates Nginx, but verify the configuration:

sudo nano /etc/nginx/conf.d/api.conf

Should look like:

server {
    listen 80;
    server_name api.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;
    
    location /api {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
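
Because Nginx terminates TLS, the Node process itself only ever sees plain HTTP. If your backend is an Express app (an assumption here) and you use secure cookies, req.secure, or req.ip, tell Express to trust the headers Nginx forwards. An illustrative excerpt:

// src/app.ts (excerpt) - assumes an Express backend
import express from 'express';

const app = express();

// Trust the first proxy (Nginx) so X-Forwarded-Proto and X-Forwarded-For
// are honoured and req.secure, req.protocol, and req.ip report correct values
app.set('trust proxy', 1);

export default app;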

Step 5: Update Frontend API URL

In Vercel dashboard, update environment variable:

NEXT_PUBLIC_API_URL=https://api.yourdomain.com/api

🔒 SSL Certificate Auto-Renewal:

Let's Encrypt certificates expire after 90 days. Certbot sets up a renewal job during installation (a cron entry or systemd timer, depending on the install method) that checks twice daily and renews any certificate nearing expiry.

Lesson 8: Monitoring and Logging

📖 Why Monitoring Matters

Monitoring helps you detect issues before users report them. Track performance, errors, and resource usage to maintain a healthy application.
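
A simple first step is a health endpoint that PM2, a load balancer, or an uptime monitor can poll. A minimal sketch, assuming an Express backend (the route path and file name are illustrative):

// src/routes/health.ts - illustrative health-check route
import { Router, Request, Response } from 'express';

const router = Router();

router.get('/health', (_req: Request, res: Response) => {
  res.status(200).json({
    status: 'ok',
    uptime: process.uptime(),            // seconds since the process started
    timestamp: new Date().toISOString(),
  });
});

export default router;

Mount it under your /api router so Nginx proxies it (e.g. GET /api/health) and point uptime monitors at that URL.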

Step 1: PM2 Monitoring

PM2 provides built-in monitoring:

# View application status
pm2 status

# Monitor in real-time
pm2 monit

# View logs
pm2 logs task-manager-api

# View last 100 lines
pm2 logs task-manager-api --lines 100

# Clear logs
pm2 flush

Step 2: Setup AWS CloudWatch

Install CloudWatch agent on EC2:

# Download CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm

# Install
sudo rpm -U ./amazon-cloudwatch-agent.rpm

# Configure (follow prompts)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

# Start agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config \
  -m ec2 \
  -s \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json

Step 3: Application Logging

Add structured logging to your application with Winston:

npm install winston

Create src/lib/logger.ts:

import winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.File({ 
      filename: 'logs/error.log', 
      level: 'error' 
    }),
    new winston.transports.File({ 
      filename: 'logs/combined.log' 
    }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple(),
  }));
}

export default logger;

Use in your application:

import logger from './lib/logger';

// Log info
logger.info('User logged in', { userId: user.id });

// Log errors
logger.error('Database connection failed', { error: err.message });

// Log warnings
logger.warn('High memory usage detected');
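
To make sure unexpected failures end up in logs/error.log rather than only on the console, wire the logger into a central error handler. A sketch assuming an Express backend (the file name and middleware are illustrative):

// src/middleware/error-handler.ts - illustrative Express error middleware
import { Request, Response, NextFunction } from 'express';
import logger from '../lib/logger';

export function errorHandler(
  err: Error,
  req: Request,
  res: Response,
  _next: NextFunction
): void {
  // Log full details with request context, but return a generic message
  logger.error('Unhandled error', {
    message: err.message,
    stack: err.stack,
    method: req.method,
    path: req.path,
  });
  res.status(500).json({ error: 'Internal server error' });
}

Register it with app.use(errorHandler) after all other routes and middleware.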

Step 4: Setup Alerts

Create CloudWatch alarms in AWS Console:

  • CPU utilization > 80% for 5 minutes
  • Memory usage > 85%
  • Disk space < 20% free
  • HTTP 5xx errors > 10 in 5 minutes

🎉 Congratulations!

You've successfully deployed a full-stack application to production! Your app is now running on AWS with automated deployments, SSL encryption, and monitoring.

What You've Accomplished:
✓ Docker containerization
✓ Frontend on Vercel
✓ Backend on AWS EC2
✓ Database on AWS RDS
✓ CI/CD with GitHub Actions
✓ SSL/HTTPS setup
✓ Monitoring and logging
✓ Production-ready infrastructure

🚀 Next Steps

  • Implement auto-scaling with AWS Auto Scaling Groups
  • Add Redis for caching and session management
  • Set up AWS S3 for file uploads
  • Implement rate limiting and DDoS protection
  • Add error tracking with Sentry
  • Optimize costs with AWS Cost Explorer