

@swport
Last active March 13, 2025 19:01
React app to S3

Updated Deployment Script for AWS SDK v3

This script uses the modular AWS SDK v3 to:

  1. Create the S3 bucket if it doesn't already exist.
  2. Move existing files into a timestamped backup directory.
  3. Upload the new build to the root of the S3 bucket.
  4. Make the bucket publicly readable, enable static website hosting, and log the publicly accessible URL.
  5. Create or update a CloudFront distribution in front of the bucket.
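
The timestamped backup naming used for the move step can be sketched in isolation (plain Node, no AWS calls; `backupPrefix` and `backupKeyFor` are hypothetical helpers mirroring the script's approach):

```javascript
// Build a timestamped backup prefix: ISO timestamp with ':' and '.'
// replaced, since those characters are awkward in S3 keys and URLs.
const backupPrefix = (date = new Date()) =>
  `backup/${date.toISOString().replace(/[:.]/g, '-')}`;

// An existing object's key is then nested under that prefix:
const backupKeyFor = (prefix, key) => `${prefix}/${key}`;

const prefix = backupPrefix(new Date('2025-03-13T19:01:00.000Z'));
console.log(prefix);                              // backup/2025-03-13T19-01-00-000Z
console.log(backupKeyFor(prefix, 'index.html'));  // backup/2025-03-13T19-01-00-000Z/index.html
```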

1. Install AWS SDK v3

Install the required AWS SDK v3 packages (the script also uses the CloudFront client and the lib-storage upload helper):

npm install @aws-sdk/client-s3 @aws-sdk/client-cloudfront @aws-sdk/lib-storage

2. Updated Script

Create a file named deploy-v3.js:

const { S3Client, CreateBucketCommand, ListObjectsV2Command, CopyObjectCommand, DeleteObjectCommand, PutBucketPolicyCommand, PutBucketWebsiteCommand } = require('@aws-sdk/client-s3');
const { CloudFrontClient, CreateDistributionCommand, GetDistributionCommand, UpdateDistributionCommand, ListDistributionsCommand } = require('@aws-sdk/client-cloudfront');
const { Upload } = require('@aws-sdk/lib-storage');
const fs = require('fs');
const path = require('path');

// Load environment variables
const bucketName = process.env.S3_BUCKET_NAME;
const region = process.env.AWS_REGION;

if (!bucketName || !region) {
  console.error('Error: S3_BUCKET_NAME and AWS_REGION environment variables are required.');
  process.exit(1);
}

// Configure AWS SDK v3 clients
const s3Client = new S3Client({ region });
// CloudFront is a global service; the region setting has no effect on its endpoint
const cloudFrontClient = new CloudFrontClient({ region });

// Function to create S3 bucket (if it doesn't exist)
const createBucket = async () => {
  try {
    await s3Client.send(new CreateBucketCommand({ Bucket: bucketName }));
    console.log(`Bucket created: ${bucketName}`);
  } catch (err) {
    if (err.name !== 'BucketAlreadyOwnedByYou') {
      console.error('Error creating bucket:', err);
      throw err;
    }
  }
};

// Function to list all objects in the bucket (excluding folders/directories)
// Note: a single ListObjectsV2 call returns at most 1,000 objects; add
// pagination (ContinuationToken) for larger buckets
const listObjects = async () => {
  try {
    const data = await s3Client.send(new ListObjectsV2Command({ Bucket: bucketName }));
    // Filter out objects that end with '/' (folders/directories)
    return data.Contents ? data.Contents.filter((object) => !object.Key.endsWith('/')) : [];
  } catch (err) {
    console.error('Error listing objects:', err);
    throw err;
  }
};

// Function to move objects to a backup directory
const moveObjectsToBackup = async (objects, backupDir) => {
  for (const object of objects) {
    const copyParams = {
      Bucket: bucketName,
      // CopySource must be URL-encoded; keys with spaces or special characters fail otherwise
      CopySource: `${bucketName}/${object.Key.split('/').map(encodeURIComponent).join('/')}`,
      Key: `${backupDir}/${object.Key}`,
    };

    const deleteParams = {
      Bucket: bucketName,
      Key: object.Key,
    };

    try {
      // Copy object to backup directory
      await s3Client.send(new CopyObjectCommand(copyParams));
      console.log(`Copied to backup: ${object.Key}`);

      // Delete original object
      await s3Client.send(new DeleteObjectCommand(deleteParams));
      console.log(`Deleted original: ${object.Key}`);
    } catch (err) {
      console.error(`Error moving ${object.Key}:`, err);
      throw err;
    }
  }
};

// Map common extensions to MIME types (extend as needed)
const contentTypes = {
  '.html': 'text/html',
  '.css': 'text/css',
  '.js': 'application/javascript',
  '.json': 'application/json',
  '.svg': 'image/svg+xml',
  '.png': 'image/png',
  '.ico': 'image/x-icon',
};

// Recursively collect files under a directory, returning paths relative to the root
// (a Vite build nests hashed bundles under dist/assets, so a flat readdir is not enough)
const walkDir = (dir, root = dir) =>
  fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const fullPath = path.join(dir, entry.name);
    return entry.isDirectory() ? walkDir(fullPath, root) : [path.relative(root, fullPath)];
  });

// Function to upload new files to S3
const uploadFiles = async () => {
  const distFolder = path.join(__dirname, 'dist');

  for (const relativePath of walkDir(distFolder)) {
    const key = relativePath.split(path.sep).join('/'); // S3 keys always use '/'

    const params = {
      Bucket: bucketName,
      Key: key,
      Body: fs.readFileSync(path.join(distFolder, relativePath)),
      ContentType: contentTypes[path.extname(relativePath)] || 'application/octet-stream',
    };

    try {
      // Use the Upload class for multipart uploads (better for larger files)
      const upload = new Upload({ client: s3Client, params });
      await upload.done();
      console.log(`Uploaded: ${key}`);
    } catch (err) {
      console.error(`Error uploading ${key}:`, err);
      throw err;
    }
  }
};

// Function to make the bucket publicly accessible
// Note: new buckets have S3 Block Public Access enabled by default; it must be
// turned off (e.g. via PutPublicAccessBlockCommand) before this policy can apply
const makeBucketPublic = async () => {
  const bucketPolicy = {
    Version: '2012-10-17',
    Statement: [
      {
        Sid: 'PublicReadGetObject',
        Effect: 'Allow',
        Principal: '*',
        Action: 's3:GetObject',
        Resource: `arn:aws:s3:::${bucketName}/*`,
      },
    ],
  };

  const params = {
    Bucket: bucketName,
    Policy: JSON.stringify(bucketPolicy),
  };

  try {
    await s3Client.send(new PutBucketPolicyCommand(params));
    console.log('Bucket policy updated to public.');
  } catch (err) {
    console.error('Error updating bucket policy:', err);
    throw err;
  }
};

// Function to enable static website hosting and log the URL
const enableStaticWebsiteHosting = async () => {
  const params = {
    Bucket: bucketName,
    WebsiteConfiguration: {
      ErrorDocument: {
        Key: 'index.html',
      },
      IndexDocument: {
        Suffix: 'index.html',
      },
    },
  };

  try {
    await s3Client.send(new PutBucketWebsiteCommand(params));
    console.log('Static website hosting enabled.');

    // Log the publicly accessible URL
    const websiteUrl = `http://${bucketName}.s3-website-${region}.amazonaws.com`;
    console.log(`Your static website URL: ${websiteUrl}`);
  } catch (err) {
    console.error('Error enabling static website hosting:', err);
    throw err;
  }
};

// Function to create or update a CloudFront distribution
const configureCloudFront = async () => {
  const distributionConfig = {
    CallerReference: `${Date.now()}`, // Unique identifier for the distribution
    Comment: 'CloudFront distribution for React app',
    Enabled: true,
    Origins: {
      Quantity: 1,
      Items: [
        {
          Id: 'S3-origin',
          DomainName: `${bucketName}.s3.amazonaws.com`, // S3 bucket domain
          S3OriginConfig: {
            OriginAccessIdentity: '', // Optional: Use an Origin Access Identity (OAI) for private buckets
          },
        },
      ],
    },
    DefaultCacheBehavior: {
      TargetOriginId: 'S3-origin',
      ViewerProtocolPolicy: 'redirect-to-https', // Redirect HTTP to HTTPS
      AllowedMethods: {
        Quantity: 2,
        Items: ['GET', 'HEAD'], // Only allow GET and HEAD requests
      },
      CachedMethods: {
        Quantity: 2,
        Items: ['GET', 'HEAD'],
      },
      ForwardedValues: {
        QueryString: false, // Do not forward query strings
        Cookies: {
          Forward: 'none', // Do not forward cookies
        },
      },
      MinTTL: 0, // Minimum TTL for caching
    },
    ViewerCertificate: {
      CloudFrontDefaultCertificate: true, // Use the default CloudFront certificate
    },
    DefaultRootObject: 'index.html', // Default file to serve
  };

  try {
    // Check whether a distribution for this bucket already exists
    const listDistributions = await cloudFrontClient.send(new ListDistributionsCommand({}));
    const existingDistribution = listDistributions.DistributionList?.Items?.find(
      (dist) => dist.Origins.Items[0].DomainName === `${bucketName}.s3.amazonaws.com`
    );

    if (existingDistribution) {
      // ListDistributions does not return an ETag, so fetch the distribution to get
      // the IfMatch value and its immutable CallerReference
      const { ETag, Distribution } = await cloudFrontClient.send(
        new GetDistributionCommand({ Id: existingDistribution.Id })
      );
      const updateParams = {
        Id: existingDistribution.Id,
        IfMatch: ETag,
        DistributionConfig: {
          ...distributionConfig,
          CallerReference: Distribution.DistributionConfig.CallerReference,
        },
      };
      await cloudFrontClient.send(new UpdateDistributionCommand(updateParams));
      console.log('CloudFront distribution updated.');
    } else {
      // Create a new distribution
      await cloudFrontClient.send(new CreateDistributionCommand({ DistributionConfig: distributionConfig }));
      console.log('CloudFront distribution created.');
    }
  } catch (err) {
    console.error('Error configuring CloudFront:', err);
    throw err;
  }
};

// Run the deployment
(async () => {
  try {
    // Create bucket if it doesn't exist
    await createBucket();

    // List existing objects in the bucket (excluding folders/directories)
    const objects = await listObjects();

    if (objects.length > 0) {
      // Create a timestamped backup directory
      const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
      const backupDir = `backup/${timestamp}`;

      // Move existing objects to the backup directory
      await moveObjectsToBackup(objects, backupDir);
    }

    // Upload new files
    await uploadFiles();

    // Make the bucket publicly accessible
    await makeBucketPublic();

    // Enable static website hosting and log the URL
    await enableStaticWebsiteHosting();

    // Configure CloudFront distribution
    await configureCloudFront();

    console.log('Deployment completed successfully!');
  } catch (err) {
    console.error('Deployment failed:', err);
    process.exit(1); // Exit with a non-zero code to indicate failure
  }
})();

Key Changes for AWS SDK v3

  1. Modular Imports:

    • Only the required clients and commands are imported from @aws-sdk/client-s3, @aws-sdk/client-cloudfront, and @aws-sdk/lib-storage.
  2. New Client Initialization:

    • The S3Client is initialized instead of the v2 AWS.S3.
  3. Command-Based Operations:

    • Each operation (e.g., CreateBucketCommand, ListObjectsV2Command) is executed using s3Client.send().
  4. Multipart Uploads:

    • The Upload class from @aws-sdk/lib-storage is used for better handling of file uploads.
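
The command-based pattern in item 3 can be illustrated with a plain mock, no AWS account required (`FakeS3Client` and `FakeCommand` are hypothetical stand-ins mirroring only the shape of the real classes):

```javascript
// In v3, a client exposes a single send() method, and each operation is a
// command object carrying its input as a plain object.
class FakeCommand {
  constructor(name, input) {
    this.name = name;
    this.input = input;
  }
}

class FakeS3Client {
  async send(command) {
    // A real S3Client would serialize command.input into an HTTP request here.
    return { operation: command.name, bucket: command.input.Bucket };
  }
}

const client = new FakeS3Client();
client
  .send(new FakeCommand('ListObjectsV2', { Bucket: 'my-react-ts-app-bucket' }))
  .then((res) => console.log(res.operation, res.bucket));
```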

Run the Script

  1. Build your Vite project:

    npm run build
  2. Run the deployment script, supplying the required environment variables:

    S3_BUCKET_NAME=my-react-ts-app-bucket AWS_REGION=us-east-1 node deploy-v3.js
  3. Check the console output for the static website URL:

    Your static website URL: http://my-react-ts-app-bucket.s3-website-us-east-1.amazonaws.com
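
Note that the S3 website endpoint format varies by region: older regions use a dash (`s3-website-us-east-1`), while most newer regions use a dot (`s3-website.eu-north-1`). A sketch of the difference (the region list is illustrative, not exhaustive):

```javascript
// Regions launched earlier (e.g. us-east-1, us-west-2, eu-west-1) use the
// legacy dash form; most later regions use the dot form.
const dashRegions = new Set([
  'us-east-1', 'us-west-1', 'us-west-2', 'eu-west-1',
  'ap-southeast-1', 'ap-southeast-2', 'ap-northeast-1', 'sa-east-1',
]);

const websiteUrl = (bucket, region) =>
  `http://${bucket}.s3-website${dashRegions.has(region) ? '-' : '.'}${region}.amazonaws.com`;

console.log(websiteUrl('my-react-ts-app-bucket', 'us-east-1'));
// http://my-react-ts-app-bucket.s3-website-us-east-1.amazonaws.com
```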
    

Automate Deployment

Add the script to your package.json:

"scripts": {
  "deploy": "npm run build && node deploy-v3.js"
}

Run the deployment with:

npm run deploy

Advantages of AWS SDK v3

  • Modularity: Smaller bundle size since you only import what you need.
  • TypeScript Support: Better TypeScript integration out of the box.
  • Modern Features: Improved performance and support for newer AWS services.

This script is now fully compatible with AWS SDK v3 and provides the same functionality as the v2 version.

GitHub deployment pipeline

Save the following workflow as .github/workflows/deploy.yml:

name: Deploy to S3

on:
  push:
    branches:
      - main # Trigger on pushes to the main branch

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      # Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Set up Node.js
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18 # Match the Node.js version used locally

      # Install dependencies
      - name: Install dependencies
        run: npm install

      # Build the project
      - name: Build project
        run: npm run build

      # Configure AWS credentials
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Deploy to S3
      - name: Deploy to S3
        run: |
          # Sync the build, removing files that no longer exist in dist
          aws s3 sync ./dist s3://${{ secrets.S3_BUCKET_NAME }} --delete
          # Re-upload index.html with no-cache so each deploy is picked up immediately
          aws s3 cp ./dist/index.html s3://${{ secrets.S3_BUCKET_NAME }}/index.html --cache-control "no-cache"
          # Hashed asset filenames are immutable, so cache them for a year
          aws s3 cp ./dist/assets/ s3://${{ secrets.S3_BUCKET_NAME }}/assets/ --recursive --cache-control "max-age=31536000"
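
The two cache policies in the deploy step follow a common pattern for Vite builds: index.html must always be revalidated so new deploys take effect, while content-hashed filenames under assets/ never change and can be cached long-term. As a sketch (`cacheControlFor` is a hypothetical helper mirroring the pipeline's flags):

```javascript
// Mirrors the pipeline above: no-cache for index.html (and anything else at
// the root), a one-year max-age for content-hashed files under assets/.
const cacheControlFor = (key) =>
  key.startsWith('assets/') ? 'max-age=31536000' : 'no-cache';

console.log(cacheControlFor('index.html'));             // no-cache
console.log(cacheControlFor('assets/index-Bx3k9.js'));  // max-age=31536000
```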