@tom-butler
Created October 31, 2017 03:41
deploy to s3 and reload ec2 server
image: geoscienceaustralia/autobots-terraform

pipelines:
  branches:
    master:
      - step:
          script:
            - export TF_VAR_archive=appname-"$(date +%s)".tar.gz
            - tar -cvzf $TF_VAR_archive appname
            - export TF_VAR_stack_name=appname
            - export TF_VAR_environment=prod
            - terraform init -backend-config="key=$TF_VAR_stack_name-$TF_VAR_environment"
            - terraform plan
            - terraform apply
            - export ASG=$(bash .pipelines/get-asg.sh appname_prod)
            - bash .pipelines/aws-ha-release.sh -r ap-southeast-2 -a $ASG
#!/bin/bash
# .pipelines/get-asg.sh — print the name of the auto scaling group whose
# name contains "$1", extracted from the ResourceId tag fields of the
# describe-auto-scaling-groups output.
aws autoscaling describe-auto-scaling-groups \
  | grep "$1" | grep 'ResourceId' \
  | awk -F: '{print $2;}' | uniq | tr -d ',' | tr -d ' ' | tr -d '"'
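To see what the text-processing half of the script does, here is a sketch that pipes canned JSON tag lines (hypothetical ASG name) through the same filter chain, since the real `aws` call needs credentials:

```shell
#!/bin/bash
# Hypothetical sample of the tag lines that
# `aws autoscaling describe-auto-scaling-groups` emits for an ASG
# named appname_prod-20171031; duplicated because each tag repeats
# the ResourceId field.
sample='    "ResourceId": "appname_prod-20171031",
    "ResourceId": "appname_prod-20171031",'

# Same chain as get-asg.sh: keep matching ResourceId lines, take the
# value after the colon, dedupe, then strip commas, spaces and quotes.
asg=$(echo "$sample" \
  | grep 'appname_prod' | grep 'ResourceId' \
  | awk -F: '{print $2;}' | uniq | tr -d ',' | tr -d ' ' | tr -d '"')

echo "$asg"
```

Running this prints the bare ASG name, which the pipeline then hands to aws-ha-release via `-a $ASG`.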
#--------------------------------------------------------------
# Variables
#--------------------------------------------------------------
variable "region" {
  description = "The AWS region we want to build this stack in"
  default     = "ap-southeast-2"
}

variable "stack_name" {
  description = "The name of our application"
  default     = "appname"
}

variable "owner" {
  description = "A group email address to be used in tags"
  default     = "[email protected]"
}

variable "environment" {
  description = "Used for separating terraform backends and naming items"
}

variable "archive" {
  description = "The tar.gz archive you wish to deploy"
}
#--------------------------------------------------------------
# Terraform Remote State
#--------------------------------------------------------------
# Define the remote objects that terraform will use to store
# state. We use a remote store so that you can run destroy
# from a separate machine to the one it was built on.
terraform {
  required_version = ">= 0.9.1"

  backend "s3" {
    # This is an S3 bucket you will need to create in your AWS
    # account.
    bucket = "ENTERYOURBUCKETHERE"

    # The key should be unique to each stack. Because we want to
    # have multiple environments alongside each other, we set
    # this dynamically in bitbucket-pipelines.yml with the
    # -backend-config flag.
    key    = "appname-dev"
    region = "ap-southeast-2"

    # This is a DynamoDB table with the primary key set to LockID.
    lock_table = "terraform-lock"

    # Enable server side encryption on your terraform state.
    encrypt = true
  }
}
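To make the dynamic backend key concrete, here is a small sketch of the value the pipeline passes to `terraform init` (the stack and environment names are the examples used throughout this gist):

```shell
#!/bin/bash
# Example values; in the pipeline these are exported in
# bitbucket-pipelines.yml before terraform init runs.
TF_VAR_stack_name=appname
TF_VAR_environment=prod

# The state key that overrides the "appname-dev" placeholder in the
# backend block, giving each stack/environment pair its own state file.
key="$TF_VAR_stack_name-$TF_VAR_environment"
echo "$key"

# The pipeline then runs:
#   terraform init -backend-config="key=$key"
```

With this scheme, dev and prod state files sit side by side in the same bucket without clobbering each other.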
#--------------------------------------------------------------
# Global Config
#--------------------------------------------------------------
# Configure the cloud provider (the terraform backend is defined above)
provider "aws" {
  region = "${var.region}"
}
#--------------------------------------------------------------
# Create Bucket
#--------------------------------------------------------------
resource "aws_s3_bucket" "deploy_bucket" {
  bucket = "${var.stack_name}-${var.environment}-files"
  acl    = "private"

  tags {
    Name  = "${var.stack_name}-files"
    Owner = "${var.owner}"
  }
}
#--------------------------------------------------------------
# Upload Archive to Bucket
#--------------------------------------------------------------
resource "aws_s3_bucket_object" "bucket_object" {
  key    = "appname.tar.gz"
  bucket = "${aws_s3_bucket.deploy_bucket.bucket}"

  # This archive gets created during the CI pipeline.
  source = "${var.archive}"

  # Enable encryption at rest.
  server_side_encryption = "AES256"
}
Get aws-ha-release here: https://github.com/colinbjohnson/aws-missing-tools/tree/master/aws-ha-release

NB: your auto scaling groups should be managed separately. If you stop aws-ha-release uncleanly, it will leave the ASG in a state that breaks autoscaling; to fix this, just re-run the terraform that creates the ASG.
