
@M1ke
Last active July 12, 2021 14:04
Useful scripts to automate components of AWS EC2

AWS EC2 Python scripts

Using EC2 instances within an auto scaling group is the best way to guarantee redundancy, scalability and fault tolerance across your infrastructure.

This comes at the price of common conveniences, such as being able to SSH into a known hostname to manage configuration or logs, and adds the requirement that configuration changes must be applied to multiple machines.

DevOps provisioning tools such as Puppet can be used to manage configurations, but they add an extra requirement to an initial AWS migration, which could slow or stop adoption.

Instead, it can be simpler to base the group on a single "master" instance which spends most of its time switched off, but is turned on to make configuration changes. An AMI is then generated from this instance and applied to an auto scaling group via a launch configuration. The group must then be cycled to push out the old instances running the previous configuration.

The SSH ability can be handled with an Elastic IP mapped to a Route 53 subdomain, and a watcher to ensure that this EIP is always allocated to an instance in the group.

# Given the specified parameters in the main() method this will
# find the latest AMI with a specific name (or wildcard prefix e.g. "my-server*")
# and allocate it to an auto scaling group by generating a new launch configuration
#
# In the event there are multiple AMI/snapshots for a specified prefix this will
# also delete old ones, so ensure that you do enter the prefix correctly
#
# Once the AS group has been updated with a new launch configuration this will
# go further to create scheduled events to increase capacity and then decrease it
# in order to push the new AMI into production. For this to work the AS group
# must be set to terminate instances with an old launch configuration first
#
# The scheduler currently assumes the user runs in British Summer Time, as the
# times in AWS seem inconsistent, so it remains to be determined how best to set the time
import string
import random
import boto3
import datetime
from pprint import pprint


def insecure_random_string(string_length):
    return ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(string_length))


def main(image_name_filter, launch_config_name_starts_with, as_group_name):
    ec2 = boto3.client('ec2')
    print("Searching for images using filter " + image_name_filter)
    images = ec2.describe_images(Filters=[
        {'Name': 'name', 'Values': [image_name_filter]}], Owners=['self'])
    recent_image = None
    old_images = []
    for image in images['Images']:
        if not recent_image:
            recent_image = image
        elif image['CreationDate'] > recent_image['CreationDate']:
            old_images.append(recent_image)
            recent_image = image
        else:
            old_images.append(image)
    print("Most recent image is " + recent_image['Name'] + " (" + recent_image['ImageId'] + ") created " + recent_image['CreationDate'])
    if not old_images:
        print("No old images to remove")
    for old_image in old_images:
        old_image_id = old_image['ImageId']
        old_snapshot_id = old_image['BlockDeviceMappings'][0]['Ebs']['SnapshotId']
        print("Deleting old image " + old_image_id + " and snapshot " + old_snapshot_id)
        ec2.deregister_image(ImageId=old_image_id)
        ec2.delete_snapshot(SnapshotId=old_snapshot_id)
    autoscaling = boto3.client('autoscaling')
    launch_configs = autoscaling.describe_launch_configurations()
    for launch_config in launch_configs['LaunchConfigurations']:
        name = launch_config['LaunchConfigurationName']
        if name.startswith(launch_config_name_starts_with):
            print("Cloning launch config named " + launch_config['LaunchConfigurationName'])
            launch_config['ImageId'] = recent_image['ImageId']
            launch_config['BlockDeviceMappings'][0]['Ebs']['SnapshotId'] = recent_image['BlockDeviceMappings'][0]['Ebs']['SnapshotId']
            launch_config['LaunchConfigurationName'] = recent_image['Name'] + '_' + insecure_random_string(4)
            # Strip read-only fields that create_launch_configuration rejects
            del launch_config['LaunchConfigurationARN']
            del launch_config['CreatedTime']
            del launch_config['KernelId']
            del launch_config['RamdiskId']
            print("Creating new launch config as follows")
            pprint(launch_config)
            autoscaling.create_launch_configuration(**launch_config)
            break
    autoscaling.update_auto_scaling_group(AutoScalingGroupName=as_group_name, LaunchConfigurationName=launch_config['LaunchConfigurationName'])
    dev_autoscale = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[
        as_group_name,
    ])['AutoScalingGroups'][0]
    size = dev_autoscale['DesiredCapacity']
    boost_size = size * 2
    print("Scheduling with capacity", boost_size)
    if boost_size > 1:
        autoscaling.put_scheduled_update_group_action(
            AutoScalingGroupName=as_group_name,
            ScheduledActionName='as-update-boost-' + str(boost_size),
            StartTime=(datetime.datetime.now() + datetime.timedelta(minutes=1) - datetime.timedelta(hours=1)),
            MinSize=boost_size,
            MaxSize=boost_size,
            DesiredCapacity=boost_size
        )
        autoscaling.put_scheduled_update_group_action(
            AutoScalingGroupName=as_group_name,
            ScheduledActionName='as-update-reduce-' + str(size),
            StartTime=(datetime.datetime.now() + datetime.timedelta(minutes=15) - datetime.timedelta(hours=1)),
            MinSize=size,
            MaxSize=size,
            DesiredCapacity=size
        )
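The hour subtraction in the StartTime values above compensates for the BST offset mentioned in the header comment. A less fragile approach (a sketch, not part of the original script; the helper name is my own) is to pass a timezone-aware UTC datetime, which boto3 serialises with an explicit offset so no local-time guesswork is needed:

```python
import datetime


def utc_start_time(minutes_from_now):
    """Return a timezone-aware UTC datetime for use as a scheduled action StartTime.

    Passing an aware datetime removes the ambiguity of naive local times,
    so no manual BST hour subtraction is required.
    """
    return datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=minutes_from_now)
```

Usage would be `StartTime=utc_start_time(1)` for the boost action and `StartTime=utc_start_time(15)` for the reduce action.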
# When using auto scaling, instances may disappear or reappear at will.
# Although it is generally not advised to do much work on the instances
# via SSH due to their temporary nature, an SSH terminal is useful for
# checking things are working or for accessing RDS or EFS attached
# storage, rather than creating extra servers for these purposes.
#
# This script can be run on a CloudWatch trigger for instance creation
# or termination to ensure that a specific Elastic IP is always allocated
# to at least one instance in a group. Be aware that this WILL pull the IP
# off of its current allocation if already allocated
#
# Declare the following
# AWS_ID = '' your account ID
# EIP_ID = '' the name beginning "eipalloc-" of your IP
# AS_GROUP_NAME = '' the name of your auto scaling group
import boto3
from pprint import pprint


def as_get_instances(asgroup, NextToken=None):
    client = boto3.client('autoscaling')
    # This method adapted from
    # https://gist.github.com/alertedsnake/4b85ea44481f518cf157
    if NextToken:
        irsp = client.describe_auto_scaling_instances(MaxRecords=2, NextToken=NextToken)
    else:
        irsp = client.describe_auto_scaling_instances(MaxRecords=2)
    for i in irsp['AutoScalingInstances']:
        if i['AutoScalingGroupName'] == asgroup:
            yield i['InstanceId']
    if 'NextToken' in irsp:
        for i in as_get_instances(asgroup, NextToken=irsp['NextToken']):
            yield i


def main():
    instances = list(as_get_instances(AS_GROUP_NAME))
    pprint(instances)
    ec2 = boto3.client('ec2')
    instances = ec2.describe_instances(InstanceIds=instances)
    no_ip = True
    for reservation in instances.get('Reservations'):
        instance = reservation.get('Instances')[0]
        association = instance.get('NetworkInterfaces')[0].get('Association')
        ip_owner = association.get('IpOwnerId')
        print('Owner for ' + instance.get('InstanceId') + ' is ' + ip_owner)
        if ip_owner == AWS_ID:
            print('Got elastic IP allocated ' + association.get('PublicIp') + ' for instance ' + instance.get('InstanceId'))
            no_ip = False
            break
    if not no_ip:
        print('Had an IP so stopping now')
        return
    first_instance = instances.get('Reservations')[0].get('Instances')[0]
    instance_id = first_instance.get('InstanceId')
    ec2.associate_address(AllocationId=EIP_ID, InstanceId=instance_id)
    print('Allocated address to ' + instance_id)
# This script will run to create an image of a specified Instance, based primarily
# off of a CloudWatch event to detect an instance has shut down. It can be used as
# part of a simple Auto Scaling set up where you provision machines from an AMI
# rather than using a tool such as Puppet or Chef, which add extra barriers when
# first adopting AWS. Once an image has been created, use as-switch-image.py to replace
# your launch config and trigger the switch-over process for the AS group
#
# Set up a CloudWatch event with the following pattern, replacing YOUR_INSTANCE_ID
# with the ID of the instance you want to track in this form
#
# {
# "source": [
# "aws.ec2"
# ],
# "detail-type": [
# "EC2 Instance State-change Notification"
# ],
# "detail": {
# "state": [
# "stopped"
# ],
# "instance-id": [
# YOUR_INSTANCE_ID
# ]
# }
# }
#
# And set NAME_PREFIX before the method declarations
import boto3
from pprint import pprint
from datetime import datetime


def main(event, context):
    ec2 = boto3.resource('ec2')
    pprint(event)
    instance_id = event['detail']['instance-id']
    instance = ec2.Instance(instance_id)
    date = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    name = NAME_PREFIX + '-' + date
    print('Create image for ' + instance_id + ' with name ' + name)
    image = instance.create_image(Name=name, Description='Image created automatically at ' + date + ' with name ' + name + ' following instance shut down')
    print('Created image ' + image.id)