How to Delete S3 Bucket Contents in CloudFormation
AWSTemplateFormatVersion: "2010-09-09"
Description: >
  This template builds a bucket and a custom CloudFormation resource that empties the bucket when the stack is deleted.
  Project site: https://github.com/drumadrian/custom-cloudformation-bucket-cleanup

Parameters:
  ArtifactStoreBucketName:
    Description: A name for the deployment artifact S3 bucket
    Type: String
    Default: "bucket"
Resources:
  ArtifactStoreBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref ArtifactStoreBucketName
      VersioningConfiguration:
        Status: Enabled

  cleanupBucketOnDelete:
    DependsOn: cleanupBucketOnDeleteLambda
    Type: Custom::cleanupbucket
    Properties:
      ServiceToken:
        Fn::GetAtt:
          - "cleanupBucketOnDeleteLambda"
          - "Arn"
      BucketName: !Ref ArtifactStoreBucketName

  cleanupBucketOnDeleteLambda:
    DependsOn: ArtifactStoreBucket
    Type: "AWS::Lambda::Function"
    Properties:
      Code:
        ZipFile: !Sub |
          #!/usr/bin/env python
          # -*- coding: utf-8 -*-
          import json
          import boto3
          from botocore.exceptions import ClientError
          from botocore.vendored import requests

          def empty_delete_buckets(bucket_name):
              """
              Empties the bucket so that CloudFormation can delete it.
              :param bucket_name: name of the bucket to empty
              :return: None
              """
              print "trying to empty the bucket {0}".format(bucket_name)
              s3_client = boto3.client('s3')
              s3 = boto3.resource('s3')
              try:
                  s3.Bucket(bucket_name).load()
              except ClientError:
                  print "bucket {0} does not exist".format(bucket_name)
                  return

              # Suspend versioning so no new versions appear while the bucket is emptied
              response = s3_client.get_bucket_versioning(Bucket=bucket_name)
              status = response.get('Status', '')
              if status == 'Enabled':
                  s3_client.put_bucket_versioning(
                      Bucket=bucket_name,
                      VersioningConfiguration={'Status': 'Suspended'})

              # Delete every object version and every delete marker
              paginator = s3_client.get_paginator('list_object_versions')
              for page in paginator.paginate(Bucket=bucket_name):
                  print page
                  for delete_marker in page.get('DeleteMarkers') or []:
                      s3_client.delete_object(Bucket=bucket_name,
                                              Key=delete_marker['Key'],
                                              VersionId=delete_marker['VersionId'])
                  for version in page.get('Versions') or []:
                      print version
                      s3_client.delete_object(Bucket=bucket_name,
                                              Key=version['Key'],
                                              VersionId=version['VersionId'])

              # Delete any remaining (unversioned) objects
              object_paginator = s3_client.get_paginator('list_objects_v2')
              for page in object_paginator.paginate(Bucket=bucket_name):
                  for content in page.get('Contents') or []:
                      s3_client.delete_object(Bucket=bucket_name, Key=content['Key'])

              # UNCOMMENT THE LINE BELOW TO MAKE LAMBDA DELETE THE BUCKET.
              # THIS WILL CAUSE A FAILURE, SINCE CLOUDFORMATION ALSO TRIES TO DELETE THE BUCKET.
              # s3_client.delete_bucket(Bucket=bucket_name)
              # print "Successfully deleted the bucket {0}".format(bucket_name)
              print "Successfully emptied the bucket {0}".format(bucket_name)

          def lambda_handler(event, context):
              try:
                  bucket = event['ResourceProperties']['BucketName']
                  if event['RequestType'] == 'Delete':
                      empty_delete_buckets(bucket)
                      # Alternative: empty the bucket with the S3 resource API
                      # s3 = boto3.resource('s3')
                      # bucket = s3.Bucket(bucket)
                      # bucket.objects.all().delete()
                      # for obj in bucket.objects.filter():
                      #     s3.Object(bucket.name, obj.key).delete()
                  sendResponseCfn(event, context, "SUCCESS")
              except Exception as e:
                  print(e)
                  sendResponseCfn(event, context, "FAILED")

          def sendResponseCfn(event, context, responseStatus):
              # Report the result back to CloudFormation via the pre-signed ResponseURL
              response_body = {'Status': responseStatus,
                               'Reason': 'Log stream name: ' + context.log_stream_name,
                               'PhysicalResourceId': context.log_stream_name,
                               'StackId': event['StackId'],
                               'RequestId': event['RequestId'],
                               'LogicalResourceId': event['LogicalResourceId'],
                               'Data': {}}
              requests.put(event['ResponseURL'], data=json.dumps(response_body))
      Description: Cleanup-bucket-on-delete Lambda function.
      # FunctionName: lambda_function
      Handler: index.lambda_handler
      Role: !GetAtt cleanupBucketOnDeleteLambdaRole.Arn
      Runtime: python2.7
      Timeout: 60
  cleanupBucketOnDeleteLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: !Join [ '-', [ !Ref 'AWS::StackName', 'cleanupBucketOnDeleteLambdaPolicy' ] ]
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:*
                  - s3:*
                Resource: '*'
              - Effect: Deny
                Action:
                  - s3:DeleteBucket
                Resource: '*'
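              # The explicit Deny above prevents the Lambda from deleting the bucket itself;
              # CloudFormation deletes the bucket after the custom resource has emptied it.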
  ############################################################################################################
  # Extra resources created so the template is interesting
  ############################################################################################################
  SSMstacknameparameter:
    Type: "AWS::SSM::Parameter"
    Properties:
      Description: The name of the stack that created this asset
      Name: !Sub "${AWS::StackName}-stackname"
      Type: String
      Value: !Sub "${AWS::StackName}"
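
To exercise the cleanup, create the stack and then delete it: on deletion CloudFormation invokes the Custom::cleanupbucket resource, the Lambda empties the versioned bucket, and CloudFormation then removes the bucket itself. Below is a minimal boto3 sketch of that round trip; the stack name, the template.yaml path, and the bucket name are illustrative values (the bucket name must be globally unique), not part of the gist.

# Sketch: create and later delete the stack to exercise the cleanup custom resource.
# The stack name, file path, and bucket name below are illustrative, not from the gist.
import boto3

cfn = boto3.client('cloudformation')

with open('template.yaml') as f:
    template_body = f.read()

cfn.create_stack(
    StackName='cleanup-demo',
    TemplateBody=template_body,
    Capabilities=['CAPABILITY_IAM'],   # the template creates an IAM role
    Parameters=[{'ParameterKey': 'ArtifactStoreBucketName',
                 'ParameterValue': 'cleanup-demo-bucket-example-123456'}],
)
cfn.get_waiter('stack_create_complete').wait(StackName='cleanup-demo')

# Deleting the stack triggers Custom::cleanupbucket, which empties the bucket
# (including versions and delete markers) before CloudFormation deletes the bucket.
cfn.delete_stack(StackName='cleanup-demo')
cfn.get_waiter('stack_delete_complete').wait(StackName='cleanup-demo')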
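
Note that the inline code targets the python2.7 runtime and sends its response with botocore.vendored.requests, both of which have since been retired from Lambda. As a rough, assumed adaptation (not the gist author's code), the handler could be rewritten for a Python 3 runtime using the cfnresponse helper that Lambda provides to ZipFile-inline functions, with the boto3 resource API doing the emptying:

# Sketch only: an assumed Python 3 adaptation of the handler above.
import boto3
import cfnresponse  # available to functions whose code is inlined via ZipFile

def lambda_handler(event, context):
    try:
        bucket_name = event['ResourceProperties']['BucketName']
        if event['RequestType'] == 'Delete':
            bucket = boto3.resource('s3').Bucket(bucket_name)
            bucket.object_versions.delete()  # deletes versions and delete markers
            bucket.objects.all().delete()    # deletes any remaining unversioned objects
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        print(exc)
        cfnresponse.send(event, context, cfnresponse.FAILED, {})

With that change the Runtime property would move to a current Python version; the s3:DeleteBucket Deny in the IAM role still applies unchanged.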