If you would like to persist data from your ECS containers, e.g. when hosting databases like MySQL or MongoDB with Docker, you need to mount the database's data directory in the container onto a volume that won't disappear when your container, or worse yet the EC2 instance that hosts your containers, is restarted or scaled up or down for any reason.
Don't know how to create your own AWS ECS Cluster? Go here!
Sadly, the EC2 provisioning process doesn't allow you to configure EFS during the initial setup. After you create your cluster, follow the guide below.
If you're using an Alpine-based Node server like duluca/minimal-node-web-server, follow this guide:
- Go to Amazon ECS
- Task Definitions -> Create new Task Definition
- Name: app-name-task, role: none, network: bridge
- Add container, name: app-name from before, image: URI from before, but append ":latest"
- Soft limit, 256 MB for Node.js
- Port mappings, Container port: 3000
- Log configuration: awslogs; app-name-logs, region, app-name-prod
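If you prefer to script this, here's a minimal sketch of the same task definition using the AWS CLI. The family name, image URI, log group, and region are placeholders based on the steps above; substitute your own values.

```bash
# Sketch: register the Node.js task definition via the AWS CLI.
# "app-name-task", the image URI, and the log settings are placeholders.
aws ecs register-task-definition --cli-input-json '{
  "family": "app-name-task",
  "networkMode": "bridge",
  "containerDefinitions": [{
    "name": "app-name",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app-name:latest",
    "memoryReservation": 256,
    "portMappings": [{ "containerPort": 3000 }],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "app-name-logs",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "app-name-prod"
      }
    }
  }]
}'
```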
If you're hosting a lightweight database like mongo or excellalabs/mongo:
- Go to Amazon ECS
- Task Definitions -> Create new Task Definition
- Name: mongodb-task, role: none, network: bridge
- Add container, name: mongodb-prod, image: mongo or excellalabs/mongo, append a version number like ":3.4.7"
- Soft limit, 1024 MB
- Port mappings, Container port: 27017
- Log configuration: awslogs; mongodb-prod-logs, region, mongodb-prod
- Add environment variables (see the excellalabs/mongo repo for details):
  - MONGODB_ADMIN_PASS
  - MONGODB_APPLICATION_DATABASE
  - MONGODB_APPLICATION_PASS
  - MONGODB_APPLICATION_USER
It is not a security best practice to store such secrets in an unencrypted form. If you'd like to do it the right way, here's your homework: https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/
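As a taste of that approach, here's a minimal sketch of storing one of these secrets in Parameter Store as an encrypted SecureString. The parameter name and key alias are hypothetical; adjust them to your setup.

```bash
# Sketch: store the Mongo admin password as an encrypted SSM parameter.
# The parameter name and KMS key alias are hypothetical placeholders.
aws ssm put-parameter \
  --name /mongodb-prod/MONGODB_ADMIN_PASS \
  --type SecureString \
  --key-id alias/mongodb-prod \
  --value 'your-admin-password'
```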
- Then create a new service based on this task definition.
  - Make sure that under Deployment Options, Minimum healthy percent is 0 and Maximum percent is 100. You never want two separate Mongo instances mounted to the same data source.
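The equivalent deployment configuration from the CLI would look something like this sketch (cluster and service names are placeholders):

```bash
# Sketch: create the service so at most one Mongo task ever runs at a time.
aws ecs create-service \
  --cluster cluster-name \
  --service-name mongodb-prod \
  --task-definition mongodb-task \
  --desired-count 1 \
  --deployment-configuration maximumPercent=100,minimumHealthyPercent=0
```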
If you would like to encrypt your file system at-rest, then you must have a KMS key.
If not, you may skip this step, but it is strongly recommended that you encrypt your data, no matter how unimportant you think your data is at the moment.
- Head over to IAM -> Encryption Keys
- Create key
- Provide Alias and a description
- Tag with 'Environment': 'production'
- Carefully select 'Key Administrators'
- Uncheck 'Allow key administrators to delete this key.' to prevent accidental deletions
- Key Usage Permissions
- Select the 'Task Role' that was created when configuring your AWS ECS Cluster. If you don't have one, see the Create Task Role section in the guide linked above. You'll need to update existing task definitions and update your service with the new task definition for the changes to take effect.
- Finish
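If you'd rather script this, a minimal sketch with the AWS CLI follows. The alias name is an assumption, the tag mirrors the steps above, and the key administrator/usage policy setup is left to the console:

```bash
# Sketch: create an encryption key and a friendly alias for it.
# "alias/ecs-efs-key" is a hypothetical name -- pick your own.
KEY_ID=$(aws kms create-key \
  --description "EFS encryption key for the ECS cluster" \
  --tags TagKey=Environment,TagValue=production \
  --query KeyMetadata.KeyId --output text)
aws kms create-alias --alias-name alias/ecs-efs-key --target-key-id "$KEY_ID"
```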
- Launch EFS
- Create file system
- Select the VPC that your ECS cluster resides in
- Select the AZs that your container instances reside in
- Next
- Add a name
- Enable encryption (You WANT this -- see above)
- Create File System
- Back on the EFS main page, expand the EFS definition, if not already expanded
- Copy the DNS name
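Alternatively, here's a sketch of the same file system creation from the CLI, assuming the hypothetical KMS key alias from earlier and placeholder subnet IDs (one mount target per AZ your instances run in; the mount targets also need a security group that allows NFS, port 2049, from your instances):

```bash
# Sketch: create an encrypted file system and a mount target per AZ.
FS_ID=$(aws efs create-file-system \
  --creation-token ecs-data \
  --encrypted \
  --kms-key-id alias/ecs-efs-key \
  --query 'FileSystemId' --output text)
aws efs create-mount-target --file-system-id "$FS_ID" --subnet-id subnet-aaaa1111
aws efs create-mount-target --file-system-id "$FS_ID" --subnet-id subnet-bbbb2222
# The DNS name you'll need next follows this pattern:
echo "$FS_ID.efs.us-east-1.amazonaws.com"
```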
- CloudFormation
- Select EC2ContainerService-cluster-name
- View/edit design template
- Modify the YAML to add `EfsUri` amongst the input parameters:

```yaml
EfsUri:
  Type: String
  Description: >
    EFS volume DNS URI you would like to mount your EC2 instances to. Directory -> /mnt/efs
  Default: ''
```
- Find `EcsInstanceLc` and update its `UserData` property to look like:

```yaml
UserData: !If
  - SetEndpointToECSAgent
  - Fn::Base64: !Sub |
      #!/bin/bash
      # Install nfs-utils
      cloud-init-per once yum_update yum update -y
      cloud-init-per once install_nfs_utils yum install -y nfs-utils
      # Create /efs folder
      cloud-init-per once mkdir_efs mkdir /efs
      # Mount /efs
      cloud-init-per once mount_efs echo -e '${EfsUri}:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' >> /etc/fstab
      mount -a
      echo ECS_CLUSTER=${EcsClusterName} >> /etc/ecs/ecs.config
      echo ECS_BACKEND_HOST=${EcsEndpoint} >> /etc/ecs/ecs.config
  - Fn::Base64: !Sub |
      #!/bin/bash
      # Install nfs-utils
      cloud-init-per once yum_update yum update -y
      cloud-init-per once install_nfs_utils yum install -y nfs-utils
      # Create /efs folder
      cloud-init-per once mkdir_efs mkdir /efs
      # Mount /efs
      cloud-init-per once mount_efs echo -e '${EfsUri}:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' >> /etc/fstab
      mount -a
      echo ECS_CLUSTER=${EcsClusterName} >> /etc/ecs/ecs.config
```
- Validate the template
- Save the template to S3 and copy the URL
- Select your CloudFormation stack again -> Update stack
- Paste in the S3 url -> Next
- Now you'll see an `EfsUri` parameter; set it to the DNS name you copied in the previous part
- On the review screen, make sure it is only updating the Auto Scaling Group (ASG) and the Launch Configuration (LC)
- Let it update the stack
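A sketch of the same validate/upload/update cycle from the CLI, with the bucket, stack name, and file system ID as placeholders:

```bash
# Sketch: stage the edited template in S3, validate it, then update the stack.
aws s3 cp ecs-cluster.yml s3://my-templates-bucket/ecs-cluster.yml
aws cloudformation validate-template \
  --template-url https://my-templates-bucket.s3.amazonaws.com/ecs-cluster.yml
# Carry over the stack's other parameters with UsePreviousValue=true,
# and add --capabilities CAPABILITY_IAM if the template contains IAM resources.
aws cloudformation update-stack \
  --stack-name EC2ContainerService-cluster-name \
  --template-url https://my-templates-bucket.s3.amazonaws.com/ecs-cluster.yml \
  --parameters ParameterKey=EfsUri,ParameterValue=fs-12345678.efs.us-east-1.amazonaws.com
```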
- ECS -> Cluster
- Switch to the ECS Instances tab

There are two paths forward here. One is the sledgehammer, which will bring down your applications:

- Scale ECS instances to 0 (note: this is the part where your applications come down)
- After all instances have been brought down, scale back up to 2 (or more)

Or perform a rolling update, which will keep your application alive:
- Click on the EC2 instance and on the EC2 dashboard, select Actions -> State -> Terminate
- Wait while the instance is terminated and reprovisioned
- Rinse and repeat for the next instance
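The rolling path can also be scripted. A rough sketch, assuming your ASG replaces each terminated instance (wait for each replacement to register with the cluster before continuing):

```bash
# Sketch: terminate container instances one at a time, letting the ASG replace each.
CLUSTER=cluster-name  # placeholder -- use your cluster's name
for arn in $(aws ecs list-container-instances --cluster "$CLUSTER" \
    --query 'containerInstanceArns[]' --output text); do
  EC2_ID=$(aws ecs describe-container-instances --cluster "$CLUSTER" \
    --container-instances "$arn" \
    --query 'containerInstances[0].ec2InstanceId' --output text)
  aws ec2 terminate-instances --instance-ids "$EC2_ID"
  # Pause here until the replacement instance has joined the cluster.
  read -p "Press enter once the new instance is registered..."
done
```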
- ECS -> Task definitions
- Create new revision
- If you have not already added it, make sure the Role here matches the one for the KMS key
- Add volume
- Name: 'efs', Source Path: '/efs/your-dir' (If this doesn't work try '/mnt/efs/your-dir')
- Add
- Click on container name, under Storage and Logs
- Select mount point 'efs'
- Provide the internal container path, e.g. for MongoDB the default is '/data/db'
- Update
- Create
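In task definition JSON terms, the volume and mount point added above correspond to something like this sketch (directory and names match the placeholders used in this guide):

```bash
# Sketch: the new revision, with the EFS-backed host volume mounted into the container.
aws ecs register-task-definition --cli-input-json '{
  "family": "mongodb-task",
  "networkMode": "bridge",
  "volumes": [
    { "name": "efs", "host": { "sourcePath": "/efs/your-dir" } }
  ],
  "containerDefinitions": [{
    "name": "mongodb-prod",
    "image": "mongo:3.4.7",
    "memoryReservation": 1024,
    "portMappings": [{ "containerPort": 27017 }],
    "mountPoints": [
      { "sourceVolume": "efs", "containerPath": "/data/db" }
    ]
  }]
}'
```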
- ECS -> Clusters
- Click on Service name
- Update
- Type in the new task definition name
- Update service
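Or from the CLI, a one-liner sketch (the revision number is a placeholder):

```bash
# Sketch: point the service at the new task definition revision.
aws ecs update-service \
  --cluster cluster-name \
  --service mongodb-prod \
  --task-definition mongodb-task:2
```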
Your service should re-provision the existing containers and voila, you're done!
Test what you have done.
Go ahead and save some data.
Then scale your EC2 instance count down to 0 (the sledgehammer) and scale it back up again, and see if the data is still accessible.
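To double-check the mount itself, you can SSH into a container instance and confirm the EFS volume is attached, along these lines:

```bash
# Sketch: verify the EFS mount on a container instance.
mount | grep efs   # should show the nfs4 mount at /efs
df -h /efs         # shows usage backed by the EFS file system
ls /efs/your-dir   # your database files should survive instance recycling
```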