#!/bin/sh
# Make sure to:
# 1) Name this file `backup.sh` and place it in /home/ubuntu
# 2) Run `sudo apt-get install awscli` to install the AWS CLI
# 3) Run `aws configure` (enter an S3-authorized IAM user and specify the region)
# 4) Fill in the DB host + name below
# 5) Create an S3 bucket for the backups and fill it in below (set a lifecycle rule to expire files older than X days in the bucket)
# 6) Run `chmod +x backup.sh`
# 7) Test it out via `./backup.sh`
# 8) Set up a daily backup at midnight via `crontab -e`:
#    0 0 * * * /home/ubuntu/backup.sh > /home/ubuntu/backup.log

# DB host (a secondary is preferred, so as to avoid impacting primary performance)
HOST=db.example.com
# DB name
DBNAME=my-db
# S3 bucket name
BUCKET=s3-bucket-name
# Linux user account
USER=ubuntu
# Current time
TIME=$(/bin/date +%d-%m-%Y-%T)
# Backup directory
DEST=/home/$USER/tmp
# Tar file of backup directory
TAR=$DEST/../$TIME.tar
# Create backup dir (-p to avoid a warning if it already exists)
/bin/mkdir -p $DEST
# Log
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME"
# Dump from the MongoDB host into the backup directory
/usr/bin/mongodump -h $HOST -d $DBNAME -o $DEST
# Create tar of backup directory
/bin/tar cvf $TAR -C $DEST .
# Upload tar to S3
/usr/bin/aws s3 cp $TAR s3://$BUCKET/
# Remove tar file locally
/bin/rm -f $TAR
# Remove backup directory
/bin/rm -rf $DEST
# All done
echo "Backup available at https://s3.amazonaws.com/$BUCKET/$TIME.tar"
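One small note on step 8: the crontab line above redirects only stdout, so errors from mongodump or the AWS CLI vanish. A variant (the append-to-log choice is an assumption, not part of the original gist) that also captures stderr:

```
# Cron entry: run nightly at midnight, appending both stdout and stderr to the log
0 0 * * * /home/ubuntu/backup.sh >> /home/ubuntu/backup.log 2>&1
```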
I am using Debian on an EC2 instance. I keep getting tar's "cowardly refusing to create an empty archive" error when archiving the database. The data is there, and I got it to tar by hand without the -C flag, but any time I use the script it fails.
@TopHatMan What is the exact error message you are facing?
@eladnava
Will it be okay if the mongodump size is over 11GB?
Hi @alidavid0418,
Should be fine, but you will need at least 23GB of free space on your / mounted partition. S3 definitely supports large files. :)
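A pre-flight free-space check can catch this before the dump starts. Below is a minimal sketch; the ~23GB figure follows the thread above (an ~11GB dump plus its tar copy), and the exact threshold is an assumption you should tune for your own data size:

```shell
#!/bin/sh
# Sketch: verify there is enough free space on / before dumping.
check_space() {
  # $1 = required free space in KB on the filesystem holding the backup dir
  # df -P gives POSIX single-line output; column 4 is available KB
  avail_kb=$(df -kP / | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge "$1" ]
}

# Example: require ~23GB free before allowing the backup to proceed
if check_space $((23 * 1024 * 1024)); then
  echo "enough space"
else
  echo "not enough space on /, aborting" >&2
fi
```

You would call `check_space` at the top of `backup.sh` and `exit 1` when it fails, so cron logs a clear message instead of a half-written dump.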
@eladnava
Thank you for your kind attention and confirmation. +1
error parsing command line options: expected argument for flag `-h, --host', but got option `-d'
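That error means mongodump received `-d` as the value of `-h`, which happens when the HOST variable expands to an empty string. A minimal defensive sketch (the `check_config` helper and placeholder values are assumptions, not part of the original script) that aborts before calling mongodump:

```shell
#!/bin/sh
# Hypothetical guard: refuse to run the backup if required variables are empty.
# Variable names match the backup script above; values here are placeholders.
HOST="db.example.com"
DBNAME="my-db"

check_config() {
  # If a variable is empty, its "NAME=value" pair ends in a bare "=",
  # which the case pattern below catches
  for pair in "HOST=$HOST" "DBNAME=$DBNAME"; do
    case "$pair" in
      *=) echo "Missing value: ${pair%=}" >&2; return 1 ;;
    esac
  done
  return 0
}

if check_config; then
  echo "config ok"
fi
```

With this in place, an unset host produces "Missing value: HOST" in the log instead of mongodump's confusing flag-parsing error.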
How do I back up MongoDB data (running in an ECS container) to an S3 bucket? And how do I run the backup script, and from where?
And a simpler way to do all this is:
mongodump --archive --gzip | aws s3 cp - s3://my-bucket/some-file
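Expanding that one-liner slightly: `mongodump --archive --gzip` writes a gzipped archive to stdout, and `aws s3 cp - …` reads stdin, so nothing touches local disk. A sketch that adds a timestamped key (the bucket name and `backups/` key layout are assumptions, not from the gist):

```shell
#!/bin/sh
# Sketch: stream the dump straight to S3 under a dated key, no local disk needed.
BUCKET=s3-bucket-name

# Build the S3 key from the current date, e.g. backups/21-06-2025.archive.gz
s3_key() {
  echo "backups/$(/bin/date +%d-%m-%Y).archive.gz"
}

# mongodump streams the gzipped archive to stdout; `aws s3 cp -` uploads stdin
run_backup() {
  mongodump --archive --gzip | aws s3 cp - "s3://$BUCKET/$(s3_key)"
}

echo "Would upload to: s3://$BUCKET/$(s3_key)"
```

Restoring is the mirror image: `aws s3 cp s3://$BUCKET/<key> - | mongorestore --archive --gzip`.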
clean and simple! thanks
When running `aws configure`, shall I set the default output format to zip or tar.gz? By default it is json.