#!/usr/bin/env bash
#
# Usage:
#   s3-get.sh <bucket> <region> <source-file> <dest-path>
#
# Description:
#   Retrieve a secured file from S3 using AWS signature 4.
#   To run, this shell script depends on command-line curl and openssl.
#
# References:
#   https://czak.pl/2015/09/15/s3-rest-api-with-curl.html
#   https://gist.github.com/adrianbartyczak/1a51c9fa2aae60d860ca0d70bbc686db
#
# set -x
set -e

script="${0##*/}"
usage="USAGE: $script <bucket> <region> <source-file> <dest-path>
Example: $script dev.build.artifacts us-east-1 /jobs/dev-job/1/dist.zip ./dist.zip"
[ $# -ne 4 ] && printf "ERROR: Expected 4 arguments.\n\n$usage\n" && exit 1
[ -z "$AWS_ACCESS_KEY_ID" -o -z "$AWS_SECRET_ACCESS_KEY" ] \
  && printf "ERROR: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables must be defined.\n" && exit 1
type openssl >/dev/null 2>&1 || { echo "ERROR: openssl is required and must be installed." >&2; exit 1; }
type curl >/dev/null 2>&1 || { echo "ERROR: curl is required and must be installed." >&2; exit 1; }
AWS_SERVICE='s3'
AWS_REGION="$2"
AWS_SERVICE_ENDPOINT_URL="${AWS_SERVICE}.${AWS_REGION}.amazonaws.com"
AWS_S3_BUCKET_NAME="$1"
# Ensure the source file path has a leading slash.
AWS_S3_PATH="$(echo "$3" | sed 's;^\([^/]\);/\1;')"
# Create an SHA-256 hash in hexadecimal.
# Usage:
#   hash_sha256 <string>
function hash_sha256 {
  printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'
}

# Create an SHA-256 hmac in hexadecimal.
# Usage:
#   hmac_sha256 <key> <data>
function hmac_sha256 {
  printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'
}
CURRENT_DATE_DAY="$(date -u '+%Y%m%d')"
CURRENT_DATE_ISO8601="${CURRENT_DATE_DAY}T$(date -u '+%H%M%S')Z"
HTTP_REQUEST_PAYLOAD_HASH="$(printf "" | openssl dgst -sha256 | sed 's/^.* //')"
HTTP_CANONICAL_REQUEST_URI="/${AWS_S3_BUCKET_NAME}${AWS_S3_PATH}"
HTTP_REQUEST_CONTENT_TYPE='application/octet-stream'
HTTP_CANONICAL_REQUEST_HEADERS="content-type:${HTTP_REQUEST_CONTENT_TYPE}
host:${AWS_SERVICE_ENDPOINT_URL}
x-amz-content-sha256:${HTTP_REQUEST_PAYLOAD_HASH}
x-amz-date:${CURRENT_DATE_ISO8601}"
# Note: The signed headers must match the canonical request headers.
HTTP_REQUEST_SIGNED_HEADERS="content-type;host;x-amz-content-sha256;x-amz-date"
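# Note: The literal "\n" sequences in the canonical request below are intentional.
# hash_sha256 pipes the string through printf, which expands them, so each "\n"
# plus the real newline that follows it produces the empty canonical query string
# line and the blank line that terminates the canonical headers block, as the
# Signature Version 4 canonical request format requires.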
HTTP_CANONICAL_REQUEST="GET | |
${HTTP_CANONICAL_REQUEST_URI}\n | |
${HTTP_CANONICAL_REQUEST_HEADERS}\n | |
${HTTP_REQUEST_SIGNED_HEADERS} | |
${HTTP_REQUEST_PAYLOAD_HASH}" | |
# Create the signature.
# Usage:
#   create_signature
function create_signature {
  stringToSign="AWS4-HMAC-SHA256\n${CURRENT_DATE_ISO8601}\n${CURRENT_DATE_DAY}/${AWS_REGION}/${AWS_SERVICE}/aws4_request\n$(hash_sha256 "${HTTP_CANONICAL_REQUEST}")"
  dateKey=$(hmac_sha256 key:"AWS4${AWS_SECRET_ACCESS_KEY}" "${CURRENT_DATE_DAY}")
  regionKey=$(hmac_sha256 hexkey:"${dateKey}" "${AWS_REGION}")
  serviceKey=$(hmac_sha256 hexkey:"${regionKey}" "${AWS_SERVICE}")
  signingKey=$(hmac_sha256 hexkey:"${serviceKey}" "aws4_request")
  printf "${stringToSign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signingKey}" | sed 's/^.* //'
}
SIGNATURE="$(create_signature)"
HTTP_REQUEST_AUTHORIZATION_HEADER="\
AWS4-HMAC-SHA256 Credential=${AWS_ACCESS_KEY_ID}/${CURRENT_DATE_DAY}/\
${AWS_REGION}/${AWS_SERVICE}/aws4_request, \
SignedHeaders=${HTTP_REQUEST_SIGNED_HEADERS}, Signature=${SIGNATURE}"
[ -d "$4" ] && OUT_FILE="$4/$(basename "$AWS_S3_PATH")" || OUT_FILE="$4"
echo "Downloading https://${AWS_SERVICE_ENDPOINT_URL}${HTTP_CANONICAL_REQUEST_URI} to ${OUT_FILE}"
curl "https://${AWS_SERVICE_ENDPOINT_URL}${HTTP_CANONICAL_REQUEST_URI}" \
  -H "Authorization: ${HTTP_REQUEST_AUTHORIZATION_HEADER}" \
  -H "content-type: ${HTTP_REQUEST_CONTENT_TYPE}" \
  -H "x-amz-content-sha256: ${HTTP_REQUEST_PAYLOAD_HASH}" \
  -H "x-amz-date: ${CURRENT_DATE_ISO8601}" \
  -f -S -o "${OUT_FILE}"
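For reference, a typical invocation looks like this (the bucket, region, and paths are the example values from the usage string above; the credential values are placeholders):

export AWS_ACCESS_KEY_ID=AKIA...                # placeholder
export AWS_SECRET_ACCESS_KEY=...                # placeholder
./s3-get.sh dev.build.artifacts us-east-1 /jobs/dev-job/1/dist.zip ./dist.zip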
First off, this is amazing, thank you! I'm trying to edit this to do the opposite: to cURL a file into an S3 bucket. I was able to get the S3 GET to work, and have been trying to reverse engineer it to create an S3 PUT. Any advice?
I haven't tried it, but this script should handle it: https://gist.github.com/ziocleto/6ac24cf7faadce6a5d416a5194e910f5. Really you just need to remove the -o ${OUT_FILE} and add a -T ${YOUR_SRC_FILE_PATH}. When doing the -T to upload a file, a method of PUT (-X PUT) is implied.
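Roughly, the upload side could look like the sketch below (untested; it reuses the variables and signing code from the script above, SRC_FILE is a hypothetical name, and the canonical request's method, payload hash, and canonical headers all have to be recomputed for the PUT, not just the curl flags):

SRC_FILE="./dist.zip"   # hypothetical: local file to upload
# The payload hash is now the hash of the uploaded file, not of an empty body.
HTTP_REQUEST_PAYLOAD_HASH="$(openssl dgst -sha256 < "${SRC_FILE}" | sed 's/^.* //')"
# Rebuild HTTP_CANONICAL_REQUEST_HEADERS with the new x-amz-content-sha256 value,
# then build the canonical request with the PUT method.
HTTP_CANONICAL_REQUEST="PUT
${HTTP_CANONICAL_REQUEST_URI}\n
${HTTP_CANONICAL_REQUEST_HEADERS}\n
${HTTP_REQUEST_SIGNED_HEADERS}
${HTTP_REQUEST_PAYLOAD_HASH}"
# Recompute SIGNATURE and HTTP_REQUEST_AUTHORIZATION_HEADER exactly as in the script, then:
curl "https://${AWS_SERVICE_ENDPOINT_URL}${HTTP_CANONICAL_REQUEST_URI}" \
  -H "Authorization: ${HTTP_REQUEST_AUTHORIZATION_HEADER}" \
  -H "content-type: ${HTTP_REQUEST_CONTENT_TYPE}" \
  -H "x-amz-content-sha256: ${HTTP_REQUEST_PAYLOAD_HASH}" \
  -H "x-amz-date: ${CURRENT_DATE_ISO8601}" \
  -f -S -T "${SRC_FILE}"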
Thank you, I appreciate the reply! I'll give it a try and see what happens.
Did you try it? Can you tell me whether it worked for you? It does not work for me.
Hi! I am getting "PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint." Any ideas?
PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint
Are you using the correct AWS region?
The ${AWS_REGION} variable isn't included in the endpoint URL, causing the redirect problem.
Fix:
-AWS_SERVICE_ENDPOINT_URL="${AWS_SERVICE}.amazonaws.com"
+AWS_SERVICE_ENDPOINT_URL="${AWS_SERVICE}.${AWS_REGION}.amazonaws.com"
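If you're not sure which region a bucket is in, one way to check (a suggestion, not part of the original script) is to look at the x-amz-bucket-region header that S3 returns even on redirected or unauthenticated requests, replacing <your-bucket> with the bucket name:

curl -sI "https://s3.amazonaws.com/<your-bucket>" | grep -i x-amz-bucket-region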
Hi! It doesn't work for me even after I added ${AWS_REGION} to AWS_SERVICE_ENDPOINT_URL; it returns "The requested URL returned error: 404 Not Found".
Even without ${AWS_REGION}, I am getting "PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
Can anyone help, please?
Wow, it's been almost a whole year since I was worried about this... Either way, I was eventually able to modify this script enough to get and put files manually into an S3 bucket. It all comes down to creating the canonical hash exactly how AWS wants you to and using the right AWS region. I'll have to break the script out and verify that it still works, and I'll post it. Feel free to email me ([email protected]).
@rgazzeh Check out this Stack Overflow post.
@mmaday Thank you for sharing, it works perfectly. I am using this in the initial Docker setup, where we need to pull the DBs in order to restore them.
Allow me to make some suggestions.
Users who have little experience with bash scripting won't understand environment variables, so we need to explicitly tell them to create them with these commands:
export AWS_ACCESS_KEY_ID=yourKeyHere
export AWS_SECRET_ACCESS_KEY=yourSecretKeyHere
Also, since those credentials are sensitive information, I would prompt the user for them explicitly and then proceed; I will share that version here soon.
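A minimal sketch of that prompt (hypothetical, just to illustrate the idea with bash's read built-in):

read -r -p "AWS Access Key ID: " AWS_ACCESS_KEY_ID
read -r -s -p "AWS Secret Access Key: " AWS_SECRET_ACCESS_KEY
echo
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY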
Thank you.
This is awesome man. Thank you so much!
Running into "The requested URL returned error: 403" after filling everything in. The access and secret keys are definitely correct, and the user can access the bucket. Any recommendations?
+1, getting HTTP/1.1 403 Forbidden now.
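One debugging suggestion (not from the original script): curl's -f flag suppresses the response body on server errors, so the actual S3 error XML never shows up. Rerunning the request without -f and with -v usually shows whether the 403 is a SignatureDoesNotMatch, a clock-skew error (RequestTimeTooSkewed), or a plain AccessDenied:

curl -v "https://${AWS_SERVICE_ENDPOINT_URL}${HTTP_CANONICAL_REQUEST_URI}" \
  -H "Authorization: ${HTTP_REQUEST_AUTHORIZATION_HEADER}" \
  -H "content-type: ${HTTP_REQUEST_CONTENT_TYPE}" \
  -H "x-amz-content-sha256: ${HTTP_REQUEST_PAYLOAD_HASH}" \
  -H "x-amz-date: ${CURRENT_DATE_ISO8601}"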
FYI, this works with Wasabi by changing the AWS_SERVICE_ENDPOINT_URL line to
AWS_SERVICE_ENDPOINT_URL="${AWS_SERVICE}.${AWS_REGION}.wasabisys.com"