#!/bin/bash

S3_BUCKET_NAME=$1
CF_ID=$2

# Sync all files except for service-worker and index
echo "Uploading files to $S3_BUCKET_NAME..."
aws s3 sync build "s3://$S3_BUCKET_NAME/" \
  --acl public-read \
  --exclude service-worker.js \
  --exclude index.html

# Upload service-worker.js with a directive to not cache it
echo "Uploading service-worker.js"
aws s3 cp build/service-worker.js "s3://$S3_BUCKET_NAME/service-worker.js" \
  --metadata-directive REPLACE \
  --cache-control max-age=0,no-cache,no-store,must-revalidate \
  --content-type application/javascript \
  --acl public-read

# Upload index.html with the same no-cache directive
echo "Uploading index.html"
aws s3 cp build/index.html "s3://$S3_BUCKET_NAME/index.html" \
  --metadata-directive REPLACE \
  --cache-control max-age=0,no-cache,no-store,must-revalidate \
  --content-type text/html \
  --acl public-read

# Purge the CloudFront cache
echo "Purging the cache for CloudFront"
aws cloudfront create-invalidation \
  --distribution-id "$CF_ID" \
  --paths /
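
For reference, a hypothetical invocation, assuming the script is saved as deploy.sh next to the build/ directory (the bucket name and distribution ID below are placeholders):

# Hypothetical usage; substitute your own bucket name and CloudFront distribution ID.
chmod +x deploy.sh
./deploy.sh my-app-bucket E2EXAMPLE12345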
Why do an invalidation if you're going through the effort of setting Cache-Control on the only files that would end up needing to be invalidated? I'm surprised; avoiding an invalidation was my whole rationale for setting Cache-Control to begin with.
I agree with @john-osullivan: there doesn't seem to be a need for cache invalidation.
Still a great script for reference.
Related script, FYI: https://gist.github.com/kellyrmilligan/e242d3dc743105fe91a83cc933ee1314
@john-osullivan @joaoportela The invalidation is needed because all files other than index.html and service-worker.js might have been cached, depending on the CloudFront settings.
@Fantaztig I think we are all assuming that all the rest of the files are cache-busted (or have unique names). As I understand it, that's the whole point of this approach:

These files get Cache-Control: no-cache... (as they're the entry points into the app):
- index.html
- service-worker.js

These files are cache-busted:
- js/some-bundle.0123213123.js
- js/some-vendor-bundle.1231232432.js
- css/some-style.123123213.css

And these files have unique names (each is uploaded only once; if you want to replace an image, you simply upload the file under a new name):
- img/logo-main.123123213.png
- img/blog-images/header.123123213.png
- img/blog-images/header.2342343242.png

etc.

Taking this approach, you would never need to perform a CloudFront cache invalidation. (Imagine you have 100,000 files in your cache: are you going to invalidate all 100,000 each time you replace an image in an earlier blog post?)
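
To make that concrete, here is one way the script's sync step could mark the cache-busted assets as long-lived (a sketch; the one-year max-age is my choice, not something from the original script):

# Sketch: cache-busted/unique-name assets can be cached aggressively,
# since a new build produces new filenames instead of overwriting old ones.
aws s3 sync build "s3://$S3_BUCKET_NAME/" \
  --acl public-read \
  --exclude service-worker.js \
  --exclude index.html \
  --cache-control max-age=31536000,public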
Oh, I see! Thanks for the hint.
Tbh, for best performance it would be a good idea to use s-maxage to let CloudFront cache those two files, and to use the invalidation to explicitly invalidate them when updating the bucket.
@Fantaztig No, because those headers also go to the clients (usually browsers), and you have no way of invalidating those.

As a rule of thumb, if you have to purge the CloudFront cache, it's because you messed up somewhere.
@joaoportela The s-maxage directive is irrelevant to the client, but it can be used to tell CloudFront how long to cache a file.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist
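
For illustration, the index.html upload could be adapted along these lines (a sketch of this s-maxage approach, not part of the original script; the one-hour value is arbitrary):

# Sketch: browsers see the object as immediately stale (max-age=0) and revalidate,
# while CloudFront may keep it for up to an hour (s-maxage=3600) until invalidated.
aws s3 cp build/index.html "s3://$S3_BUCKET_NAME/index.html" \
  --metadata-directive REPLACE \
  --cache-control "max-age=0,must-revalidate,s-maxage=3600" \
  --content-type text/html \
  --acl public-read

# Then explicitly invalidate just the entry points on each deploy:
aws cloudfront create-invalidation \
  --distribution-id "$CF_ID" \
  --paths /index.html /service-worker.js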
@Fantaztig, @joaoportela thanks for those interesting thoughts and remarks! Very useful! It makes configuring caching less of a nightmare... :)
> @joaoportela The s-maxage directive is irrelevant to the client, but it can be used to tell CloudFront how long to cache a file.
> https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationDownloadDist
You're correct. I somehow missed the s- prefix when reading your comment.
Personally, I would still avoid using that technique. But it is a valid and correct approach.
Great script. You may need to add a --profile param if you have multiple AWS accounts on the same machine.
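
For example (profile name hypothetical):

# Either pass --profile on each aws call...
aws s3 sync build "s3://$S3_BUCKET_NAME/" --acl public-read --profile my-deploy-profile
# ...or export it once so every aws command in the script picks it up:
export AWS_PROFILE=my-deploy-profile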