Granted, this is little more than an obfuscated way of having a publicly writable S3 bucket, but if you don’t have a server which can pre-sign URLs for you, this might be an acceptable solution.
For this to work, you take the following steps:
- Create a Lambda func, along with a new IAM role, keeping the default code.
- Create an API in the API Gateway.
- Create a resource in said API.
- Create a POST method for that API resource, pointing it to the above Lambda func.
- Deploy the API to a new stage.
- Verify that you can call the Lambda func using `curl` followed by the URL shown for the stage, resource, and method.
- Create an S3 bucket if you haven’t already, and one or more folders you want to upload to.
- Add a bucket policy granting public read access to those folders (a sample is included in this gist).
- Create an IAM policy granting write permissions in those folders (also included).
- Attach this policy to the IAM role created above.
- Update the Lambda func with the code in this gist.
- Add a configuration variable named `s3_bucket` with the name of your S3 bucket.
- Verify that you can call it using `curl` followed by the same URL as previously, followed by the parameters `--request POST --header 'Content-Type: application/json' --data '{"object_key": "folder/filename.ext"}'`.
- Finally, verify that you can upload a file to the URL returned, using `curl --upload-file` followed by the path to some file and the URL received in the previous step.
That’s it!
Hi, thanks for this!
I've created the Lambda and the bucket as specified. The Lambda returns the pre-signed URL, which I'm using in the following request, but it fails with a 403:
Any idea why?