Among the options for local development of AWS Serverless Apps, the AWS SAM CLI is the latest to be supported by AWS.
We want to be able to use the AWS SAM CLI from a container so we can have a portable development and testing environment.
This is annoying, but not insurmountable. This Dockerfile provides an adequate example of how to build such a container.
The way sam local start-api works is by running a lambda container with the source code. When we containerize the AWS SAM CLI and mount our source code into it, we run into trouble with this process.
First, AWS SAM CLI will use docker to run the lambda container. We can either add docker to our AWS SAM CLI container, a Docker in Docker (DinD) solution, or mount the host docker into the container. DinD is bad practice, so we go with the mount option by adding /var/run/docker.sock:/var/run/docker.sock.
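A minimal sketch of that mount in docker-compose terms (the sam service name and sam-cli-dev image name are placeholders, not part of the original setup):

```yaml
services:
  sam:
    image: sam-cli-dev   # placeholder: an image with the AWS SAM CLI installed
    volumes:
      # Share the host Docker daemon's socket so the AWS SAM CLI starts
      # lambda containers on the host's docker rather than needing DinD.
      - /var/run/docker.sock:/var/run/docker.sock
```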
Next, the lambda container will be invoked using a network call. By default, this will be localhost, which does not work from inside a container. Thankfully, --container-host host.docker.internal solves this problem.
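For example, the flag can be passed where the compose service starts the API; the --host 0.0.0.0 flag and the port mapping are assumptions so that the API is also reachable from outside the AWS SAM CLI container:

```yaml
services:
  sam:
    # ...plus the docker.sock volume from above...
    command: >
      sam local start-api
      --host 0.0.0.0
      --container-host host.docker.internal
    ports:
      - "3000:3000"   # sam local start-api listens on port 3000 by default
```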
Lastly, the source files need to be mounted into the lambda container. We have no issue mounting the source files into our AWS SAM CLI container. When AWS SAM CLI constructs the mount, it will use the file path from inside the AWS SAM CLI container. The problem is that the docker runtime is from the host system, and therefore can only mount files/directories that exist on the host system.
The --docker-volume-basedir option looks like it should help: from the documentation, this value should tell AWS SAM CLI where the source code is on the system running docker. In this case, docker is on the local computer, so this value would need to be a valid path on the local computer. However, this value is being ignored.
The workaround is to make the paths line up: when the directory holding the source code inside the AWS SAM CLI container matches the directory on your computer, the mount ends up being created correctly, essentially by coincidence.
We configure this with a combination of setting working_dir and mounting the source code using the same values. Thankfully, docker-compose will create the directory paths in the container; otherwise we'd need to create the directory at runtime somehow.
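Putting the pieces together, a minimal docker-compose.yml sketch might look like the following, assuming the project lives at /home/me/my-sam-app on the host (the path and names are placeholders; the important part is that working_dir and the container side of the source mount exactly match the host path):

```yaml
services:
  sam:
    image: sam-cli-dev                  # placeholder: an image with the AWS SAM CLI installed
    working_dir: /home/me/my-sam-app    # must match the host path used below
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # Container path equals host path, so the bind mount the host Docker
      # daemon is asked to create points at a directory that really exists.
      - /home/me/my-sam-app:/home/me/my-sam-app
    command: >
      sam local start-api
      --host 0.0.0.0
      --container-host host.docker.internal
    ports:
      - "3000:3000"
    extra_hosts:
      # Only needed on Linux hosts, where host.docker.internal is not
      # defined by default (requires Docker Engine 20.10+).
      - "host.docker.internal:host-gateway"
```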
For Linux and Mac systems this is no problem, since file paths will look the same, but there needs to be some translation on Windows systems.
Windows Example
C:\Some\Windows\Path -> /c/Some/Windows/Path
The lambda container expects the source code at /var/task, so not all of these would work:
- Use --volumes-from to copy mounts into the lambda container. The target directory is not controllable with this option, and the name of the source container may not be static.
- Create a named volume and pass that around. This would probably be the best option.
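The named-volume idea, as a very rough compose sketch (names and paths are placeholders; the open question, as noted above, remains how the lambda container would be told to mount the volume at /var/task):

```yaml
services:
  sam:
    image: sam-cli-dev              # placeholder image with the AWS SAM CLI
    working_dir: /work/my-sam-app
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # A named volume lives in the host Docker daemon, outside this
      # container's filesystem, so it could be attached to other containers.
      - source-code:/work/my-sam-app

volumes:
  source-code:
```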