By simulating Medistrano's Docker image build process on your own workstation, you get immediate feedback, plus the benefit of the local Docker cache. This can help you catch or reproduce problems much more quickly than repeatedly committing changes to GitHub and asking Medistrano to perform a build. Here are some instructions that should help you accomplish that.
Before running docker build in your app's checkout directory, you should make sure you can authenticate to any hosts that might require access during the build process.
In particular, your app should be built from a platform image in Amazon Elastic Container Registry, so you'll need to log in to ECR. (If it's still built from DockerHub, perhaps now would be a good time to test that change locally!) Authenticating to ECR means first installing version 2 of the AWS command-line utility, configuring it, then running the following command:
aws ecr get-login-password --region us-east-1 |\
docker login --username AWS --password-stdin \
767904627276.dkr.ecr.us-east-1.amazonaws.com
If you already have version 1 of the AWS CLI installed, either update to version 2, or run this command instead:
eval $(aws ecr get-login --region=us-east-1 --no-include-email)
You should see the message: Login Succeeded. You should also be able to docker pull any of the source images listed in your Dockerfile's FROM directives.
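For example, to confirm the login worked, try pulling whatever image your Dockerfile's FROM line names. The image below is only a placeholder; substitute your own:
docker pull 767904627276.dkr.ecr.us-east-1.amazonaws.com/mdsol/ruby:latest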
If you need to pull a private repository from GitHub during the build - that is, not the repo you already have checked out - then first make sure you can log into GitHub via SSH, with the following command:
ssh -T [email protected]
You should see:
Hi $USER! You've successfully authenticated, but GitHub does not provide shell access.
You also must use an ssh-agent, because the docker build command is going to mount your $SSH_AUTH_SOCK into the build context. So make sure you're running the agent, with your key loaded. You really should be doing this already, because it allows you to store an encrypted private key on disk but decrypt it into memory, so you don't have to type the passphrase on each use.
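If you're not already running an agent, a typical setup looks like this (assuming your key lives at ~/.ssh/id_rsa):
eval "$(ssh-agent -s)"    # start the agent and export SSH_AUTH_SOCK
ssh-add ~/.ssh/id_rsa     # decrypt the key into the agent (prompts for your passphrase)
ssh-add -l                # confirm the key is loaded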
Make sure you have a valid $ARTIFACTORY_USERNAME and $ARTIFACTORY_PASSWORD, or $MDSOL_NUGET_ACCESS_TOKEN, as needed.
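A quick way to check that the relevant variables are set in your current shell (each line simply errors out if a variable is missing):
: "${ARTIFACTORY_USERNAME:?not set}" "${ARTIFACTORY_PASSWORD:?not set}"   # Artifactory builds
: "${MDSOL_NUGET_ACCESS_TOKEN:?not set}"                                  # NuGet builds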
The full docker build command generated by Medistrano's build toolchain is pretty long (just search your build log for "docker build" to see it), but if your build process does not have to pull from private sources, then most of what it does can be done with the simplest of Docker build commands.
If you've already pulled all your source images from ECR, and your Dockerfile does not have to pull from private sources, you can just run:
docker build -t mdsol/myapp -f .12factor/Dockerfile .
from your app's codebase checkout directory. Since the resulting image is only for local use, feel free to use any name you like in place of mdsol/myapp.
No magic there, right? We're just using the custom Dockerfile that Medistrano uses, and uploading the entire codebase to the Docker server for processing (minus the contents of .dockerignore). If your app is built from only its own codebase plus publicly-available sources, it should build just as reliably with this simple command as it does in Medistrano.
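Once the build succeeds, you can sanity-check the result locally, for example (the second command assumes the image contains a shell at /bin/sh):
docker image ls mdsol/myapp                # confirm the image exists and note its size
docker run --rm -it mdsol/myapp /bin/sh    # poke around inside it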
If your build process requires secrets, you'll need to enable Docker's BuildKit-based secrets mechanism, which involves two steps. First set an environment variable:
export DOCKER_BUILDKIT=1
and then put the following line at the top of your Dockerfile:
# syntax=docker/dockerfile:experimental
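Put together, the very first lines of your .12factor/Dockerfile would then look something like this (the FROM image is only a placeholder; keep your own):
# syntax=docker/dockerfile:experimental
FROM 767904627276.dkr.ecr.us-east-1.amazonaws.com/mdsol/ruby:latest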
Now follow the steps in one of the following sections, depending on what type of secrets are used...
If your build requires access to GitHub, you'll need to forward your $SSH_AUTH_SOCK into the build context, once you've ensured you've met the SSH-based prerequisites above. This is done with a simple adjustment to the build command:
docker build --ssh default -t mdsol/myapp -f .12factor/Dockerfile .
Adding --ssh default will enable any command in your Dockerfile prefaced with RUN --mount=type=ssh to inherit your $SSH_AUTH_SOCK. If the command runs as root, that should be sufficient.
However, bundling your app's dependencies as root is not recommended. Hopefully you're downloading your app's dependencies as the app user (uid=9999) that's built into our mdsol/* platform base images. In this case, you need to adjust the RUN directive as follows:
RUN --mount=type=ssh,target=/home/app/.ssh/id_rsa,uid=9999,gid=9999 ...
Note that we control the path and ownership of the mount to ensure the app user has access.
You can test SSH access from within the build using the same SSH testing command described above (maybe add -v for debugging?). However, note that this command returns a "failed" exit code even if authentication is successful, so it will stop the Docker build either way. As above, just look for the "Hi X! You've successfully authenticated" message.
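For example, here's a hypothetical debug-only step (remove it once things work) that reuses the same mount options as above and swallows the non-zero exit code so the build can continue:
# Assumes a prior USER app directive; accept-new needs OpenSSH 7.6+ and avoids an
# interactive host-key prompt if github.com isn't already in known_hosts.
RUN --mount=type=ssh,target=/home/app/.ssh/id_rsa,uid=9999,gid=9999 \
ssh -o StrictHostKeyChecking=accept-new -T [email protected] || true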
If your build pulls assets from Artifactory, you'll need to forward your $ARTIFACTORY_USERNAME and $ARTIFACTORY_PASSWORD into the build context, which is done by encoding them as a tiny shell script on your workstation, then mounting that file and reading it in during the build. Easy enough?
First create an artifactory.sh script like this, somewhere safe on your local filesystem:
export ARTIFACTORY_USERNAME="[your artifactory username]"
export ARTIFACTORY_PASSWORD="[your artifactory password]"
(fill in your real values)
Next, make sure the relevant RUN directive in your Dockerfile mounts the secret named artifactory at the path env.sh. Here we see an example directive that prints out the ARTIFACTORY_USERNAME from within the build process, just to ensure it's working:
RUN --mount=type=secret,id=artifactory,target=env.sh bash -c \
'source env.sh && echo ARTIFACTORY_USERNAME = $ARTIFACTORY_USERNAME'
Note that in this example, bash is required to read in the script, using source (use . for sh), and to interpolate the $.
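If the image only has plain sh available, the equivalent would be roughly:
RUN --mount=type=secret,id=artifactory,target=env.sh sh -c \
'. ./env.sh && echo ARTIFACTORY_USERNAME = $ARTIFACTORY_USERNAME'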
Practically speaking, it's probably easier just to create a shell script to "wrap" your download command. If the command is already a shell script, you can just read in the env.sh file at the beginning. Like this:
#!/usr/bin/env bash
# reads artifactory creds from env.sh, then runs: mvn install
set -eox pipefail
source env.sh
mvn install ...
Then replace the RUN directive with:
RUN --mount=type=secret,id=artifactory,target=env.sh [your_script.sh]
Make sure you run chmod 755 on the script before committing it to git, so you can execute it directly!
The final step is to define the artifactory secret when running docker build:
docker build -t mdsol/myapp -f .12factor/Dockerfile \
--secret id=artifactory,src=/path/to/artifactory.sh .
Note that you have independent control over the location, ownership, and permissions of the secret file as saved on your workstation, and those same attributes as the file appears within the build! So keep this in mind if you have trouble reading the file.
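For example, if your download step runs as the app user rather than root, secret mounts accept the same uid, gid, and mode options as the SSH mount shown earlier (the wrapper script name here is hypothetical):
RUN --mount=type=secret,id=artifactory,target=env.sh,uid=9999,gid=9999,mode=0400 \
./bin/fetch_dependencies.sh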
If your build pulls packages from NuGet, you'll need to forward your $MDSOL_NUGET_ACCESS_TOKEN into the build context, which is done by encoding it as a tiny shell script on your workstation, then mounting that file and reading it in during the build.
First create a nuget.sh script like this, somewhere safe on your local filesystem:
export MDSOL_NUGET_ACCESS_TOKEN="[your nuget token]"
(fill in your real value)
Next, make sure the relevant RUN directive in your Dockerfile mounts the secret named nuget at the path env.sh. Here we see an example directive that prints out the MDSOL_NUGET_ACCESS_TOKEN from within the build process, just to ensure it's working:
RUN --mount=type=secret,id=nuget,target=env.sh bash -c \
'source env.sh && echo My token is $MDSOL_NUGET_ACCESS_TOKEN'
Note that in this example, bash is required to read in the script, using source (use . for sh), and to interpolate the $. Also, don't commit this version to GitHub, because it may expose the token when built in some other context.
Practically speaking, it's probably easier just to create a shell script to "wrap" your download command. If the command is already a shell script, you can just read in the env.sh file at the beginning. Like this:
#!/usr/bin/env bash
# reads nuget creds from env.sh, then runs: dotnet install
set -eox pipefail
source env.sh
dotnet install ...
Then replace the RUN directive with:
RUN --mount=type=secret,id=nuget,target=env.sh [your_script.sh]
Make sure you run chmod 755 on the script before committing it to git, so you can execute it directly!
The final step is to define the nuget secret when running docker build:
docker build -t mdsol/myapp -f .12factor/Dockerfile \
--secret id=nuget,src=/path/to/nuget.sh .
Note that you have independent control over the location, ownership, and permissions of the secret file as saved on your workstation, and those same attributes as the file appears within the build! So keep this in mind if you have trouble reading the file.
Medistrano uses a specialized tool called build12 to generate the docker build command, to name the resulting image, and to apply some company-specific metadata. You may find it more convenient to use this tool instead of running docker build directly. Here's how...
First, ensure that you've met all the prerequisites above. Then put the artifactory.sh and/or nuget.sh secret files, if needed, into a single directory: ~/secrets/docker, for example.
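For example (the paths here are just a suggestion):
mkdir -p ~/secrets/docker
chmod 700 ~/secrets/docker    # keep credentials readable by your user only
cp artifactory.sh nuget.sh ~/secrets/docker/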
Next, check out and build the 12-factor tools. Go 1.13 or later is required (for module support).
git clone [email protected]:mdsol/12factor-tools.git
cd 12factor-tools
make bin/build12
This will compile the build12 binary into 12factor-tools/bin. You can run it straight from there if you like.
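If the tool uses Go's standard flag package (which the single-dash -secretsdir flag suggests), running it with -h should print its full list of options:
bin/build12 -h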
Finally, change to your app directory, ensure you're using BuildKit, and point build12 at the directory containing the build secrets.
cd ~/projects/my_app
~/path/to/12factor-tools/bin/build12 -secretsdir ~/secrets/docker
As you can see, this command is usually shorter than the minimal docker build command required to simulate it, as described above. And since build12 is actually the tool used by Medistrano, running it locally probably does an even better job of reproducing the same environment.