This document describes how to leverage docker compose for multiple use cases while minimizing duplication of services. The use cases covered are:
Type | Use Case | Description | Command |
---|---|---|---|
Demo | Quick start demo | Run your app (with all of the services it depends on) as easily as possible for demonstration purposes; this case must be as painless as possible. Uses a tagged image for all containers (no Dockerfile / docker build). | `docker compose up` |
Test | Integration testing | Leverage CI actions / custom scripts to run integration tests, or use a favorite container test framework such as testcontainers.org to run automated integration tests (in Java it interfaces with JUnit; in Python usually with pytest). Most container test frameworks support launching from a compose file, but often in a limited way, so you may need to use the framework API to define which containers to launch for the tests. | Varies |
Develop | Development via docker build | Leverage `docker build`: restart the container in development to pick up local file changes. Use in conjunction with a Dockerfile that leverages COPY to maximize docker build caching. Using a build context from a GitHub tag is useful for production image builds. See: DockerBuildStrategy | `docker compose -f build.yml up` |
Develop | Development via local app instance with container deps | Run all dependent services in Docker, but run your app (service) on the host machine directly. | `docker compose -f deps.yml up` |
Develop | Development via bind mount | Use volumes to bind mount deployment artifacts and other files that can be modified during development, and reference the Dockerfile build instead of the image. | `docker compose -f bind.yml up` |
Develop | Development via local IDE proxy | A local IDE or thin client proxies via socket to a dev container running the IDE backend. Visual Studio Code and JetBrains (IntelliJ) support this mode of working. There are hosted solutions as well (dev container in the cloud) such as GitHub Codespaces and JetBrains Space. | - |
Develop | Development via exec | Use `docker exec` to run commands that make changes inside the container (or obtain a shell to run commands). This can be a hybrid mode to work around issues with bind mounts: use `docker cp` to move updated artifacts into the container, or bind mount a staging directory that is NOT cleaned by the host build and is not overwritten by the container, scripting copies to/from it for the extra layer of indirection as needed. | - |
Dev Strategy | Pros | Cons |
---|---|---|
build | (1) Repeatable build with tooling in container | (1) Slow edit/build/review cycle |
deps | (1) Fast edit/build/review cycle | (1) Variable build with tooling on host (2) Networking can be tricky as you may need to configure both INTERNAL and EXTERNAL interfaces |
bind | (1) Fast edit/build/review cycle | (1) Variable build with tooling on host (2) Can be tricky, buggy, and fragile |
exec | (1) Custom | (1) Custom |
There are a few files used:
File | Purpose |
---|---|
deps.yml | This file defines all of the services that our app depends on, excluding the app itself (our code). The advantage of defining these separately is that it lets us handle the container-deps use case without duplicating services: we can run all dependencies in Docker and our app directly on the host machine. The docker-compose.yml file extends the services in deps.yml. |
docker-compose.yml | The standard file that Docker looks for when you run `up`. We define all services here, with the app service left abstract, such that other files complete the service definitions depending on use case. |
docker-compose.override.yml | If you run `docker compose up`, that is, without the `-f` argument, then Docker will automatically merge docker-compose.override.yml with docker-compose.yml. This is the quick start demo case, so the override file completes the abstract definition of our app service by indicating that a versioned image is to be used. Note: this means we can commit changes to the main git branch while we develop and not worry about breaking our demo. |
build.yml | This file provides an alternative definition for our app service that leverages `docker build` instead of using a tagged image. |
bind.yml | This file provides an alternative definition for our app service that bind mounts deployment artifacts and other resources we would like to modify during development, enabling a quick modify-and-verify loop. Extends build.yml. Provides a repeatable runtime environment (but not a repeatable build environment). In practice, however, this bind mount approach is often a giant struggle to get working in all project scenarios. See Bind Mount Issues |
deps.yml:

```yaml
services:
  dependency1:
    ... <full working definition>
  dependency2:
    ... <full working definition>
```
docker-compose.yml:

```yaml
services:
  dependency1:
    extends:
      file: deps.yml
      service: dependency1
  dependency2:
    extends:
      file: deps.yml
      service: dependency2
  myappservice:
    ... <abstract definition with NO image and NO build>
```
docker-compose.override.yml:

```yaml
services:
  myappservice:
    image: myimage:<with-specific-version>
    ... <demo-specific definition>
```
build.yml:

```yaml
services:
  dependency1:
    extends:
      file: docker-compose.yml
      service: dependency1
  dependency2:
    extends:
      file: docker-compose.yml
      service: dependency2
  myappservice:
    extends:
      file: docker-compose.yml
      service: myappservice
    build:
      context: .
      dockerfile: Dockerfile
      args:
        ... <build-specific definition>
```
bind.yml:

```yaml
services:
  dependency1:
    extends:
      file: build.yml
      service: dependency1
  dependency2:
    extends:
      file: build.yml
      service: dependency2
  myappservice:
    extends:
      file: build.yml
      service: myappservice
    ... <bind-specific definition; includes bind volumes>
```
See Also: Working Example
Note: You can skip re-declaring services with `extends` at the cost of supplying additional `-f` arguments to `docker compose`. It's a trade-off.
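For example, a sketch of that alternative, assuming each scenario then stacks the shared files explicitly instead of using `extends`:

```sh
# Hypothetical no-extends layout: deps.yml holds the dependency services,
# and the development-via-build scenario layers the files at the command line:
docker compose -f deps.yml -f docker-compose.yml -f build.yml up
```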
When you run `docker compose up`, Docker will automatically merge docker-compose.override.yml with docker-compose.yml and then resolve the services that are defined in deps.yml.
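That default is equivalent to supplying both files explicitly:

```sh
docker compose -f docker-compose.yml -f docker-compose.override.yml up
```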
There are various approaches to integration testing with containers, most of which are complicated by networking. Specifically consider: (1) where the containers are running (local workstation vs. remote CI server), (2) whether tests run inside the same network as the containers, and (3) what API you use to interact with containers (compose CLI vs. wrappers). If you attempt to automate your integration testing on a CI server such as GitHub Actions, you'll need to contend with the fact that actions may already run in a container (at the moment our actions appear to run in an Azure VM, so we're good). If already running in Docker, you'll be using either the wormhole pattern or the Docker-in-Docker pattern. Another consideration is whether to use a testing framework such as testcontainers.org, which may help or may get in the way. At the moment I have some projects leveraging testcontainers.org while others just rely on docker compose being started/stopped externally. The advantage of the latter is simplicity: executing "docker compose up/down" from within GitHub Actions or from a local workstation is easy and allows using familiar compose files. The wrapper around the Docker API provided by testcontainers.org is sometimes both limited and onerous, and sometimes it's quicker to execute tests over and over WITHOUT stopping and restarting all containers. Further reading: Integration Test Strategy
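As a sketch of the simpler external approach (the workflow layout and test command are illustrative, not from a real project):

```yaml
# Hypothetical GitHub Actions workflow: start the stack externally,
# run the test suite on the runner, and always tear down.
on: push
jobs:
  integration-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start dependencies
        run: docker compose -f deps.yml up -d --wait
      - name: Run tests
        run: ./gradlew integrationTest   # or pytest for Python projects
      - name: Tear down
        if: always()
        run: docker compose -f deps.yml down
```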
When using the `-f` flag, docker-compose.override.yml is no longer automatically merged; only the files explicitly indicated with `-f` are used (you can repeat that flag multiple times). However, files referenced with `extends` are still resolved (the services in deps.yml). In the development scenarios we explicitly reference the build.yml or bind.yml file.
In this scenario we explicitly invoke the deps.yml file, which excludes our app container. Therefore we only get the services that our app depends on, and we can run our app locally (and presumably communicate with exposed ports).
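A typical session might look like this (the host-side run command is illustrative):

```sh
docker compose -f deps.yml up -d    # dependencies only, detached
./gradlew run                       # the app itself, directly on the host
docker compose -f deps.yml down     # tear down when done
```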
If you define profiles then you must explicitly indicate the profiles on the command line; there is no "by default run profile X" configuration. To have a service run by default you must not assign it to any profile. This interferes with our goal of making quick start as painless as possible. It is a viable approach, however, as the extra argument isn't that big a deal; something like: `docker compose --profile quickstart up`. However, to support all three scenarios you'd likely need a separate `deps` profile, which you'd then need to also include with quickstart on the command line, or else keep deps.yml as is for this purpose. Profiles could solve the issue of conditionally choosing between development (using bind mounting and a Docker build) and a demo (using a versioned image) by creating separate services for each scenario, one for the image and one for the build, and factoring common pieces out with the `extends` feature. The advantage would be removing the docker-compose.override.yml, build.yml, and bind.yml files, though you'd likely need to stash the shared definition in a new file like common.yml.
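A sketch of what that profile-based layout might look like (service names and common.yml are hypothetical):

```yaml
services:
  myappservice-demo:
    extends:
      file: common.yml
      service: myappservice
    image: myimage:<with-specific-version>
    profiles: ["quickstart"]
  myappservice-dev:
    extends:
      file: common.yml
      service: myappservice
    build:
      context: .
    profiles: ["dev"]
```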
This duplicates a lot of service definitions that you then have to maintain. There are a lot of files as is, but at least they share common definitions.
We could avoid the docker-compose.override.yml file if we defined the build directive in the docker-compose.yml file and replaced build.yml with demo.yml. This has a few issues though: (1) all dev mounts and environment settings would then be inherited by the demo scenario, and they're difficult if not impossible to override; (2) the command to launch the demo would not be as concise; it would be something like `docker compose -f demo.yml up`. It would be nice to use the image directive in docker-compose.yml and keep the build directive in build.yml to address the second issue, but merge precedence is such that the build directive will never override the image directive (and we'd still have an issue similar to issue 1).
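To make that last point concrete, a sketch (version tag hypothetical): with `image` in the base file, a `build` directive merged in from another file ends up alongside it, and per the merge behavior described above the image directive wins.

```yaml
# docker-compose.yml
services:
  myappservice:
    image: myimage:1.2.3   # hypothetical tag
---
# hypothetical build-oriented overlay merged with -f: the merged service keeps
# image: myimage:1.2.3, so this build directive does not displace it.
services:
  myappservice:
    build: .
```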
This is only an issue if you develop on Windows and use Linux containers (or wish to ensure your project supports this scenario).
The Docker for Windows Best Practices indicate that you should not bind mount a Linux container filesystem to a Windows filesystem. Not following this best practice results in slow performance and general bugginess. For example, if you attempt to bind mount the deployments directory of the Wildfly application server so that you can drop Java web application archives in as they're re-built, you'll likely run into the Stuck in re-deployment loop issue. This may be due to imperfect clock synchronization and timestamp precision between the Linux subsystem and the Windows host.
The recommended approach is to use the Windows Subsystem for Linux (WSL) and mount from there. This isn't free from problems either. There is increased complexity from having to delve deeper into WSL instead of having Docker Desktop handle it for you: you have to install git and your programming language of choice in WSL as well, often duplicating what you've already installed on the Windows filesystem. Crucially, it also means your IDE must support mounting WSL drives. IntelliJ and Visual Studio Code support this mode of operation. However, I've found this mode of operation is slower and buggier (in IntelliJ at least) than using the IDE on the Windows filesystem. Sometimes IntelliJ becomes confused about the path to git (which is now in WSL). The gradle command to build is slower and sometimes fails. You also get the worst of both worlds, as you now have to deal with Linux-to-Linux container issues too; see below: File system permissions.
This is only an issue if you develop on Linux and bind mount into Linux containers (or wish to ensure your project supports this scenario).
More often than not, the OS users defined in the Linux container differ from the users defined on the host machine. When you indicate a bind mount, the mount directory will be created by the container if it does not already exist on the host, but in doing so the container will create it with a uid and gid likely not expected by the host machine. Conversely, if the bind mount directory already exists because it was created on the host in advance, it often has a uid and gid not expected by the container. The workaround is often a note in the project README to create the directory in advance with the required uid and gid.
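For example, such a README note might boil down to the following (the uid:gid values and path are illustrative; use whatever the container expects):

```sh
# Create the bind mount directory in advance, owned by the container's user:
mkdir -p ./deployments
sudo chown 1000:1000 ./deployments
```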
If you mount a host file or directory into a container, then deleting or replacing that file or directory, whether from inside the container or from the host, causes unexpected behavior. Sometimes you get an error preventing the operation; sometimes the operation appears to succeed but file changes are silently not propagated. This is easy to do accidentally: if you bind mount the `build` directory from Gradle, for example, and perform a `clean` task, you'll blow away the bind mount. If you bind mount a Wildfly `standalone.xml` configuration file, you'll discover that Wildfly renames/moves this file at runtime to `standalone.xml.old` or something like that, and you'll have odd behaviors that are perhaps not immediately obvious afterwards. Choosing a good mount point on the host and inside the container turns out to be a little tricky sometimes.
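One mitigation, as described under the exec strategy above, is to bind mount an intermediate staging directory that neither the host build nor the container ever deletes, and script copies through it. A sketch (paths illustrative):

```yaml
services:
  myappservice:
    volumes:
      # The staging dir is outside Gradle's build/ tree, so `gradle clean`
      # never removes the mount point; a script copies fresh artifacts into it.
      - ./staging:/opt/staging
```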
```sh
docker compose -f build.yml up                                 # start the stack from the build definition
docker compose rm -svf <container-name>                        # stop and remove a stale container (and volumes)
docker compose -f build.yml build --no-cache <container-name>  # force a clean rebuild of one service
```

Note: the Docker build cache sometimes does not notice changes and requires forced cache invalidation.

```sh
docker compose -f build.yml up -d <container-name>             # start just the rebuilt service, detached
docker cp <host-path> <container-name>:<container-path>        # push updated artifacts into the container
docker exec -it <container-name> bash                          # or obtain a shell inside it
docker compose -f build.yml down                               # tear down when finished
```
```sh
docker build --build-arg CUSTOM_CRT_URL=http://pki.jlab.org/JLabCA.crt \
  https://github.com/<git user>/<project>.git#<tag> \
  -t <docker user>/<project>:<tag>
```
Note: the `CUSTOM_CRT_URL` build argument is required when building some containers inside the JLab network due to the JLab intercepting proxy, and also when running some containers inside JLab that fetch resources at runtime from the Internet. It's generally a good idea to just include it and avoid accidentally ending up with a container that doesn't run inside JLab.
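On the Dockerfile side, the argument might be consumed something like this (a sketch; the exact certificate install steps depend on the base image):

```dockerfile
ARG CUSTOM_CRT_URL
# If a CA URL was supplied, fetch and trust it so the intercepting proxy works.
RUN if [ -n "$CUSTOM_CRT_URL" ]; then \
      curl -sSf -o /usr/local/share/ca-certificates/custom.crt "$CUSTOM_CRT_URL" && \
      update-ca-certificates; \
    fi
```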
```sh
docker push <docker user>/<project>:<tag>
```