Here is how dbuild works:
- Build an image from a base image downloaded from Docker Hub (arguments such as the distribution determine the correct base image), installing the bundled scripts and the appropriate packages.
- For each command/step, create a container from the image produced by the previous step.
- Do a docker run with that step's command, bind-mounting the host's /home/buildd/build directory into the container, where the builds save their artifacts.
- Once the command finishes, stop the container.
- Then commit the container, i.e. create a new image out of it.
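The flow above roughly corresponds to this docker CLI sequence (a sketch only; the image name, step names, and the per-step commands are illustrative, not dbuild's actual identifiers):

```shell
# Build the environment image from a Docker Hub base; DIST is an assumed build arg.
docker build -t dbuild-env --build-arg DIST=bookworm .

# One docker run per step, bind-mounting the host build directory.
docker run --name step1 -v /home/buildd/build:/home/buildd/build \
    dbuild-env sh -c 'apt-get source mypkg'

docker stop step1                   # stop once the command finishes
docker commit step1 dbuild-env:step1   # image for the next step

docker run --name step2 -v /home/buildd/build:/home/buildd/build \
    dbuild-env:step1 sh -c 'dpkg-buildpackage -us -uc'
```

Note that a foreground docker run already leaves the container stopped when its command exits, so the explicit docker stop is mostly a safeguard.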
Problems with this approach:
- The base image on Docker Hub may get updated, and we have no control over that.
- Rebuilding the image every time is unnecessary; we only need to build it once (or whenever the build environment needs updating) and save it in a local Docker registry.
- A docker run per command is also unnecessary, I believe. We only need to script the steps, combining them into one or more logical steps (maybe all in a single logical step), and handle failures within each step in the script itself. The script should take parameters or environment variables to customize the environment: git repository URL, package name, or whatever other runtime configuration is needed.
- We may need to share volumes (a bind mount is just a directory or file shared between host and container) to save artifacts or logs. You get logs from the docker console anyway, but saving the log files themselves is easier if they need to be kept long term.
- We may also need to share volumes when the git pull, populating the debian directories, etc. are done on the host by our app (which I see is true of our current code).
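For the first two problems, one possible mitigation (a sketch under assumptions: the base tag and the localhost:5000 registry address are illustrative) is to pull the base once, re-tag it, and keep both it and the built environment image in a local registry:

```shell
# Pin the upstream base locally so a Docker Hub update cannot change
# our build environment silently; localhost:5000 is an assumed local registry.
docker pull debian:bookworm
docker tag debian:bookworm localhost:5000/dbuild/base:bookworm
docker push localhost:5000/dbuild/base:bookworm

# Build the environment image once (or whenever the environment changes)
# and keep it in the local registry too.
docker build -t localhost:5000/dbuild/env:latest .
docker push localhost:5000/dbuild/env:latest
```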
Here is what I think the container-related process should be:
- We don't need to add our own application code to the container; all we need is to run the build/operation-specific code inside it. It is better to reduce the number of docker run invocations by combining individual steps into logical steps via scripting, ideally into a single step :)
- When we build the image, we can copy in the build/helper scripts, which can be customized using parameters, environment variables, or both.
- When we do docker run, provide the parameters and/or environment variables to customize those build/helper scripts, and run the main build script.
- If we need to, bind-mount directories from the host. We can mount a build-specific directory into the container; it does not need to be defined in advance, since we can do the operations in a build-specific directory on the host and pass docker an argument to bind-mount that directory when we do docker run.
- The git pull, populating the debian directory, making versions, detecting dependencies, etc. are done on the host from within the app.
- In case we have to provide any special scripts or binaries at runtime (I am not sure whether we will hit such a situation).
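The single logical step proposed above could look like the following sketch. GIT_REPO_URL, PACKAGE_NAME, and BUILD_DIR are illustrative names for the runtime configuration, and the real build steps are only hinted at in comments:

```shell
# Sketch of the main build script run inside the container.
# All names here are assumptions; real steps are commented placeholders.
run_build() {
    # Fail fast when required runtime configuration is missing.
    [ -n "${GIT_REPO_URL:-}" ] || { echo "GIT_REPO_URL not set" >&2; return 1; }
    [ -n "${PACKAGE_NAME:-}" ] || { echo "PACKAGE_NAME not set" >&2; return 1; }
    build_dir="${BUILD_DIR:-/home/buildd/build}"   # bind-mounted from the host

    echo "building $PACKAGE_NAME from $GIT_REPO_URL into $build_dir"
    # Each real step would abort the build on failure, e.g.:
    # git clone "$GIT_REPO_URL" "$build_dir/src" || return 1
    # (cd "$build_dir/src" && dpkg-buildpackage -us -uc) || return 1
}
```

The host side would then invoke it with something like: docker run -e GIT_REPO_URL=... -e PACKAGE_NAME=... -v /path/to/build-dir:/home/buildd/build dbuild-env /usr/local/bin/build.sh, passing the per-build host directory as a bind mount.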