This document is meant to point out pain points with using Docker for development environments. It rates each pain point w/ a pain level of one to five stars (one being low, five being high).
Each issue will highlight:
- The pain level
- A summary of the problem
- A list of possible solutions. The solutions should include the negative tradeoffs of choosing that solution and recommendations for managing them.
Docker containers are transient in nature and get created and destroyed quite often. As a result you can't rely on IP addresses for cross-container communication.
Docker & Docker Compose provide some facilities to mitigate this via linking. Linking has some issues though:
- A linked name can be almost any arbitrary string... If you have code that validates URIs you may have issues unless you use link names that look like valid URIs.
- Docker Compose abstracts the linking away from you ever so slightly to allow for autoscaling. As a result containers end up with names like `postgres_1`, `postgres_2` (see the sketch below).
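A minimal sketch of that naming behavior (the `postgres` service name and the `myapp` project directory are assumptions, not from an actual setup):

```sh
# Bring up the postgres service and scale it to two containers.
docker-compose up -d postgres
docker-compose scale postgres=2

# List the names Compose generated for the containers.
docker ps --format '{{.Names}}'
# Typically prints something like:
#   myapp_postgres_1
#   myapp_postgres_2
```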
A possible solution: service discovery with Consul.
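As a rough sketch of what that looks like (the service name, port, and the default local agent address are assumptions), each container registers itself with a Consul agent and other containers resolve it by a stable name instead of relying on links:

```sh
# Register the database with the local Consul agent over its HTTP API.
curl -X PUT http://127.0.0.1:8500/v1/agent/service/register \
  -d '{"Name": "postgres", "Port": 5432}'

# Other services can now resolve it via Consul's DNS interface
# (the agent serves DNS on port 8600 by default).
dig @127.0.0.1 -p 8600 postgres.service.consul +short
```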
Docker containers don't have the same credentials as the host machine user. Dependencies like node libraries in private git repos aren't easily accessible. The generic (and painful) solution to this is to copy private SSH keys into Docker containers. This poses many security risks and is rather unwieldy for a development tool.
For Node projects we can use a shared OAuth token w/ repo-only access. If we're ever concerned the token has been compromised, resolving it will be a pain but relatively simple: we revoke the OAuth token, then find & replace all instances of it in Node projects... This could potentially be automated...
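As a sketch of how the token might be used at build time (the `GITHUB_TOKEN` variable name and the `x-oauth-basic` HTTPS form are assumptions, not settled conventions for our projects), the build rewrites SSH GitHub URLs to token-carrying HTTPS URLs so npm can fetch private repos without any SSH keys ending up in the container:

```sh
# Pass the token into the build. Note it still lands in the image's build
# history, which is why this is only acceptable for a repo-only token.
docker build --build-arg GITHUB_TOKEN="$GITHUB_TOKEN" -t nwest/issues .

# Inside the Dockerfile's RUN step, rewrite git@github.com URLs to HTTPS + token
# before installing, so private dependencies resolve without SSH keys:
git config --global \
  url."https://${GITHUB_TOKEN}:x-oauth-basic@github.com/".insteadOf "git@github.com:"
npm install
```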
There are questions around which GitHub user the OAuth token belongs to... who owns keeping track of the credentials, getting people access, etc...
Here's a real set of commands I've run to get one tiny, simple service running.
```sh
docker run -v ./data:/data -p 5432:5432 --name pg01.int -i postgres
docker run -v .:/app -p 8000:8001 --name issues.int --link pg01.int:pg01.int -i nwest/issues
```
You have to remember all of that... And most of those things are configured elsewhere anyway, for things like telling the application where to look for the DB.
Docker Compose helps a lot... but Docker Compose commands can still be a bit verbose... see running a spec for a web application:
```sh
docker-compose run web npm run test
```
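For reference, a minimal docker-compose.yml along the lines of the docker run commands above might look something like this (a sketch; the `db` service name and the exact volume wiring are assumptions):

```sh
# Written out here with a heredoc so it can be pasted into a shell;
# normally the file would just live in the repo.
cat > docker-compose.yml <<'EOF'
db:
  image: postgres
  volumes:
    - ./data:/data
web:
  image: nwest/issues
  volumes:
    - .:/app
  ports:
    - "8000:8001"
  links:
    - db
EOF

docker-compose up -d
```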
Abstract away with bash scripts, npm scripts, Makefiles, or similar. This can get hairy... especially if too much about Docker is abstracted away and there are developers who don't understand what's going on under the hood. If one thing goes wrong it can really become a blocker. Light wrappers, mostly to codify convention, are better.
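As a sketch of the "light wrapper" end of that spectrum (the `bin/test` path is an assumption), the wrapper only codifies the convention and leaves the underlying Compose command in plain sight:

```sh
#!/usr/bin/env bash
# bin/test -- run the web service's spec suite inside its container.
# Thin by design: it saves typing without hiding Docker.
set -euo pipefail

exec docker-compose run --rm web npm run test
```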