Why Should I Care (For Developers)
"Docker interests me because it allows simple environment isolation and repeatability. I can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple."
Use Homebrew.
ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
Install VirtualBox and Vagrant using Brew Cask.
brew tap phinze/homebrew-cask
brew install brew-cask
brew cask install virtualbox
brew cask install vagrant
We use the pre-built vagrant box: http://blog.phusion.nl/2013/11/08/docker-friendly-vagrant-boxes/
mkdir mydockerbox
cd mydockerbox
vagrant init docker https://oss-binaries.phusionpassenger.com/vagrant/boxes/ubuntu-12.04.3-amd64-vbox.box
vagrant up
vagrant ssh
In the VM:
sudo su -
sh -c "curl https://get.docker.io/gpg | apt-key add -"
sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-get update
apt-get install -y lxc-docker
Verify:
docker run -i -t ubuntu /bin/bash
That's it, you have a running Docker container.
Your basic isolated Docker process. Containers are to Virtual Machines as threads are to processes. Or you can think of them as chroots on steroids.
Some common misconceptions worth correcting:
- Containers are not transient. docker run doesn't do what you think (see the example below).
- Containers are not limited to running a single command or process; that's just encouraged.
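To see the first point for yourself (echo is just an example command):

docker run ubuntu echo hello   # creates and runs a brand-new container
docker run ubuntu echo hello   # creates a second container; it does not reuse the first
docker ps -a                   # both containers are still listed, now stopped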
- docker run creates a container.
- docker stop stops it.
- docker start will start it again.
- docker restart restarts a container.
- docker rm deletes a container.
- docker attach will connect to a running container.
- docker wait blocks until the container stops.
If you want to run and then interact with a container, docker start then docker attach to get in.
If you truly want a transient container, docker run -rm will remove the container after it stops.
If you just want to poke around in an image, docker run -rm -t -i <myimage> <myshell> to open a tty.
If you just want to map a directory on the host to a docker container, use docker run -v $HOSTDIR:$DOCKERDIR.
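Putting the lifecycle commands above together (a sketch; mycontainer is just an example name, using the same single-dash flag style as the rest of this guide):

docker run -i -t -name mycontainer ubuntu /bin/bash   # create a named container and get a shell
# type exit in that shell and the container stops
docker start mycontainer                              # start the same container again, in the background
docker attach mycontainer                             # reconnect to its shell
# from another terminal (or after exiting again):
docker stop mycontainer                               # make sure it is stopped
docker rm mycontainer                                 # delete it for good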
- docker ps shows running containers.
- docker inspect looks at all the info on a container (including IP address).
- docker logs gets logs from a container.
- docker events gets events from a container.
- docker port shows the public facing port of a container.
- docker top shows running processes in a container.
docker ps -a shows running and stopped containers.
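For example, against a container named mycontainer (a hypothetical name):

docker ps                    # running containers
docker inspect mycontainer   # full JSON description, including the IP address
docker logs mycontainer      # everything it has written to stdout/stderr so far
docker top mycontainer       # processes running inside it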
- docker cp copies files or folders out of a container's filesystem onto the host.
- docker export turns the container filesystem into a tarball.
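For example (again assuming a container named mycontainer):

docker cp mycontainer:/etc/hostname .         # copy a file out of the container onto the host
docker export mycontainer > mycontainer.tar   # dump its whole filesystem as a tarball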
Images are just templates for docker containers.
- docker images shows all images.
- docker import creates an image from a tarball.
- docker build creates an image from a Dockerfile.
- docker commit creates an image from a container.
- docker rmi removes an image.
- docker insert inserts a file from a URL into an image. (Kind of odd; you'd think images would be immutable after creation.)
docker import and docker commit only set up the filesystem, not Dockerfile info like CMD, ENTRYPOINT, or EXPOSE. See bug.
- docker history shows the history of an image.
- docker tag tags an image to a name (local or registry).
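For example, two different ways to turn a container into an image (a sketch; mycontainer and the repository names are hypothetical):

docker commit mycontainer myuser/snapshot                 # snapshot the container as a new layered image
docker export mycontainer | docker import - myuser/flat   # flatten it into a single-layer image
docker history myuser/snapshot                            # the new layer sits on top of the base image's layers
docker images                                             # both new images show up here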
A repository is a hosted collection of tagged images that together create the file system for a container.
A registry is a host -- a server that stores repositories and provides an HTTP API for managing the uploading and downloading of repositories.
Docker.io hosts its own index to a central registry which contains a large number of repositories.
- docker login to log in to a registry.
- docker search searches the registry for an image.
- docker pull pulls an image from the registry to the local machine.
- docker push pushes an image to the registry from the local machine.
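A typical round trip with the central index might look like this (a sketch; myuser/myimage is a hypothetical repository you own):

docker login                 # authenticate against the index
docker search postgres       # find public repositories
docker pull ubuntu           # download an image to the local machine
docker push myuser/myimage   # upload your own repository (it must be tagged myuser/...)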
The Dockerfile is the configuration file. It builds a Docker image when you run docker build on it. Vastly preferable to docker commit.
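A minimal sketch of the workflow: write a Dockerfile, then build an image from it (the package and the image name are just examples):

cat > Dockerfile <<'EOF'
# start from the ubuntu base image
FROM ubuntu
# bake a package into the image
RUN apt-get update && apt-get install -y curl
# document the port a container will listen on
EXPOSE 8080
# default command when a container is started from this image
CMD ["bash"]
EOF
docker build -t myuser/myimage .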
Best to look at http://github.com/wsargent/docker-devenv and the best practices for more details.
The versioned filesystem in Docker is based on layers. They're like git commits or changesets for filesystems.
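A quick way to see this: docker history lists an image's layers newest-first, much like a git log (using the ubuntu base image as an example):

docker history ubuntu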
Links are how Docker containers talk to each other. Linking into Redis is the only real example.
If you have a docker container with the name CONTAINER (specified by docker run -name CONTAINER) and its Dockerfile exposes a port:
EXPOSE 1337
Then if we create another container called LINKED like so:
docker run -d -link CONTAINER:ALIAS -name LINKED user/wordpress
Then the exposed ports and aliases of CONTAINER will show up in LINKED with the following environment variables:
$ALIAS_PORT_1337_TCP_PORT
$ALIAS_PORT_1337_TCP_ADDR
And you can connect to it that way.
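For instance, from inside LINKED you could reach the service running in CONTAINER like this (a sketch; it assumes the service speaks plain TCP and that nc is installed in the image):

echo $ALIAS_PORT_1337_TCP_ADDR $ALIAS_PORT_1337_TCP_PORT   # injected by the link
nc $ALIAS_PORT_1337_TCP_ADDR $ALIAS_PORT_1337_TCP_PORT     # open a raw connection to the exposed port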
Docker volumes are free-floating filesystems. They don't have to be connected to a particular container.
Volumes are useful in situations where you can't use links (which are TCP/IP only). For instance, if you need to have two docker instances communicate by leaving stuff on the filesystem.
You can mount them in several docker containers at once, using docker run -volumes-from.
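For example, the classic data-container pattern (a sketch; datastore and the paths are just examples):

docker run -v /var/data -name datastore ubuntu true      # a container whose only job is to own a volume at /var/data
docker run -volumes-from datastore ubuntu ls /var/data   # a second container sees the same files
docker run -v $HOSTDIR:$DOCKERDIR ubuntu ls $DOCKERDIR   # or bind-mount a host directory directly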
See advanced volumes for more details.
Some useful one-liners, adapted from various sources:

# handy alias for the id of the last-created container
alias dl='docker ps -l -q'

# commit the last container as a new image
docker run ubuntu echo hello world
docker commit `dl` helloworld

# commit with a command baked in
docker commit -run='{"Cmd":["postgres", "-too -many -opts"]}' `dl` postgres

# get a container's IP address
docker inspect `dl` | grep IPAddress | cut -d '"' -f 4

or build jq and pull it out of the JSON properly:

wget http://stedolan.github.io/jq/download/source/jq-1.3.tar.gz
tar xzvf jq-1.3.tar.gz
cd jq-1.3
./configure && make && sudo make install
docker inspect `dl` | jq -r '.[0].NetworkSettings.IPAddress'

# show the default environment of an image
docker run -rm ubuntu env

# delete containers created weeks ago
docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs docker rm

# delete all stopped containers
docker rm `docker ps -a -q`

# visualize image dependencies (requires graphviz)
docker images -viz | dot -Tpng -o docker.png