- Install Docker: on OS X, Docker Toolbox is recommended because it is easy to update
remove all containers
docker rm $(docker ps -aq)
if you see this message when removing all containers
/var/run/docker.sock: permission denied
use this command to add your user to the docker group
sudo usermod -a -G docker <username>
then log out and log back in
reference: moby/moby#5314
if you use Docker to expose port 3306 and the host already runs a MySQL service
- stop mysql
sudo stop mysql
reference: http://askubuntu.com/questions/82374/how-do-i-start-stop-mysql-server
- restart docker
sudo service docker.io restart
reference: moby/moby#6476
- when running a Docker container, bind a published port to a specific IP
docker run -p 127.0.0.1:3306:3306 asyncfi/magento-mysql
detach from a container without stopping it
Ctrl+p + Ctrl+q
reference: http://stackoverflow.com/questions/19688314/how-do-you-attach-and-detach-from-dockers-process
update docker
wget -qO- https://get.docker.com/ | sh
when checking the Docker version
sudo docker version
you may see this error
FATA[0000] Error response from daemon: client and server don't have same version (client : 1.18, server: 1.12)
you must restart docker.io
sudo service docker.io restart
sudo docker -d
on OS X with docker-machine, this error
Error response from daemon: client is newer than server (client API version: 1.22, server API version: 1.21)
to fix it
docker-machine upgrade <machine name>
to fix (seen on Ubuntu right after the first install)
Are you trying to connect to a TLS-enabled daemon without TLS?
to fix
An error occurred trying to connect: Get https://192.168.59.103:2376/v1.19/containers/json: x509: certificate is valid for 127.0.0.1, 10.0.2.15, not 192.168.59.103
use this command
boot2docker ssh sudo /etc/init.d/docker restart
reference: boot2docker/boot2docker#938
to fix
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
execute this command
sudo docker -d
enter a running container (Docker >= 1.3)
docker exec -it [container-id] bash
build an image (-t sets the tag)
docker build -t <tag> .
list all images
docker images
remove image
docker rmi <image-name>
if Docker cannot resolve DNS names
restart Vagrant or boot2docker
https://github.com/docker/docker/issues/541
migrate from boot2docker to Docker Machine
https://docs.docker.com/machine/migrate-to-machine/
- if you installed a version older than 0.5.0, remove it
sudo rm /usr/local/bin/docker-machine*
- download docker-machine 0.5.1
curl -L https://github.com/docker/machine/releases/download/v0.5.1/docker-machine_darwin-amd64.zip >machine.zip && \
unzip machine.zip && \
rm machine.zip && \
sudo mv docker-machine* /usr/local/bin
- remove any machine that is in an error state (docker-machine rm <machine-name>)
- migrate from boot2docker
docker-machine create -d virtualbox --virtualbox-import-boot2docker-vm boot2docker-vm docker-vm
- list machine
docker-machine ls
- start machine
docker-machine start {machine-name}
- tell the docker client to talk to that machine
eval "$(docker-machine env {machine-name})"
docker ps
- get the machine's IP address
docker-machine ip {machine-name}
reference: http://stackoverflow.com/questions/20932357/docker-enter-running-container-with-new-tty
-f (point to a Dockerfile anywhere in your file system)
docker build -f /path/to/a/Dockerfile .
-t (the tag under which to save the new image if the build succeeds)
docker build -t shykes/myapp .
-t (repeat it to tag the image into multiple repositories or with multiple tags)
docker build -t shykes/myapp:1.0.2 -t shykes/myapp:latest .
- The Docker daemon runs the instructions in the Dockerfile one-by-one, committing the result of each instruction to a new image if necessary, before finally outputting the ID of your new image.
- Note that each instruction is run independently and causes a new image to be created, so
RUN cd /tmp
will not have any effect on the next instructions.
- Whenever possible, Docker will re-use the intermediate images (cache) to accelerate the
docker build
process significantly.
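As a minimal Dockerfile sketch of the RUN cd /tmp point above (ubuntu:14.04 is only an example base image):
FROM ubuntu:14.04
RUN cd /tmp   # has no effect on later instructions; each RUN starts in the default working directory
RUN pwd       # prints /, not /tmp
# use WORKDIR instead to change the directory for subsequent instructions
WORKDIR /tmp
RUN pwd       # now prints /tmp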
ENV <variable> <value>
to reference the variable later in the Dockerfile, use the syntax ${variable}
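For example, a sketch using a made-up APP_HOME variable:
ENV APP_HOME /opt/myapp
RUN mkdir -p ${APP_HOME}   # the shell form of RUN also sees APP_HOME in its environment
WORKDIR ${APP_HOME}
COPY . ${APP_HOME}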
Use a .dockerignore file to exclude files and directories from the build context. This helps avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY
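A sketch of such a .dockerignore file at the root of the build context (the entries are only examples):
# keep VCS metadata, local dependencies and secrets out of the build context
.git
node_modules
*.log
secrets.env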
FROM sets the base image for subsequent instructions.
FROM <image>
FROM <image>:<tag>
FROM <image>@<digest>
MAINTAINER <name>
RUN will execute any commands in a new layer on top of the current image and commit the result. The resulting committed image will be used for the next step in the Dockerfile.
the exec form makes it possible to avoid shell string munging, and to RUN commands using a base image that does not contain /bin/sh
In the shell form you can use a \ (backslash) to continue a single RUN instruction onto the next line.
RUN <command> (shell form, the command is run in a shell - /bin/sh -c)
RUN ["executable", "param1", "param2"] (exec form)
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD, only the last CMD will take effect.
- The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
CMD ["executable", "param1", "param2"] (exec form, this is the preferred form)
CMD ["param1", "param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
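A small sketch, assuming a Node.js base image and a placeholder server.js, showing that CMD only supplies a default which docker run can override:
FROM node:4
COPY server.js /app/server.js
# default command: "docker run <image>" executes it,
# while "docker run <image> node --version" replaces it for that run
CMD ["node", "/app/server.js"]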
LABEL <key>=<value> <key>=<value> <key>=<value>
EXPOSE informs Docker that the container listens on the specified network ports at runtime.
- EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number.
EXPOSE <port> [<port>...]
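For example, with a placeholder image called myimage whose Dockerfile contains EXPOSE 8080:
docker run -p 8080:8080 myimage   # publish container port 8080 on host port 8080
docker run -p 80:8080 myimage     # publish the same port externally under another number
docker run -P myimage             # publish all exposed ports to random high host ports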
ENV <key>=<value>
ADD <src>... <dest>
ADD ["<src>",... "<dest>"]
example
ADD test relativeDir/ # adds "test" to `WORKDIR`/relativeDir/
ADD test /absoluteDir/ # adds "test" to /absoluteDir/
COPY <src>... <dest>
COPY ["<src>",... "<dest>"]
ENTRYPOINT allows you to configure a container that will run as an executable
ENTRYPOINT ["executable", "param1", "param2"]
ENTRYPOINT command param1 param2
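A minimal sketch of that executable-style pattern (ubuntu:14.04 and the echo command are only examples):
FROM ubuntu:14.04
# the container now behaves like the echo executable;
# CMD supplies a default argument that "docker run <image> something-else" replaces
ENTRYPOINT ["/bin/echo", "hello"]
CMD ["world"]
Running the image with no arguments prints "hello world"; arguments passed to docker run replace the CMD part but keep the ENTRYPOINT.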
adds a trigger instruction to be executed at a later time, when the image is used as the base for another build.
ONBUILD [INSTRUCTION]
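A sketch of a hypothetical base image that uses ONBUILD triggers:
FROM ubuntu:14.04
# these lines do nothing in this build; they run as the first steps
# of any child image that uses this image in its FROM line
ONBUILD COPY . /app
ONBUILD RUN ls /app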
STOPSIGNAL sets the system call signal that will be sent to the container to exit.
STOPSIGNAL signal
Compose is a tool for defining and running multi-container
Docker applications.
Compose is great for development, testing and staging environments, as well as CI workflows.
Using Compose is basically a three-step process.
- Define your app's environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Lastly, run docker-compose up and Compose will start and run your entire app.
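A minimal docker-compose.yml sketch, assuming a web service built from the local Dockerfile plus a stock redis image (names and ports are only examples):
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis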
Compose has commands for managing the whole lifecycle of your application:
- Start, stop and rebuild services
- View the status of running services
- Stream the log output of running services
- Run a one-off command on a service
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
- Building images Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the resulting image will only live on a single node and won't be distributed to other nodes.
If you want to use Compose to scale the service in question to multiple nodes, you'll have to build it yourself, push it to a registry (e.g. the Docker Hub) and reference it from docker-compose.yml (see the sketch after this list).
- Multiple dependencies If a service has multiple dependencies of the type which force co-scheduling, it's possible that Swarm will schedule the dependencies on different nodes, making the dependent service impossible to schedule.
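A sketch of that build-and-push workflow, using a placeholder image name myuser/myapp:
docker build -t myuser/myapp:1.0 .
docker push myuser/myapp:1.0
then in docker-compose.yml the service references the pushed image instead of a local build:
services:
  web:
    image: myuser/myapp:1.0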