@rasheedamir
Last active September 17, 2015 12:39
docker

Docker, the trending containerization technology, is winning hearts with its lightweight, portable, "build once, configure once, run anywhere" approach.

Container vs Image

A container is a running instance of an image. An image is a set of read-only layers; when you start an image, you get a running container of it, and you can have many containers running from the same image.

So a running image is a container.

Cheat Sheet

  • Lists only running containers: sudo docker ps

  • Lists all containers: sudo docker ps -a


  • Remove Stopped Containers: docker ps -a | awk '/Exit/ {print $1}' | xargs docker rm

  • Remove all containers (including running containers): docker ps -a -q | xargs docker rm -f

  • List available images: sudo docker images

  • Restart the Docker daemon: sudo service docker restart (on Ubuntu 14.04 the service is named docker.io instead of docker)

  • To remove a container: docker rm <Container ID>

  • To remove all containers: docker rm $(docker ps -a -q)

  • To remove an image: docker rmi <Image ID>

  • To remove all images: docker rmi $(docker images -q)
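The "Remove Stopped Containers" one-liner above works because `docker ps -a` prints one container per line with its STATUS column reading "Exited (…)" for stopped containers. A sketch with made-up container IDs shows what the awk filter actually selects:

```shell
# Hypothetical `docker ps -a` output (IDs and names are made up):
sample='CONTAINER ID  IMAGE   COMMAND  STATUS                 NAMES
3f2a1b4c5d6e  ubuntu  "bash"   Exited (0) 2 days ago  old_job
7a8b9c0d1e2f  ubuntu  "bash"   Up 1 hour              web1'

# awk keeps only lines whose STATUS contains "Exit", then prints the
# first column (the container ID), which xargs feeds to `docker rm`:
echo "$sample" | awk '/Exit/ {print $1}'
# -> 3f2a1b4c5d6e
```

Note that the header line and running containers are untouched, so only stopped containers get removed.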

Troubleshooting

  • Error: "dial unix /var/run/docker.sock: permission denied" on Ubuntu 14.04
  • Solution: Add yourself to the docker group with sudo usermod -a -G docker <username> and reboot (logging out and back in may be enough). After that, docker commands no longer report permission denied.
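A quick way to confirm the group change took effect after logging back in is to list your current groups, along the lines of:

```shell
# Add yourself to the docker group (needs root; <username> is yours):
#   sudo usermod -a -G docker <username>

# After logging out and back in, check whether "docker" now appears
# among your effective groups:
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "in docker group"
else
  echo "not in docker group yet (log out and back in after usermod)"
fi
```

If the group still does not show up, the new membership has not been picked up by your session yet.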

OpenStack or Docker

Do I need OpenStack if I use Docker?

Chef

Chef: Chef is an automation platform that transforms infrastructure into code.

This is configuration management software. Most such tools use the same paradigm: they let you define the state you want a machine to be in with regard to configuration files, installed software, users, groups and many other resource types. Most also provide functionality to push changes onto specific machines, a process usually called orchestration.

Vagrant

Vagrant: Create and configure lightweight, reproducible, and portable development environments.

It provides a reproducible way to generate fully virtualized machines using either Oracle's VirtualBox or VMware technology as providers. Vagrant can hand off to configuration management software to continue the installation where the operating system's installer finishes. This is known as provisioning.
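As a sketch of that workflow, a minimal Vagrantfile can be written and then brought up with the vagrant CLI (the box name and inline provisioning script here are assumptions, not from the original notes):

```shell
# Write a minimal hypothetical Vagrantfile: a VirtualBox Ubuntu box
# provisioned with an inline shell script.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "shell", inline: "apt-get update"
end
EOF

# Then, with Vagrant and VirtualBox installed:
#   vagrant up        # create and provision the VM
#   vagrant ssh       # log into it
#   vagrant destroy   # tear it down again
```

The provision step is exactly where a configuration management tool like Chef could take over instead of the inline shell script.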

Docker

Docker: An open source project to pack, ship and run any application as a lightweight container

The functionality of this software somewhat overlaps with that of Vagrant, in that it provides the means to define operating system installations, but it differs greatly in the technology used for this purpose. Docker uses Linux containers, which are not virtual machines per se, but isolated processes running in isolated filesystems. Docker can also use a configuration management system to provision the containers.

OpenStack

OpenStack: Open source software for building private and public clouds.

While it is true that OpenStack can be deployed on a single machine, such deployment is purely for proof-of-concept, probably not very functional due to resource constraints.

The primary target for OpenStack installations are bare metal multi-node environments, where the different components can be used in dedicated hardware to achieve better results.

A key functionality of OpenStack is its support for many virtualization technologies, from fully virtualized (VirtualBox, VMWare), to paravirtualized (KVM/Qemu) and also containers (LXC) and even User Mode Linux (UML).

MySQL in Docker

CoreOS

CoreOS is a minimal Linux-based operating system aimed at large-scale server deployments. CoreOS is written with scalability and security in mind. Beyond that, it is strongly biased towards Docker: every process running on a CoreOS server should be running in a Docker container. CoreOS comes with Docker and etcd pre-installed.

Docker

Docker is a platform for creating lightweight, stand-alone containers from any application. It allows you to run processes in a pseudo-VM that boots extremely fast (under 1 second) and isolates all its resources.

confd

confd is a configuration management tool built on top of etcd. confd can watch certain keys in etcd, and update the related configuration files as soon as the key changes. After that, confd can reload or restart applications related to the updated configuration files. This allows you to automate configuration changes to all the servers in your cluster, and makes sure all services are always looking at the latest configuration.
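A typical confd setup pairs a template resource (which etcd keys to watch, where to render) with a template file. The layout below is a hypothetical sketch; the app name, key path, and reload command are assumptions:

```shell
# Hypothetical confd layout: a template resource plus its template.
mkdir -p confd/conf.d confd/templates

# The resource: watch one etcd key, render to a file, reload the app.
cat > confd/conf.d/myapp.toml <<'EOF'
[template]
src = "myapp.conf.tmpl"
dest = "/tmp/myapp.conf"
keys = ["/myapp/database/url"]
reload_cmd = "service myapp reload"
EOF

# The template: substitute the watched key's value into the config file.
cat > confd/templates/myapp.conf.tmpl <<'EOF'
database_url = {{getv "/myapp/database/url"}}
EOF

# With etcd running, confd can watch and re-render on every key change:
#   confd -watch -backend etcd -node http://127.0.0.1:2379 -confdir ./confd
```

Every time the key changes in etcd, confd rewrites the destination file and runs the reload command, which is how configuration stays current across the cluster.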

etcd

etcd is a highly available, distributed key/value store that is built to distribute configuration updates to all the servers in your cluster. Next to that it can be used for service discovery, or basically for any other distributed key/value based process that applies to your situation.

fleet

fleet is a layer on top of systemd, the well-known init system. fleet basically lets you manage your services on any server in your cluster transparently, and gives you some convenient tools to inspect the state of your services.
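Since fleet units are ordinary systemd units, a Docker-backed service can be described in a plain unit file and then handed to the cluster via fleetctl. The unit below is a hypothetical sketch (service and container names are assumptions):

```shell
# A hypothetical systemd unit that fleet can schedule anywhere in the
# cluster; it runs an nginx container and cleans up any stale one first.
cat > myapp.service <<'EOF'
[Unit]
Description=My app in a Docker container
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp nginx
ExecStop=/usr/bin/docker stop myapp
EOF

# On a CoreOS cluster:
#   fleetctl start myapp.service   # schedule the unit on some machine
#   fleetctl list-units            # see where it landed and its state
```

fleet decides which machine runs the unit; the unit file itself stays identical no matter where it lands.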


stop and remove containers

docker stop $(docker ps -a -q)

docker rm $(docker ps -a -q)


Sometimes you need a shell inside the container (to create test repositories, etc.). Docker provides an easy way to do that:
docker exec -i -t CONTAINER-ID bash

To check the server logs, you can do this:

docker exec -i -t CONTAINER-ID tail -f /var/log/go-server/go-server.log

You can find the container ID using docker ps.


If you started the container with the -P option, as mentioned above, and missed the startup message about finding the correct ports for the server, the command is reproduced here:

If you're using docker on a Linux box, you can do this:

echo http://localhost:$(docker inspect --format='{{(index (index .NetworkSettings.Ports "8153/tcp") 0).HostPort}}' CONTAINER-ID)


install docker on gocd-server

sudo apt-get update

sudo apt-get install wget

sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"

sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"

sudo apt-get update

sudo apt-get -y install lxc-docker

sudo docker --version


build providing the dockerfile name:

docker build -f Dockerfile.gocd-server -t gocd-server .
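For completeness, a minimal hypothetical Dockerfile.gocd-server that the build command could point at (the base image and package steps are assumptions, not the actual gocd-server Dockerfile; 8153 is the GoCD server port mentioned above):

```shell
# Write a minimal placeholder Dockerfile under the non-default name.
cat > Dockerfile.gocd-server <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y wget
# ... install go-server here ...
EXPOSE 8153
CMD ["/bin/bash"]
EOF

# -f selects the Dockerfile by name; -t tags the resulting image:
#   docker build -f Dockerfile.gocd-server -t gocd-server .
```

Without -f, docker build only looks for a file literally named Dockerfile in the build context.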


http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

docker-inside-docker

Let's take a step back here. Do you really want Docker-in-Docker? Or do you just want to be able to run Docker (specifically: build, run, sometimes push containers and images) from your CI system, while this CI system itself is in a container?

I'm going to bet that most people want the latter. All you want is a solution so that your CI system like Jenkins can start containers.

And the simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.

Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:

docker run -v /var/run/docker.sock:/var/run/docker.sock ...

Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.

If your CI makes use of the Docker binary in scripts, you can include it in your CI image, or bind-mount it from the host as well. Example:

docker run -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/bin/docker \
    -ti ubuntu

This looks like Docker-in-Docker, feels like Docker-in-Docker, but it's not Docker-in-Docker: when your CI container creates more containers, they will be created in the top-level Docker. You will not experience nesting side effects, and the build cache will be shared across multiple invocations.
