@arthurbergmz
Created May 13, 2021 13:04
Docker

Docker studies

Docker is an engine that runs containers. Containers allow you to solve many challenges created by the growing DevOps trend. Docker is made up of the Docker Engine, the Docker CLI and the Docker Registry.

Docker CLI is where the "business logic" is, establishing an interface so you can take advantage of the Docker Engine and Docker Registry from a terminal/console.

Why Docker is useful

When a server application needs to handle more traffic than a single server can, the solution is well known: place a reverse proxy in front of it and duplicate the server as many times as needed.

Commands

docker info

Displays information about the Docker host.

docker build -t (image name) (directory)

Creates a local image based on the Dockerfile file located in the given directory.

Flags

  • --file, -f to specify a different file than Dockerfile.

docker run (image) [command to be run inside the container]

Creates a container for that image (name:tag). Add the --rm flag to automatically remove the container when it exits. Same as docker container run.

If the given image is not already present on your disk, Docker downloads it from a default Docker Registry - the Docker Hub.

Flags

  • -d to detach the container and keep it running in the background
  • --rm to automatically remove the container when it exits
  • --env A=B to set an environment variable A with value B
  • --env-file "environments/development.env" to provide a file environments/development.env of environment variables
  • -v [host system directory]:[container directory] binds a volume to be mounted.
  • -it allows you to stop the container using Ctrl+C when attached to a terminal/console.
  • -p [incoming port to open on the host machine]:[port to be mapped inside the container] allows the container to listen for incoming network connections
  • --name (custom unique name) assigns a custom and unique name to the container

Example

$ docker run -d --rm --env-file "environments/development.env" --env A=B (image) [command to run inside container]

Runs an (image) container that is not attached to your terminal (-d), with the environment variables from the --env-file file plus an environment variable A with value B, and that is automatically removed when the container exits (--rm).
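For reference, an env file is a plain list of KEY=VALUE pairs, one per line; lines starting with # are treated as comments. The file below is a hypothetical example of what environments/development.env might contain:

```
# environments/development.env (hypothetical example)
NODE_ENV=development
DB_HOST=localhost
DB_PORT=27017
```

Variables passed with --env override values of the same name coming from the env file.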

docker container prune

Removes all stopped containers.

Flags

  • -f in order to remove without prompting for confirmation.

Example

$ docker container prune -f

docker ps

Lists the containers that are still running. Same as $ docker container ls.

Flags

  • -a in order to also see containers that have exited.

Example

$ docker ps -a

docker logs (container id or name)

Retrieves the logs of a container, even when it has exited.

Flags

  • --since (time with unit or timestamp) to retrieve logs written after a given relative time (e.g. 10s) or timestamp.
  • --until (time with unit or timestamp) to retrieve logs written before a given relative time or timestamp.
  • --tail (n) to retrieve n lines from the end of the logs.
  • --timestamps to show timestamps
  • --follow, -f to keep live/follow log output on console

Example

$ docker logs -f --since 10s 717d46b6ec25

Follows the log output of container 717d46b6ec25 on the console, starting with the lines written in the past 10 seconds.

docker inspect (container id or name)

Retrieves detailed information about a running or stopped container.

docker stop (container id or name)

Stops a container that is still running. The stopped container still exists and can be listed with docker ps -a or removed with docker rm.

docker rm (container id or name)

Deletes a container. The container must be stopped/exited already.

docker images

Lists local images. Same as $ docker image ls.

docker rmi (image tag)

Deletes an image.

Dockerfile

To create an image, you must have a Dockerfile file. This file contains instructions on how the image should be built.

The Dockerfile file can have any name. Naming it Dockerfile makes it easier for others to understand its purpose when they see that file in your project. It also means we don’t need to state the file name when using the docker build command.

A Dockerfile file should always begin with a FROM instruction because every image is based on another base image. This is a powerful feature since it allows you to extend images that may already be complex.

Example

FROM debian:8

CMD ["echo", "Hello, world!"]

Displays a "Hello, world!" message when the container runs.

Notes

The CMD instruction works the same as a given command in docker run: it specifies an executable to run when a container is created using your image and it accepts optional arguments.

The CMD instruction's value ["echo", "Hello, world!"] is parsed as a JSON array.
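As a side note, CMD also has a shell form, where the value is a plain string executed through /bin/sh -c. The exec (JSON array) form used above runs the executable directly, without an intermediate shell. A sketch of both forms:

```dockerfile
# Exec form: runs echo directly, no shell involved
CMD ["echo", "Hello, world!"]

# Shell form: same output, but wrapped in /bin/sh -c
CMD echo "Hello, world!"
```

Only the last CMD instruction in a Dockerfile takes effect, so a real image would keep just one of the two.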

More detailed information about instructions can be found in the Dockerfile reference.

Images

A Docker image is created using the docker build command and a Dockerfile file.

When published to a registry, the image name is made of <repository_name>/<name>:<tag>.

  • Tag is optional; when missing, it is considered to be latest by default.
  • repository_name can be a registry DNS name or the name of a user/organization on Docker Hub.
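As an illustration, here are a few hypothetical image names and how they would be interpreted:

```
nginx                             → official image on Docker Hub, tag latest
arthurbergmz/my-app:1.0           → a user repository on Docker Hub, tag 1.0
registry.example.com/team/app:2.3 → an image on a private registry
```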

In order to create an image from the directory's Dockerfile file, you need to run the docker build command. To do this, you must type the following command in a terminal located in the folder where the Dockerfile file lives:

$ docker build -t (image name) (directory)

The directory argument is usually a dot (.), meaning the terminal's current directory should be used, since you are already running the terminal inside the desired directory.

Images are created locally, initially.

Containers

Each container is created from an image you provide to the docker run command. Containers are isolated, brand-new environments whose changes are erased when the container is removed.

You can think of the docker run command as the equivalent of buying a new computer, executing some command on it, then throwing it away. Each time a container is created from an image, you get a new isolated and virgin environment to play with inside that container.

By default, a container runs in isolation and, as such, it does not listen for incoming connections on the machine where it is running. You must explicitly open a port on the host machine and map it to a port on the container.

Short-lived containers

Short-lived containers usually do some processing and display some output. Example:

$ docker run alpine printenv

When you execute this command, a new container will be generated from the alpine image and attached to your terminal, executing the printenv command and printing its output. After that, it will be powered off (exited) since it has no other process running.

If you run the command above three times, you will get the same output three times, but from three different containers.

Long-lived containers

Long-lived containers usually are server containers. Whether you want to host a web application, an API or a database, you want a container that listens for incoming network connections and is potentially long-lived.

But still it’s best not to think about containers as long-lived as default.

Suppose we need to run a NGINX web server. NGINX listens for incoming HTTP requests on port 80 by default.

$ docker run -d nginx

This command will create a new detached (-d) NGINX web server container, but since all containers are isolated by default, requests made to the machine's port 80 will not reach it. We need to map a port on the machine to a port on the container, using the -p flag of the docker run command.

Listening for incoming network connections

In the following example, all requests made to port 8085 on the host will be mapped to the container's port 80 - where NGINX will be listening for requests.

$ docker run -d -p 8085:80 nginx

Now, if you try hitting localhost:8085, you will get a page served through a dockerized NGINX in your machine.

Since it is detached, remember you can always follow the container's console logging running docker logs -f (container id or name).

Volumes

When a container writes files, it writes them inside the container's own filesystem. This means that when the container dies (the host machine restarts, the container is moved from one node to another in a cluster, it simply fails, etc.), all of that data is lost. It also means that if you run the same container several times in a load-balancing scenario, each container will have its own data, which may result in an inconsistent user experience.

A rule of thumb for the sake of simplicity is to ensure that containers are stateless, for instance, storing their data in an external database (relational like an SQL Server or document-based like MongoDB) or distributed cache (like Redis). However, sometimes you want to store files in a place where they are persisted; this is done using volumes.

Suppose you run a MongoDB database with no volume:

$ docker run -d mongo

Any data stored in that database will be lost when the container is stopped or restarted. In order to avoid data loss, you can use a volume mount by running docker run with --volume, -v.

$ docker run -v /databases/mongodb:/data/db -d mongo

Runs a detached mongo container (-d) with a volume -v that will ensure any data written to the /data/db directory inside the container is actually written to the /databases/mongodb directory on the host system. This ensures that the data is not lost when the container is restarted.

The /data/db path can also be declared in the image's Dockerfile through the VOLUME ["/data/db"] instruction; the container then creates a mount point with the specified name /data/db and marks it as holding externally mounted volumes from the native host system or other containers.

On Windows-based containers, the destination of a volume inside the container must be a non-existing or empty directory, or a drive other than C:.

The list of directories is parsed as a JSON array, so you must enclose words with double quotes (") rather than single quotes (').
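As a sketch, the relevant line in such an image's Dockerfile would look like this (the official mongo image declares a similar mount point):

```dockerfile
# Declare /data/db as a mount point for externally mounted volumes
VOLUME ["/data/db"]
```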

Networking

When your image hosts server software, it listens on one or several ports. For instance, an HTTP server generally listens on the TCP port 80.

You can make this explicit using an EXPOSE instruction on your Dockerfile file:

EXPOSE 80

The EXPOSE instruction is purely for reference purposes. It will not open a port to the outside world when a container is created from that image, but it lets someone who wants to run a container from your image know which of the container's ports they should map to the machine's ports through the -p flag of the docker run command.

More about port mapping in the earlier section, Listening for incoming network connections.

IANA's ephemeral port range for reference: 49152 to 65535.
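Putting the networking instructions together, a minimal Dockerfile for a web server image might look like the sketch below (the nginx:1.25 tag is an assumption for illustration):

```dockerfile
# Extend the official NGINX image
FROM nginx:1.25

# Reference only: documents that the server listens on TCP port 80.
# The port still has to be mapped at run time, e.g. docker run -p 8085:80
EXPOSE 80
```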

Extending images and copying files into them

...
