Docker is an engine that runs containers. Containers let you solve many of the challenges created by the growing DevOps trend. Docker is made up of the Docker Engine, the Docker CLI, and the Docker Registry.
The Docker CLI is where the "business logic" is: it provides an interface so you can take advantage of the Docker Engine and the Docker Registry from a terminal/console.
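You can see this client/server split by running docker version, which prints a Client section (the CLI) and a Server section (the Engine):
$ docker version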
When a server application needs to handle more load than a single server can, the solution is well known: place a reverse proxy in front of it and duplicate the server as many times as needed.
docker info shows information about the Docker host.
docker build creates a local image based on the Dockerfile file located in the given directory.
- --file, -f to specify a different file than Dockerfile.
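For example, assuming a project with an alternative file named Dockerfile.dev (a hypothetical name), you could build from it like this, where myapp:dev is also just an example image name and tag:
$ docker build --file Dockerfile.dev --tag myapp:dev .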
docker run creates a container from the given image (name:tag) and runs it. Same as docker container run.
If the given image is not already present on your disk, Docker downloads it from a default Docker Registry, the Docker Hub.
- -d to detach and keep the container running in the background.
- --rm to automatically remove the container when it exits.
- --env A=B to set an environment variable A with value B.
- --env-file "environments/development.env" to provide a file environments/development.env of environment variables.
- -v [host system directory]:[container directory] binds a volume to be mounted.
- -it allows you to stop the container using Ctrl+C when attached to a terminal/console.
- -p [incoming port to open on the host machine]:[port to be mapped inside the container] allows the container to listen for incoming network connections.
- --name (custom unique name) assigns a custom and unique name to the container.
$ docker run -d --rm --env-file "environments/development.env" --env A=B (image) [command to run inside container]
Runs a container from (image) that will not be attached to your terminal (-d), with the environment variables from the --env-file file and an environment variable A with value B, and that will be automatically removed when it exits (--rm).
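As another sketch combining the flags above (my-nginx is a made-up name), the following runs a detached NGINX container, names it, and maps host port 8085 to container port 80:
$ docker run -d --rm --name my-nginx -p 8085:80 nginx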
docker container prune removes all stopped containers.
- -f to remove without prompting for confirmation.
$ docker container prune -f
docker ps lists the containers that are still running. Same as docker container ls.
- -a to also see containers that have exited.
$ docker ps -a
docker logs retrieves the logs of a container, even when it has exited.
- --since (time with unit or timestamp) to retrieve logs since a given interval or timestamp.
- --until (time with unit or timestamp) to retrieve logs before a given interval or timestamp.
- --tail (n) to retrieve n lines from the end of the logs.
- --timestamps to show timestamps.
- --follow, -f to keep following the live log output on the console.
$ docker logs -f --since 10s 717d46b6ec25
Keeps showing the log lines from the past 10 seconds of container 717d46b6ec25 on the console, following new output as it arrives.
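For example, to print the last 100 log lines of the same container with timestamps (100 is an arbitrary number):
$ docker logs --tail 100 --timestamps 717d46b6ec25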
docker inspect retrieves detailed information about a running or stopped container.
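For example, reusing the container id from above, you can dump everything, or extract a single field with --format and a Go template:
$ docker inspect 717d46b6ec25
$ docker inspect --format "{{.State.Status}}" 717d46b6ec25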
docker rm -f deletes a container, even if it is still running.
docker rm deletes a container. The container must be stopped/exited already.
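For example, reusing the container id from above:
$ docker rm 717d46b6ec25
$ docker rm -f 717d46b6ec25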
docker images lists local images. Same as docker image ls.
docker rmi deletes an image.
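For example, to list local images and then delete the nginx image (assuming no container still uses it):
$ docker images
$ docker rmi nginx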
To create an image, you must have a Dockerfile file. This file contains instructions on how the image should be built.
The Dockerfile file can have any name. Naming it Dockerfile makes it easier for others to understand its purpose when they see that file in your project. It also means we don’t need to state the file name when using the docker build command.
A Dockerfile file should always begin with a FROM instruction, because every image is based on another base image. This is a powerful feature, since it allows you to extend images that may already be complex.
FROM debian:8
CMD ["echo", "Hello, world!"]
Displays a "Hello, world!" message when the container runs.
The CMD instruction works the same as a given command in docker run: it specifies an executable to run when a container is created using your image, and it accepts optional arguments. The CMD instruction's value ["echo", "Hello, world!"] is parsed as a JSON array.
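Putting it together, you could build this Dockerfile into an image and run it (hello is just an example image name):
$ docker build -t hello .
$ docker run --rm hello
Hello, world!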
More detailed information about instructions can be found in the Dockerfile reference.
A Docker image is created using the docker build command and a Dockerfile file.
When published to a registry, the image name is made of <repository_name>/<name>:<tag>.
- Tag is optional; when missing, it is considered to be latest by default.
- repository_name can be a registry DNS name or the name of a repository on the Docker Hub.
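For example, assuming a Docker Hub account named myuser (a hypothetical name), you could tag the hello image built earlier and publish it:
$ docker tag hello myuser/myapp:1.0
$ docker push myuser/myapp:1.0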
In order to create an image from the directory's Dockerfile file, you need to run the docker build command. To do this, type the following command in a terminal located in the folder where the Dockerfile file lives:
$ docker build -t (image name) (directory)
The directory argument is usually a dot (.), specifying that the terminal's current directory should be used, since you are already running the terminal inside the desired directory.
Images are created locally, initially.
Each container is created from an image you provide to the docker run command. Containers are isolated, brand-new environments that get erased when they are removed.
You can think of the docker run command as the equivalent of buying a new computer, executing some command on it, then throwing it away. Each time a container is created from an image, you get a new, isolated, virgin environment to play with inside that container.
By default, a container runs in isolation and, as such, it does not listen for incoming connections on the machine where it is running. You must explicitly open a port on the host machine and map it to a port on the container.
Short-lived containers usually do some processing and display some output. Example:
$ docker run alpine printenv
When you execute this command, a new container is generated from the alpine image and attached to your terminal; it executes the printenv command and prints its output. After that, it is powered off (exited) since it has no other process running.
If you run the command above three times, you will get the same output three times, but from three different containers.
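One way to see this, assuming the alpine image: a container's default hostname is its container id, so running the following several times prints a different value each time, showing that each run is a brand-new container:
$ docker run --rm alpine hostname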
Long-lived containers usually are server containers. Whether you want to host a web application, an API or a database, you want a container that listens for incoming network connections and is potentially long-lived.
Still, it is best not to think of containers as long-lived by default.
Suppose we need to run an NGINX web server. NGINX listens for incoming HTTP requests on port 80 by default.
$ docker run -d nginx
This command will create a new detached (-d) NGINX web server container. But since all containers are isolated by default, any request made to the machine's port 80 will not reach it. We need to map the machine's port to the container's port, using the -p flag of the docker run command.
In the following example, all requests made to port 8085 will be mapped to the container's port 80, where NGINX will be listening for requests.
$ docker run -d -p 8085:80 nginx
Now, if you try hitting localhost:8085, you will get a page served through a dockerized NGINX on your machine. Since it is detached, remember you can always follow the container's console logging by running docker logs -f (container id or name).
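Assuming curl is installed on your machine, you can verify the mapping from the command line:
$ curl localhost:8085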
When a container writes files, it writes them inside the container, which means that when the container dies (the host machine restarts, the container is moved from one node to another in a cluster, it simply fails, etc.) all of that data is lost. It also means that if you run the same container several times in a load-balancing scenario, each container will have its own data, which may result in an inconsistent user experience.
A rule of thumb for the sake of simplicity is to ensure that containers are stateless, for instance, storing their data in an external database (relational like an SQL Server or document-based like MongoDB) or distributed cache (like Redis). However, sometimes you want to store files in a place where they are persisted; this is done using volumes.
Suppose you run a MongoDB database with no volume:
$ docker run -d mongo
Any data stored in that database will be lost when the container is removed. In order to avoid data loss, you can use a volume mount by running docker run with --volume, -v.
$ docker run -v /databases/mongodb:/data/db -d mongo
Runs a detached (-d) mongo container with a volume (-v) that ensures any data written to the /data/db directory inside the container is actually written to the /databases/mongodb directory on the host system. This ensures that the data is not lost when the container is removed and recreated.
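As an alternative sketch, you can let Docker manage the storage with a named volume (mongodb-data is a made-up name) instead of a host directory:
$ docker volume create mongodb-data
$ docker run -d -v mongodb-data:/data/db mongo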
The /data/db path can also be declared in the image's Dockerfile through the VOLUME ["/data/db"] instruction: the container then creates a mount point with the specified name /data/db and marks it as holding externally mounted volumes from the native host system or other containers.
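A minimal Dockerfile sketch declaring such a mount point (the base image is just an example):
FROM debian:8
VOLUME ["/data/db"]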
On Windows-based containers, the destination of a volume inside the container must be a non-existing or empty directory, or a drive other than C:/.
The list of directories is parsed as a JSON array, so you must enclose words with double quotes (") rather than single quotes (').
When your image hosts server software, it listens on one or several ports. For instance, an HTTP server generally listens on the TCP port 80.
You can make this explicit using an EXPOSE instruction in your Dockerfile file:
EXPOSE 80
The EXPOSE instruction is purely for reference purposes. It will not open a port to the outside world when a container is created from that image, but it lets someone who wants to run a container from your image know which of the container's ports they should map to the machine's ports through the -p flag of the docker run command.
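Related to EXPOSE: the -P (publish all) flag of docker run maps every exposed port of the image to a random ephemeral port on the host, which you can then look up with docker ps:
$ docker run -d -P nginx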
More about port mapping in the earlier section Listening for incoming network connections.
IANA's ephemeral port range for reference: 49152 to 65535.
...
https://twitter.com/sidpalas/status/1634194026500096000?t=8e1_RddG-jDCi-JnpBm04g&s=03