The use of Linux containers to deploy applications is called containerization. Docker is a containerization tool used for spinning up isolated, reproducible application environments. Containerization is increasingly popular because containers are:
- Flexible: Even the most complex applications can be containerized.
- Lightweight: Containers leverage and share the host kernel.
- Interchangeable: You can deploy updates and upgrades on the fly.
- Portable: You can build locally, deploy to the cloud, and run anywhere.
- Scalable: You can increase and automatically distribute container replicas.
- Stackable: You can stack services vertically and on the fly.
- Code locally on a feature branch
- Open a pull request on GitHub against the master branch
- Run automated tests against the Docker container (see the sketch after this list)
- If the tests pass, manually merge the pull request into master
- Once merged, the automated tests run again
- If the second round of tests passes, a build is created on Docker Hub
- Once the build is created, it’s then automatically (err, automagically) deployed to production
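As a rough sketch, the "run automated tests" step could be as simple as building and exercising the stack with Compose. The web service name and the Django test command are assumptions borrowed from the Compose setup later in these notes, not a prescribed CI configuration:
$ docker-compose up -d --build                        # build the images and start the containers
$ docker-compose exec -T web python manage.py test    # run the test suite inside the web container (-T skips TTY allocation for CI)
$ docker-compose down                                 # tear the stack back down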
- A Dockerfile is a file that contains a set of instructions used to create an image (see the example just after this list).
- An image is used to build and save snapshots (the state) of an environment. An image is an executable package that includes everything needed to run an application: the code, a runtime, libraries, environment variables, and configuration files.
- A container is an instantiated, live image that runs a collection of processes. A container is launched by running an image.
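For reference, a minimal Dockerfile might look something like the following; the Python base image, requirements.txt, and app.py are assumptions for illustration, not tied to any specific project:
# Pull an official base image
FROM python:3.7-alpine

# Set the working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code
COPY . .

# Default command run when a container is launched from this image
CMD ["python", "app.py"]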
$ docker --version
Docker version 18.09.2, build 6247962
$ docker-compose --version
docker-compose version 1.23.2, build 1110ad01
$ docker-machine --version
docker-machine version 0.16.1, build cce350d7
View system-wide information about the Docker installation:
$ docker info
Create a new Machine named dev using the VirtualBox driver:
$ docker-machine create -d virtualbox dev
Point the Docker client at the new Machine:
$ eval $(docker-machine env dev)
Run the following command to view the currently running Machines:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
dev * virtualbox Running tcp://192.168.99.100:2376 v18.09.3
Get the Machine's IP address:
$ docker-machine ip
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker
and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
To create the docker group and add your user:
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo gpasswd -a $USER docker
Log out and log back in so that your group membership is re-evaluated.
If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.
On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.
On Linux, you can also run the following command to activate the changes to groups:
$ newgrp docker
Verify that you can run docker commands without sudo.
$ docker run hello-world
A Dockerfile defines and builds Docker images. Images are built using the build command and can include tags to name and version your image.
docker build --tag=friendlyhello .
Tag defaults to latest. The full syntax for the tag option would be something like --tag=friendlyhello:v0.0.1.
List all docker images:
docker image ls
docker run -p 4000:80 friendlyhello
This maps your machine's port 4000 to the container's published port 80.
Run in the background (detached mode)
docker run -d -p 4000:80 friendlyhello
List all running containers
docker container ls
List all containers
docker container ls --all
List all containers in quiet mode (IDs only)
docker container ls -aq
To end the container process
docker container stop 1fa4ab2cf395
Summary
docker build -t friendlyhello . # Create image using this directory's Dockerfile
docker run -p 4000:80 friendlyhello # Run "friendlyhello" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyhello # Same thing, but in detached mode
docker container ls # List all running containers
docker container ls -a # List all containers, even those not running
docker container stop <hash> # Gracefully stop the specified container
docker container kill <hash> # Force shutdown of the specified container
docker container rm <hash> # Remove specified container from this machine
docker container rm $(docker container ls -a -q) # Remove all containers
docker image ls -a # List all images on this machine
docker image rm <image id> # Remove specified image from this machine
docker image rm $(docker image ls -a -q) # Remove all images from this machine
docker login # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag # Tag <image> for upload to registry
docker push username/repository:tag # Upload tagged image to registry
docker run username/repository:tag # Run image from a registry
Docker Compose is an orchestration framework that handles the building and running of multiple services (via separate containers) using a single .yml file. It makes it super easy to link together services running in different containers.
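As a sketch, a docker-compose.yml for the Django and Postgres setup used in the commands below might look something like this; the service names (web, db), ports, image tag, and credentials are assumptions for illustration:
version: '3.7'

services:
  web:
    build: .                                     # build from the local Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"                              # host port : container port
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine                  # matches the psql version shown below
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev
    volumes:
      - postgres_data:/var/lib/postgresql/data/  # named volume inspected below

volumes:
  postgres_data:
With a project directory named django-on-docker, Compose prefixes the volume name with the project name, which is why the volume inspected below is called django-on-docker_postgres_data.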
With one simple command we can build the image and run the container:
$ docker-compose up --build
To run the process in the background, use the -d flag for detached mode:
$ docker-compose up -d
View the currently running processes
$ docker-compose ps
Execute commands in a docker container running in detached mode
$ docker-compose exec web python manage.py migrate --noinput
Log in to a postgres instance running in a container
$ docker-compose exec db psql --username=hello_django --dbname=hello_django_dev
psql (11.2)
Type "help" for help.
hello_django_dev=# \l # List the databases
hello_django_dev=# \c hello_django_dev # Connect to hello_django_dev database as "hello_django"
hello_django_dev=# \dt # List all relations
hello_django_dev=# \q # Quit
Check that a volume was created
$ docker volume inspect django-on-docker_postgres_data
Make the entrypoint script executable by adding the following to the Dockerfile:
RUN chmod +x app/entrypoint.sh
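The entrypoint script itself isn't shown in these notes. A common pattern (an assumption here, not necessarily the exact script) is to wait for Postgres to accept connections before handing control to the container's main command:
#!/bin/sh
# app/entrypoint.sh -- assumed example: wait for Postgres, then exec the main command
# DATABASE, SQL_HOST, and SQL_PORT are assumed environment variables set in the Compose file

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z "$SQL_HOST" "$SQL_PORT"; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

exec "$@"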
Build images and spin up the containers from a specific file:
$ docker-compose -f docker-compose.prod.yml up -d --build
Stop and remove the containers:
$ docker-compose down
Bring down the containers and their volumes using a specific Compose file, then rebuild and restart:
$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
$ docker-compose -f docker-compose.prod.yml exec web python manage.py startapp upload
Delete all stopped containers, dangling images, and unused networks:
$ docker system prune
Once the containers are deleted, you can remove all unused Docker volumes with the following command:
$ docker volume prune
For the full CLI reference, visit
https://docs.docker.com/engine/reference/commandline/docker/