👤 Shivansh Thapliyal
- Docker Installation
- Docker Commands
- Containers
- Running containers
- Docker run command
- Naming Containers
- List running containers
- List all containers (Even if not running)
- Stop container
- Stop all running containers
- Remove container (Cannot remove running containers, must stop first)
- To remove a running container use force (-f)
- Remove multiple containers
- Remove all containers
- Get logs (Use name or ID)
- List processes running in container
- Images
- Networking
- Image tagging & pushing
- Using Amazon ECR as repository
- Volumes
- Bind mounts
## Docker Installation

### Amazon Linux

- Update the installed packages and package cache on your instance.

  ```sh
  sudo yum update -y
  ```
- Install the most recent Docker Community Edition package.

  ```sh
  sudo amazon-linux-extras install docker
  ```
- Start the Docker service.

  ```sh
  sudo service docker start
  ```
- Add the ec2-user to the docker group so you can execute Docker commands without using sudo.

  ```sh
  sudo usermod -a -G docker ec2-user
  ```
- Log out and log back in again to pick up the new docker group permissions. You can do this by closing your current SSH session and reconnecting to your instance in a new one; the new session will have the appropriate docker group permissions.
- Verify that the ec2-user can run Docker commands without sudo.

  ```sh
  docker info
  ```
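Taken together, the Amazon Linux steps above can be run as a single provisioning script (a sketch; assumes an Amazon Linux 2 EC2 instance and the default ec2-user account):

```shell
#!/bin/bash
# Provision Docker on an Amazon Linux 2 instance.
set -euo pipefail

sudo yum update -y
sudo amazon-linux-extras install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user

# Group membership takes effect on the next login;
# log out and back in before running docker without sudo.
```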
### Ubuntu (from the default repositories)

Step 1: Update software repositories. It's a good idea to update the local package database so you have access to the latest revisions.

```sh
sudo apt-get update
```
Step 2: Uninstall old versions of Docker.

```sh
sudo apt-get remove docker docker-engine docker.io
```
Step 3: Install Docker.

```sh
sudo apt install docker.io
```
Step 4: Start and enable Docker. The Docker service needs to be set up to run at startup. Type in each command followed by Enter:

```sh
sudo systemctl start docker
sudo systemctl enable docker
```
Step 5 (Optional): Check the Docker version.

```sh
docker --version
```
### Ubuntu (from Docker's official repository)

Step 1: Update the local package database.

```sh
sudo apt-get update
```
Step 2: Download dependencies.

```sh
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
```

- apt-transport-https: allows the package manager to transfer files and data over HTTPS
- ca-certificates: allows the system (and web browser) to check security certificates
- curl: a tool for transferring data
- software-properties-common: adds scripts for managing software
Step 3: Add Docker's GPG key.

```sh
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
Step 4: Add the Docker repository.

```sh
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
Step 5: Update repositories.

```sh
sudo apt-get update
```
Step 6: Install the latest version of Docker.

```sh
sudo apt-get install docker-ce
```
Step 7 (Optional): Install a specific version of Docker. List the available versions by entering the following in a terminal window:

```sh
apt-cache madison docker-ce
```

The system should return a list of available versions. Then install the one you want:

```sh
sudo apt-get install docker-ce=<VERSION>
```
### Docker Compose

Run this command to download the current stable release of Docker Compose:

```sh
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```

To install a different version of Compose, substitute 1.29.2 with the version you want to use.

Apply executable permissions to the binary:

```sh
sudo chmod +x /usr/local/bin/docker-compose
```

Create a symlink in /usr/bin if necessary:

```sh
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
```
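To sanity-check the install, `docker-compose --version` should print the release you downloaded. A minimal compose file and round trip looks like this (a sketch; the service name `web` and host port 8080 are arbitrary choices):

```shell
# Write a minimal docker-compose.yml and bring the stack up.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
EOF

docker-compose up -d   # start services in the background
docker-compose ps      # list running services
docker-compose down    # stop and remove containers and networks
```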
## Containers

### Running containers

```sh
docker container run -it -p 80:80 nginx
```

In foreground mode (the default when -d is not specified), docker run starts the process in the container and attaches the console to the process's standard input, standard output, and standard error. It can even pretend to be a TTY.
```sh
docker container run -d -p 80:80 nginx
```
INFO: By design, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the --rm option. If you use -d with --rm, the container is removed when it exits or when the daemon exits, whichever happens first.
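The interaction of -d and --rm can be seen in a short session (a sketch; the container name `tmp-nginx` is arbitrary):

```shell
# Start a detached container that removes itself on exit.
docker container run -d --rm --name tmp-nginx nginx

docker container ls                # tmp-nginx is listed while running
docker container stop tmp-nginx    # root process exits...
docker container ls -a             # ...and the container is gone; no manual rm needed
```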
- Docker runs processes in isolated containers.
- When an operator executes docker run, the container process that runs is isolated: it has its own file system, its own networking, and its own isolated process tree, separate from the host.
- When we run the `docker run` command, Docker:
  - looks for an image called nginx in the local image cache;
  - if not found in the cache, looks to the default image repository on Docker Hub;
  - pulls it down (latest version by default) and stores it in the image cache;
  - starts it in a new container.

More info in the Docker docs.
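The cache lookup described above can be observed directly (a sketch; assumes no other containers depend on the local nginx image):

```shell
docker image rm nginx        # empty the local cache for this image
docker container run -d --name cache-demo nginx
# The first run prints "Unable to find image 'nginx:latest' locally"
# followed by pull progress; subsequent runs start instantly from the cache.
docker container rm -f cache-demo
```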
### Naming Containers

```sh
docker container run -d -p 80:80 --name nginx-container nginx
```

### List running containers

```sh
docker container ls
```

OR

```sh
docker ps
```

### List all containers (Even if not running)

```sh
docker container ls -a
```

### Stop container

```sh
docker container stop [ID]
```

### Stop all running containers

```sh
docker stop $(docker ps -aq)
```

### Remove container (Cannot remove running containers, must stop first)

```sh
docker container rm [ID]
```

### To remove a running container use force (-f)

```sh
docker container rm -f [ID]
```

### Remove multiple containers

```sh
docker container rm [ID] [ID] [ID]
```

### Remove all containers

```sh
docker rm $(docker ps -aq)
```

### Get logs (Use name or ID)

```sh
docker container logs [NAME]
```

### List processes running in container

```sh
docker container top [NAME]
```
## Images

List images:

```sh
docker image ls
```

Pull an image:

```sh
docker pull [IMAGE_NAME]
```

Remove an image:

```sh
docker image rm [IMAGE_NAME]
```

Remove all images:

```sh
docker rmi $(docker images -a -q)
```
```sh
docker container run -d -p 80:80 --name nginx nginx
```

(nginx listens on port 80 inside the container by default; -p 80:80 publishes that port on the host)

```sh
docker container run -d -p 8080:80 --name apache httpd
docker container run -d -p 27017:27017 --name mongo mongo
docker container run -d -p 3306:3306 --name mysql --env MYSQL_ROOT_PASSWORD=123456 mysql
```
The Hue Editor is a mature open-source SQL assistant for querying databases and data warehouses.

```sh
docker run -it -p 8888:8888 gethue/hue:latest
```
## Networking

Docker's networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality:
- bridge: The default network driver.
- host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly.
- overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.
- macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network.
- none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
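Unlike the default bridge, user-defined bridge networks also give containers DNS resolution by name. A short session showing this (a sketch; the names `my-net` and `web` are arbitrary, and busybox is used here just as a small client image):

```shell
docker network create my-net                       # create a user-defined bridge
docker container run -d --name web --network my-net nginx

# Containers on the same user-defined network can reach each other by name:
docker container run --rm --network my-net busybox wget -qO- http://web

docker container rm -f web && docker network rm my-net   # clean up
```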
More info in the Docker docs on network drivers.

### Published ports

```sh
docker container port [NAME]
```
List networks ("bridge" is the default):

```sh
docker network ls
```

Inspect a network:

```sh
docker network inspect [NETWORK_NAME]
```

Create a network:

```sh
docker network create [NETWORK_NAME]
```

Run a container on a network:

```sh
docker container run -d --name [NAME] --network [NETWORK_NAME] nginx
```

Connect or disconnect a running container:

```sh
docker network connect [NETWORK_NAME] [CONTAINER_NAME]
docker network disconnect [NETWORK_NAME] [CONTAINER_NAME]
```
## Image tagging & pushing

Log in to Docker Hub, then push:

```sh
docker login
docker image push username/image
```

### Using Amazon ECR as repository

Authenticate Docker to your ECR registry:

```sh
aws ecr get-login-password --region <REGION_ID> | docker login --username AWS --password-stdin <AWS_ACC_NO>.dkr.ecr.<REGION_ID>.amazonaws.com
```

Tag the image for ECR (or re-tag it locally):

```sh
docker tag hadoop:latest <AWS_ACC_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPO_NAME>:hadoop-1.0.0
docker tag hadoop:latest hadoop:hadoop-1.0.0
```

Refer to Installing, updating, and uninstalling the AWS CLI version 2.
Create a Dockerfile:

```sh
touch Dockerfile
```

```dockerfile
FROM ubuntu:18.04

# Install apache
RUN apt-get update && \
    apt-get -y install apache2

# Write a hello world page
RUN echo 'Hello World!' > /var/www/html/index.html

# Configure apache to run in the foreground
RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \
    echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
    echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
    echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
    chmod 755 /root/run_apache.sh

EXPOSE 80
CMD /root/run_apache.sh
```
Build and run the image:

```sh
docker build -t hello-world .
docker images --filter reference=hello-world
docker run -t -i -p 80:80 hello-world
```
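With the container running, the page can be checked from the host (a sketch):

```shell
# The apache container built above serves the page written in the Dockerfile,
# so this prints: Hello World!
curl -s http://localhost:80
```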
To authenticate Docker to an Amazon ECR registry, run the aws ecr get-login-password command and pipe its output to docker login:

```sh
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
```

Or, with the AWS Tools for PowerShell:

```powershell
(Get-ECRLoginCommand).Password | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
```
Create a repository:

```sh
aws ecr create-repository \
    --repository-name hello-world \
    --image-scanning-configuration scanOnPush=true \
    --region us-east-1
```

Tag and push the image:

```sh
docker tag hello-world:latest aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
```

Pull the image (after docker login):

```sh
docker pull aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest
```

Delete the image and the repository:

```sh
aws ecr batch-delete-image \
    --repository-name hello-world \
    --image-ids imageTag=latest

aws ecr delete-repository \
    --repository-name hello-world \
    --force
```
Refer to the AWS docs for more.
## Volumes

- Volume: makes a special location outside of the container's union file system (UFS); used for databases.
- Bind mount: links a container path to a host path.
```sh
docker volume ls
docker volume prune
```

Example with mysql:

```sh
docker pull mysql
docker image inspect mysql
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
docker container inspect mysql
```
- You will also see the volume under Mounts.
- The container gets its own unique location on the host to store that data.
- Source: xxx is where it lives on the host.

```sh
docker volume ls
```

There is no way to tell anonymous volumes apart (for instance, with 2 mysql containers), so we use named volumes:
```sh
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume inspect mysql-db
```
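Named volumes outlive the containers that use them, which is the point for databases. A sketch of the round trip (volume name `mysql-db` as above; `mysql2` is an arbitrary name):

```shell
# Remove the container but keep the named volume...
docker container rm -f mysql
docker volume ls                      # mysql-db is still listed

# ...then start a fresh container against the same volume:
docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True \
    -v mysql-db:/var/lib/mysql mysql
# The new container sees the data the old one wrote to /var/lib/mysql.
```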
## Bind mounts

- Cannot be used in a Dockerfile; specified at run time (uses -v as well).
- ... run -v /home/shivansh/path/:/path/container (Mac/Linux)
- ... run -v //c/Users/user/stuff:/path/container (Windows)
```sh
docker container run -d -p 80:80 --name nginx -v $(pwd):/usr/share/nginx/html nginx
```

Then exec into the container to see the mounted files:

```sh
docker container exec -it nginx bash
cd /usr/share/nginx/html
ls -al
touch test.txt
```
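Back on the host, the bind mount works both ways (a sketch, continuing the session above; `from-host.txt` is an arbitrary file name):

```shell
exit          # leave the container shell; run the rest on the host
ls -al        # test.txt created inside the container now exists in $(pwd)

echo hello > from-host.txt
docker container exec nginx ls /usr/share/nginx/html   # from-host.txt is visible inside
```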