To build and run locally with Compose:
docker-compose -f docker-compose.yml up --build
To build a tagged image for deployment:
# docker build -t template_name:version_tag .
docker build -t micro_service:1.2.1 .
Then update the image version tag in the stack file:
nano micro_service.yml
Then deploy the stack using:
docker stack deploy -c micro_service.yml prod1
To declare an external network called network_name, create a swarm-scoped overlay network before the stack is deployed:
docker network create --driver overlay network_name
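For reference, a minimal micro_service.yml sketch (the celery and redis definitions and the redis image are illustrative; only the image tag and the external network reference need to match the commands above):
version: "3.8"
services:
  web:
    image: micro_service:1.2.1   # bump this tag for each new build
    networks:
      - network_name
  celery:
    image: micro_service:1.2.1   # assumed to share the web image
    networks:
      - network_name
  redis:
    image: redis:6-alpine        # illustrative
    networks:
      - network_name
networks:
  network_name:
    external: true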
To force an update of the services defined in the micro_service.yml file:
docker service update --force prod1_web
docker service update --force prod1_celery
docker service update --force prod1_redis
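If only the image version changed, you can also roll the new tag onto a single service directly (the 1.2.2 tag is just an example):
docker service update --image micro_service:1.2.2 prod1_web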
To scale a service without redeploying the stack (scaling to zero stops all of its tasks):
docker service scale prod1_web=0
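To bring it back up later, scale to the desired replica count (2 here is just an example):
docker service scale prod1_web=2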
To check the status of the stack's services:
docker service ls | grep prod1
To remove the prod1_celery service, use:
docker service rm prod1_celery
To check a service's tasks (with full, untruncated error messages):
docker service ps <service_id> --no-trunc
To check the stack's running containers, use:
docker ps | grep prod1
# this lists the matching container processes
To list all containers in a pretty table format:
docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'
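You can also filter by name instead of piping through grep (the name pattern is an example):
docker ps --filter name=prod1 --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'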
To follow a container's log (last 20 lines):
docker logs -f --tail 20 <container_id>
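In a swarm you can also follow the logs of a whole service (all of its tasks) instead of a single container:
docker service logs -f --tail 20 prod1_web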
To get a shell inside a running container (like SSH-ing into it):
docker exec -ti <container_id> bash
For example, to connect the containerized flask_app service to a PostgreSQL server running on your local machine, first find the Docker bridge address on the host:
ip addr show docker0
You will see output like this:
docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:4f:cd:6c:06 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
You need to grab the bridge network, 172.17.0.1/16,
and whitelist it in pg_hba.conf
as:
host all all 172.19.0.0/16 md5
host all all 172.18.0.0/16 md5
host all all 172.17.0.0/16 md5
host all all 127.0.0.1/32 md5
and in postgresql.conf set:
listen_addresses = '*'
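After editing pg_hba.conf and postgresql.conf, restart PostgreSQL so the changes take effect (the service name may differ by distribution):
sudo systemctl restart postgresql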
Then in the docker_local_env file set:
DATABASE_HOST=172.17.0.1
source: here
Check the docker_gwbridge subnet on the server using: ip a
If ufw is enabled, make sure traffic from the docker_gwbridge subnet is allowed through to PostgreSQL:
ufw allow from 172.18.0.0/16 proto tcp to any port 5432
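To verify the connection from inside the container (assuming the psql client is installed in the image; user and database names are placeholders):
docker exec -ti <container_id> psql -h 172.17.0.1 -p 5432 -U <db_user> <db_name>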
The problem: how to copy Docker images from one host to another without using a registry.
To save a Docker image to a gzip-compressed file:
docker save <image_name> | gzip > image_file.tgz
To load the Docker image on the remote host:
docker load -i image_file.tgz
You may need to rename and re-tag the image so it matches the name and tag used on the source host:
docker image tag <image_id> <image_path_name>:<version>
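Putting it all together, a typical transfer looks like this (the remote host and path are placeholders):
docker save micro_service:1.2.1 | gzip > micro_service_1.2.1.tgz
scp micro_service_1.2.1.tgz user@remote-host:/tmp/
docker load -i /tmp/micro_service_1.2.1.tgz
docker image tag <image_id> micro_service:1.2.1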
source: here
To clean up stopped containers, unused networks, dangling images, and build cache:
docker system prune
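To also remove unused (not only dangling) images and unused volumes, add these flags; note that this deletes data you may still want:
docker system prune -a --volumes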
source: here
A Dockerfile snippet that trims image size by disabling pip's version check and cache:
FROM python:3.9-slim
WORKDIR /app
# suppress pip upgrade warning
ARG PIP_DISABLE_PIP_VERSION_CHECK=1
# disable cache directory, image size 2.1GB to 1.9GB
ARG PIP_NO_CACHE_DIR=1
# copy the dependency list before installing
COPY requirements.txt .
RUN pip3 install -r requirements.txt
source: here
If docker service ps shows a task stuck in Pending with an error like this:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
03tmvqz4sdfrprtlpelger5t demo1_web.1 web:version_1.0 Running Pending about a minute ago "no suitable node (1 node not available for new tasks)"
you can solve it by re-initializing Docker Swarm (note: this wipes the node's swarm state, so stacks and overlay networks must be recreated afterwards):
systemctl stop docker
rm -Rf /var/lib/docker/swarm
systemctl start docker
docker swarm init
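After re-initializing, the node has a fresh, empty swarm, so (assuming the setup above) recreate the overlay network and redeploy the stack:
docker network create --driver overlay network_name
docker stack deploy -c micro_service.yml prod1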
To delete all containers, including their volumes, use:
docker rm -vf $(docker ps -a -q)
To delete all the images, use:
docker rmi -f $(docker images -a -q)
Remember: remove all containers before removing the images they were created from.
If you are working on Windows (PowerShell):
$images = docker images -a -q
foreach ($image in $images) { docker image rm $image -f }
Based on the comment from CodeSix, a one-liner for Windows PowerShell:
docker images -a -q | % { docker image rm $_ -f }
For Windows, using the command line (cmd):
for /F %i in ('docker images -a -q') do docker rmi -f %i
source: here