-
Running your first container
docker container run hello-world
-
Docker Images
docker image pull alpine
docker image ls
-
Docker Container Run
docker container run alpine ls -l
docker container run alpine echo "hello from alpine"
docker container run alpine /bin/sh
Nothing happened. Explanation:
Wait, nothing happened! Is that a bug? No! In fact, something did happen. You started a 3rd instance of the alpine container and it ran the command /bin/sh and then exited. You did not supply any additional commands to /bin/sh so it just launched the shell, exited the shell, and then stopped the container.
docker container run -it alpine /bin/sh
A Linux shell opened:
docker container ls
returned an empty table.
docker container ls -a
-
Container Isolation
docker container run -it alpine /bin/ash
and then, inside the shell:
echo "hello world" > hello.txt
ls
Checking isolation:
docker container run alpine ls
As the screenshot shows, hello.txt is missing.
docker container ls -a
Explanation: hello.txt was created in a different container, and each container has its own isolated filesystem. We now start that first container again:
docker container start b1414
docker container ls
docker container exec b1414 ls
From this we see that hello.txt was preserved.
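For reference, the whole isolation check condenses to the sequence below (a sketch of my run; the container ID prefix b1414 comes from my docker container ls -a output and will differ on another host):
docker container run -it alpine /bin/ash    # create hello.txt inside this container, then exit
docker container run alpine ls              # a brand-new container: hello.txt is not there
docker container ls -a                      # find the exited container's ID (b1414 here)
docker container start b1414
docker container exec b1414 ls              # hello.txt is still inside the original container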
-
Image creation from a container
docker container run -ti ubuntu bash
and inside the shell:
apt-get update
apt-get install -y figlet
figlet "hello docker"
This exercise cannot be completed in the sandbox (it hangs on apt-get update at 0% progress), so I did it locally:
docker container ls -a
docker container commit 6679
docker image ls
My other images are also listed here, but the ones of interest are <none> and ubuntu:
docker image tag 673e ourfiglet
docker image ls
docker container run ourfiglet figlet hello
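As a side note (standard docker CLI behaviour, not something the lab shows): docker container commit also accepts a repository and tag directly, which collapses the commit and tag steps into one. The ID below is the container ID from my run:
docker container commit 6679 ourfiglet:latest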
-
Image creation using a Dockerfile
Create the file index.js:
var os = require("os");
var hostname = os.hostname();
console.log("hello from " + hostname);
Create the Dockerfile:
FROM alpine
RUN apk update && apk add nodejs
COPY . /app
WORKDIR /app
CMD ["node","index.js"]
Build the image:
docker image build -t hello:v0.1 .
docker container run hello:v0.1
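CMD only sets the default command, so it can be overridden at run time; a quick sanity check (my own example, assuming the image was built as above) is to ask the container for its node version instead of running index.js:
docker container run hello:v0.1 node --version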
-
Image layers
docker image history 3b50
Add a line to index.js:
echo "console.log(\"this is v0.2\");" >> index.js
Build and test the second version:
docker image build -t hello:v0.2 .
docker container run hello:v0.2
-
Image Inspection
docker image inspect alpine
Result:
[ { "Id": "sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b", "RepoTags": [ "alpine:latest" ], "RepoDigests": [ "alpine@sha256:beefdbd8a1da6d2915566fde36db9db0b524eb737fc57cd1367effd16dc0d06d" ], "Parent": "", "Comment": "", "Created": "2024-09-06T22:20:07.972381771Z", "DockerVersion": "23.0.11", "Author": "", "Config": { "Hostname": "", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": [ "/bin/sh" ], "Image": "sha256:2b00b4bd27e9e55889516b87471798d04fafb613bbbfc4c46589b7ce7f7e75e4", "Volumes": null, "WorkingDir": "", "Entrypoint": null, "OnBuild": null, "Labels": null }, "Architecture": "amd64", "Os": "linux", "Size": 7797760, "GraphDriver": { "Data": { "MergedDir": "/var/lib/docker/overlay2/3f4bef1efdda8d6be6048c9c9f7cdab3e72ec5c6400d79ad72503bbfe61e6778/merged", "UpperDir": "/var/lib/docker/overlay2/3f4bef1efdda8d6be6048c9c9f7cdab3e72ec5c6400d79ad72503bbfe61e6778/diff", "WorkDir": "/var/lib/docker/overlay2/3f4bef1efdda8d6be6048c9c9f7cdab3e72ec5c6400d79ad72503bbfe61e6778/work" }, "Name": "overlay2" }, "RootFS": { "Type": "layers", "Layers": [ "sha256:63ca1fbb43ae5034640e5e6cb3e083e05c290072c5366fcaa9d62435a4cced85" ] }, "Metadata": { "LastTagTime": "0001-01-01T00:00:00Z" } } ]
docker image inspect --format "{{ json .RootFS.Layers }}" alpine
Result:
["sha256:63ca1fbb43ae5034640e5e6cb3e083e05c290072c5366fcaa9d62435a4cced85"]
The same for the hello image by id d5c3 (v0.2):
["sha256:63ca1fbb43ae5034640e5e6cb3e083e05c290072c5366fcaa9d62435a4cced85","sha256:57f3c3f9afa2e6749704982e4b18da1b52c70a8526d599ddd5fc48cbe934f4e5","sha256:70cbc335a21aa755351d8220906db7bb9af02fc8ade70b21189597c199ceff4b","sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"]
We see 4 layers (coming from the Dockerfile).
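To see layer caching at work, the two image versions can be compared side by side (my own check, reusing the command above); the alpine base layer and the RUN layer should be identical between v0.1 and v0.2, and only the layer produced by COPY should differ after editing index.js:
docker image inspect --format "{{ json .RootFS.Layers }}" hello:v0.1
docker image inspect --format "{{ json .RootFS.Layers }}" hello:v0.2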
-
Initialize Your Swarm
docker swarm init --advertise-addr $(hostname -i)
From node 2 I join the swarm:
docker swarm join --token SWMTKN-1-1kdz656m8ubunf0r9ifpr1phv876dpozayrcqjth32mlcxtq9n-chqcam6fmrim2iohfzmue619z 192.168.0.27:2377
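If the join token is not at hand, it can be reprinted on the manager at any time (standard Docker command, not part of the lab transcript):
docker swarm join-token worker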
-
Show Swarm Members
docker node ls
-
Clone the Voting App
git clone https://github.com/docker/example-voting-app
cd example-voting-app
-
Deploy a Stack
cat docker-stack.yml
Result:
# this file is meant for Docker Swarm stacks only
# trying it in compose will fail because of multiple replicas trying to bind to the same port
# Swarm currently does not support Compose Spec, so we'll pin to the older version 3.9
version: "3.9"

services:
  redis:
    image: redis:alpine
    networks:
      - frontend
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
  vote:
    image: dockersamples/examplevotingapp_vote
    ports:
      - 8080:80
    networks:
      - frontend
    deploy:
      replicas: 2
  result:
    image: dockersamples/examplevotingapp_result
    ports:
      - 8081:80
    networks:
      - backend
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      replicas: 2

networks:
  frontend:
  backend:

volumes:
  db-data:
docker stack deploy --compose-file=docker-stack.yml voting_stack
docker stack ls
docker stack services voting_stack
docker service ps voting_stack_vote
-
Scaling An Application
docker service scale voting_stack_vote=5
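Scaling can equivalently be done through docker service update, the same command the swarm lab uses later on:
docker service update --replicas 5 voting_stack_vote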
-
Step 1: Clone the labs GitHub repo
git clone https://github.com/docker/labs
cd labs/security/seccomp
-
Step 2: Test a seccomp profile
docker run --rm -it --cap-add ALL --security-opt apparmor=unconfined --security-opt seccomp=seccomp-profiles/deny.json alpine sh
We get:
docker: Error response from daemon: cannot start a stopped process: unknown.
cat seccomp-profiles/deny.json
Result:
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": []
}
-
Step 3: Run a container with no seccomp profile
docker run --rm -it --security-opt seccomp=unconfined debian:jessie sh
and inside the shell:
whoami
# ...
unshare --map-root-user --user whoami
Then install strace (the apk command implies this part was run in an Alpine-based container) and count the syscalls whoami makes:
apk add --update strace
strace -c -f -S name whoami 2>&1 1>/dev/null | tail -n +3 | head -n -2 | awk '{print $(NF)}'
strace whoami
-
Step 4: Selectively remove syscalls
docker run --rm -it --security-opt seccomp=./seccomp-profiles/default-no-chmod.json alpine sh
and inside the shell:
chmod 777 / -v
We get chmod: /: Operation not permitted. Now the same exercise with the default.json profile:
docker run --rm -it --security-opt seccomp=./seccomp-profiles/default.json alpine sh
cat ./seccomp-profiles/default.json | grep chmod
# ...
cat ./seccomp-profiles/default-no-chmod.json | grep chmod
There are no further exercises after this, only the seccomp profile structure and reference information, so steps 5 and 6 are skipped.
Skipping step 1 (the introduction).
-
Step 2: Working with Docker and capabilities
All the commands finished with an error:
docker run --rm -it --cap-drop $CAP alpine sh
docker run --rm -it --cap-add $CAP alpine sh
docker run --rm -it --cap-drop ALL --cap-add $CAP alpine sh
-
Step 3: Testing Docker capabilities
Run a container:
docker run --rm -it alpine chown nobody /
Next:
docker run --rm -it --cap-drop ALL --cap-add CHOWN alpine chown nobody /
Next:
docker run --rm -it --cap-drop CHOWN alpine chown nobody /
We get chown: /: Operation not permitted. Next:
docker run --rm -it --cap-add chown -u nobody alpine chown nobody /
We get chown: /: Operation not permitted. Explanation:
The above command fails because Docker does not yet support adding capabilities to non-root users.
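A quick way to see which capabilities a container process actually ends up with (a standard /proc check, not part of the lab transcript) is to print the raw capability bitmasks of PID 1 inside the container; the hex values can be decoded with capsh --decode:
docker run --rm alpine sh -c 'grep ^Cap /proc/1/status'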
-
Step 4: Extra for experts
docker run --rm -it alpine sh -c 'apk add -U libcap; capsh --print'
This fails in the sandbox, so I ran it locally:
docker run --rm -it alpine sh -c 'apk add -U libcap;capsh --help'
Likewise, ran locally:
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/main/x86_64/APKINDEX.tar.gz fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/community/x86_64/APKINDEX.tar.gz (1/5) Installing libcap2 (2.70-r0) (2/5) Installing libcap-getcap (2.70-r0) (3/5) Installing libcap-setcap (2.70-r0) (4/5) Installing libcap-utils (2.70-r0) (5/5) Installing libcap (2.70-r0) Executing busybox-1.36.1-r29.trigger OK: 8 MiB in 19 packages usage: capsh [args ...] --addamb=xxx add xxx,... capabilities to ambient set --cap-uid=<n> use libcap cap_setuid() to change uid --caps=xxx set caps as per cap_from_text() --chroot=path chroot(2) to this path --current show current caps and IAB vectors --decode=xxx decode a hex string to a list of caps --delamb=xxx remove xxx,... capabilities from ambient --drop=xxx drop xxx,... caps from bounding set --explain=xxx explain what capability xxx permits --forkfor=<n> fork and make child sleep for <n> sec --gid=<n> set gid to <n> (hint: id <username>) --groups=g,... set the supplemental groups --has-a=xxx exit 1 if capability xxx not ambient --has-b=xxx exit 1 if capability xxx not dropped --has-ambient exit 1 unless ambient vector supported --has-i=xxx exit 1 if capability xxx not inheritable --has-p=xxx exit 1 if capability xxx not permitted --has-no-new-privs exit 1 if privs not limited --help, -h this message (or try 'man capsh') --iab=... use cap_iab_from_text() to set iab --inh=xxx set xxx,.. inheritable set --inmode=<xxx> exit 1 if current mode is not <xxx> --is-uid=<n> exit 1 if uid != <n> --is-gid=<n> exit 1 if gid != <n> --keep=<n> set keep-capability bit to <n> --killit=<n> send signal(n) to child --license display license info --mode display current libcap mode --mode=<xxx> set libcap mode to <xxx> --modes list libcap named modes --no-new-privs set sticky process privilege limiter --noamb reset (drop) all ambient capabilities --noenv no fixup of env vars (for --user) --print display capability relevant state --quiet if first argument skip max cap check --secbits=<n> write a new value for securebits --shell=/xx/yy use /xx/yy instead of /bin/bash for -- --strict toggle --caps, --drop and --inh fixups --suggest=text search cap descriptions for text --supports=xxx exit 1 if capability xxx unsupported --uid=<n> set uid to <n> (hint: id <username>) --user=<name> set uid,gid and groups to that of user == re-exec(capsh) with args as for -- =+ cap_launch capsh with args as for -+ -- remaining arguments are for /bin/bash -+ cap_launch /bin/bash with remaining args (without -- [capsh] will simply exit(0))
The remaining steps could not be completed because the commands above did not work in the sandbox.
-
Step 1: The Docker Network Command
docker network
-
Step 2: List networks
docker network ls
-
Step 3: Inspect a network
docker network inspect bridge
Result:
[ { "Name": "bridge", "Id": "c875fc50f93e1d06493278bd8f72903cfee00ac36d6379f54d29f9002410fb4e", "Created": "2024-11-07T12:16:25.338668294Z", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": {}, "Options": { "com.docker.network.bridge.default_bridge": "true", "com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]
-
Step 4: List network driver plugins
docker info
Result:
Client: Version: 27.3.1 Context: default Debug Mode: false Plugins: buildx: Docker Buildx (Docker Inc.) Version: v0.17.1 Path: /usr/local/libexec/docker/cli-plugins/docker-buildx compose: Docker Compose (Docker Inc.) Version: v2.29.7 Path: /usr/local/libexec/docker/cli-plugins/docker-compose scout: Docker Scout (Docker Inc.) Version: v1.0.9 Path: /usr/lib/docker/cli-plugins/docker-scout Server: Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 0 Server Version: 27.3.1 Storage Driver: overlay2 Backing Filesystem: xfs Supports d_type: true Using metacopy: false Native Overlay Diff: true userxattr: false Logging Driver: json-file Cgroup Driver: cgroupfs Cgroup Version: 1 Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog Swarm: inactive Runtimes: io.containerd.runc.v2 runc Default Runtime: runc Init Binary: docker-init containerd version: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c runc version: v1.1.14-0-g2c9f560 init version: de40ad0 Security Options: apparmor seccomp Profile: builtin Kernel Version: 4.4.0-210-generic Operating System: Alpine Linux v3.20 (containerized) OSType: linux Architecture: x86_64 CPUs: 8 Total Memory: 31.42GiB Name: node1 ID: 60ea64c4-664c-4ee9-9bba-50771881ab2f Docker Root Dir: /var/lib/docker Debug Mode: true File Descriptors: 27 Goroutines: 48 System Time: 2024-11-07T12:18:01.593365424Z EventsListeners: 0 Experimental: true Insecure Registries: 127.0.0.1 127.0.0.0/8 Registry Mirrors: https://mirror.gcr.io/ Live Restore Enabled: false Product License: Community Engine [DEPRECATION NOTICE]: API is accessible on http://0.0.0.0:2375 without encryption. Access to the remote API is equivalent to root access on the host. Refer to the 'Docker daemon attack surface' section in the documentation for more information: https://docs.docker.com/go/attack-surface/ In future versions this will be a hard failure preventing the daemon from starting! Learn more at: https://docs.docker.com/go/api-security/ WARNING: No swap limit support WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled
-
Step 1: The Basics
docker network ls
The output is the same as in section 1, step 2.
apk update
apk add bridge
brctl show
ip a
-
Step 2: Connect a container
docker run -dt ubuntu sleep infinity
docker ps
brctl show
docker network inspect bridge
Result:
[ { "Name": "bridge", "Id": "c875fc50f93e1d06493278bd8f72903cfee00ac36d6379f54d29f9002410fb4e", "Created": "2024-11-07T12:16:25.338668294Z", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "49650042389bcc1c250aa6e00b2f8a5158508d75428fd2e04083737c9cd7c1a1": { "Name": "happy_turing", "EndpointID": "1a04c48d8b2208cc732765228cefcb7fe5353fa563a36fbe4cf5a4df957fe5d7", "MacAddress": "02:42:ac:11:00:02", "IPv4Address": "172.17.0.2/16", "IPv6Address": "" } }, "Options": { "com.docker.network.bridge.default_bridge": "true", "com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]
-
Step 3: Test network connectivity
ping -c5 172.17.0.2
docker exec -it 4965 /bin/bash
and inside the shell:
apt-get update && apt-get install -y iputils-ping
ping -c5 www.github.com
We get:
PING www.docker.com (104.239.220.248) 56(84) bytes of data.
64 bytes from 104.239.220.248: icmp_seq=1 ttl=45 time=38.1 ms
64 bytes from 104.239.220.248: icmp_seq=2 ttl=45 time=37.3 ms
64 bytes from 104.239.220.248: icmp_seq=3 ttl=45 time=37.5 ms
64 bytes from 104.239.220.248: icmp_seq=4 ttl=45 time=37.5 ms
64 bytes from 104.239.220.248: icmp_seq=5 ttl=45 time=37.5 ms

--- www.docker.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 37.372/37.641/38.143/0.314 ms
Stop the container with docker stop 4965.
-
Step 4: Configure NAT for external connectivity
docker run --name web1 -d -p 8080:80 nginx
docker ps
curl 127.0.0.1:8080
Result:
<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>
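The published port mapping can also be confirmed directly from the CLI (standard docker port command, not shown in the lab transcript):
docker port web1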
-
Step 1: The Basics
On node 1:
docker swarm init --advertise-addr $(hostname -i)
On node 2:
docker swarm join --token SWMTKN-1-2x6cocnxe35pbh3yirziltq45cdtbgfrzls1idl1t9tmiwiua1-89qp9r6owochmv471gs1q6qqj 192.168.0.12:2377
docker node ls
-
Step 2: Create an overlay network
docker network create -d overlay overnet
Result: lsnly17ojzcka81zmydvf90b3. Run on both nodes:
docker network ls
docker network inspect overnet
Result:
[ { "Name": "overnet", "Id": "lsnly17ojzcka81zmydvf90b3", "Created": "2024-11-07T12:30:47.145611357Z", "Scope": "swarm", "Driver": "overlay", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "10.0.1.0/24", "Gateway": "10.0.1.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": null, "Options": { "com.docker.network.driver.overlay.vxlanid_list": "4097" }, "Labels": null } ]
-
Step 3: Create a service
docker service create --name myservice \
  --network overnet \
  --replicas 2 \
  ubuntu sleep infinity
docker service ls
docker service ps myservice
Now overnet has appeared on the second node:
docker network ls
Still on the second node:
docker network inspect overnet
Result:
[ { "Name": "overnet", "Id": "lsnly17ojzcka81zmydvf90b3", "Created": "2024-11-07T12:33:31.394699103Z", "Scope": "swarm", "Driver": "overlay", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "10.0.1.0/24", "Gateway": "10.0.1.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "b3e520207ce34282c72f7e2cb2531bea337a3d5f1647e5dd3a9f78c2d7e5525d": { "Name": "myservice.2.pjlxv1tfutlwt2xkgl7d0ggpf", "EndpointID": "f7f8af6c89597fa1995a092d03012c8606a8f37cfc2ee0ada9b0041da3063219", "MacAddress": "02:42:0a:00:01:04", "IPv4Address": "10.0.1.4/24", "IPv6Address": "" }, "lb-overnet": { "Name": "overnet-endpoint", "EndpointID": "cbc9e38e538cc87cd05d5b466d7f2a49413c526f4606df7fcdb22e261acf300e", "MacAddress": "02:42:0a:00:01:06", "IPv4Address": "10.0.1.6/24", "IPv6Address": "" } }, "Options": { "com.docker.network.driver.overlay.vxlanid_list": "4097" }, "Labels": {}, "Peers": [ { "Name": "b40f4d5d441b", "IP": "192.168.0.13" }, { "Name": "8e75a77948eb", "IP": "192.168.0.12" } ] } ]
-
Step 4: Test the network
On the first node:
docker network inspect overnet
Result:
[ { "Name": "overnet", "Id": "lsnly17ojzcka81zmydvf90b3", "Created": "2024-11-07T12:33:31.395356408Z", "Scope": "swarm", "Driver": "overlay", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "10.0.1.0/24", "Gateway": "10.0.1.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "e887f28ea4e02b19bbe113103c039f28b9b7adf9b0ff27b5312e57b27b94f872": { "Name": "myservice.1.7ah79rsn6g392rzxyl2xtsmij", "EndpointID": "703074d8ef81517bc220d1ee23e6782fa11ebf1e4e9ce54f4c75e223b21e5bb6", "MacAddress": "02:42:0a:00:01:03", "IPv4Address": "10.0.1.3/24", "IPv6Address": "" }, "lb-overnet": { "Name": "overnet-endpoint", "EndpointID": "abb5d9cbc69663bdcb92b3024c5a8f113a84bc3483723812cc553dc2a34d233a", "MacAddress": "02:42:0a:00:01:05", "IPv4Address": "10.0.1.5/24", "IPv6Address": "" } }, "Options": { "com.docker.network.driver.overlay.vxlanid_list": "4097" }, "Labels": {}, "Peers": [ { "Name": "8e75a77948eb", "IP": "192.168.0.12" }, { "Name": "b40f4d5d441b", "IP": "192.168.0.13" } ] } ]
docker ps
After this, the utilities needed for ping could not be installed, so connectivity between the nodes could not be verified.
-
Step 5: Test service discovery
cat /etc/resolv.conf
Result:
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 127.0.0.11
options ndots:0

# Based on host file: '/etc/resolv.conf' (internal resolver)
# ExtServers: [host(127.0.0.11)]
# Overrides: []
# Option ndots from: host
The ping also failed.
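Had the packages installed, service discovery could have been checked from inside one of the tasks roughly like this (hypothetical commands, since apt-get did not work in the sandbox; the service name should resolve to the VIP 10.0.1.2 shown in the service inspect output below):
docker exec -it <container-id> /bin/bash
apt-get update && apt-get install -y dnsutils iputils-ping
nslookup myservice
ping -c3 myservice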
docker service inspect myservice
Result:
[ { "ID": "6bk832nlxs1g45b5tlcwkw9h4", "Version": { "Index": 20 }, "CreatedAt": "2024-11-07T12:33:31.228257439Z", "UpdatedAt": "2024-11-07T12:33:31.230740856Z", "Spec": { "Name": "myservice", "Labels": {}, "TaskTemplate": { "ContainerSpec": { "Image": "ubuntu:latest@sha256:99c35190e22d294cdace2783ac55effc69d32896daaa265f0bbedbcde4fbe3e5", "Args": [ "sleep", "infinity" ], "Init": false, "StopGracePeriod": 10000000000, "DNSConfig": {}, "Isolation": "default" }, "Resources": { "Limits": {}, "Reservations": {} }, "RestartPolicy": { "Condition": "any", "Delay": 5000000000, "MaxAttempts": 0 }, "Placement": { "Platforms": [ { "Architecture": "amd64", "OS": "linux" }, { "Architecture": "unknown", "OS": "unknown" }, { "OS": "linux" }, { "Architecture": "unknown", "OS": "unknown" }, { "Architecture": "arm64", "OS": "linux" }, { "Architecture": "unknown", "OS": "unknown" }, { "Architecture": "ppc64le", "OS": "linux" }, { "Architecture": "unknown", "OS": "unknown" }, { "Architecture": "riscv64", "OS": "linux" }, { "Architecture": "unknown", "OS": "unknown" }, { "Architecture": "s390x", "OS": "linux" }, { "Architecture": "unknown", "OS": "unknown" } ] }, "Networks": [ { "Target": "lsnly17ojzcka81zmydvf90b3" } ], "ForceUpdate": 0, "Runtime": "container" }, "Mode": { "Replicated": { "Replicas": 2 } }, "UpdateConfig": { "Parallelism": 1, "FailureAction": "pause", "Monitor": 5000000000, "MaxFailureRatio": 0, "Order": "stop-first" }, "RollbackConfig": { "Parallelism": 1, "FailureAction": "pause", "Monitor": 5000000000, "MaxFailureRatio": 0, "Order": "stop-first" }, "EndpointSpec": { "Mode": "vip" } }, "Endpoint": { "Spec": { "Mode": "vip" }, "VirtualIPs": [ { "NetworkID": "lsnly17ojzcka81zmydvf90b3", "Addr": "10.0.1.2/24" } ] } } ]
-
Cleaning Up
docker service rm myservice
docker ps
On both nodes:
docker swarm leave --force
Skipping the first section (the introduction).
docker run -dt ubuntu sleep infinity
docker ps
-
Step 2.1 - Create a Manager node
docker swarm init --advertise-addr $(hostname -i)
docker info
Result:
Client: Version: 27.3.1 Context: default Debug Mode: false Plugins: buildx: Docker Buildx (Docker Inc.) Version: v0.17.1 Path: /usr/local/libexec/docker/cli-plugins/docker-buildx compose: Docker Compose (Docker Inc.) Version: v2.29.7 Path: /usr/local/libexec/docker/cli-plugins/docker-compose scout: Docker Scout (Docker Inc.) Version: v1.0.9 Path: /usr/lib/docker/cli-plugins/docker-scout Server: Containers: 1 Running: 1 Paused: 0 Stopped: 0 Images: 1 Server Version: 27.3.1 Storage Driver: overlay2 Backing Filesystem: xfs Supports d_type: true Using metacopy: false Native Overlay Diff: true userxattr: false Logging Driver: json-file Cgroup Driver: cgroupfs Cgroup Version: 1 Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog Swarm: active NodeID: w20d42326gamn3rqovusr35tu Is Manager: true ClusterID: sh5oibcrku0o13w6zsh09h1lh Managers: 1 Nodes: 1 Data Path Port: 4789 Orchestration: Task History Retention Limit: 5 Raft: Snapshot Interval: 10000 Number of Old Snapshots to Retain: 0 Heartbeat Tick: 1 Election Tick: 10 Dispatcher: Heartbeat Period: 5 seconds CA Configuration: Expiry Duration: 3 months Force Rotate: 0 Autolock Managers: false Root Rotation In Progress: false Node Address: 192.168.0.26 Manager Addresses: 192.168.0.26:2377 Runtimes: io.containerd.runc.v2 runc Default Runtime: runc Init Binary: docker-init containerd version: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c runc version: v1.1.14-0-g2c9f560 init version: de40ad0 Security Options: apparmor seccomp Profile: builtin Kernel Version: 4.4.0-210-generic Operating System: Alpine Linux v3.20 (containerized) OSType: linux Architecture: x86_64 CPUs: 8 Total Memory: 31.42GiB Name: node1 ID: acc524b5-74cb-40e3-a138-01d2dd1511df Docker Root Dir: /var/lib/docker Debug Mode: true File Descriptors: 48 Goroutines: 173 System Time: 2024-11-07T12:55:02.118908059Z EventsListeners: 0 Experimental: true Insecure Registries: 127.0.0.1 127.0.0.0/8 Registry Mirrors: https://mirror.gcr.io/ Live Restore Enabled: false Product License: Community Engine [DEPRECATION NOTICE]: API is accessible on http://0.0.0.0:2375 without encryption. Access to the remote API is equivalent to root access on the host. Refer to the 'Docker daemon attack surface' section in the documentation for more information: https://docs.docker.com/go/attack-surface/ In future versions this will be a hard failure preventing the daemon from starting! Learn more at: https://docs.docker.com/go/api-security/ WARNING: No swap limit support WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled
-
Step 2.2 - Join Worker nodes to the Swarm
On nodes 2 and 3:
docker swarm join --token SWMTKN-1-2664lcgfxe03ilu8x1y4suaosjhas8ndeooixbt9m90om7coda-0729rbthdb2fbd90zunl8rg6u 192.168.0.26:2377
docker node ls
-
Step 3.1 - Deploy the application components as Docker services
docker service create --name sleep-app ubuntu sleep infinity
docker service ls
docker service update --replicas 7 sleep-app
docker service ps sleep-app
Now scale the number of replicas back down:
docker service update --replicas 4 sleep-app
Check with docker service ps sleep-app:
docker node ls
On node 2:
docker ps
On node 1:
docker node update --availability drain ftdq1z24qk2b7r9fbi30dl2a8
docker node ls
From node 2:
docker ps
On node 1:
docker service ps sleep-app
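Before removing the service, the drained node could be put back into scheduling with the standard availability flag (not part of my run; the node ID is the one drained above):
docker node update --availability active ftdq1z24qk2b7r9fbi30dl2a8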
docker service rm sleep-app
docker ps
docker kill 3ae4
On all nodes:
docker swarm leave --force
This section only lists resources; there are no exercises.
-
Task 0: Prerequisites
git clone https://github.com/dockersamples/linux_tweet_app
-
Task 1: Run some simple Docker containers
-
Run a single task in an Alpine Linux container
docker container run alpine hostname
docker container ls --all
-
Run an interactive Ubuntu container
docker container run --interactive --tty --rm ubuntu bash
and inside the shell:
ls /
ps aux
cat /etc/issue
On the host:
cat /etc/issue
-
Run a background MySQL container
docker container run \
  --detach \
  --name mydb \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  mysql:latest
docker container ls
docker container logs mydb
Result:
2024-11-07 13:18:46+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 9.1.0-1.el9 started. 2024-11-07 13:18:47+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 2024-11-07 13:18:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 9.1.0-1.el9 started. 2024-11-07 13:18:47+00:00 [Note] [Entrypoint]: Initializing database files 2024-11-07T13:18:47.698946Z 0 [System] [MY-015017] [Server] MySQL Server Initialization - start. 2024-11-07T13:18:47.702360Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 9.1.0) initializing of server inprogress as process 80 2024-11-07T13:18:47.789829Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2024-11-07T13:18:48.255485Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2024-11-07T13:18:52.190134Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option. 2024-11-07T13:18:55.276921Z 0 [System] [MY-015018] [Server] MySQL Server Initialization - end. 2024-11-07 13:18:55+00:00 [Note] [Entrypoint]: Database files initialized 2024-11-07 13:18:55+00:00 [Note] [Entrypoint]: Starting temporary server 2024-11-07T13:18:55.600490Z 0 [System] [MY-015015] [Server] MySQL Server - start. 2024-11-07T13:18:56.080906Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 9.1.0) starting as process 121 2024-11-07T13:18:56.177573Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2024-11-07T13:18:56.731441Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2024-11-07T13:18:57.587481Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2024-11-07T13:18:57.587554Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. 2024-11-07T13:18:57.599470Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2024-11-07T13:18:57.650569Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock 2024-11-07T13:18:57.651090Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '9.1.0' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL. 2024-11-07 13:18:57+00:00 [Note] [Entrypoint]: Temporary server started. '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock' Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it. Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it. 2024-11-07 13:19:03+00:00 [Note] [Entrypoint]: Stopping temporary server 2024-11-07T13:19:04.018565Z 10 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 9.1.0). 2024-11-07T13:19:04.685335Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 9.1.0) MySQLCommunity Server - GPL. 2024-11-07T13:19:04.685394Z 0 [System] [MY-015016] [Server] MySQL Server - end. 
2024-11-07 13:19:05+00:00 [Note] [Entrypoint]: Temporary server stopped 2024-11-07 13:19:05+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up. 2024-11-07T13:19:05.063418Z 0 [System] [MY-015015] [Server] MySQL Server - start. 2024-11-07T13:19:05.418145Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 9.1.0) starting as process 1 2024-11-07T13:19:05.447496Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2024-11-07T13:19:06.140884Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. 2024-11-07T13:19:06.990915Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2024-11-07T13:19:06.991005Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. 2024-11-07T13:19:07.002013Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2024-11-07T13:19:07.065253Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock 2024-11-07T13:19:07.065678Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '9.1.0' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
docker container top mydb
docker exec -it mydb \
  mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version
docker exec -it mydb sh
and inside the shell:
mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version
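From that same shell the server can also be queried; a minimal check (my own example, using the password variable the lab already relies on) is:
mysql --user=root --password=$MYSQL_ROOT_PASSWORD -e "SHOW DATABASES;"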
-
-
Task 2: Package and run a custom app using Docker
-
Build a simple website image
cd ~/linux_tweet_app
cat Dockerfile
Result:
FROM nginx:latest
COPY index.html /usr/share/nginx/html
COPY linux.png /usr/share/nginx/html
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
export DOCKERID=lilnikky
echo $DOCKERID
docker image build --tag $DOCKERID/linux_tweet_app:1.0 .
docker container run \
  --detach \
  --publish 80:80 \
  --name linux_tweet_app \
  $DOCKERID/linux_tweet_app:1.0
docker container rm --force linux_tweet_app
-
-
Task 3: Modify a running website
-
Start our web app with a bind mount
docker container run \
  --detach \
  --publish 80:80 \
  --name linux_tweet_app \
  --mount type=bind,source="$(pwd)",target=/usr/share/nginx/html \
  $DOCKERID/linux_tweet_app:1.0
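That the bind mount is in place can be checked from the container metadata (standard inspect query, not part of the lab):
docker container inspect --format "{{ json .Mounts }}" linux_tweet_app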
-
Modify the running website
cp index-new.html index.html
docker rm --force linux_tweet_app
docker container run \
  --detach \
  --publish 80:80 \
  --name linux_tweet_app \
  $DOCKERID/linux_tweet_app:1.0
docker rm --force linux_tweet_app
-
Update the image
docker image build --tag $DOCKERID/linux_tweet_app:2.0 .
docker image ls
-
Test the new version
docker container run \
  --detach \
  --publish 80:80 \
  --name linux_tweet_app \
  $DOCKERID/linux_tweet_app:2.0
The orange background again:
Start another container on a different port with a different name (the old version):
docker container run \
  --detach \
  --publish 8080:80 \
  --name old_linux_tweet_app \
  $DOCKERID/linux_tweet_app:1.0
And now, at the other URL, the old version with the blue background:
-
Push your images to Docker Hub
docker image ls -f reference="$DOCKERID/*"
docker login -u lilnikky # enter password
docker image push $DOCKERID/linux_tweet_app:1.0
docker image push $DOCKERID/linux_tweet_app:2.0
The pushed images can be checked on Docker Hub.
-
-
Stage Setup
git clone https://github.com/ibnesayeed/linkextractor.git
cd linkextractor
git checkout demo
-
Step 0: Basic Link Extractor Script
git checkout step0
tree
Result:
.
├── README.md
└── linkextractor.py

1 directory, 2 files
cat linkextractor.py
Result:
#!/usr/bin/env python

import sys
import requests
from bs4 import BeautifulSoup

res = requests.get(sys.argv[-1])
soup = BeautifulSoup(res.text, "html.parser")
for link in soup.find_all("a"):
    print(link.get("href"))
./linkextractor.py http://example.com/
Result: bash: ./linkextractor.py: Permission denied.
ls -l linkextractor.py
Result:
-rw-r--r-- 1 root root 220 Nov 8 13:35 linkextractor.py
python3 linkextractor.py
Result:
Traceback (most recent call last):
  File "/root/linkextractor/linkextractor.py", line 4, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
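Outside of a container these two errors would be fixed by making the script executable and installing its dependencies by hand, which is exactly what the Dockerfile in the next step automates (a sketch, not something the lab asks for):
chmod +x linkextractor.py
pip install requests beautifulsoup4
./linkextractor.py http://example.com/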
-
Step 1: Containerized Link Extractor Script
git checkout step1
tree
Result:
.
├── Dockerfile
├── README.md
└── linkextractor.py

1 directory, 3 files
cat Dockerfile
Result:
FROM python:3
LABEL maintainer="Sawood Alam <@ibnesayeed>"

RUN pip install beautifulsoup4
RUN pip install requests

WORKDIR /app
COPY linkextractor.py /app/
RUN chmod a+x linkextractor.py

ENTRYPOINT ["./linkextractor.py"]
docker image build -t linkextractor:step1 .
docker image ls
This cannot be run in the sandbox (it hangs on pip install), so I did it locally; the rest of the work also continues locally. docker image ls also shows my other images, but only linkextractor is of interest here.
docker container run -it --rm linkextractor:step1 http://example.com/
Result: https://www.iana.org/domains/example.
docker container run -it --rm linkextractor:step1 https://training.play-with-docker.com/
Result:
/ /about/ #ops #dev /ops-stage1 /ops-stage2 /ops-stage3 /dev-stage1 /dev-stage2 /dev-stage3 /alacart https://twitter.com/intent/tweet?text=Play with Docker Classroom&url=https://training.play-with-docker.com/&via=docker&related=docker https://facebook.com/sharer.php?u=https://training.play-with-docker.com/ https://plus.google.com/share?url=https://training.play-with-docker.com/ http://www.linkedin.com/shareArticle?mini=true&url=https://training.play-with-docker.com/&title=Play%20with%20Docker%20Classroom&source=https://training.play-with-docker.com https://www.docker.com/dockercon/ https://www.docker.com/dockercon/ https://dockr.ly/slack https://www.docker.com/legal/docker-terms-service https://www.docker.com https://www.facebook.com/docker.run https://twitter.com/docker https://www.github.com/play-with-docker/play-with-docker.github.io
-
Step 2: Link Extractor Module with Full URI and Anchor Text
git checkout step2
tree
Result:
.
├── Dockerfile
├── linkextractor.py
└── README.md

0 directories, 3 files
cat linkextractor.py
Result:
#!/usr/bin/env python

import sys
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def extract_links(url):
    res = requests.get(url)
    soup = BeautifulSoup(res.text, "html.parser")
    base = url
    # TODO: Update base if a <base> element is present with the href attribute
    links = []
    for link in soup.find_all("a"):
        links.append({
            "text": " ".join(link.text.split()) or "[IMG]",
            "href": urljoin(base, link.get("href"))
        })
    return links

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("\nUsage:\n\t{} <URL>\n".format(sys.argv[0]))
        sys.exit(1)
    for link in extract_links(sys.argv[-1]):
        print("[{}]({})".format(link["text"], link["href"]))
docker image build -t linkextractor:step2 .
docker image ls
docker container run -it --rm linkextractor:step2 https://training.play-with-docker.com/
Result:
[Play with Docker classroom](https://training.play-with-docker.com/) [About](https://training.play-with-docker.com/about/) [IT Pros and System Administrators](https://training.play-with-docker.com/#ops) [Developers](https://training.play-with-docker.com/#dev) [Stage 1: The Basics](https://training.play-with-docker.com/ops-stage1) [Stage 2: Digging Deeper](https://training.play-with-docker.com/ops-stage2) [Stage 3: Moving to Production](https://training.play-with-docker.com/ops-stage3) [Stage 1: The Basics](https://training.play-with-docker.com/dev-stage1) [Stage 2: Digging Deeper](https://training.play-with-docker.com/dev-stage2) [Stage 3: Moving to Staging](https://training.play-with-docker.com/dev-stage3) [Full list of individual labs](https://training.play-with-docker.com/alacart) [[IMG]](https://twitter.com/intent/tweet?text=Play with Docker Classroom&url=https://training.play-with-docker.com/&via=docker&related=docker) [[IMG]](https://facebook.com/sharer.php?u=https://training.play-with-docker.com/) [[IMG]](https://plus.google.com/share?url=https://training.play-with-docker.com/) [[IMG]](http://www.linkedin.com/shareArticle?mini=true&url=https://training.play-with-docker.com/&title=Play%20with%20Docker%20Classroom&source=https://training.play-with-docker.com) [[IMG]](https://www.docker.com/dockercon/) [Sign up today](https://www.docker.com/dockercon/) [Register here](https://dockr.ly/slack) [here](https://www.docker.com/legal/docker-terms-service) [[IMG]](https://www.docker.com) [[IMG]](https://www.facebook.com/docker.run) [[IMG]](https://twitter.com/docker) [[IMG]](https://www.github.com/play-with-docker/play-with-docker.github.io)
Meanwhile, the old step1 image still produces the old output:
docker container run -it --rm linkextractor:step1 https://training.play-with-docker.com/
Result:
/ /about/ #ops #dev /ops-stage1 /ops-stage2 /ops-stage3 /dev-stage1 /dev-stage2 /dev-stage3 /alacart https://twitter.com/intent/tweet?text=Play with Docker Classroom&url=https://training.play-with-docker.com/&via=docker&related=docker https://facebook.com/sharer.php?u=https://training.play-with-docker.com/ https://plus.google.com/share?url=https://training.play-with-docker.com/ http://www.linkedin.com/shareArticle?mini=true&url=https://training.play-with-docker.com/&title=Play%20with%20Docker%20Classroom&source=https://training.play-with-docker.com https://www.docker.com/dockercon/ https://www.docker.com/dockercon/ https://dockr.ly/slack https://www.docker.com/legal/docker-terms-service https://www.docker.com https://www.facebook.com/docker.run https://twitter.com/docker https://www.github.com/play-with-docker/play-with-docker.github.io
-
Step 3: Link Extractor API Service
git checkout step3
tree
Result:
.
├── Dockerfile
├── linkextractor.py
├── main.py
├── README.md
└── requirements.txt

0 directories, 5 files
cat Dockerfile
Result:
FROM python:3
LABEL maintainer="Sawood Alam <@ibnesayeed>"

WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt

COPY *.py /app/
RUN chmod a+x *.py

CMD ["./main.py"]
cat main.py
Result:
#!/usr/bin/env python

from flask import Flask
from flask import request
from flask import jsonify
from linkextractor import extract_links

app = Flask(__name__)

@app.route("/")
def index():
    return "Usage: http://<hostname>[:<prt>]/api/<url>"

@app.route("/api/<path:url>")
def api(url):
    qs = request.query_string.decode("utf-8")
    if qs != "":
        url += "?" + qs
    links = extract_links(url)
    return jsonify(links)

app.run(host="0.0.0.0")
docker image build -t linkextractor:step3 .
docker container run -d -p 5000:5000 --name=linkextractor linkextractor:step3
docker container ls
curl -i http://localhost:5000/api/http://example.com/
Result:
HTTP/1.1 200 OK
Server: Werkzeug/3.1.3 Python/3.13.0
Date: Fri, 08 Nov 2024 16:52:09 GMT
Content-Type: application/json
Content-Length: 79
Connection: close

[{"href":"https://www.iana.org/domains/example","text":"More information..."}]
docker container logs linkextractor
Result:
 * Serving Flask app 'main'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://172.17.0.2:5000
Press CTRL+C to quit
172.17.0.1 - - [08/Nov/2024 16:52:09] "GET /api/http://example.com/ HTTP/1.1" 200 -
Then remove the container with docker container rm -f linkextractor.
-
Step 4: Link Extractor API and Web Front End Services
git checkout step4
tree
Result:
.
├── api
│   ├── Dockerfile
│   ├── linkextractor.py
│   ├── main.py
│   └── requirements.txt
├── docker-compose.yml
├── README.md
└── www
    └── index.php

2 directories, 7 files
cat docker-compose.yml
Result:
version: '3'

services:
  api:
    image: linkextractor-api:step4-python
    build: ./api
    ports:
      - "5000:5000"
  web:
    image: php:7-apache
    ports:
      - "80:80"
    environment:
      - API_ENDPOINT=http://api:5000/api/
    volumes:
      - ./www:/var/www/html
cat www/index.php
Result:
<!DOCTYPE html> <?php $api_endpoint = $_ENV["API_ENDPOINT"] ?: "http://localhost:5000/api/"; $url = ""; if(isset($_GET["url"]) && $_GET["url"] != "") { $url = $_GET["url"]; $json = @file_get_contents($api_endpoint . $url); if($json == false) { $err = "Something is wrong with the URL: " . $url; } else { $links = json_decode($json, true); $domains = []; foreach($links as $link) { array_push($domains, parse_url($link["href"], PHP_URL_HOST)); } $domainct = @array_count_values($domains); arsort($domainct); } } ?> <html> <head> <meta charset="utf-8"> <title>Link Extractor</title> <style media="screen"> html { background: #EAE7D6; font-family: sans-serif; } body { margin: 0; } h1 { padding: 10px; margin: 0 auto; color: #EAE7D6; max-width: 600px; } h1 a { text-decoration: none; color: #EAE7D6; } h2 { background: #082E41; color: #EAE7D6; margin: -10px; padding: 10px; } p { margin: 25px 5px 5px; } section { max-width: 600px; margin: 10px auto; padding: 10px; border: 1px solid #082E41; } div.header { background: #082E41; margin: 0; } div.footer { background: #082E41; margin: 0; padding: 5px; } .footer p { margin: 0 auto; max-width: 600px; color: #EAE7D6; text-align: center; } .footer p a { color: #24C2CB; text-decoration: none; } .error { color: #DA2536; } form { display: flex; } input { font-size: 20px; padding: 3px; height: 40px; } input.text { box-sizing:border-box; flex-grow: 1; border-color: #082E41; } input.button { width: 150px; background: #082E41; border-color: #082E41; color: #EAE7D6; } table { width: 100%; text-align: left; margin-top: 10px; } table th, table td { padding: 3px; } table th:last-child, table td:last-child { width: 70px; text-align: right; } table th { border-top: 1px solid #082E41; border-bottom: 1px solid #082E41; } table tr:last-child td { border-top: 1px solid #082E41; border-bottom: 1px solid #082E41; } </style> </head> <body> <div class="header"> <h1><a href="/">Link Extractor</a></h1> </div> <section> <form action="/"> <input class="text" type="text" name="url" placeholder="http://example.com/" value="<?php echo $url; ?>"> <input class="button" type="submit" value="Extract Links"> </form> </section> <?php if(isset($err)): ?> <section> <h2>Error</h2> <p class="error"><?php echo $err; ?></p> </section> <?php endif; ?> <?php if($url != "" && !isset($err)): ?> <section> <h2>Summary</h2> <p> <strong>Page:</strong> <?php echo "<a href=\"" . $url . "\">" . $url . "</a>"; ?> </p> <table> <tr> <th>Domain</th> <th># Links</th> </tr> <?php foreach($domainct as $key => $value) { echo "<tr>"; echo "<td>" . $key . "</td>"; echo "<td>" . $value . "</td>"; echo "</tr>"; } ?> <tr> <td><strong>Total</strong></td> <td><strong><?php echo count($links); ?></strong></td> </tr> </table> </section> <section> <h2>Links</h2> <ul> <?php foreach($links as $link) { echo "<li><a href=\"" . $link["href"] . "\">" . $link["text"] . "</a></li>"; } ?> </ul> </section> <?php endif; ?> <div class="footer"> <p><a href="https://github.com/ibnesayeed/linkextractor">Link Extractor</a> by <a href="https://twitter.com/ibnesayeed">@ibnesayeed</a> from <a href="https://ws-dl.cs.odu.edu/">WS-DL, ODU</a> </p> </div> </body> </html>
docker-compose up -d --build
docker container ls
curl -i http://localhost:5000/api/http://example.com/
Result:
HTTP/1.1 200 OK
Server: Werkzeug/3.1.3 Python/3.13.0
Date: Fri, 08 Nov 2024 16:59:57 GMT
Content-Type: application/json
Content-Length: 79
Connection: close

[{"href":"https://www.iana.org/domains/example","text":"More information..."}]
sed -i 's/Link Extractor/Super Link Extractor/g' www/index.php
After running this command all of the titles, headers, and footers changed: the word Super was added.
Reset git and stop the application:
git reset --hard
docker-compose down
-
Step 5: Redis Service for Caching
git checkout step5
tree
Result:
.
├── api
│   ├── Dockerfile
│   ├── linkextractor.py
│   ├── main.py
│   └── requirements.txt
├── docker-compose.yml
├── README.md
└── www
    ├── Dockerfile
    └── index.php

2 directories, 8 files
cat www/Dockerfile
Result:
FROM php:7-apache
LABEL maintainer="Sawood Alam <@ibnesayeed>"

ENV API_ENDPOINT="http://localhost:5000/api/"

COPY . /var/www/html/
cat api/main.py
Result:
#!/usr/bin/env python

import os
import json
import redis

from flask import Flask
from flask import request
from linkextractor import extract_links

app = Flask(__name__)
redis_conn = redis.from_url(os.getenv("REDIS_URL", "redis://localhost:6379"))

@app.route("/")
def index():
    return "Usage: http://<hostname>[:<prt>]/api/<url>"

@app.route("/api/<path:url>")
def api(url):
    qs = request.query_string.decode("utf-8")
    if qs != "":
        url += "?" + qs
    jsonlinks = redis_conn.get(url)
    if not jsonlinks:
        links = extract_links(url)
        jsonlinks = json.dumps(links, indent=2)
        redis_conn.set(url, jsonlinks)
    response = app.response_class(
        status=200,
        mimetype="application/json",
        response=jsonlinks
    )
    return response

app.run(host="0.0.0.0")
cat docker-compose.yml
Result:
version: '3'

services:
  api:
    image: linkextractor-api:step5-python
    build: ./api
    ports:
      - "5000:5000"
    environment:
      - REDIS_URL=redis://redis:6379
  web:
    image: linkextractor-web:step5-php
    build: ./www
    ports:
      - "80:80"
    environment:
      - API_ENDPOINT=http://api:5000/api/
  redis:
    image: redis
docker-compose up -d --build
docker-compose exec redis redis-cli monitor
Result:
OK 1731085699.020872 [0 172.19.0.2:35238] "SET" "https://google.com" "[\n {\n \"text\": \"\\u041a\\u0430\\u0440\\u0442\\u0438\\u043d\\u043a\\u0438\",\n \"href\": \"https://www.google.com/imghp?hl=ru&tab=wi\"\n },\n {\n \"text\": \"\\u041a\\u0430\\u0440\\u0442\\u044b\",\n \"href\": \"https://maps.google.ru/maps?hl=ru&tab=wl\"\n },\n {\n \"text\": \"Play\",\n \"href\": \"https://play.google.com/?hl=ru&tab=w8\"\n },\n {\n \"text\": \"YouTube\",\n \"href\": \"https://www.youtube.com/?tab=w1\"\n },\n {\n \"text\": \"\\u041d\\u043e\\u0432\\u043e\\u0441\\u0442\\u0438\",\n \"href\": \"https://news.google.com/?tab=wn\"\n },\n {\n \"text\": \"\\u041f\\u043e\\u0447\\u0442\\u0430\",\n \"href\": \"https://mail.google.com/mail/?tab=wm\"\n },\n {\n \"text\": \"\\u0414\\u0438\\u0441\\u043a\",\n \"href\": \"https://drive.google.com/?tab=wo\"\n },\n {\n \"text\": \"\\u0415\\u0449\\u0451 \\u00bb\",\n \"href\": \"https://www.google.ru/intl/ru/about/products?tab=wh\"\n },\n {\n \"text\": \"\\u0418\\u0441\\u0442\\u043e\\u0440\\u0438\\u044f \\u0432\\u0435\\u0431-\\u043f\\u043e\\u0438\\u0441\\u043a\\u0430\",\n \"href\": \"http://www.google.ru/history/optout?hl=ru\"\n },\n {\n \"text\": \"\\u041d\\u0430\\u0441\\u0442\\u0440\\u043e\\u0439\\u043a\\u0438\",\n \"href\": \"https://google.com/preferences?hl=ru\"\n },\n {\n \"text\": \"\\u0412\\u043e\\u0439\\u0442\\u0438\",\n \"href\": \"https://accounts.google.com/ServiceLogin?hl=ru&passive=true&continue=https://www.google.com/&ec=GAZAAQ\"\n },\n {\n \"text\": \"\\u0420\\u0430\\u0441\\u0448\\u0438\\u0440\\u0435\\u043d\\u043d\\u044b\\u0439 \\u043f\\u043e\\u0438\\u0441\\u043a\",\n \"href\": \"https://google.com/advanced_search?hl=ru&authuser=0\"\n },\n {\n \"text\": \"\\u0420\\u0435\\u043a\\u043b\\u0430\\u043c\\u0430\",\n \"href\": \"https://google.com/intl/ru/ads/\"\n },\n {\n \"text\": \"\\u0420\\u0435\\u0448\\u0435\\u043d\\u0438\\u044f \\u0434\\u043b\\u044f \\u0431\\u0438\\u0437\\u043d\\u0435\\u0441\\u0430\",\n \"href\": \"http://www.google.ru/intl/ru/services/\"\n },\n {\n \"text\": \"\\u0412\\u0441\\u0451 \\u043e Google\",\n \"href\": \"https://google.com/intl/ru/about.html\"\n },\n {\n \"text\": \"Google.ru\",\n \"href\": \"https://www.google.com/setprefdomain?prefdom=RU&prev=https://www.google.ru/&sig=K_JXH_iNfSAp5dL1b0E-RodlwC-0o%3D\"\n },\n {\n \"text\": \"\\u041a\\u043e\\u043d\\u0444\\u0438\\u0434\\u0435\\u043d\\u0446\\u0438\\u0430\\u043b\\u044c\\u043d\\u043e\\u0441\\u0442\\u044c\",\n \"href\": \"https://google.com/intl/ru/policies/privacy/\"\n },\n {\n \"text\": \"\\u0423\\u0441\\u043b\\u043e\\u0432\\u0438\\u044f\",\n \"href\": \"https://google.com/intl/ru/policies/terms/\"\n }\n]" 1731085708.845819 [0 172.19.0.2:35238] "GET" "https://google.com"
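The cache contents can also be read back directly with redis-cli (standard Redis commands; the key https://google.com is the one visible in the monitor output above):
docker-compose exec redis redis-cli keys '*'
docker-compose exec redis redis-cli get https://google.com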
Running sed -i 's/Link Extractor/Super Link Extractor/g' www/index.php now changes nothing on the running site, because index.php is baked into the web image instead of being bind-mounted. Cleanup:
git reset --hard
docker-compose down
-
Step 6: Swap Python API Service with Ruby
git checkout step6
tree
Result:
.
├── api
│   ├── Dockerfile
│   ├── Gemfile
│   └── linkextractor.rb
├── docker-compose.yml
├── logs
├── README.md
└── www
    ├── Dockerfile
    └── index.php

3 directories, 7 files
cat api/linkextractor.rb
Result:
#!/usr/bin/env ruby
# encoding: utf-8

require "sinatra"
require "open-uri"
require "uri"
require "nokogiri"
require "json"
require "redis"

set :protection, :except=>:path_traversal

redis = Redis.new(url: ENV["REDIS_URL"] || "redis://localhost:6379")

Dir.mkdir("logs") unless Dir.exist?("logs")
cache_log = File.new("logs/extraction.log", "a")

get "/" do
  "Usage: http://<hostname>[:<prt>]/api/<url>"
end

get "/api/*" do
  url = [params['splat'].first, request.query_string].reject(&:empty?).join("?")
  cache_status = "HIT"
  jsonlinks = redis.get(url)
  if jsonlinks.nil?
    cache_status = "MISS"
    jsonlinks = JSON.pretty_generate(extract_links(url))
    redis.set(url, jsonlinks)
  end
  cache_log.puts "#{Time.now.to_i}\t#{cache_status}\t#{url}"
  status 200
  headers "content-type" => "application/json"
  body jsonlinks
end

def extract_links(url)
  links = []
  doc = Nokogiri::HTML(open(url))
  doc.css("a").each do |link|
    text = link.text.strip.split.join(" ")
    begin
      links.push({
        text: text.empty? ? "[IMG]" : text,
        href: URI.join(url, link["href"])
      })
    rescue
    end
  end
  links
end
cat api/Dockerfile
Result:
FROM ruby:2.6
LABEL maintainer="Sawood Alam <@ibnesayeed>"

ENV LANG C.UTF-8
ENV REDIS_URL="redis://localhost:6379"

WORKDIR /app
COPY Gemfile /app/
RUN bundle install

COPY linkextractor.rb /app/
RUN chmod a+x linkextractor.rb

CMD ["./linkextractor.rb", "-o", "0.0.0.0"]
cat docker-compose.yml
Result:
version: '3'

services:
  api:
    image: linkextractor-api:step6-ruby
    build: ./api
    ports:
      - "4567:4567"
    environment:
      - REDIS_URL=redis://redis:6379
    volumes:
      - ./logs:/app/logs
  web:
    image: linkextractor-web:step6-php
    build: ./www
    ports:
      - "80:80"
    environment:
      - API_ENDPOINT=http://api:4567/api/
  redis:
    image: redis
Start it:
docker-compose up -d --build
Check:
curl -i http://localhost:4567/api/http://example.com/
Result:
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 97
X-Content-Type-Options: nosniff
Server: WEBrick/1.4.4 (Ruby/2.6.10/2022-04-12)
Date: Fri, 08 Nov 2024 17:13:02 GMT
Connection: Keep-Alive

[
  {
    "text": "More information...",
    "href": "https://www.iana.org/domains/example"
  }
]
The logs can be watched with tail -f logs/extraction.log. Or:
docker-compose down
cat logs/extraction.log
Result:
1731085982  MISS  http://example.com/
1731086041  MISS  http://example.com
1731086049  HIT   http://example.com
1731086050  HIT   http://example.com
1731086050  HIT   http://example.com
1731086064  MISS  http://google.com
1731086064  MISS  http://google.com
I am not doing this exercise, since it duplicates one of the exercises above (the voting app with Docker Swarm).
This section contains reference information only; skipping.
This section also contains reference information only; skipping.