# Sample docker-compose.yml voting app to test the weave network plugin with Docker Swarm
#
# Copied from https://github.com/dockersamples/example-voting-app, to test the
# "weave" Docker Swarm network plugin https://www.weave.works/docs/net/latest/install/plugin/plugin-v2/
# on a two node network (docker-server-manager and docker-server-worker).
# While both provide a cross-server container network, weave supports multicast and encryption.
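#
# If the plugin is not already installed, the weave docs linked above install it on
# every node with roughly the following (exact tag and permission flags may vary by
# environment):
#
# docker plugin install weaveworks/net-plugin:latest_release --grant-all-permissions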
#
# The sample app deploys fine with the standard "overlay" network:
#
# docker stack deploy -c ./docker-compose.yml voteapp
#
# The curls below work as expected -- in the container network the service names resolve
# and serve the internal container ports.
# Outside the container network the ports are published on every Docker Swarm node.
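#
# For example, with "overlay" both published ports answer from either node
# (placeholder hostnames for the two nodes, same as in step 5 below):
#
# curl --head http://DOCKER-SERVER-MANAGER:6000 http://DOCKER-SERVER-MANAGER:6001
# curl --head http://DOCKER-SERVER-WORKER:6000 http://DOCKER-SERVER-WORKER:6001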
#
# But I cannot figure out how to make these curls work with "weave"! I'd swear this worked
# under previous versions of weave or Docker / Swarm.
#
# 1. Run this curl on both of your Docker Swarm nodes -- it should succeed with 200 responses for each service:
#
# docker run -ti --network voteapp_voteapp centos curl --head http://result-app:80 http://voting-app:80
#
# 2. "docker stack rm voteapp" and verify network and containers are removed from both Docker Swarm nodes.
#
# 3. Now bring up the environment with "weave", using the default "endpoint_mode: vip".
#
# NETWORK_DRIVER=weaveworks/net-plugin:latest_release docker stack deploy -c ./docker-compose.yml voteapp
#
# Unfortunately neither service name is usable on either Swarm node:
# docker run -ti --network voteapp_voteapp centos curl --head http://result-app:80 http://voting-app:80
# curl: (7) Failed to connect to result-app port 80: No route to host
# curl: (7) Failed to connect to voting-app port 80: No route to host
#
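# For what it's worth, inspecting the network on each node shows whether the weave
# network was created there and which containers are attached, which helps narrow
# down whether the problem is the network itself or name resolution on top of it:
#
# docker network inspect voteapp_voteapp
#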
# 4. Remove the stack (step 2) then bring it back up with the "dnsrr" workaround:
#
# NETWORK_DRIVER=weaveworks/net-plugin:latest_release ENDPOINT_MODE=dnsrr PORT_MODE=host docker stack deploy -c ./docker-compose.yml voteapp
#
# One container endpoint works, but the other still fails if the container is not
# running on the same Docker server:
# curl: (7) Failed connect to result-app:80; No route to host
# HTTP/1.1 200 OK
# ...
#
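# With "dnsrr" there is no virtual IP -- the service name is supposed to resolve
# directly to the individual container IPs. The deployed endpoint mode can be
# double-checked with something like:
#
# docker service inspect voteapp_result-app --format '{{json .Endpoint}}'
#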
# 5. Also, "dnsrr "with "weave" breaks the publishing of both ports on both Docker Swarm nodes:
#
# curl --head http://DOCKER-SERVER-MANAGER:6000 http://DOCKER-SERVER-MANAGER:6001 # ports published on swarm manager node
# curl --head http://DOCKER-SERVER-WORKER:6000 http://DOCKER-SERVER-WORKER:6001 # ports published on swarm worker node
#
# Publishing on both nodes does work with weave when using the default "endpoint_mode: vip".
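#
# The file below is parameterized via environment variables with shell-style defaults:
#   NETWORK_DRIVER (default: overlay) -- driver for the "voteapp" network
#   ENDPOINT_MODE  (default: vip)     -- service endpoint mode (vip or dnsrr)
#   PORT_MODE      (default: ingress) -- published port mode (ingress or host)
# Leaving all three unset reproduces the standard overlay deployment from the first
# "docker stack deploy" above.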
version: "3.7"
services:
redis:
image: redis:3.2-alpine
ports:
- "6379"
networks:
- voteapp
deploy:
placement:
constraints: [node.role == manager]
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- voteapp
deploy:
placement:
constraints: [node.role == manager]
voting-app:
image: gaiadocker/example-voting-app-vote:good
ports:
- target: 80
published: 6000
protocol: tcp
mode: ${PORT_MODE-ingress}
networks:
- voteapp
depends_on:
- redis
deploy:
endpoint_mode: ${ENDPOINT_MODE-vip}
mode: replicated
replicas: 2
labels: [APP=VOTING]
placement:
constraints: [node.role == worker]
result-app:
image: gaiadocker/example-voting-app-result:latest
ports:
- target: 80
published: 6001
protocol: tcp
mode: ${PORT_MODE-ingress}
networks:
- voteapp
depends_on:
- db
deploy:
endpoint_mode: ${ENDPOINT_MODE-vip}
worker:
image: gaiadocker/example-voting-app-worker:latest
networks:
voteapp:
aliases:
- workers
depends_on:
- db
- redis
# service deployment
deploy:
mode: replicated
replicas: 2
labels: [APP=VOTING]
# service resource management
resources:
# Hard limit - Docker does not allow to allocate more
limits:
cpus: '0.25'
memory: 512M
# Soft limit - Docker makes best effort to return to it
reservations:
cpus: '0.25'
memory: 256M
# service restart policy
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
# service update configuration
update_config:
parallelism: 1
delay: 10s
failure_action: continue
monitor: 60s
max_failure_ratio: 0.3
# placement constraint - in this case on 'worker' nodes only
placement:
constraints: [node.role == worker]
networks:
voteapp:
driver: ${NETWORK_DRIVER-overlay}
attachable: true
volumes:
db-data: