Unionize: network superpowers for your docker containers

Unionize lets you connect together docker containers in arbitrarily complex scenarios.

Note: I recommend using https://github.com/jpetazzo/pipework instead.

  • pipework is a better name than unionize
  • it's hosted on a "real" github repo instead of a small gist :-)

If you still want Unionize, though, it's here. Just check out the examples below.

LAMP stack with a private network between the MySQL and Apache containers

Let's create two containers, running the web tier and the database tier:

APACHE=$(docker run -d apache /usr/sbin/httpd -D FOREGROUND)
MYSQL=$(docker run -d mysql /usr/sbin/mysqld_safe)

Now, bring superpowers to the web tier:

unionize.sh br1 $APACHE 192.168.1.1

This will:

  • create a bridge named br1 in the docker host;
  • add an interface named eth1 to the $APACHE container;
  • assign IP address 192.168.1.1 to this interface;
  • connect said interface to br1.
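
If you want to double-check from the docker host, the bridge and the (randomly named) pvnetlXXXXX local end of the veth pair should now be visible; for instance:

brctl show br1
ip link show | grep pvnetl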

Now (drum roll), let's do this:

unionize.sh br1 $MYSQL 192.168.1.2

This will:

  • not create a bridge named br1, since it already exists;
  • add an interface named eth1 to the $MYSQL container;
  • assign IP address 192.168.1.2 to this interface;
  • connect said interface to br1.

Now, both containers can ping each other on the 192.168.1.0/24 subnet.
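
One way to check this from the host, without entering the containers, is to reuse the network namespace symlinks that unionize.sh just created under /var/run/netns (the ID extraction below is the same trick the script itself uses; this assumes ping is installed on the docker host):

APACHE_ID=$(docker inspect $APACHE | grep ID | cut -d\" -f4)
ip netns exec $APACHE_ID ping -c 3 192.168.1.2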

unionize.sh can also be given multiple containers, so you can actually do this:

unionize.sh br1 $(docker run -d apache /usr/sbin/httpd -D FOREGROUND) 192.168.1.1 \
                $(docker run -d mysql /usr/sbin/mysqld_safe) 192.168.1.2

Peeking inside the private network

Want to connect to those containers using their private addresses? Easy:

ifconfig br1 192.168.1.254

Voilà!
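
Since the host now has an address on that subnet, it can reach the containers directly; for example (assuming Apache listens on its default port and curl is installed on the host):

ping -c 1 192.168.1.2
curl http://192.168.1.1/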

Connect a container to a local physical interface

Let's pretend that you want to run two Hipache instances, listening on real interfaces eth2 and eth3, using specific (public) IP addresses. Easy!

unionize.sh breth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157
brctl addif breth2 eth2
ifconfig eth2 up

unionize.sh breth3 $(docker run -d hipache /usr/sbin/hipache) 107.22.140.5
brctl addif breth3 eth3
ifconfig eth3 up
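
If your host no longer ships brctl/ifconfig, the same two steps can be done with iproute2 (equivalent commands, not what this gist originally used):

ip link set eth2 master breth2
ip link set eth2 up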

Connect multiple containers running on different docker hosts

Consider the following scenario (typical on production servers):

  • you have a bunch of docker hosts
  • on each docker host, eth0 is the admin interface that you use for SSH;
  • on each docker host, eth1 is the interface for production traffic; it has no IP address configured.

On each host, do this:

unionize.sh br1
brctl addif br1 eth1
ifconfig eth1 up

Then just start your containers (this works because the eth1 interfaces of all hosts sit on the same L2 segment, so the bridges effectively form one big virtual switch). Yup. That's it. Nothing more:

dockerhost-alice$ unionize.sh br1 $(docker run -d apache /usr/sbin/httpd) 192.168.1.1
dockerhost-alice$ unionize.sh br1 $(docker run -d apache /usr/sbin/httpd) 192.168.1.2
dockerhost-bob$ unionize.sh br1 $(docker run -d apache /usr/sbin/httpd) 192.168.1.3
dockerhost-bob$ unionize.sh br1 $(docker run -d apache /usr/sbin/httpd) 192.168.1.4
dockerhost-bob$ unionize.sh br1 $(docker run -d mysql /usr/sbin/mysqld_safe) 192.168.1.101

Cleanup

When a container is terminated (the last process of the net namespace exits), the network interfaces are garbage collected. The interface in the container is automatically destroyed, and the interface in the docker host (part of the bridge) is then destroyed as well.
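
The bridges and the /var/run/netns symlinks, however, are not removed for you. If you want to tear those down as well, something along these lines does it (manual cleanup, not part of unionize.sh; be careful with the symlink removal if other tools also use named network namespaces):

ifconfig br1 down
brctl delbr br1
rm -f /var/run/netns/<full container ID>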

Future improvement: AVAHI / DHCP auto-configuration

I'm considering providing a "network configurator" docker image. This image will let you configure a container's extra interface (eth1) using DHCP or AVAHI, without actually having a DHCP client or AVAHI daemon in the container itself. MAGIC!
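
To illustrate why no DHCP client is needed inside the container: since eth1 lives in the container's network namespace, a DHCP client running on the host can configure it from the outside. A hypothetical sketch, assuming udhcpc is installed on the docker host and $LONGID is the full container ID (as in the script below):

ip netns exec $LONGID udhcpc -i eth1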

Future improvement: macvlan

I'm considering adding a macvlan option to unionize. This will let you bypass the bridge layer.

Example:

unionize.sh $CONTAINERID eth2

This will allocate a macvlan sub-interface on eth2, and hand it over to the container designated by $CONTAINERID. It will require extra configuration of course.
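
For the curious, the macvlan variant would look roughly like this, reusing $NSPID, $LONGID and $IPADDR from the script below (a hypothetical sketch of what the option could do, not an implemented feature):

ip link add link eth2 name macvlan0 type macvlan mode bridge
ip link set macvlan0 netns $NSPID
ip netns exec $LONGID ip link set macvlan0 name eth1
ip netns exec $LONGID ifconfig eth1 $IPADDR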

#!/bin/bash
# unionize.sh <bridge> [container ip] [container ip] [...]
# Adds an extra eth1 interface to each given container and plugs it into the
# given bridge on the docker host, creating the bridge if it does not exist yet.
set -e

# Devices cgroup used by LXC-backed docker containers; adjust if yours differs.
CGROUP=/sys/fs/cgroup/devices/lxc
[ -d $CGROUP ] || {
    echo "Please set CGROUP in $0 first."
    exit 1
}

BRIDGE=$1
[ "$BRIDGE" ] || {
    echo "Syntax: $0 <bridge> [container ip] [container ip] [...]"
    exit 1
}

# Create the bridge if it does not exist yet.
brctl show $BRIDGE >/dev/null || {
    echo "Creating bridge $BRIDGE."
    brctl addbr $BRIDGE
    ifconfig $BRIDGE up
}
shift

# Process each (container, ip) pair in turn.
while [ "$1" ]
do
    SHORTID=$1
    IPADDR=$2
    [ "$IPADDR" ] || {
        echo "Missing IP address for container $SHORTID."
        exit 1
    }
    shift 2

    # Resolve the short container ID to the full ID used in the cgroup hierarchy.
    LONGID=$(docker inspect $SHORTID | grep ID | cut -d\" -f4)
    [ "$LONGID" ] || {
        echo "WARNING: could not find container $SHORTID."
        continue
    }

    # Any PID running inside the container gives us access to its network namespace.
    NSPID=$(head -n 1 $CGROUP/$LONGID/tasks)
    [ "$NSPID" ] || {
        echo "WARNING: could not find PID inside container $LONGID."
        continue
    }

    # Expose the container's network namespace to "ip netns".
    mkdir -p /var/run/netns
    rm -f /var/run/netns/$LONGID
    ln -s /proc/$NSPID/ns/net /var/run/netns/$LONGID

    # Create a veth pair: the local end stays on the host and joins the bridge,
    # the remote end is moved into the container and renamed eth1.
    R=$RANDOM
    IF_LOCAL_NAME=pvnetl$R
    IF_REMOTE_NAME=pvnetr$R
    ip link add name $IF_LOCAL_NAME type veth peer name $IF_REMOTE_NAME
    brctl addif $BRIDGE $IF_LOCAL_NAME
    ifconfig $IF_LOCAL_NAME up
    ip link set $IF_REMOTE_NAME netns $NSPID
    ip netns exec $LONGID ip link set $IF_REMOTE_NAME name eth1
    ip netns exec $LONGID ifconfig eth1 $IPADDR
done