This is a toy example of using docker-compose to run the classic ROS master, talker and listener nodes in separate containers, linked together via a common project network.
For the host to connect directly to the containers (while maintaining the transparent domain resolution that ROS's messaging network relies on), we'll include a service in the compose file (named "resolvable") that serves as a Docker DNS resolver for the host.
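For reference, the compose file driving this example looks roughly like the following (a sketch assembled from the pieces shown later in this walkthrough; the files in this gist are authoritative, and the ROS_HOSTNAME values are an assumption):
version: '2'
services:
  master:
    build: .
    environment:
      - "ROS_HOSTNAME=master"
    command: roscore
  talker:
    build: .
    environment:
      - "ROS_HOSTNAME=talker"
      - "ROS_MASTER_URI=http://master:11311"
    command: rosrun roscpp_tutorials talker
  listener:
    build: .
    environment:
      - "ROS_HOSTNAME=listener"
      - "ROS_MASTER_URI=http://master:11311"
    command: rosrun roscpp_tutorials listener
  resolvable:
    image: mgood/resolvable
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
      - /etc/resolv.conf:/tmp/resolv.conf
The master, talker and listener services build from the local Dockerfile, while resolvable mounts the Docker socket and the host's resolv.conf so it can answer DNS queries on the host for containers in the project network.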
- docker: 1.10 or later
  Docker version 1.10.2, build c3959b1
  - needed for the improved networking features
  - provides a proper DNS server for the network rather than modifying /etc/hosts
- docker-compose: 1.6.2 or later
  docker-compose version 1.6.2, build 4d72027
  - compatible with the new network description syntax implicitly enabled by
    version: '2'
Within this directory, foo (which should contain just the files from this gist), start the example by running:
docker-compose up
You'll see the ROS nodes come online and wait for roscore to spin up. Then you'll begin to watch the simple exchange between the talker and listener through the output printed to the screen from each container. You should see docker-compose produce output like so:
$ docker-compose up
Creating network "foo_default" with the default driver
Creating foo_listener_1
Creating foo_resolvable_1
Creating foo_master_1
Creating foo_talker_1
Attaching to foo_listener_1, foo_resolvable_1, foo_master_1, foo_talker_1
resolvable_1 | 2016/02/27 23:32:39 systemd: disabled, cannot read /tmp/systemd: stat /tmp/systemd: no such file or directory
resolvable_1 | 2016/02/27 23:32:39 Starting resolvable 0.2 ...
listener_1 | Couldn't find an AF_INET address for [master]
listener_1 | [ERROR] [1456615959.280474304]: [registerPublisher] Failed to contact master at [master:11311]. Retrying...
resolvable_1 | 2016/02/27 23:32:39 got local address: 172.19.0.3
resolvable_1 | 2016/02/27 23:32:39 error adding container 346defa24d76: unknown network mode%!(EXTRA string=foo_default)
resolvable_1 | 2016/02/27 23:32:39 error adding container cf97eea28d61: unknown network mode%!(EXTRA string=foo_default)
resolvable_1 | 2016/02/27 23:32:39 error adding container 386cd86f293a: unknown network mode%!(EXTRA string=foo_default)
resolvable_1 | 2016/02/27 23:32:39 error adding container cae3e8403517: unknown network mode%!(EXTRA string=foo_default)
talker_1 | [ERROR] [1456615959.883463763]: [registerPublisher] Failed to contact master at [master:11311]. Retrying...
listener_1 | [ INFO] [1456615960.125779633]: Connected to master at [master:11311]
listener_1 | [ INFO] [1456615960.770708014]: I heard: [hello world 5]
listener_1 | [ INFO] [1456615960.869880401]: I heard: [hello world 6]
listener_1 | [ INFO] [1456615960.969871119]: I heard: [hello world 7]
listener_1 | [ INFO] [1456615961.070332777]: I heard: [hello world 8]
listener_1 | [ INFO] [1456615961.169844244]: I heard: [hello world 9]
listener_1 | [ INFO] [1456615961.269840899]: I heard: [hello world 10]
...
talker_1 | [ INFO] [1456615960.134128460]: Connected to master at [master:11311]
talker_1 | [ INFO] [1456615960.269443782]: hello world 0
talker_1 | [ INFO] [1456615960.369479058]: hello world 1
talker_1 | [ INFO] [1456615960.469555821]: hello world 2
talker_1 | [ INFO] [1456615960.569496882]: hello world 3
talker_1 | [ INFO] [1456615960.669614571]: hello world 4
talker_1 | [ INFO] [1456615960.769794699]: hello world 5
talker_1 | [ INFO] [1456615960.869631906]: hello world 6
talker_1 | [ INFO] [1456615960.969488459]: hello world 7
talker_1 | [ INFO] [1456615961.069706720]: hello world 8
talker_1 | [ INFO] [1456615961.169495736]: hello world 9
talker_1 | [ INFO] [1456615961.269491553]: hello world 10
...
^CGracefully stopping... (press Ctrl+C again to force)
Stopping foo_talker_1 ... done
Stopping foo_master_1 ... done
Stopping foo_resolvable_1 ... done
Stopping foo_listener_1 ... done
Now with the project launched by compose, we can ping each container from the host using its domain name (by default, the name of the service in this case).
$ ping master
PING master (172.19.0.4) 56(84) bytes of data.
64 bytes from foo_master_1.foo_default (172.19.0.4): icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from foo_master_1.foo_default (172.19.0.4): icmp_seq=2 ttl=64 time=0.132 ms
64 bytes from foo_master_1.foo_default (172.19.0.4): icmp_seq=3 ttl=64 time=0.064 ms
^C
--- master ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.064/0.088/0.132/0.032 ms
$ ping talker
PING talker (172.19.0.5) 56(84) bytes of data.
64 bytes from foo_talker_1.foo_default (172.19.0.5): icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from foo_talker_1.foo_default (172.19.0.5): icmp_seq=2 ttl=64 time=0.105 ms
64 bytes from foo_talker_1.foo_default (172.19.0.5): icmp_seq=3 ttl=64 time=0.075 ms
^C
--- talker ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.072/0.084/0.105/0.014 ms
$ ping listener
PING listener (172.19.0.2) 56(84) bytes of data.
64 bytes from foo_listener_1.foo_default (172.19.0.2): icmp_seq=1 ttl=64 time=0.168 ms
64 bytes from foo_listener_1.foo_default (172.19.0.2): icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from foo_listener_1.foo_default (172.19.0.2): icmp_seq=3 ttl=64 time=0.077 ms
^C
--- listener ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.068/0.104/0.168/0.045 ms
Now on the host, we should also be able to query the master about topics, and what's more, even subscribe to messages directly from nodes inside the containers!
$ export ROS_MASTER_URI=http://master:11311
$ rostopic list
/chatter
/rosout
/rosout_agg
$ rostopic echo /chatter
data: hello world 4482
---
data: hello world 4483
---
data: hello world 4484
---
data: hello world 4485
---
data: hello world 4486
---
data: hello world 4487
---
^Cdata: hello world 4488
---
OK, let's play around with this example and see what all the parts do. We'll start by stopping our example and cleaning the containers from the project:
^CGracefully stopping... (press Ctrl+C again to force)
Stopping foo_talker_1 ... done
Stopping foo_master_1 ... done
Stopping foo_resolvable_1 ... done
Stopping foo_listener_1 ... done
$ docker-compose rm -f
Going to remove foo_talker_1, foo_master_1, foo_resolvable_1, foo_listener_1
Removing foo_talker_1 ... done
Removing foo_master_1 ... done
Removing foo_resolvable_1 ... done
Removing foo_listener_1 ... done
Now let's comment out the resolvable service from the compose file like so:
# resolvable:
#   image: mgood/resolvable
#   volumes:
#     - /var/run/docker.sock:/tmp/docker.sock
#     - /etc/resolv.conf:/tmp/resolv.conf
Restart the project again via docker-compose up, and in a new terminal on the host, take a look at the network by inspecting the project network. Notice the default bridge driver is engaged here, and all of our project services can be found under this network.
$ docker network inspect foo_default
[
    {
        "Name": "foo_default",
        "Id": "5f5df3f0308b398df26d0d6930db9845aea2cd45a36fd4e751838c7854664402",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1/16"
                }
            ]
        },
        "Containers": {
            "785df4bb3126955ab255ea46ac98be4a35adc575b513c15adecbfa872d5ffc78": {
                "Name": "foo_talker_1",
                "EndpointID": "79ea3ceade4b43170fc2373f5a699bdea6a0cdf0b60bfcf6976a215aef74e853",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "c506a02bf22998337a06d4670853d56f09367ccb9558a93751c18c671a2c1606": {
                "Name": "foo_listener_1",
                "EndpointID": "18d8ac64dd03e2988fedc47e61d492606cceadc52a2d2bc93fc12edd39797ce9",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "c5438289b3a3c2294ab029348fa9fa376d3e16a5f9dffd490435f1ab9e1ec87a": {
                "Name": "foo_master_1",
                "EndpointID": "99cdd03240042e6e4cf7ac2c551b7d4c58cda49a7af8313c6f01edd6c5e5b188",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
Above you'll notice that the Gateway is set to 172.19.0.1; this happens to point to the host. You can see this by running ifconfig on the host and finding the interface named with the same hash ID as the project network.
$ ifconfig
br-5f5df3f0308b Link encap:Ethernet HWaddr 02:42:f1:b3:b3:c7
inet addr:172.19.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:f1ff:feb3:b3c7/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:1397 errors:0 dropped:0 overruns:0 frame:0
TX packets:1247 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:181132 (181.1 KB) TX bytes:176656 (176.6 KB)
...
To check this, we can try out a simple netcat example by listening within the container and transmitting from the host:
<run on container first>
$ docker exec -it foo_master_1 sh -c 'netcat -lv 4444'
Listening on [0.0.0.0] (family 0, port 4444)
Connection from [172.19.0.1] port 4444 [tcp/*] accepted (family 2, sport 48399)
Hello World
<run on host second>
$ echo Hello World | netcat 172.19.0.3 4444
Now let's swap the roles to verify this is bidirectional:
<run on host first>
$ netcat -lv 4444
Listening on [0.0.0.0] (family 0, port 4444)
Connection from [172.19.0.3] port 4444 [tcp/*] accepted (family 2, sport 36315)
Hello World
<run on container second>
$ docker exec -it foo_master_1 sh -c 'echo Hello World | netcat 172.19.0.1 4444'
Great, now let's see what happens if we try to list and subscribe to topics from the host:
<run on host first>
$ export ROS_IP=72.19.0.1
$ export ROS_MASTER_URI=http://172.19.0.3:11311
$ rostopic list
/chatter
/rosout
/rosout_agg
$ rostopic echo /chatter
<run on container second>
$ docker exec -it foo_master_1 bash -c 'source /ros_entrypoint.sh && rostopic info /chatter'
Type: std_msgs/String
Publishers:
* /talker (http://talker:54447/)
Subscribers:
* /listener (http://listener:47977/)
* /rostopic_18288_1457162895238 (http://72.19.0.1:39937/)
Notice the echo's subscription never comes through, yet it is registered just fine under the subscriber list. So even though the two can reach each other, that alone isn't enough to receive subscribed ROS topics. This is because the URI for the talker publisher, http://talker:54447/, is still unresolvable from the host without DNS for the bridge network interface.
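As a quick sanity check (a hypothetical probe, assuming resolvable is still commented out), the publisher's advertised hostname can't even be looked up from the host:
$ getent hosts talker    # prints nothing and exits non-zero: 'talker' does not resolve from the host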
- What we first considered was the domain resolution within the project network, making sure the containers could resolve the [master, talker, listener] host names between themselves by setting the ROS_HOSTNAME environment variable.
- What I didn't consider was that the same domain name used in generating the publisher's URI not only needs to be resolvable by the ROS master node, but also by any subscriber (i.e. the host). Looking back at the wiki page, this was made clear, but while switching from domain names to IP addresses we forgot that this relationship pertains to more than just the (master, publisher) and (master, subscriber) pairs; it includes (subscriber, publisher) as well.
- There must be complete, bi-directional connectivity between all pairs of machines, on all ports.
- Each machine must advertise itself by a name that all other machines can resolve.
If instead we were to use a more rigid setup, we could avoid the need to provide a DNS server for the bridge network by setting ROS_IP; this of course comes at the cost of losing the more flexible domain name assignment.
version: '2'
services:
  master:
    build: .
    environment:
      - "ROS_IP=172.19.0.3"
    command: roscore
  talker:
    build: .
    environment:
      - "ROS_IP=172.19.0.4"
      - "ROS_MASTER_URI=http://master:11311"
    command: rosrun roscpp_tutorials talker
  listener:
    build: .
    environment:
      - "ROS_IP=172.19.0.2"
      - "ROS_MASTER_URI=http://master:11311"
    command: rosrun roscpp_tutorials listener
On the host:
$ export ROS_IP=172.19.0.1
$ export ROS_MASTER_URI=http://172.19.0.3:11311
$ rostopic list
/chatter
/rosout
/rosout_agg
$ rostopic echo /chatter
data: hello world 84
---
data: hello world 85
---
data: hello world 86
...
Note that the compose file above is still quite fragile, as we are assuming the containers will be assigned the same IPs as before by our Docker engine, such that ROS_IP remains correct. Specifying a custom IPAM config for the project's default network (setting each service's IP), as sketched below, would be a slight improvement, yet still rigid and possibly conflicting with other networks.
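Such a pinned variant could look roughly like this (a sketch only; the subnet and addresses are assumptions carried over from the inspect output above, and the per-service ipv4_address option may require a docker-compose release newer than 1.6):
version: '2'
services:
  master:
    build: .
    networks:
      default:
        # pin the address so ROS_IP stays valid across restarts
        ipv4_address: 172.19.0.3
    environment:
      - "ROS_IP=172.19.0.3"
    command: roscore
  # talker and listener would be pinned the same way
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16
          gateway: 172.19.0.1
Even so, hard-coding a subnet can clash with other Docker networks on the machine, which is the rigidity noted above.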
Now let's switch back to what was previously working. Stop and remove the project's containers as we did before, uncomment the resolvable service, swap ROS_IP back for ROS_HOSTNAME, and restart the project. Let's try the same thing that failed above, but now with our little host-level DNS gateway for Docker running on the same project network. Note that our master service container may have changed its IP address; check with docker network inspect foo_default again to see what's where.
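If you only want the master's current address, a one-liner along these lines should do (the Go template relies on the network settings exposed by Docker 1.9+; the format string is illustrative):
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' foo_master_1
In the run below, the master has come up at 172.19.0.4, hence the ROS_MASTER_URI used next.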
<run on host first>
$ export ROS_IP=72.19.0.1
$ export ROS_MASTER_URI=http://172.19.0.4:11311
$ rostopic list
/chatter
/rosout
/rosout_agg
$ rostopic echo /chatter
data: hello world 1016
---
data: hello world 1017
---
data: hello world 1018
...
<run on container second>
$ docker exec -it foo_master_1 bash -c 'source /ros_entrypoint.sh && rostopic info /chatter'
Type: std_msgs/String
Publishers:
* /talker (http://talker:40592/)
Subscribers:
* /listener (http://listener:49742/)
* /rostopic_20125_1457165329066 (http://72.19.0.1:45769/)
Yes, now we have what we wanted. Note that when we first tried this in the beginning, we didn't even need to set ROS_IP or ROS_HOSTNAME, and we also got away with using the hostname master in ROS_MASTER_URI, rather than the specific startup IP address assigned when the container joined the network. This is all made easy again thanks to the running gateway container. In fact, if we open a second fresh terminal, leaving the one on the host above still echoing, we'll see our single host machine subscribed using two different URIs.
<run on host first>
$ export ROS_MASTER_URI=http://master:11311
$ rostopic list
/chatter
/rosout
/rosout_agg
$ rostopic echo /chatter
data: hello world 13179
---
data: hello world 13180
---
data: hello world 13181
...
<run on container second>
$ docker exec -it foo_master_1 bash -c 'source /ros_entrypoint.sh && rostopic info /chatter'
Type: std_msgs/String
Publishers:
* /talker (http://talker:40592/)
Subscribers:
* /listener (http://listener:49742/)
* /rostopic_20347_1457165676578 (http://72.19.0.1:47423/)
* /rostopic_20377_1457165722070 (http://<your_host's_hostname_here>:56479/)
Resolvable provides a DNS entry for the Docker bridge interface address, by default for docker0, but really for br-5f5df3f0308b in this case. This is used to communicate with services that have a known port bound to the Docker bridge. What we've seen here is that we need to remain mindful when specifying ROS environment variables with respect to the network facilities between the nodes and the host. As a recommendation, using a DNS entry for the Docker bridge interface is perhaps the more flexible and simpler approach than using IP addresses.
As a related note, if you start running multiple ROS-related compose projects simultaneously (i.e. with multiple roscores running and whatnot), keep them isolated while using a resolvable Docker bridge DNS for each project network by using the project's network name as a postfix, so that domains/URIs don't collide, e.g.:
...
environment:
- "ROS_HOSTNAME=talker.foo_default"
...
I have yet to find the time to test any failing corner cases, but if you do find one, please let me know by creating an issue.
- resolvable
- Host-level DNS gateway for Docker
- ROS
- Docker
- OpenCog Container Design