Docker Notes
http://cloud-mechanic.blogspot.co.uk/2014/10/storage-concepts-in-docker.html
http://developerblog.redhat.com/2014/09/30/overview-storage-scalability-docker/
https://docs.docker.com/
https://www.digitalocean.com/community/tutorials/docker-explained-how-to-containerize-python-web-applications
http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-1-an-introduction
http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-2-the-15-commands
http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-3-automation-is-the-word-using-dockerfile
http://nginx.com/blog/deploying-nginx-nginx-plus-docker/
http://www.hokstad.com/docker/patterns
By default, Docker must be run as root or via sudo. You may also allow a user to run docker directly by adding the user to the 'docker' group.
Be aware that this may allow privilege escalation for that user, should they escape the container.
usermod -a -G docker <your-user>
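A quick way to confirm the change took effect is to look at the docker entry in /etc/group; the gid 988 below is just an example (on a real host, read the entry with getent group docker):

```shell
# Example /etc/group entry after usermod has run.
line='docker:x:988:ray'
# The fourth colon-separated field is the member list.
echo "$line" | cut -d: -f4   # ray
```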
For CentOS 6, Docker needs the EPEL repository.
NOTE for CentOS users: you can install EPEL by running yum install epel-release. The package is included in the CentOS Extras repository, which is enabled by default.
If you already have the (unrelated) docker package installed, it will conflict with docker-io; there's a bug report filed for it. To proceed with the docker-io installation, remove docker first.
Install the docker-io package, which installs Docker on the host:
$ sudo yum install docker-io
and then:
$ sudo service docker start
If we want Docker to start at boot, we should also:
$ sudo chkconfig docker on
Now let's verify that Docker is working. First we'll need to get the latest centos image.
$ sudo docker pull centos
Next we'll make sure that we can see the image by running:
$ sudo docker images centos
[root@docker-test ~]# docker run -i -t centos:centos7 /bin/bash
bash-4.2# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  selinux  srv  sys  tmp  usr  var
https://docs.oracle.com/cd/E52668_01/E54669/html/ol7-docker.html
sudo docker build -t nx-nginx .
Sending build context to Docker daemon 11.59 GB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:14.04
 ---> 9bd07e480c5b
Step 1 : RUN ["apt-get", "update"]
 ---> Using cache
 ---> fc8e200c5558
Step 2 : RUN ["apt-get", "install", "-y", "nginx"]
 ---> Using cache
 ---> 6bb3bfdaa696
Step 3 : EXPOSE 80
 ---> Using cache
 ---> 0bebbf545cb5
Successfully built 0bebbf545cb5
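The build log above corresponds to a Dockerfile along these lines, reconstructed directly from the steps shown (the exec-form RUN means no shell expansion happens in those commands):

```dockerfile
# Reconstructed from the build output above
FROM ubuntu:14.04
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "nginx"]
EXPOSE 80
```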
dnf install docker-engine --allowerasing
[root@fed22-docker-registry ~]# lsscsi
[0:0:0:0]   cd/dvd  QEMU     QEMU DVD-ROM   1.5.  /dev/sr0
[2:0:0:0]   disk    NUTANIX  VDISK          0     /dev/sda
[2:0:1:0]   disk    NUTANIX  VDISK          0     /dev/sdb
[root@fed22-docker-registry ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 250 GiB, 268435456000 bytes, 524288000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
[root@fed22-docker-registry ~]# pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
  Physical volume "/dev/sdb" successfully created
  Physical volume "/dev/sdc" successfully created
  Physical volume "/dev/sdd" successfully created
  Physical volume "/dev/sde" successfully created
  Physical volume "/dev/sdf" successfully created
  Physical volume "/dev/sdg" successfully created
[root@fed22-docker-registry ~]# vgcreate registry /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
  Volume group "registry" successfully created
[root@fed22-docker-registry ~]# lvcreate -i 6 -l 100%VG -n registry registry
  Using default stripesize 64.00 KiB.
  Logical volume "registry" created.
[root@fed22-docker-registry ~]# lvs
  LV       VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home     fedora   -wi-ao----  47.50g
  root     fedora   -wi-ao----  50.00g
  swap     fedora   -wi-ao----   2.00g
  registry registry -wi-a----- 599.98g
[root@fed22-docker-registry ~]# mkfs.xfs /dev/mapper/registry-registry
meta-data=/dev/mapper/registry-registry isize=256    agcount=32, agsize=4914992 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=157279744, imaxpct=25
         =                       sunit=16     swidth=96 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=76800, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
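The session above stops after mkfs.xfs; to actually use the striped volume, the filesystem still needs to be mounted. A sketch (the mount point below is an assumption, not from the session, and the commands need root, so they are shown commented):

```shell
# Mount point is illustrative only; pick your own.
# mkdir -p /var/lib/docker-registry
# mount /dev/mapper/registry-registry /var/lib/docker-registry
# echo '/dev/mapper/registry-registry /var/lib/docker-registry xfs defaults 0 0' >> /etc/fstab
```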
[root@fed22-docker-registry ~]# useradd ray
[root@fed22-docker-registry ~]# usermod -aG docker ray
[root@fed22-docker-registry ~]# su - ray
[ray@fed22-docker-registry ~]$ id -a
uid=1000(ray) gid=1000(ray) groups=1000(ray),988(docker) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[ray@fed22-docker-registry ~]$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b901d36b6f2f: Pull complete
0a6ba66e537a: Pull complete
Digest: sha256:517f03be3f8169d84711c9ffb2b3235a4d27c1eb4ad147f6248c8040adb93113
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com
For more examples and ideas, visit:
 https://docs.docker.com/userguide/
[ray@fed22-docker-registry ~]$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED       VIRTUAL SIZE
hello-world   latest   0a6ba66e537a   13 days ago   960 B
[ray@fed22-docker-registry ~]$ sudo systemctl status -l docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2015-10-27 17:11:12 CET; 12s ago
     Docs: https://docs.docker.com
 Main PID: 18555 (docker)
   CGroup: /system.slice/docker.service
           └─18555 /usr/bin/docker daemon -H fd://
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.398963722+01:00" level=info msg="Option DefaultDriver: bridge"
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.399208809+01:00" level=info msg="Option DefaultNetwork: bridge"
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.407344901+01:00" level=info msg="Firewalld running: true"
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.497316264+01:00" level=info msg="Loading containers: start."
Oct 27 17:11:12 fed22-docker-registry docker[18555]: ..
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.498464669+01:00" level=info msg="Loading containers: done."
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.498493898+01:00" level=info msg="Daemon has completed initialization"
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.498521007+01:00" level=info msg="Docker daemon" commit=f4bf5c7 execdriver=native-0.2 graphdriver=devicemapper version=1.8.3
Oct 27 17:11:12 fed22-docker-registry systemd[1]: Started Docker Application Container Engine.
Oct 27 17:11:12 fed22-docker-registry docker[18555]: time="2015-10-27T17:11:12.499643748+01:00" level=info msg="Listening for HTTP on fd ()"
https://bugzilla.redhat.com/show_bug.cgi?id=1207308
docker-registry.service contains:
ExecStart=/usr/bin/gunicorn --debug
which fails with:
gunicorn: error: unrecognized arguments: --debug
Replacing --debug with --log-level debug (s/--debug/--log-level debug/) fixes the problem for me.
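A minimal sketch of that substitution, shown here against a scratch copy of the unit file rather than the real one (on the host the file is /usr/lib/systemd/system/docker-registry.service, and a systemctl daemon-reload is needed afterwards, as below):

```shell
# Work on a scratch copy of the unit file.
unit=$(mktemp)
echo 'ExecStart=/usr/bin/gunicorn --debug' > "$unit"
# Swap the unsupported flag for the one gunicorn accepts.
sed -i 's/--debug/--log-level debug/' "$unit"
cat "$unit"   # ExecStart=/usr/bin/gunicorn --log-level debug
```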
[root@fed22-docker-registry ~]# systemctl daemon-reload
[root@fed22-docker-registry ~]# systemctl start docker-registry.service
[root@fed22-docker-registry ~]# systemctl status docker-registry.service -l
● docker-registry.service - Registry server for Docker
   Loaded: loaded (/usr/lib/systemd/system/docker-registry.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-10-28 17:49:16 CET; 4s ago
 Main PID: 12964 (gunicorn)
   CGroup: /system.slice/docker-registry.service
           ├─12964 /usr/bin/python /usr/bin/gunicorn --access-logfile - --log-level debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b 0.0.0.0:5000 -w 4 docker_registry.wsgi:application
           ├─12969 /usr/bin/python /usr/bin/gunicorn --access-logfile - --log-level debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b 0.0.0.0:5000 -w 4 docker_registry.wsgi:application
           ├─12970 /usr/bin/python /usr/bin/gunicorn --access-logfile - --log-level debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b 0.0.0.0:5000 -w 4 docker_registry.wsgi:application
           ├─12971 /usr/bin/python /usr/bin/gunicorn --access-logfile - --log-level debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b 0.0.0.0:5000 -w 4 docker_registry.wsgi:application
           └─12972 /usr/bin/python /usr/bin/gunicorn --access-logfile - --log-level debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b 0.0.0.0:5000 -w 4 docker_registry.wsgi:application
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: LRU cache disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: Cache storage disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: LRU cache disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: Cache storage disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: Cache storage disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: LRU cache disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: 28/Oct/2015:17:49:17 +0000 WARNING: LRU cache disabled!
Oct 28 17:49:17 fed22-docker-registry gunicorn[12964]: [2015-10-28 17:49:17 +0000] [12964] [DEBUG] 4 workers
Oct 28 17:49:18 fed22-docker-registry gunicorn[12964]: [2015-10-28 17:49:18 +0000] [12964] [DEBUG] 4 workers
Oct 28 17:49:19 fed22-docker-registry gunicorn[12964]: [2015-10-28 17:49:19 +0000] [12964] [DEBUG] 4 workers
[root@fed22-docker-registry ~]# docker images
REPOSITORY             TAG      IMAGE ID       CREATED        VIRTUAL SIZE
nginx                  latest   914c82c5a678   18 hours ago   132.7 MB
redis                  latest   a193103919bc   5 days ago     109.1 MB
[root@fed22-docker-registry registry]# docker images
REPOSITORY             TAG      IMAGE ID       CREATED        VIRTUAL SIZE
nginx                  latest   914c82c5a678   19 hours ago   132.7 MB
localhost:5000/nginx   latest   914c82c5a678   19 hours ago   132.7 MB
redis                  latest   a193103919bc   5 days ago     109.1 MB
[root@fed22-docker-registry registry]# docker push localhost:5000/nginx
The push refers to a repository [localhost:5000/nginx] (len: 1)
Sending image list
Pushing repository localhost:5000/nginx (1 tags)
d0ca40da9e35: Image successfully pushed
d1f66aef36c9: Image successfully pushed
192997133528: Image successfully pushed
c4b09a941684: Image successfully pushed
4174aa7c7be8: Image successfully pushed
0620b22b5443: Image successfully pushed
87c3b9f58480: Image successfully pushed
7d984375a5e7: Image successfully pushed
e491c4f10eb2: Image successfully pushed
edeba58b4ca7: Image successfully pushed
a96311efcda8: Image successfully pushed
914c82c5a678: Image successfully pushed
Pushing tag for rev [914c82c5a678] on {http://localhost:5000/v1/repositories/nginx/tags/latest}
[root@fed22-docker-registry registry]# docker images
REPOSITORY             TAG      IMAGE ID       CREATED        VIRTUAL SIZE
nginx                  latest   914c82c5a678   19 hours ago   132.7 MB
localhost:5000/nginx   latest   914c82c5a678   19 hours ago   132.7 MB
redis                  latest   a193103919bc   5 days ago     109.1 MB
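The localhost:5000/nginx entry in the listings above comes from a docker tag step that isn't captured in the session: to push to a private registry, the image is first tagged with the registry host as a name prefix. A sketch (the docker invocations are commented out since they need a running daemon):

```shell
registry=localhost:5000   # private registry from the session above
image=nginx
target="$registry/$image"
echo "$target"   # localhost:5000/nginx
# docker tag  "$image" "$target"    # create the registry-prefixed name
# docker push "$target"             # push it to the private registry
```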
...on the Docker client, add an additional disk for LVM provisioning
http://www.projectatomic.io/docs/docker-storage-recommendation/
<acropolis> vm.disk_create fed22-docker-client create_size=50g container=DEFAULT-CTR
DiskCreate: complete
[root@localhost sysconfig]# lsscsi
[0:0:0:0]   cd/dvd  QEMU     QEMU DVD-ROM   1.5.  /dev/sr0
[2:0:0:0]   disk    NUTANIX  VDISK          0     /dev/sda
[2:0:1:0]   disk    NUTANIX  VDISK          0     /dev/sdb
[root@localhost sysconfig]# cat /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
DEVS=/dev/sdb
VG=docker
[root@localhost sysconfig]# docker-storage-setup
Checking that no-one is using this disk right now ... OK
Disk /dev/sdb: 46.6 GiB, 50000000000 bytes, 97656250 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0xc57675f7.
Created a new partition 1 of type 'Linux LVM' and of size 46.6 GiB.
/dev/sdb2:
New situation:
Device     Boot Start      End  Sectors  Size Id Type
/dev/sdb1        2048 97656249 97654202 46.6G 8e Linux LVM
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
  Physical volume "/dev/sdb1" successfully created
  Volume group "docker" successfully created
  Rounding up size to full physical extent 48.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker/docker-pool and docker/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
[root@localhost sysconfig]# cat docker-storage
DOCKER_STORAGE_OPTIONS=--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool
<acropolis> vm.disk_create fed22-docker-client create_size=100g container=DEFAULT-CTR
DiskCreate: complete
<acropolis> vm.disk_create fed22-docker-client create_size=5g container=DEFAULT-CTR
pvcreate /dev/sdb /dev/sdc
vgcreate direct-lvm /dev/sdb /dev/sdc
# lvcreate --wipesignatures y -n data direct-lvm -l 95%VG
# lvcreate --wipesignatures y -n metadata direct-lvm -l 5%VG
[root@localhost direct-lvm]# cat /etc/sysconfig/docker
...
--storage-opt dm.datadev=/dev/direct-lvm/data \
--storage-opt dm.metadatadev=/dev/direct-lvm/metadata \
--storage-opt dm.fs=xfs
[root@localhost system]# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-253:1-33883287-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.821 GB
 Data Space Total: 107.4 GB
 Data Space Available: 50.2 GB
 Metadata Space Used: 1.479 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.5-201.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 1
Total Memory: 993.5 MiB
Name: localhost.localdomain
ID: VHCA:JO3X:IRF5:44RG:CFZ6:WETN:YBJ2:6IL5:BNDT:FK32:KH6E:UZED
[root@localhost sysconfig]# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-253:1-33883287-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: /dev/direct-lvm/data
 Metadata file: /dev/direct-lvm/metadata
 Data Space Used: 53.74 MB
 Data Space Total: 199.5 GB
 Data Space Available: 199.4 GB
 Metadata Space Used: 1.479 MB
 Metadata Space Total: 10.5 GB
 Metadata Space Available: 10.5 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.5-201.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 1
Total Memory: 993.5 MiB
Name: localhost.localdomain
ID: VHCA:JO3X:IRF5:44RG:CFZ6:WETN:YBJ2:6IL5:BNDT:FK32:KH6E:UZED
[root@localhost mapper]# ps -ef | grep docker
root     27258     1  0 12:13 ?        00:00:00 /usr/bin/docker daemon --selinux-enabled --storage-opt dm.datadev=/dev/direct-lvm/data --storage-opt dm.metadatadev=/dev/direct-lvm/metadata --storage-opt dm.fs=xfs
For best performance the metadata should be on a different spindle than the data, or even better on an SSD.
If using a block device for device mapper storage, it is best to use lvm to create and manage the thin-pool volume. This volume is then handed to Docker to exclusively create snapshot volumes needed for images and containers.
Managing the thin-pool outside of Docker makes for the most feature-rich method of having Docker utilize device mapper thin provisioning as the backing storage for Docker's containers. The highlights of the lvm-based thin-pool management feature include: automatic or interactive thin-pool resize support, dynamically changing thin-pool features, automatic thinp metadata checking when lvm activates the thin-pool, etc.
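For reference, such an lvm-managed thin pool can also be created directly with lvcreate's thin-pool type, rather than via the docker-storage-setup route shown below. A sketch, assuming /dev/sdd is a spare disk (requires root and lvm2, so shown commented):

```shell
# pvcreate /dev/sdd
# vgcreate docker /dev/sdd
# lvcreate --wipesignatures y --type thin-pool -l 95%VG -n docker-pool docker
# Then hand the pool to the daemon:
# docker daemon --storage-driver devicemapper \
#     --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool
```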
[root@docker-thinp docker]# pvcreate /dev/sdd
  Physical volume "/dev/sdd" successfully created
[root@docker-thinp docker]# pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               fedora
  PV Size               99.51 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              25474
  Free PE               1
  Allocated PE          25473
  PV UUID               wi6mqj-Gn3J-tkl8-Cboa-yB0q-YpCw-eLRQBX
  --- Physical Segments ---
  Physical extent 0 to 511:
    Logical volume      /dev/fedora/swap
    Logical extents     0 to 511
  Physical extent 512 to 12672:
    Logical volume      /dev/fedora/home
    Logical extents     0 to 12160
  Physical extent 12673 to 25472:
    Logical volume      /dev/fedora/root
    Logical extents     0 to 12799
  Physical extent 25473 to 25473:
    FREE
  "/dev/sdd" is a new physical volume of "186.26 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd
  VG Name
  PV Size               186.26 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               tIHe95-iVwD-yTEF-QF73-7A0j-49qW-hhTFdQ
[root@docker-thinp docker]# vgcreate docker /dev/sdd
  Volume group "docker" successfully created
[root@docker-thinp sysconfig]# cat /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
VG="docker"
[root@docker-thinp sysconfig]# docker-storage-setup
  Rounding up size to full physical extent 192.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker/docker-pool and docker/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
[root@docker-thinp sysconfig]# systemctl status docker -l
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2015-11-17 17:54:19 CET; 16s ago
     Docs: http://docs.docker.com
 Main PID: 1874 (docker)
   CGroup: /system.slice/docker.service
           └─1874 /usr/bin/docker daemon --selinux-enabled --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool
Nov 17 17:54:16 docker-thinp systemd[1]: Starting Docker Application Container Engine...
Nov 17 17:54:16 docker-thinp docker[1874]: time="2015-11-17T17:54:16.841944917+01:00" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.670495214+01:00" level=info msg="Option DefaultDriver: bridge"
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.671135310+01:00" level=info msg="Option DefaultNetwork: bridge"
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.680331917+01:00" level=info msg="Firewalld running: true"
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.777455564+01:00" level=info msg="Loading containers: start."
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.777874968+01:00" level=info msg="Loading containers: done."
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.778146581+01:00" level=info msg="Daemon has completed initialization"
Nov 17 17:54:19 docker-thinp docker[1874]: time="2015-11-17T17:54:19.778418317+01:00" level=info msg="Docker daemon" commit="cb216be/1.8.2" execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-fc22
Nov 17 17:54:19 docker-thinp systemd[1]: Started Docker Application Container Engine.
[root@docker-thinp sysconfig]# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-docker--pool
 Pool Blocksize: 524.3 kB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 62.39 MB
 Data Space Total: 79.92 GB
 Data Space Available: 79.86 GB
 Metadata Space Used: 90.11 kB
 Metadata Space Total: 201.3 MB
 Metadata Space Available: 201.2 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.5-201.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 1
Total Memory: 993.5 MiB
Name: docker-thinp
ID: VHCA:JO3X:IRF5:44RG:CFZ6:WETN:YBJ2:6IL5:BNDT:FK32:KH6E:UZED
[ray@docker-thinp ~]$ lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   100G  0 disk
├─sda1                          8:1    0   500M  0 part /boot
└─sda2                          8:2    0  99.5G  0 part
  ├─fedora-swap               253:0    0     2G  0 lvm  [SWAP]
  ├─fedora-root               253:1    0    50G  0 lvm  /
  └─fedora-home               253:4    0  47.5G  0 lvm  /home
sdd                             8:48   0 186.3G  0 disk
├─docker-docker--pool_tmeta   253:5    0   192M  0 lvm
│ └─docker-docker--pool       253:7    0  74.4G  0 lvm
└─docker-docker--pool_tdata   253:6    0  74.4G  0 lvm
  └─docker-docker--pool       253:7    0  74.4G  0 lvm
As a fallback, if no thin pool is provided, loopback files will be created. Loopback is very slow, but can be used without any pre-configuration of storage. It is strongly recommended that you do not use loopback in production: ensure your Docker daemon has a --storage-opt dm.thinpooldev argument provided.
If setting up a new metadata pool, it is required to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
dd if=/dev/zero of=$metadata_dev bs=4096 count=1
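That dd invocation can be tried safely against a scratch file standing in for $metadata_dev:

```shell
# Scratch file standing in for the real metadata device
# (e.g. /dev/direct-lvm/metadata on the host).
metadata_dev=$(mktemp)
# Zero the first 4k to mark the metadata area as empty.
dd if=/dev/zero of="$metadata_dev" bs=4096 count=1 2>/dev/null
wc -c < "$metadata_dev"   # 4096
```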
--storage-opt dm.use_deferred_deletion=true \
--storage-opt dm.use_deferred_removal=true
[root@docker-thinp ~]# lvdisplay | egrep "Allocated pool data" ; du -sh /var/lib/docker/ ; docker pull centos:6 ; du -sh /var/lib/docker ; lvdisplay | egrep "Allocated pool data"
  Allocated pool data    0.08%
28K     /var/lib/docker/
Trying to pull repository docker.io/library/centos ... 6: Pulling from library/centos
47d44cb6f252: Pull complete
2c2557968d48: Pull complete
91e6f84b8fe8: Pull complete
fea77d2fd61e: Pull complete
3bbbf0aca359: Pull complete
library/centos:6: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:7d1c9d44f0b3b81c3aa4e77b744782b021af795478e163723b34a40176bbff2a
Status: Downloaded newer image for docker.io/centos:6
640K    /var/lib/docker
  Allocated pool data    0.38%
[root@fed22-docker-registry ~]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
Generating a 4096 bit RSA private key
.......................................................................++
...............................++
writing new private key to 'certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:UK
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:Nutanix Ltd
Organizational Unit Name (eg, section) []:Solutions Eng
Common Name (eg, your name or your server's hostname) []:10.68.64.156
Email Address []:[email protected]
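The same certificate can be generated non-interactively by passing the subject on the command line; the -subj fields below mirror the values typed in the session (adjust the CN to your registry's address; the email field is omitted here for brevity):

```shell
mkdir -p certs
# One-shot, no prompts.
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt \
    -subj "/C=UK/O=Nutanix Ltd/OU=Solutions Eng/CN=10.68.64.156"
```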
ExecStart=/usr/bin/docker daemon --insecure-registry 10.68.64.156:5000 -H fd://
[root@fed22-docker-registry system]# systemctl status docker -l
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2015-11-25 15:38:08 CET; 15s ago
     Docs: https://docs.docker.com
 Main PID: 18518 (docker)
   Memory: 444.0K
   CGroup: /system.slice/docker.service
           ├─18518 /usr/bin/docker daemon --insecure-registry 10.68.64.156:5000 -H fd://
           └─18582 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 5000 -container-ip 172.17.0.1 -container-port 5000
[root@docker-directlvm ~]# docker login 10.68.64.156:5000
Username: ray
Password:
Email: [email protected]
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
time="2015-11-25T15:12:33Z" level=warning msg="error authorizing context: basic authentication challenge: htpasswd.challenge{realm:\"Registry Realm\", err:(*errors.errorString)(0xc20802ac80)}" go.version=go1.4.3 http.request.host="10.68.64.156:5000" http.request.id=362b9721-6d0f-4fa3-b5a1-d764638b92be http.request.method=GET http.request.remoteaddr="10.68.64.161:34768" http.request.uri="/v2/" http.request.useragent="docker/1.8.2-fc22 go/go1.5.1 kernel/4.2.5-201.fc22.x86_64 os/linux arch/amd64" instance.id=74d99687-2483-4a9e-a046-cab8056fe3a1 version=v2.2.0
10.68.64.161 - - [25/Nov/2015:15:12:33 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/1.8.2-fc22 go/go1.5.1 kernel/4.2.5-201.fc22.x86_64 os/linux arch/amd64"
time="2015-11-25T15:12:33Z" level=info msg="response completed" go.version=go1.4.3 http.request.host="10.68.64.156:5000" http.request.id=391b689c-2096-4225-a716-10be3f7f1e1c http.request.method=GET http.request.remoteaddr="10.68.64.161:34770" http.request.uri="/v2/" http.request.useragent="docker/1.8.2-fc22 go/go1.5.1 kernel/4.2.5-201.fc22.x86_64 os/linux arch/amd64" http.response.contenttype="application/json; charset=utf-8" http.response.duration=4.954155ms http.response.status=200 http.response.written=2 instance.id=74d99687-2483-4a9e-a046-cab8056fe3a1 version=v2.2.0
10.68.64.161 - - [25/Nov/2015:15:12:33 +0000] "GET /v2/ HTTP/1.1" 200 2 "" "docker/1.8.2-fc22 go/go1.5.1 kernel/4.2.5-201.fc22.x86_64 os/linux arch/amd64"
docker-machine
--------------
]# docker inspect --format '{{.Config.Volumes}}' postgres01
map[/var/lib/postgresql/data:{}]
docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.64' postgres01
docker pull orionapps/vol-plugin
Once the machine is created, you can do docker-machine ssh <machine-name> and run the script:
./start-volume-plugin.sh
docker run -d --name postgres01 -p 5432:5432 --volume-driver nutanix -v pgdata01:/var/lib/postgresql/data postgres:latest
docker exec -it postgres01 /bin/bash
/# psql -U postgres
psql (9.5.3)
Type "help" for help.
postgres=# CREATE DATABASE nutanix with owner postgres ;
postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 nutanix   | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(4 rows)
swarm
-----
$ docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' agent2
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with centos...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env agent2
[root@cos7-docker-machine ~]# eval $(docker-machine env manager)
[root@cos7-docker-machine ~]# docker run --rm swarm create
Unable to find image 'swarm:latest' locally
latest: Pulling from library/swarm
1e61bbec5d24: Pull complete
8c7b2f6b74da: Pull complete
245a8db4f1e1: Pull complete
Digest: sha256:661f2e4c9470e7f6238cebf603bcf5700c8b948894ac9e35f2cf6f63dcda723a
Status: Downloaded newer image for swarm:latest
feaa9e6c52498be8c53fbc8756cf84de
Alternatively, capture the token in a variable:
[root@cos7-docker-machine ~]# sid=$(docker run swarm create)
[root@cos7-docker-machine ~]# echo $sid
71cc4406e452a889cc69fdd59a53ba50
[root@cos7-docker-machine ~]# docker-machine ls | |
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS | |
agent1 - nutanix Running tcp://10.68.68.144:2376 v1.11.2 | |
agent2 - nutanix Running tcp://10.68.68.146:2376 v1.11.2 | |
manager * nutanix Running tcp://10.68.68.143:2376 v1.11.2 | |
[root@cos7-docker-machine ~]# eval $(docker-machine env agent1) | |
[root@cos7-docker-machine ~]# | |
[root@cos7-docker-machine ~]# docker-machine ip agent1 | |
10.68.68.148 | |
[root@cos7-docker-machine ~]# docker run -d swarm join --addr=$(docker-machine ip agent1):2376 token://feaa9e6c52498be8c53fbc8756cf84de | |
Unable to find image 'swarm:latest' locally | |
latest: Pulling from library/swarm | |
1e61bbec5d24: Pull complete | |
8c7b2f6b74da: Pull complete | |
245a8db4f1e1: Pull complete | |
Digest: sha256:661f2e4c9470e7f6238cebf603bcf5700c8b948894ac9e35f2cf6f63dcda723a | |
Status: Downloaded newer image for swarm:latest | |
3a1a9ecb9e76202fb1d4fd462a5f07774b45546fc72eb907b5d83983d9659555 | |
[root@cos7-docker-machine ~]# | |
[root@cos7-docker-machine ~]# | |
[root@cos7-docker-machine ~]# eval $(docker-machine env agent2) | |
[root@cos7-docker-machine ~]# docker run -d swarm join --addr=$(docker-machine ip agent2):2376 token://feaa9e6c52498be8c53fbc8756cf84de | |
Unable to find image 'swarm:latest' locally | |
latest: Pulling from library/swarm | |
1e61bbec5d24: Pull complete | |
8c7b2f6b74da: Pull complete | |
245a8db4f1e1: Pull complete | |
Digest: sha256:661f2e4c9470e7f6238cebf603bcf5700c8b948894ac9e35f2cf6f63dcda723a | |
Status: Downloaded newer image for swarm:latest | |
04709a869f17aa095d7828b5b71b3c188fa1f2c6bb58ce2261b592527313724a | |
On each machine, stop the Docker service and run the daemon listening on both TCP and the local socket, then restart:
sudo docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' local | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env local | |
[root@cos7-docker-machine ~]# eval "$(docker-machine env local)" | |
[root@cos7-docker-machine ~]# docker run swarm create | |
Unable to find image 'swarm:latest' locally | |
latest: Pulling from library/swarm | |
1e61bbec5d24: Pull complete | |
8c7b2f6b74da: Pull complete | |
245a8db4f1e1: Pull complete | |
Digest: sha256:661f2e4c9470e7f6238cebf603bcf5700c8b948894ac9e35f2cf6f63dcda723a | |
Status: Downloaded newer image for swarm:latest | |
8f11ae7846e53082b0d023ed95918786 | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' \ | |
--nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-master \ | |
--swarm-discovery token://8f11ae7846e53082b0d023ed95918786 swarm-manager | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-manager | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' \ | |
--nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm \ | |
--swarm-discovery token://8f11ae7846e53082b0d023ed95918786 node01 | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env node01 | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' \ | |
--nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm \ | |
--swarm-discovery token://8f11ae7846e53082b0d023ed95918786 node02 | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env node02 | |
[root@cos7-docker-machine ~]# eval "$(docker-machine env --swarm swarm-manager)" | |
[root@cos7-docker-machine ~]# | |
[root@cos7-docker-machine ~]# | |
[root@cos7-docker-machine ~]# docker info | |
Containers: 2 | |
Running: 2 | |
Paused: 0 | |
Stopped: 0 | |
Images: 2 | |
Server Version: swarm/1.2.3 | |
Role: primary | |
Strategy: spread | |
Filters: health, port, containerslots, dependency, affinity, constraint | |
Nodes: 3 | |
node01: 10.68.68.153:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Pending | |
+ Containers: 1 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ Error: ID duplicated. D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B shared by this node 10.68.68.153:2376 and another node 10.68.68.152:2376 | |
+ UpdatedAt: 2016-06-17T16:23:38Z | |
+ ServerVersion: 1.11.2 | |
node02: 10.68.68.154:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Pending | |
+ Containers: 1 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ Error: ID duplicated. D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B shared by this node 10.68.68.154:2376 and another node 10.68.68.152:2376 | |
+ UpdatedAt: 2016-06-17T16:23:38Z | |
+ ServerVersion: 1.11.2 | |
swarm-manager: 10.68.68.152:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Healthy | |
+ Containers: 2 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-17T16:23:25Z | |
+ ServerVersion: 1.11.2 | |
Plugins: | |
Volume: | |
Network: | |
Kernel Version: 3.10.0-327.18.2.el7.x86_64 | |
Operating System: linux | |
Architecture: amd64 | |
CPUs: 1 | |
Total Memory: 1.018 GiB | |
Name: 82c3a736e441 | |
Docker Root Dir: | |
Debug mode (client): false | |
Debug mode (server): false | |
WARNING: No kernel memory limit support | |
[root@cos7-docker-machine ~]# docker-machine ls | |
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS | |
local - nutanix Running tcp://10.68.68.151:2376 v1.11.2 | |
node01 - nutanix Running tcp://10.68.68.153:2376 swarm-manager v1.11.2 | |
node02 - nutanix Running tcp://10.68.68.154:2376 swarm-manager v1.11.2 | |
nutanix-rancher-cm6 - nutanix Running tcp://10.68.68.123:2376 v1.11.2 | |
nutanix-rancher-cm7 - nutanix Running tcp://10.68.68.142:2376 v1.11.2 | |
swarm-manager - nutanix Running tcp://10.68.68.152:2376 swarm-manager (master) v1.11.2 | |
[root@cos7-docker-machine ~]# docker run swarm list token://8f11ae7846e53082b0d023ed95918786 | |
10.68.68.152:2376 | |
10.68.68.154:2376 | |
10.68.68.153:2376 | |
overlay network:
key-value store (Consul):
$ docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' nx-keystore | |
$ eval "$(docker-machine env nx-keystore)"
$ docker run -d --restart=unless-stopped -p 8500:8500 --name "consul" progrium/consul -server -bootstrap | |
$ docker ps | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
48fb9a11325b progrium/consul "/bin/start -server -" 22 seconds ago Up 15 seconds 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp consul | |
swarm | |
$ docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-master --swarm-discovery="consul://$(docker-machine ip nx-keystore):8500" \ | |
> --engine-opt="cluster-store=consul://$(docker-machine ip nx-keystore):8500" \ | |
> --engine-opt="cluster-advertise=eth1:2376" nx-demo01 | |
argument-parsing fragment (from the volume plugin start script):
case $key in
    --prism_ip)
        PRISM_ADDRESS="$2"
        shift
        ;;
    --dataservice_ip)
        DS_ADDRESS="$2"
        shift
        ;;
    --prism_username)
        PRISM_UNAME="$2"
        shift
        ;;
    --prism_password)
        PRISM_PASWD="$2"
        shift
        ;;
    --default_container)
        CTR_NAME="$2"
        shift
        ;;
    *)
        ;;
esac
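The truncated `case` statement above can be sketched as a complete, runnable parsing loop. Variable names follow the fragment; the `parse_args` wrapper and the demo invocation values are illustrative additions, not part of the original script:

```shell
# Minimal sketch of the flag-parsing loop used by the plugin start script.
parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --prism_ip)          PRISM_ADDRESS="$2"; shift ;;
      --dataservice_ip)    DS_ADDRESS="$2";    shift ;;
      --prism_username)    PRISM_UNAME="$2";   shift ;;
      --prism_password)    PRISM_PASWD="$2";   shift ;;
      --default_container) CTR_NAME="$2";      shift ;;
      *) echo "unknown option: $1" >&2 ;;
    esac
    shift   # consume the flag itself (the value was consumed above)
  done
}

# Demo invocation mirroring the values used later in these notes.
parse_args --prism_ip 10.68.64.55 --dataservice_ip 10.68.64.254 \
           --prism_username admin --prism_password 'nutanix/4u' \
           --default_container DEFAULT-CTR
echo "prism ip address : $PRISM_ADDRESS"
echo "dataservice ip address: $DS_ADDRESS"
echo "default container : $CTR_NAME"
```

The double `shift` (one inside the case arm, one after `esac`) is what advances past both the flag and its value each iteration.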
[root@cos7-docker-machine ~]# sid=$(docker run swarm create) | |
[root@cos7-docker-machine ~]# echo $sid | |
d3e9cb54c894a7e3ae4c21145b9194bb | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-master --swarm-discovery token://$sid swarm-master | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-master | |
[root@cos7-docker-machine ~]# docker-machine ls | |
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS | |
local - nutanix Running tcp://10.68.68.151:2376 v1.11.2 | |
node01 - nutanix Running tcp://10.68.68.153:2376 swarm-manager Unknown Unable to query docker version: Cannot connect to the docker engine endpoint | |
node02 - nutanix Running tcp://10.68.68.154:2376 swarm-manager v1.11.2 | |
nutanix-rancher-cm6 - nutanix Running tcp://10.68.68.123:2376 v1.11.2 | |
nutanix-rancher-cm7 - nutanix Running tcp://10.68.68.142:2376 v1.11.2 | |
swarm-manager - nutanix Running tcp://10.68.68.152:2376 swarm-manager (master) v1.11.2 | |
swarm-master - nutanix Running tcp://10.68.68.147:2376 swarm-master (master) v1.11.2 | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-discovery token://$sid swarm01 | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm01 | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-discovery token://$sid swarm02 | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm02 | |
[root@cos7-docker-machine ~]# docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-discovery token://$sid swarm03 | |
Running pre-create checks... | |
Creating machine... | |
Waiting for machine to be running, this may take a few minutes... | |
Detecting operating system of created instance... | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Provisioning with centos... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
Configuring swarm... | |
Checking connection to Docker... | |
Docker is up and running! | |
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm03 | |
[root@cos7-docker-machine ~]# for i in 01 02 03 ; do | |
> docker-machine ssh swarm$i ./start-volume-plugin.sh --prism_ip 10.68.64.55 --dataservice_ip 10.68.64.254 --prism_username admin --prism_password nutanix/4u --default_container DEFAULT-CTR | |
> done | |
Starting Nutanix volume plugin container... | |
Redirecting to /bin/systemctl restart docker.service | |
sudo: sorry, you must have a tty to run sudo | |
prism ip address : 10.68.64.55 | |
dataservice ip address: 10.68.64.254 | |
prism username : admin | |
default container : DEFAULT-CTR | |
c3dbd4f98866015f9631ed336f16d867e3b82700a38e2c3d131118f7c29ea52c | |
Starting Nutanix volume plugin container... | |
Redirecting to /bin/systemctl restart docker.service | |
sudo: sorry, you must have a tty to run sudo | |
prism ip address : 10.68.64.55 | |
dataservice ip address: 10.68.64.254 | |
prism username : admin | |
default container : DEFAULT-CTR | |
d529a8f3c8ad06203f3244ed89bd9d49fbc04d7b7ffa4a0a93a2d46b7ac76a27 | |
Starting Nutanix volume plugin container... | |
Redirecting to /bin/systemctl restart docker.service | |
sudo: sorry, you must have a tty to run sudo | |
prism ip address : 10.68.64.55 | |
dataservice ip address: 10.68.64.254 | |
prism username : admin | |
default container : DEFAULT-CTR | |
f7a98c51fe2a2274261733138661d77c475d469db8b2420773255ac784eaf074 | |
[root@cos7-docker-machine ~]# for i in 01 02 03 ; do | |
> docker-machine ssh swarm$i docker ps | |
> done | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
c3dbd4f98866 orionapps/vol-plugin "/code/scripts/docker" 6 minutes ago Up 6 minutes NutanixVolumePlugin | |
1aeb46ce94ca swarm:latest "/swarm join --advert" 15 minutes ago Up 6 minutes 2375/tcp swarm-agent | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
d529a8f3c8ad orionapps/vol-plugin "/code/scripts/docker" 6 minutes ago Up 6 minutes NutanixVolumePlugin | |
a79be0e45061 swarm:latest "/swarm join --advert" 12 minutes ago Up 6 minutes 2375/tcp swarm-agent | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
f7a98c51fe2a orionapps/vol-plugin "/code/scripts/docker" 6 minutes ago Up 6 minutes NutanixVolumePlugin | |
3558a86b1d1d swarm:latest "/swarm join --advert" 9 minutes ago Up 6 minutes 2375/tcp swarm-agent | |
[root@cos7-docker-machine ~]# | |
[root@cos7-docker-machine ~]# docker-machine ls | |
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS | |
local - nutanix Running tcp://10.68.68.151:2376 v1.11.2 | |
node01 - nutanix Running tcp://10.68.68.153:2376 swarm-manager Unknown Unable to query docker version: Cannot connect to the docker engine endpoint | |
node02 - nutanix Running tcp://10.68.68.154:2376 swarm-manager v1.11.2 | |
nutanix-rancher-cm6 - nutanix Running tcp://10.68.68.123:2376 v1.11.2 | |
nutanix-rancher-cm7 - nutanix Running tcp://10.68.68.142:2376 v1.11.2 | |
swarm01 - nutanix Running tcp://10.68.68.155:2376 swarm-master v1.11.2 | |
swarm02 - nutanix Running tcp://10.68.68.156:2376 swarm-master v1.11.2 | |
swarm03 - nutanix Running tcp://10.68.68.157:2376 swarm-master v1.11.2 | |
swarm-manager - nutanix Running tcp://10.68.68.152:2376 swarm-manager (master) v1.11.2 | |
swarm-master - nutanix Running tcp://10.68.68.147:2376 swarm-master (master) v1.11.2 | |
[root@cos7-docker-machine ~]# eval $(docker-machine env --swarm swarm-manager)
[root@cos7-docker-machine ~]# docker-machine env --swarm swarm-manager | |
export DOCKER_TLS_VERIFY="1" | |
export DOCKER_HOST="tcp://10.68.68.152:3376" | |
export DOCKER_CERT_PATH="/root/.docker/machine/machines/swarm-manager" | |
export DOCKER_MACHINE_NAME="swarm-manager" | |
# Run this command to configure your shell: | |
# eval $(docker-machine env --swarm swarm-manager) | |
[root@cos7-docker-machine ~]# docker info | |
Containers: 1 | |
Running: 1 | |
Paused: 0 | |
Stopped: 0 | |
Images: 2 | |
Server Version: swarm/1.2.3 | |
Role: primary | |
Strategy: spread | |
Filters: health, port, containerslots, dependency, affinity, constraint | |
Nodes: 3 | |
(unknown): 10.68.68.153:2376 | |
+ ID: | |
+ Status: Pending | |
+ Containers: 0 | |
+ Reserved CPUs: 0 / 0 | |
+ Reserved Memory: 0 B / 0 B | |
+ Labels: | |
+ Error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? | |
+ UpdatedAt: 2016-06-22T14:19:26Z | |
+ ServerVersion: | |
node02: 10.68.68.154:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Healthy | |
+ Containers: 1 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-22T15:12:22Z | |
+ ServerVersion: 1.11.2 | |
swarm-manager: 10.68.68.152:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Pending | |
+ Containers: 5 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ Error: ID duplicated. D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B shared by this node 10.68.68.152:2376 and another node 10.68.68.154:2376 | |
+ UpdatedAt: 2016-06-22T15:12:17Z | |
+ ServerVersion: 1.11.2 | |
Plugins: | |
Volume: | |
Network: | |
Kernel Version: 3.10.0-327.18.2.el7.x86_64 | |
Operating System: linux | |
Architecture: amd64 | |
CPUs: 1 | |
Total Memory: 1.018 GiB | |
Name: 82c3a736e441 | |
Docker Root Dir: | |
Debug mode (client): false | |
Debug mode (server): false | |
WARNING: No kernel memory limit support | |
[root@cos7-docker-machine ~]# eval $(docker-machine env swarm-master) | |
[root@cos7-docker-machine ~]# docker ps | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
92f6e0e589da swarm:latest "/swarm join --advert" About an hour ago Up About an hour 2375/tcp swarm-agent | |
22c7c015bc83 swarm:latest "/swarm manage --tlsv" About an hour ago Up About an hour 2375/tcp, 0.0.0.0:3376->3376/tcp swarm-agent-master | |
[root@cos7-docker-machine ~]# eval $(docker-machine env --swarm swarm-master) | |
[root@cos7-docker-machine ~]# docker run swarm list token://d3e9cb54c894a7e3ae4c21145b9194bb | |
10.68.68.157:2376 | |
10.68.68.156:2376 | |
10.68.68.155:2376 | |
10.68.68.147:2376 | |
troubleshooting:
[root@cos7-docker-machine ~]# eval $( docker-machine env swarm-master) | |
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "10.68.68.147:2376": tls: DialWithDialer timed out | |
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'. | |
Be advised that this will trigger a Docker daemon restart which will stop running containers. | |
[root@cos7-docker-machine ~]# eval $( docker-machine env swarm-master) | |
[root@cos7-docker-machine ~]# eval $( docker-machine env --swarm swarm-master) | |
Error checking TLS connection: Connection to Swarm cannot be checked but the certs are valid. Maybe swarm is not started | |
[root@cos7-docker-machine ~]# docker-machine regenerate-certs swarm-master | |
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y | |
Regenerating TLS certificates | |
Waiting for SSH to be available... | |
Detecting the provisioner... | |
Copying certs to the local machine directory... | |
Copying certs to the remote machine... | |
Setting Docker configuration on the remote daemon... | |
[root@cos7-docker-machine ~]# eval $( docker-machine env --swarm swarm-master) | |
[root@cos7-docker-machine ~]# docker info    # now returns valid/expected output
# docker-machine ssh manager1 | |
docker $(docker-machine config consul) run --restart=unless-stopped -d -p "8500:8500" --name consul progrium/consul -server -bootstrap | |
docker run --restart=unless-stopped -d -p 8500:8500 -h nxconsul progrium/consul -server -bootstrap | |
docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-master --swarm-discovery="consul://$(docker-machine ip consul):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" --engine-opt="cluster-advertise=eth0:2376" manager | |
docker run --restart=unless-stopped -d -p 3375:2375 swarm manage consul://10.68.68.130:8500 | |
docker ps | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
6454f24e5991 swarm "/swarm manage consul" 17 seconds ago Up 16 seconds 0.0.0.0:3375->2375/tcp insane_pare | |
a25389c32b43 progrium/consul "/bin/start -server -" 3 minutes ago Up 3 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp adoring_swartz | |
docker-machine create -d nutanix --nutanix-username admin --nutanix-password 'nutanix/4u' --nutanix-endpoint '10.68.64.55:9440' --nutanix-vm-image Docker-Machine-Image --nutanix-vm-network 'vlan.68' --swarm --swarm-discovery="consul://$(docker-machine ip consul):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" --engine-opt="cluster-advertise=eth0:2376" node01 | |
Docker Compose | |
web:
  build: .
  links:
    - db
  ports:
    - "3000:3000"
  environment:
    NODE_ENV: development
db:
  image: mongo
  ports:
    - "27017:27017"
cloning issue:
[root@cos7-docker-machine ~]# docker info | more | |
WARNING: No kernel memory limit support | |
Containers: 10 | |
Running: 4 | |
Paused: 0 | |
Stopped: 6 | |
Images: 4 | |
Server Version: swarm/1.2.3 | |
Role: primary | |
Strategy: spread | |
Filters: health, port, containerslots, dependency, affinity, constraint | |
Nodes: 3 | |
swarm01: 10.68.68.146:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Pending | |
+ Containers: 2 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.22.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ Error: Engine (ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B, Addr: 10.68.68.146:2376) shows up with another ID:Q6JO:2KWO:5J75:I2DD:WUSL:H37F:75CR:MG2M:ICUM:3O7L:ELRG:AUVR. Please remove it from cluster, it can be added back.
+ UpdatedAt: 2016-06-27T17:00:39Z | |
+ ServerVersion: 1.11.2 | |
swarm02: 10.68.68.148:2376 | |
+ ID: RBF7:AEOD:5M2B:B4D5:IBSU:5VAE:YWIN:XZK6:FVIA:UDN2:YZ2H:ID5P | |
+ Status: Healthy | |
+ Containers: 2 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-27T17:00:36Z | |
+ ServerVersion: 1.11.2 | |
swarm-master: 10.68.68.143:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Healthy | |
+ Containers: 8 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-27T17:00:37Z | |
+ ServerVersion: 1.11.2 | |
Plugins: | |
Volume: | |
Network: | |
Kernel Version: 3.10.0-327.18.2.el7.x86_64 | |
Operating System: linux | |
Architecture: amd64 | |
CPUs: 2 | |
Total Memory: 2.036 GiB | |
Name: db8959467d0d | |
Docker Root Dir: | |
Debug mode (client): false | |
Debug mode (server): false | |
https://github.com/docker/swarm/issues/1406 | |
It seems the machines (VMs) were cloned. Stop the docker service, delete /etc/docker/key.json, then restart the docker service. The machines will then be good to go.
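A toy illustration of why the key.json fix works. This is not the real engine code: the paths are temp directories and the sha256-based "ID" is a stand-in, but the mechanism matches — the engine derives its ID from /etc/docker/key.json, so cloned VMs sharing that file report the same ID until one copy is deleted and regenerated:

```shell
# Simulate two "cloned" docker hosts sharing the same /etc/docker/key.json.
workdir=$(mktemp -d)
mkdir -p "$workdir/nodeA" "$workdir/nodeB"
echo '{"kty":"EC","d":"original-key"}' > "$workdir/nodeA/key.json"
cp "$workdir/nodeA/key.json" "$workdir/nodeB/key.json"   # the VM clone step

id_of() { sha256sum "$1" | cut -c1-12; }   # stand-in for the engine ID
idA=$(id_of "$workdir/nodeA/key.json")
idB=$(id_of "$workdir/nodeB/key.json")
echo "before fix: nodeA=$idA nodeB=$idB"   # identical -> swarm 'ID duplicated'

# The fix: remove key.json on the clone; the daemon regenerates it on restart
# (simulated here by writing a fresh key).
rm "$workdir/nodeB/key.json"
echo '{"kty":"EC","d":"regenerated-key"}' > "$workdir/nodeB/key.json"
idB=$(id_of "$workdir/nodeB/key.json")
echo "after fix:  nodeA=$idA nodeB=$idB"   # IDs now differ
```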
Nodes: 4 | |
swarm01: 10.68.68.146:2376 | |
+ ID: IKQO:ILRP:DVTR:R6CX:Y73X:TXTH:KCQI:WW3Y:HD3Y:CSAO:YMRG:36NB | |
+ Status: Healthy | |
+ Containers: 2 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.22.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-27T17:07:07Z | |
+ ServerVersion: 1.11.2 | |
swarm02: 10.68.68.148:2376 | |
+ ID: RBF7:AEOD:5M2B:B4D5:IBSU:5VAE:YWIN:XZK6:FVIA:UDN2:YZ2H:ID5P | |
+ Status: Healthy | |
+ Containers: 2 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-27T17:07:25Z | |
+ ServerVersion: 1.11.2 | |
swarm03: 10.68.68.150:2376 | |
+ ID: ADJ2:HJEN:26NC:PQ4Q:3XNJ:RYHQ:3IGC:L5KY:CQLI:3ZGR:RZLS:ZI62 | |
+ Status: Healthy | |
+ Containers: 2 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-27T17:07:01Z | |
+ ServerVersion: 1.11.2 | |
swarm-master: 10.68.68.143:2376 | |
+ ID: D6XX:YW2A:NHDW:2FOI:EKYP:AF6E:KJUN:LBBU:6TMH:NIIK:TDJQ:NE4B | |
+ Status: Healthy | |
+ Containers: 8 | |
+ Reserved CPUs: 0 / 1 | |
+ Reserved Memory: 0 B / 1.018 GiB | |
+ Labels: executiondriver=, kernelversion=3.10.0-327.18.2.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), provider=nutanix, storagedriver=devicemapper | |
+ UpdatedAt: 2016-06-27T17:07:10Z | |
+ ServerVersion: 1.11.2 | |
[root@cos7-docker-machine ~]# for i in $(seq 1 10) ; do docker run -dt --name ray$i ubuntu /bin/bash ; done
49d01ed50020273ee5991ffda904ce3f79ab4cedbbdff236611f3c5c33353e7a | |
c892c6d448fa42166f42a1a60a33db09a2a3541492d0e938a02a5bf85f10c44f | |
c9e7be4d5f7bc03ddad0a10abce0e4796bfe1a57a4587f730fad23b8ee98d152 | |
00440d8ddc69b0349494f30b13b734617bcc1d955d723202f28a7c6d54ca7164 | |
077928558b86581a30401928cfe9540262a53a65401ce83dfaed439e26ebe073 | |
7603bedb58981725d2dc121ddb8e8f7b414caabdac424239dc56b1720bd07757 | |
5166032e144644568254a9f732868c294cdc97e6fb1004730a62b38fcc91936d | |
e7d2abeea4fec226a095f588ff9efe07081835b934ba1c52c3745963f7b83411 | |
8a912c021124927ff86c6ef2a2acd234caade5763565e34ed15338124e67591f | |
0a14564e67e57c77ff0cd54f347b1e2667fec74d4b617d72e4239a0f859c322c | |
[root@cos7-docker-machine ~]# docker ps | |
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES | |
0a14564e67e5 ubuntu "/bin/bash" 4 minutes ago Up 3 minutes swarm03/ray10 | |
8a912c021124 ubuntu "/bin/bash" 4 minutes ago Up 3 minutes swarm01/ray9 | |
e7d2abeea4fe ubuntu "/bin/bash" 4 minutes ago Up 4 minutes swarm03/ray8 | |
5166032e1446 ubuntu "/bin/bash" 4 minutes ago Up 4 minutes swarm02/ray7 | |
7603bedb5898 ubuntu "/bin/bash" 4 minutes ago Up 4 minutes swarm01/ray6 | |
077928558b86 ubuntu "/bin/bash" 4 minutes ago Up 4 minutes swarm03/ray5 | |
00440d8ddc69 ubuntu "/bin/bash" 4 minutes ago Up 4 minutes swarm02/ray4 | |
c9e7be4d5f7b ubuntu "/bin/bash" 4 minutes ago Up 4 minutes swarm03/ray3 | |
c892c6d448fa ubuntu "/bin/bash" 5 minutes ago Up 4 minutes swarm01/ray2 | |
49d01ed50020 ubuntu "/bin/bash" 5 minutes ago Up 5 minutes swarm02/ray1 | |
docker run --rm swarm -l debug list consul://10.128.1.65:8500/swarm | |
curl http://10.128.1.65:8500/v1/kv/swarm?recurse | python -m json.tool | |
docker-machine create -d nutanix --nutanix-endpoint 10.4.88.30:9440 --nutanix-username "admin" --nutanix-password "nutanix/4u" --nutanix-vm-image "docker-img" --nutanix-vm-network "vmnet" --nutanix-vm-cores 1 --nutanix-vm-cpus 4 --nutanix-vm-mem 4096 docker4
FROM centos:centos6 | |
MAINTAINER "ray hassan" [email protected] | |
RUN groupadd mongod && useradd mongod -g mongod | |
COPY mongodb3.2-repo /etc/yum.repos.d/mongodb.repo | |
RUN yum update -y yum && yum install -y mongodb-org | |
RUN mkdir -p /data/db && chown -R mongod:mongod /data/db | |
VOLUME ["/data/db"] | |
WORKDIR /data | |
EXPOSE 27017 | |
CMD ["/usr/bin/mongod"] | |