On your EC2 instance, install REX-Ray using:
$ curl -sSL https://dl.bintray.com/emccode/rexray/install | sh -s -- stable 0.3.3
With REX-Ray installed, create a new configuration file and add the following contents:
$ sudo vi /etc/rexray/config.yml
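For the EC2/EBS driver, the 0.3.x configuration selects a storage driver and supplies AWS credentials. A minimal sketch; `MyAccessKey`/`MySecretKey` are placeholders you must replace with your own IAM credentials, and the exact keys supported by your REX-Ray version are listed in its README:

```yaml
rexray:
  storageDrivers:
  - ec2
aws:
  accessKey: MyAccessKey
  secretKey: MySecretKey
```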
/**
 * ST_Anything_Doors Device Type - ST_Anything_Doors.device.groovy
 *
 * Copyright 2015 Daniel Ogorchock
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
//******************************************************************************************
// File: ST_Anything_Doors.ino
// Authors: Dan G Ogorchock & Daniel J Ogorchock (Father and Son)
//
// Summary: This Arduino Sketch, along with the ST_Anything library and the revised SmartThings
//          library, demonstrates the ability of one Arduino + SmartThings Shield to
//          implement a multi input/output custom device for integration into SmartThings.
//          The ST_Anything library takes care of all of the work to schedule device updates
//          as well as all communications with the SmartThings Shield.
//
/**
 * ST_Anything Doors Multiplexer - ST_Anything_Doors_Multiplexer.smartapp.groovy
 *
 * Copyright 2015 Daniel Ogorchock
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
/**
 * Virtual Contact Sensor Device Type - VirtualContactSensor.device.groovy
 *
 * Copyright 2014 Daniel Ogorchock
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
Current status: FAILING.
The rexray.sock file is never created under the /var/run/docker/plugins directory. However, everything in /var/run/rexray and /var/run/libstorage appears to be in working order. /var/log/rexray/rexray.log shows the error:
time="2017-01-11T20:46:34Z" level=panic msg="error initializing instance ID cache" inner.lsx="/var/lib/libstorage/lsx-linux" inner.args=[scaleio instanceID] inner.inner.Stderr=[101 114 114 111 114 58 32 101 114 114 111 114 32 103 101 116 116 105 110 103 32 105 110 115 116 97 110 99 101 32 73 68 58 32 112 114 111 98 108 101 109 32 103 101 116 116 105 110 103 32 115 100 99 32 103 117 105 100 10]
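The `inner.inner.Stderr` field is logged as raw decimal byte values. Decoding them back to text (a quick sketch using awk's `%c` format, which prints the character for each numeric value) reveals the underlying ScaleIO failure:

```shell
# Stderr byte values copied from /var/log/rexray/rexray.log
bytes="101 114 114 111 114 58 32 101 114 114 111 114 32 103 101 116 116 105 110 103 32 105 110 115 116 97 110 99 101 32 73 68 58 32 112 114 111 98 108 101 109 32 103 101 116 116 105 110 103 32 115 100 99 32 103 117 105 100 10"

# Convert each decimal value back into its ASCII character
echo "$bytes" | awk '{ for (i = 1; i <= NF; i++) printf "%c", $i }'
# → error: error getting instance ID: problem getting sdc guid
```

That is, the ScaleIO driver cannot obtain the SDC GUID, which typically points at the ScaleIO Data Client (SDC) not being installed or running on the host.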
This procedure deploys Docker for AWS and walks through building REX-Ray containers. Expect some hiccups, because Docker for AWS provisions resources across multiple availability zones (AZs): workers/agents are spread across AZs (not regions), so a host failure triggers Swarm to restart containers, potentially in a different AZ. If a container is restarted in a different AZ, REX-Ray's pre-emption mechanism will not work, because the container no longer has access to the volume left behind in the former AZ.
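One way to mitigate this (a sketch, not part of the original walkthrough) is to pin stateful services to a single AZ with a Swarm placement constraint. The `availability_zone` label name is an assumption about how you label your nodes, and `pg`/`pgdata` are illustrative names:

```shell
# Record the AZ on a worker node (label name is illustrative)
docker node update --label-add availability_zone=us-east-1a <node-id>

# Constrain the service so Swarm only reschedules it inside that AZ
docker service create --name pg \
  --constraint 'node.labels.availability_zone == us-east-1a' \
  --mount type=volume,src=pgdata,dst=/var/lib/postgresql/data,volume-driver=rexray \
  postgres
```

The trade-off is that a full AZ outage leaves the service down until the AZ recovers, but the volume and container never end up in different zones.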
SSH into one of your Docker Manager Nodes
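Docker for AWS nodes accept SSH as the `docker` user with the key pair chosen at stack-creation time; the key filename and manager IP below are placeholders:

```shell
$ ssh -i ~/.ssh/docker-for-aws.pem docker@<manager-public-ip>
```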
#!/bin/bash
# This script will pre-install everything needed to install Harbor on CentOS 7
# It will install Harbor using the Online Version which pulls images from DockerHub

# Python & Docker Pre-reqs
yum install gcc openssl-devel bzip2-devel wget yum-utils device-mapper-persistent-data lvm2 -y

# Install Python 2.7.15
cd /usr/src
The OpenFaaS documentation for faas-netes gives a clear explanation of how to install with Helm, but Pivotal Container Service (PKS) has two caveats: provisioned Kubernetes clusters are non-RBAC but token-backed, and LoadBalancer services are provided through NSX-T. This is a quick streamlining of that documentation which adds out-of-the-box support for PKS.
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
$ kubectl -n kube-system create sa tiller && kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --skip-refresh --upgrade --service-account tiller
$ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
$ helm repo add openfaas https://openfaas.github.io/faas-netes/
$ helm repo update && helm upgrade openfaas
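The upgrade command typically continues with the chart name and a few values. A hedged sketch, assuming the standard faas-netes chart options (`functionNamespace`, `serviceType`) from the chart's README; `serviceType=LoadBalancer` is what lets NSX-T hand the gateway an external address on PKS:

```shell
$ helm repo update \
 && helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas \
    --set functionNamespace=openfaas-fn \
    --set serviceType=LoadBalancer
```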