The release of OpenShift Container Platform 3.6 brings support for the vSphere cloud provider. This provides dynamic provisioning of vSphere VMDKs as persistent volumes for container workloads. Storage presented to vSphere virtual machines as a VMDK has the ReadWriteOnce access mode.
In the OCP 3.6 on vSphere <a href="https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_and_managing_openshift_container_platform_3.6_on_vmware_vsphere/">reference architecture</a>, much of this process is automated and can be implemented easily.
Virtual Machine Disks, or VMDKs, exist within virtual machines. Configuring the OCP cluster for vSphere cloud provider support requires:
<ul>
<li>Master Node Configuration</li>
<li>Application Node Configuration</li>
<li>vCenter or vSphere Requirements</li>
</ul>
The <a href="https://kubernetes.io/docs/getting-started-guides/vsphere/">Kubernetes docs</a> do a great job of highlighting the requirements for vSphere.
First, both the masters and the nodes need the following parameters set and their services restarted:
<ul>
<li>cloud-provider=vsphere</li>
<li>cloud-config=path_to_config.conf</li>
</ul>
<h4><strong>Master configuration</strong></h4>
[code language="bash"]
$ vi /etc/origin/master/master-config.yaml
kubernetesMasterConfig:
  apiServerArguments:
    cloud-config:
    - /etc/vsphere/vsphere.conf
    cloud-provider:
    - vsphere
  controllerArguments:
    cloud-config:
    - /etc/vsphere/vsphere.conf
    cloud-provider:
    - vsphere
$ sudo systemctl restart atomic-openshift-master.service
[/code]
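After the restart, a rough sanity check is to look for the vSphere cloud provider being initialized in the master journal. The exact log text varies by release, so treat the grep pattern below as an assumption rather than an authoritative string:
[code language="bash"]
$ journalctl -u atomic-openshift-master.service --since "10 minutes ago" | grep -i vsphere
[/code]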
<h4><strong>Node configuration</strong></h4>
[code language="bash"]
$ vi /etc/origin/node/node-config.yaml
kubeletArguments:
  cloud-config:
  - /etc/vsphere/vsphere.conf
  cloud-provider:
  - vsphere
$ sudo systemctl restart atomic-openshift-node.service
[/code]
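The same edit and restart have to happen on every node in the cluster. As a sketch, assuming the standard OCP Ansible inventory with a <em>nodes</em> group, an ad-hoc Ansible command can restart the node service everywhere at once:
[code language="bash"]
$ ansible nodes -b -m systemd -a "name=atomic-openshift-node state=restarted"
[/code]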
Next, the vsphere.conf file should loosely resemble the following:
[code language="bash"]
$ cat /etc/vsphere/vsphere.conf
[Global]
user = "[email protected]"
password = "vcenter_password"
server = "10.*.*.25"
port = 443
insecure-flag = 1
datacenter = Boston
datastore = ose3-vmware-prod
working-dir = /Boston/vm/ocp36/
[Disk]
scsicontrollertype = pvscsi
[/code]
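Because vsphere.conf carries the vCenter credentials in plain text, it is worth restricting the file to root on every host that has a copy; a minimal example:
[code language="bash"]
$ sudo chown root:root /etc/vsphere/vsphere.conf
$ sudo chmod 600 /etc/vsphere/vsphere.conf
[/code]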
The variables are discussed in more detail in the Kubernetes documentation. The working-dir variable is the folder that houses the OpenShift guest machines. In vSphere, the folder syntax will be /Datacenter/vm/foldername.
<h4><strong>Using govc</strong></h4>
The govc tool is a Go-based CLI for interacting with vSphere and vCenter.
The documentation for govc is located on GitHub:
<a href="https://github.com/vmware/govmomi/tree/master/govc">https://github.com/vmware/govmomi/tree/master/govc</a>
First, download govc, then export the variables that govc needs to query vCenter:
[code language="bash"]
$ curl -LO https://github.com/vmware/govmomi/releases/download/v0.15.0/govc_linux_amd64.gz
$ gunzip govc_linux_amd64.gz
$ chmod +x govc_linux_amd64
$ sudo cp govc_linux_amd64 /usr/bin/govc
$ export GOVC_URL='vCenter IP OR FQDN'
$ export GOVC_USERNAME='[email protected]'
$ export GOVC_PASSWORD='vCenter Password'
$ export GOVC_INSECURE=1
$ govc ls
/Boston/vm
/Boston/network
/Boston/host
/Boston/datastore
$ govc ls /Boston/vm
/Boston/vm/ocp36
$ govc ls /Boston/vm/ocp36
/Boston/vm/ocp36/haproxy-0
/Boston/vm/ocp36/app-0
/Boston/vm/ocp36/infra-0
/Boston/vm/ocp36/haproxy-1
/Boston/vm/ocp36/master-0
/Boston/vm/ocp36/nfs-0
[/code]
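If the listing above fails, a quick way to isolate authentication problems, assuming the same GOVC_* variables are exported, is to run govc about, which only prints the vCenter product name and version:
[code language="bash"]
$ govc about
[/code]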
<h4><strong>vSphere Prerequisites</strong></h4>
<ul>
<li>disk.enableUUID</li>
</ul>
Next, the disk.enableUUID parameter needs to be set on all virtual machines in the OCP cluster. This option is necessary so that the VMDK always presents a consistent UUID to the VM, which allows the new disk to be mounted properly.
<img class="alignnone wp-image-440099" src="https://developers.redhat.com/blog/wp-content/uploads/2017/10/enableUUID.png" alt="enableUUID" width="1153" height="354" />
The enableUUID option can be set on the template or on individual VMs in the vCenter client; the template can then be used to deploy the OpenShift VMs.
Additionally, govc can be used to set this as well:
[code language="bash"]
for VM in $(govc ls /Boston/vm/ocp36/); do govc vm.change -e="disk.enableUUID=1" -vm="$VM"; done
[/code]
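To spot-check that the parameter actually took effect on a given guest, govc can dump the VM's ExtraConfig; the folder and VM name below assume the same layout as the listing above:
[code language="bash"]
$ govc vm.info -e /Boston/vm/ocp36/master-0 | grep disk.enableUUID
[/code]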
Lastly, with the cloud provider configuration in place, a new storage class can be created for deploying VMDKs.
<h4><strong>Storage Classes and VMDK Provisioning</strong></h4>
The datastore storage class can be created with the following YAML file:
[code language="bash"]
$ vi cloud-provider-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "ose3-vmware-prod"
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: "ose3-vmware-prod"
$ oc create -f cloud-provider-storage-class.yaml
[/code]
Now that the <em>StorageClass</em> object is created, the <strong><em>oc</em></strong> command can be used to verify that the <em>StorageClass</em> exists.
[code language="bash"]
$ oc get sc
NAME               TYPE
ose3-vmware-prod   kubernetes.io/vsphere-volume
$ oc describe sc ose3-vmware-prod
Name:           ose3-vmware-prod
IsDefaultClass: No
Annotations:    <none>
Provisioner:    kubernetes.io/vsphere-volume
Parameters:     datastore=ose3-vmware-prod,diskformat=zeroedthick
Events:         <none>
[/code]
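The output above shows IsDefaultClass: No. If claims that do not name a storage class should land on this class, it can be annotated as the default. The annotation key below is the beta key from the Kubernetes 1.6 timeframe; treat it as an assumption and confirm the key for your release:
[code language="bash"]
$ oc annotate storageclass ose3-vmware-prod storageclass.beta.kubernetes.io/is-default-class=true
[/code]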
OpenShift can now dynamically provision <em>VMDKs</em> for persistent container storage within the OpenShift environment.
[code language="bash"]
$ vi storage-class-vmware-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ose3-vmware-prod
  annotations:
    volume.beta.kubernetes.io/storage-class: ose3-vmware-prod
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
$ oc create -f storage-class-vmware-claim.yaml
$ oc describe pvc ose3-vmware-prod
Name:          ose3-vmware-prod
Namespace:     default
StorageClass:  ose3-vmware-prod
Status:        Bound
Volume:        pvc-cc8a9970-7c76-11e7-ae86-005056a571ee
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-class=vmware-datastore-ssd
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Capacity:      2Gi
Access Modes:  RWO
Events:
  FirstSeen  LastSeen  Count  From                         SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                         -------------  ----    ------                 -------
  19s        19s       1      persistentvolume-controller                 Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-cc8a9970-7c76-11e7-ae86-005056a571ee using kubernetes.io/vsphere-volume
[/code]
Now, in vCenter, a couple of changes are initiated:
First, the new disk is created.
<img class="alignnone wp-image-440101" src="https://developers.redhat.com/blog/wp-content/uploads/2017/10/ocp-new-disk.png" alt="ocp-new-disk" width="711" height="153" />
Second, the disk is ready to be attached to a VM and consumed by a pod; a minimal example of that wiring follows below.
<img class="alignnone wp-image-440100" src="https://developers.redhat.com/blog/wp-content/uploads/2017/10/ocp-new-disk-backend.png" alt="" width="441" height="52" />
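To actually attach the new VMDK to a pod, the claim is referenced like any other PVC. The pod below is only a minimal sketch of that wiring; the busybox image, pod name, and mount path are arbitrary choices rather than part of the reference architecture:
[code language="bash"]
$ vi vmdk-test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: vmdk-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vmdk-storage
      mountPath: /data
  volumes:
  - name: vmdk-storage
    persistentVolumeClaim:
      claimName: ose3-vmware-prod
$ oc create -f vmdk-test-pod.yaml
[/code]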
Lastly, while datastores are generally accessible via shared storage across a vCenter cluster, a VMDK is attached to one specific virtual machine at a time. This explains the <em>ReadWriteOnce</em> limitation of the persistent storage.