Davis Phillips (dav1x)

  • Red Hat
  • Austin, TX
module:
  auth:
    community: syseng
  apcups:
    walk:
      - 1.3.6.1.2.1.1.3
      - 1.3.6.1.2.1.2
      - 1.3.6.1.4.1.318.1.1.1.12
      - 1.3.6.1.4.1.318.1.1.1.2
      - 1.3.6.1.4.1.318.1.1.1.3
---
- name: Copy cloud provider storage class file
  template:
    src: cloud-provider-storage-class.yaml.j2
    dest: ~/cloud-provider-storage-class.yaml

- name: Copy cloud provider storage class file to single master
  fetch:
    src: ~/cloud-provider-storage-class.yaml
    dest: ~/cloud-provider-storage-class.yaml
Keeping your OpenShift Container Platform HAproxy Highly Available with Keepalived
A typical OpenShift Container Platform deployment will have multiple master, app and infra nodes for high availability. In that case there is no single point of failure for the cluster, unless only a single HAproxy server is configured. The following article will discuss how to configure Keepalived for maximum HAproxy uptime. In the OCP on vSphere reference architecture [https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_a_red_hat_openshift_container_platform_3_on_vmware_vcenter_6/], two HAproxy virtual machines are configured, and the Ansible playbooks set up Keepalived with a virtual IP address for the Virtual Router Redundancy Protocol (VRRP).
Load Balancer Options
The load balancer distributes traffic across two different groups: HAproxy serves port 8443 for the masters, and ports 80 and 443 for the routers on the infra nodes. The reference architecture provides a couple of different options.
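As a sketch of the port layout described above (frontend/backend names and server hostnames are examples, not taken from the reference architecture), an HAproxy configuration separating the two traffic groups might look like:

```
# /etc/haproxy/haproxy.cfg (fragment) -- hostnames and names are placeholders
frontend masters
    bind *:8443
    mode tcp
    default_backend masters-be

backend masters-be
    mode tcp
    balance source
    server master-0 master-0.example.com:8443 check
    server master-1 master-1.example.com:8443 check

frontend routers-https
    bind *:443
    mode tcp
    default_backend routers-https-be

backend routers-https-be
    mode tcp
    balance source
    server infra-0 infra-0.example.com:443 check
    server infra-1 infra-1.example.com:443 check
```

An equivalent frontend/backend pair on port 80 would handle plain HTTP router traffic.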
This article will discuss the entire process of adding another HAproxy server and configuring both for high availability with Keepalived. Keepalived is routing software written in C. In this configuration there will be a backup and a master instance.
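A minimal Keepalived pair for this setup can be sketched as follows (the interface name, virtual_router_id, VIP, and password are placeholders): the master advertises VRRP with the higher priority, and the backup claims the virtual IP if those advertisements stop or HAproxy dies on the master.

```
# /etc/keepalived/keepalived.conf on the MASTER node -- values are examples
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while an haproxy process exists
    interval 2
}

vrrp_instance VI_1 {
    state MASTER                  # use BACKUP on the second HAproxy VM
    interface ens192
    virtual_router_id 51
    priority 100                  # use a lower value (e.g. 90) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        10.19.114.250             # example VIP fronting both HAproxy VMs
    }
    track_script {
        chk_haproxy
    }
}
```

The backup node carries an almost identical file with `state BACKUP` and a lower `priority`; clients point at the VIP rather than at either HAproxy host.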
[root@master-0 ~]# cat cloud-provider-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "ose3-vmware-prod"
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: "ose3-vmware-prod"
[Global]
user = "[email protected]"
password = "xxxxxxx"
server = "10.x.x.25"
port = 443
insecure-flag = 1
datacenter = Boston
datastore = ose3-vmware-prod
working-dir = /Boston/vm/ocp36/

[Disk]
scsicontrollertype = pvscsi
Post cloud provider 3.7
status:
  addresses:
  - address: 10.19.114.241
    type: ExternalIP
  - address: 10.19.114.241
    type: InternalIP
  - address: master-0
    type: Hostname
[root@e2e-vsphere1:~] vim-cmd vmsvc/get.guest 388
Guest information:

(vim.vm.GuestInfo) {
   toolsStatus = "toolsOk",
   toolsVersionStatus = "guestToolsUnmanaged",
   toolsVersionStatus2 = "guestToolsUnmanaged",
   toolsRunningStatus = "guestToolsRunning",
   toolsVersion = "10277",
   toolsInstallType = "guestToolsTypeOpenVMTools",
osm_controller_args:
  cloud-provider:
  - "vsphere"
  cloud-config:
  - "/etc/origin/cloudprovider/vsphere.conf"
osm_api_server_args:
  cloud-provider:
  - "vsphere"
  cloud-config:
  - "/etc/origin/cloudprovider/vsphere.conf"
[root@app-0 ~]# ovs-ofctl -O openflow13 dump-flows br0
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=240384.006s, table=0, n_packets=0, n_bytes=0, priority=250,ip,in_port=2,nw_dst=224.0.0.0/4 actions=drop
cookie=0x0, duration=240384.041s, table=0, n_packets=12460, n_bytes=523320, priority=200,arp,in_port=1,arp_spa=172.16.0.0/16,arp_tpa=172.16.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
cookie=0x0, duration=240384.037s, table=0, n_packets=198323, n_bytes=13443986, priority=200,ip,in_port=1,nw_src=172.16.0.0/16 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
cookie=0x0, duration=240384.033s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_dst=172.16.0.0/16 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
cookie=0x0, duration=240383.994s, table=0, n_packets=91, n_bytes=3822, priority=200,arp,in_port=2,arp_spa=172.16.0.1,arp_tpa=172.16.0.0/16 actions=goto_table:30
cookie=0x0, duration=240383.990s, table=0, n_packets=704850, n_by