Please watch: https://www.youtube.com/watch?v=yFPYGeKwmpk
Based on https://docs.openshift.org/latest/install_config/install/advanced_install.html, we will install OpenShift Origin on a single host that will act as a single, schedulable master node.
AWS Requirements:
- RHEL or CentOS 7 - EC2 instance with an Elastic IP, to act as the master node
- With an EBS volume attached for the docker storage setup
- Your EC2 key pair (pem file)
- Ensure the root volume is 40 GB or larger
Azure Requirements:
- RHEL or CentOS 7 (OpenLogic) - VM with a Static IP, to act as the master node
- With an additional VHD data disk attached for the docker storage setup
- Ensure the root volume is 40 GB or larger
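Before preparing the hosts, it is worth confirming the extra data disk and the root volume size. A quick check (on EC2 the attached EBS volume typically shows up as /dev/xvdb, the device used in the docker-storage-setup step below; on Azure the VHD data disk name may differ, so adjust to what lsblk reports):
## List block devices; the extra docker-storage disk should appear with no partitions
lsblk
## Confirm the root volume is 40 GB or larger
df -h /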
Prepare the nodes:
- https://docs.openshift.org/latest/install_config/install/prerequisites.html
- https://docs.openshift.org/latest/install_config/install/host_preparation.html
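The host preparation page mostly boils down to installing a handful of base packages and updating each host. A rough sketch of those steps on CentOS 7 (verify the exact package list against the linked documentation for your Origin version):
## Base utilities expected by the installer, per the host preparation guide
yum -y install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
yum -y update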
Important tasks not to forget:
## Install NetworkManager (not part of the documentation)
## Fixes ansible playbook errors about this package not being installed
yum -y install NetworkManager
systemctl enable NetworkManager
systemctl start NetworkManager
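A quick sanity check that NetworkManager is actually running and managing the interfaces before the playbooks rely on it:
## NetworkManager should be active and the primary interface should show as managed
systemctl status NetworkManager --no-pager
nmcli device status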
## Install docker
yum -y install docker
sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --log-opt max-size=1M --log-opt max-file=3"' /etc/sysconfig/docker
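To confirm the sed edit landed as intended, the OPTIONS line can be checked before moving on:
## The OPTIONS line should now carry the selinux and log-opt flags
grep ^OPTIONS /etc/sysconfig/docker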
## Configure docker-storage-setup
lvmconf --disable-cluster
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/xvdb
VG=docker-vg
WIPE_SIGNATURES=true
EOF
## Setup docker storage
rm -fr /var/lib/docker
docker-storage-setup
## Start docker
systemctl enable docker && \
systemctl start docker
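If docker-storage-setup succeeded, docker should now be running on an LVM thin pool carved out of the docker-vg volume group. A quick verification (the thin pool name is whatever docker-storage-setup created, typically docker-pool):
## Check the volume group and thin pool created by docker-storage-setup
vgs docker-vg
lvs docker-vg
## docker should report the devicemapper storage driver backed by the thin pool
docker info | grep -iE 'storage driver|pool name'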
References:
- https://docs.openshift.org/latest/install_config/install/advanced_install.html
- https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example
Example Ansible inventory (saved as openshift-hosts.txt) for the AWS setup:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=centos
deployment_type=origin
# Master node time sync
openshift_clock_enabled=true
# If ansible_ssh_user is not root, ansible_sudo must be set to true
ansible_become=true
# enable htpasswd authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/openshift-passwd'}]
# EC2PUBLICIP below is the public IP of the master instance (private hostname ip-172-31-27-247.ap-southeast-1.compute.internal)
openshift_master_default_subdomain=ose.EC2PUBLICIP.xip.io
# default project node selector
osm_default_node_selector='region=apps'
# default selectors for router and registry services
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'
# disable strict production setup check
openshift_disable_check=docker_storage,memory_availability
# so builds can push successfully to the insecure registry using its default cidr block
openshift_docker_insecure_registries=172.30.0.0/16
# Configure dnsIP in the node config
openshift_dns_ip=172.30.0.1
# host group for masters
[masters]
ip-172-31-27-247.ap-southeast-1.compute.internal openshift_schedulable=true
[etcd]
ip-172-31-27-247.ap-southeast-1.compute.internal
# host group for nodes, includes region info
[nodes]
ip-172-31-27-247.ap-southeast-1.compute.internal openshift_node_labels="{'region': 'infra'}"
ip-172-31-22-154.ap-southeast-1.compute.internal openshift_node_labels="{'region': 'apps', 'zone': 'default'}"
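Before running the playbooks, it helps to confirm Ansible can reach every host in the inventory with passwordless SSH. A minimal connectivity check, assuming the inventory above is saved as openshift-hosts.txt (the key pair path is a placeholder):
## All hosts in the OSEv3 group should return pong
ansible -i openshift-hosts.txt OSEv3 -m ping --private-key <YOUR_SSH_KEY_PAIR>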
Example Ansible inventory for the Azure setup:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=john.bryan.j.sazon
deployment_type=origin
# Master node time sync
openshift_clock_enabled=true
# If ansible_ssh_user is not root, ansible_sudo must be set to true
ansible_become=true
# enable htpasswd authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/openshift-passwd'}]
openshift_master_default_subdomain=origin.55.41.148.233.xip.io
# default project node selector
osm_default_node_selector='region=apps'
# default selectors for router and registry services
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'
# disable strict production setup check
openshift_disable_check=docker_storage,memory_availability,disk_availability
# so builds can push successfully to the insecure registry using its default cidr block
openshift_docker_insecure_registries=172.30.0.0/16
# Configure dnsIP in the node config
openshift_dns_ip=172.30.0.1
# Enable unsupported configurations, things that will yield a partially
# functioning cluster but would not be supported for production use
openshift_enable_unsupported_configurations=false
# openshift-ansible will wait indefinitely for your input when it detects that the
# value of openshift_hostname resolves to an IP address not bound to any local
# interfaces. This mis-configuration is problematic for any pod leveraging host
# networking and liveness or readiness probes.
# Setting this variable to true will override that check.
#openshift_override_hostname_check=true
openshift_override_hostname_check=true
openshift_master_cluster_hostname=public-master.eastus.cloudapp.azure.com
openshift_master_cluster_public_hostname=public-master.eastus.cloudapp.azure.com
openshift_master_cluster_public_vip=55.41.148.233
[masters]
55.41.148.233 openshift_schedulable=true
[etcd]
55.41.148.233
[nodes]
55.41.148.233 openshift_node_labels="{'region': 'infra'}" openshift_hostname=openshift-master01
56.114.47.192 openshift_node_labels="{'region': 'apps', 'zone': 'default'}" openshift_hostname=openshift-worker01
Ensure Ansible is installed on the control server:
yum install -y ansible pyOpenSSL python-cryptography python-lxml
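The playbooks referenced below live in the openshift-ansible repository (see References). A sketch of fetching them on the control server; the release branch here is only an example, pick the one that matches your Origin version:
## Clone the installer playbooks and switch to the matching release branch
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible && git checkout release-3.7 && cd ..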
Run the installation playbooks:
ansible-playbook openshift-ansible/playbooks/byo/openshift_facts.yml --private-key <YOUR_SSH_KEY_PAIR> -i openshift-hosts.txt
ansible-playbook openshift-ansible/playbooks/byo/config.yml --private-key <YOUR_SSH_KEY_PAIR> -i openshift-hosts.txt
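Once config.yml finishes, the cluster state can be checked from the master (node names will match the inventory hostnames):
## All nodes should report Ready, and the router/registry pods should be running
oc get nodes
oc get pods -n default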
- Create the initial user
htpasswd -b /etc/origin/openshift-passwd admin admin
oadm policy add-role-to-user cluster-admin admin
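With the htpasswd entry and the cluster-admin role in place, the new user can be verified (the credentials match the htpasswd command above):
## Log in as the admin user and confirm the identity
oc login -u admin -p admin https://EC2PUBLICIP:8443
oc whoami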
- Access the console using https://EC2PUBLICIP:8443/console/
- Ensure all nodes and masters exclude origin and docker updates (the playbook does this for us)
## To exclude docker updates when running yum update
origin-docker-excluder exclude
## To exclude origin updates when running yum update
origin-excluder exclude
cat /etc/yum.conf
should have something like:
exclude= docker*1.20* docker*1.19* docker*1.18* docker*1.17* docker*1.16* docker*1.15* docker*1.14* docker*1.13* tuned-profiles-origin-node origin-tests origin-sdn-ovs origin-recycle origin-pod origin-node origin-master origin-dockerregistry origin-clients-redistributable origin-clients origin
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
- Ensure that the subdomain is accessible http://ose.EC2PUBLICIP.xip.io
- Test the push to the Docker registry by creating a sample app (see the example after this list)
- Enable Cockpit
- Add Persistent Volume to Registry
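For the registry push test, a hedged sketch using a standard S2I builder; the builder image and sample repository below are just examples, any source-to-image build that pushes to the internal registry will do:
## A successful build implies the push to the internal registry worked
oc new-project sample-app
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
oc logs -f bc/ruby-ex
## Expose the service to test routing through the default subdomain
oc expose svc/ruby-ex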
===========================================
Comments on running with PRIVATE KEY:
===========================================