By default, the reference architecture playbooks are configured to deploy 3 master, 3 application, and 3 infrastructure nodes. This cluster size provides enough resources to get started with deploying a few test applications or a Continuous Integration workflow example. However, as the cluster begins to be utilized by more teams and projects, it will become necessary to provision more application or infrastructure nodes to support the expanding environment. To facilitate growing the cluster, the add-node.py python script (similar to ocp-on-vmware.py) is provided in the openshift-ansible-contrib repository. It provisions either an application or infrastructure node per run and can be run as many times as needed.
Verify the quantity and type of the nodes in the cluster by using the oc get nodes command. The output below is an example of a complete OpenShift environment after the deployment of the reference architecture environment.
$ oc get nodes
NAME                    STATUS                     AGE
master-0.example.com    Ready,SchedulingDisabled   14m
master-1.example.com    Ready,SchedulingDisabled   14m
master-2.example.com    Ready,SchedulingDisabled   14m
infra-0.example.com     Ready                      14m
infra-1.example.com     Ready                      14m
infra-2.example.com     Ready                      14m
app-0.example.com       Ready                      14m
app-1.example.com       Ready                      14m
app-2.example.com       Ready                      14m
The python script add-node.py is operationally similar to the ocp-on-vmware.py script. Parameters can optionally be passed in when calling the script, and remaining values are read from ocp-on-vmware.ini. Any required parameters not already set will automatically be prompted for at run time. To see all allowed parameters, the --help flag is available.
$ ./add-node.py --help
usage: add-node.py [-h] [--node_type NODE_TYPE] [--node_number NODE_NUMBER]
                   [--create_inventory] [--no_confirm NO_CONFIRM] [--tag TAG]
                   [--verbose]

Add new nodes to an existing OCP deployment

optional arguments:
  -h, --help            show this help message and exit
  --node_type NODE_TYPE
                        Specify the node label: app, infra, storage
  --node_number NODE_NUMBER
                        Specify the number of nodes to add
  --create_inventory    Helper script to create json inventory file and exit
  --no_confirm NO_CONFIRM
                        Skip confirmation prompt
  --tag TAG             Skip to various parts of install valid tags include:
                        - vms (create storage vms)
                        - crs-node-setup (install the proper packages on the crs nodes)
                        - heketi-setup (install heketi and config on the crs master)
                        - heketi-ocp (install the heketi secret and storage class on OCP)
  --verbose             Verbosely display commands
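The precedence the script appears to follow when resolving a value (command-line argument first, then ocp-on-vmware.ini, then an interactive prompt) can be illustrated with a minimal sketch. The section and key names here are illustrative assumptions, not the script's actual internals:

```python
# Sketch of the value-resolution order described above:
# CLI argument, else ocp-on-vmware.ini value, else interactive prompt.
# The "vmware" section name and helper are assumptions for illustration.
import configparser

def resolve(cli_value, config, key, prompt=input):
    """Return the CLI value if given, else the ini value, else prompt."""
    if cli_value:
        return cli_value
    ini_value = config.get("vmware", key, fallback="")
    if ini_value:
        return ini_value
    return prompt("%s: " % key)

config = configparser.ConfigParser()
config.read_string("""
[vmware]
node_type = app
node_number =
""")

print(resolve(None, config, "node_type"))     # ini supplies it -> app
print(resolve("infra", config, "node_type"))  # CLI overrides ini -> infra
```

A key left empty in the ini file (like node_number above) falls through to the prompt, matching the "prompted for at run time" behavior.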
To add an application node, run the add-node.py script following the example below. Once the instance is launched, the installation of OpenShift will automatically begin.
Note
|
The storage node_type is available to add persistent storage to the OCP cluster using container-native storage (CNS) or container-ready storage (CRS). Please see the
upcoming chapter involving persistent storage for more information about these options.
|
$ ./add-node.py --node_type=app
Configured inventory values:
   console_port: 8443
   deployment_type: openshift-enterprise
   openshift_vers: v3_5
   vcenter_host: 10.x.x.25
   vcenter_username: [email protected]
   vcenter_password: xxxxxx
   vcenter_template_name: ocp-server-template-2.0.2
   vcenter_folder: ocp
   vcenter_datastore: ose-vmware
   vcenter_cluster: devel
   vcenter_resource_pool: OCP
   vcenter_datacenter: Boston
   public_hosted_zone: example.com
   app_dns_prefix: apps
   vm_dns: 10.x.x.5
   vm_gw: 10.x.x.254
   vm_netmask: 255.255.254.0
   vm_network: "VM Network"
   rhel_subscription_user: sysengra
   rhel_subscription_pass: xxxxxx
   rhel_subscription_server:
   rhel_subscription_pool: Red Hat OpenShift Container Platform, Premium*
   byo_lb: no
   lb_host: haproxy-0
   byo_nfs: no
   nfs_host: nfs-0
   nfs_registry_mountpoint: /exports
   master_nodes: 3
   infra_nodes: 3
   app_nodes: 3
   storage_nodes: 0
   vm_ipaddr_start: 10.x.x.225
   ocp_hostname_prefix:
   auth_type: ldap
   ldap_user: openshift
   ldap_user_password: xxxxxx
   ldap_fqdn: e2e.bos.redhat.com
   openshift_hosted_metrics_deploy: false
   openshift_sdn: redhat/openshift-ovs-subnet
   containerized: false
   container_storage: none
   tag: None
   node_number: 1
   ini_path: ./ocp-on-vmware.ini
   node_type: app
Continue creating the inventory file with these values? [y/N]: y
Inventory file created: add-node.json
host_inventory:
   app-1:
      guestname: app-1
      ip4addr: 10.x.x.230
      tag: app
Continue adding nodes with these values? [y/N]:
The process for adding an infrastructure node is nearly identical to adding an application node. The only difference when adding an infrastructure node is that the HAProxy load balancer entries used by the router must be updated. Follow the example steps below to add a new infrastructure node.
$ ./add-node.py --node_type=infra
Configured inventory values:
   console_port: 8443
   deployment_type: openshift-enterprise
   openshift_vers: v3_5
   ...omitted...
   node_number: 1
   ini_path: ./ocp-on-vmware.ini
   node_type: infra
Continue creating the inventory file with these values? [y/N]: y
Inventory file created: add-node.json
host_inventory:
   infra-1:
      guestname: infra-1
      ip4addr: 10.x.x.230
      tag: infra
Continue adding nodes with these values? [y/N]:
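When the HAProxy load balancer is managed by the playbooks (byo_lb: no), the router entries are updated automatically; when maintaining haproxy.cfg by hand, the router backends would each gain a server line for the new node. The fragment below is a hedged sketch only: the backend name, node hostname, and ports are assumptions, not values taken from this deployment.

```
# Illustrative haproxy.cfg fragment for the router's HTTP backend.
# Backend name, hostnames, and ports are assumptions.
backend router-http
    balance source
    server infra-0.example.com infra-0.example.com:80 check
    server infra-1.example.com infra-1.example.com:80 check
    server infra-2.example.com infra-2.example.com:80 check
    # newly added infrastructure node
    server infra-3.example.com infra-3.example.com:80 check
```

A matching entry would be added to the HTTPS (port 443) backend as well, after which HAProxy is reloaded to pick up the change.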
To verify that a newly provisioned node has been added to the existing environment, use the oc get nodes command. In this example, node app-3.example.com is an application node newly deployed by the add-node.py playbooks.
$ oc get nodes
NAME                    STATUS                     AGE
master-0.example.com    Ready,SchedulingDisabled   14m
master-1.example.com    Ready,SchedulingDisabled   14m
master-2.example.com    Ready,SchedulingDisabled   14m
infra-0.example.com     Ready                      14m
infra-1.example.com     Ready                      14m
infra-2.example.com     Ready                      14m
app-0.example.com       Ready                      14m
app-1.example.com       Ready                      14m
app-2.example.com       Ready                      14m
app-3.example.com       Ready                      2m

$ oc get nodes --show-labels | grep app | wc -l
4