CONTEXT: 3 Masters:
master1.openshift.com 172.17.28.10
master2.openshift.com 172.17.28.12
master3.openshift.com 172.17.28.18
In this example we will be adding "master2.openshift.com" back into the cluster after it was removed.
Based on the OpenShift Enterprise (OSE) 3.1.0.4 installer:
# sed -i 's/8443/443/g' /usr/share/ansible/openshift-ansible/roles/openshift_master/tasks/main.yml
# sed -i 's/8443/443/g' /usr/share/ansible/openshift-ansible/roles/openshift_master_cluster/tasks/configure.yml
Then add the following under "OSEv3:vars":
[OSEv3:vars]
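# Illustrative sketch only: openshift_master_api_port and openshift_master_console_port
# are the standard openshift-ansible inventory variables for moving the master API and
# console off 8443; verify the exact variable names against your installer version.
openshift_master_api_port=443
openshift_master_console_port=443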
> cat project-request.json
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "project-request",
    "creationTimestamp": null
  },
  "objects": [
    {
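The template above is shown only partially. As a hedged usage sketch, once the completed template is saved as project-request.json it is typically loaded into the default namespace (file name and namespace here are assumptions; adjust to your environment):

# oc create -f project-request.json -n default

The master is then pointed at it by setting projectConfig.projectRequestTemplate to "default/project-request" in master-config.yaml and restarting the master services.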
[root@infra ~]# cat /etc/haproxy/haproxy.cfg
# Global settings
#---------------------------------------------------------------------
global
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     20000
    user        haproxy
    group       haproxy
    daemon
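Only the global section is captured above. For reference, a frontend/backend pair that balances the master API across the three masters from the context could look like the sketch below; the atomic-openshift-api names, mode, and balance settings are assumptions, not this environment's actual configuration:

frontend atomic-openshift-api
    bind *:443
    default_backend atomic-openshift-api
    mode tcp
    option tcplog

backend atomic-openshift-api
    balance source
    mode tcp
    server master1 172.17.28.10:443 check
    server master2 172.17.28.12:443 check
    server master3 172.17.28.18:443 check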
CONTEXT: 3 Masters:
master1.openshift.com 172.17.28.10
master2.openshift.com 172.17.28.12
master3.openshift.com 172.17.28.18
In this example we will be adding "master2.openshift.com" back into the cluster.
# yum install etcd-2.3.7-4.el7.x86_64
# systemctl enable --now iptables.service
On the first master (the one that has the /etc/etcd/ca directory):
Back up the existing certificates.
Create a new CA from the existing openssl.cnf:
# cd /etc/etcd/
# export etcd_openssl_conf=/etc/etcd/ca/openssl.cnf
# sed -i 's/365/1825/' $etcd_openssl_conf
# openssl req -config ${etcd_openssl_conf} -newkey rsa:4096
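The openssl command above is cut off. A hedged sketch of the backup and CA-creation steps (the backup filename, output paths, the -extensions section name, and the remaining openssl flags are assumptions; compare them with your /etc/etcd/ca/openssl.cnf and the documented certificate redeployment procedure before running anything):

# tar czf /root/etcd-certs-backup-$(date +%Y%m%d).tgz /etc/etcd/ca /etc/etcd/*.crt /etc/etcd/*.key
# openssl req -config ${etcd_openssl_conf} -newkey rsa:4096 -keyout ca/ca.key \
    -new -out ca/ca.crt -x509 -extensions etcd_v3_ca_self -batch -nodes \
    -days 1825 -subj /CN=etcd-signer@$(date +%s)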
apiVersion: v1
kind: Pod
metadata:
  name: sleep-test-pod
spec:
  containers:
  - name: sleep-test-container
    image: rhel7
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
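A hedged usage sketch for this pod definition (assuming it is saved as sleep-test-pod.yaml and that an rhel7 image is resolvable by the cluster):

# oc create -f sleep-test-pod.yaml
# oc get pod sleep-test-pod -o wide
# oc exec sleep-test-pod -- date    # confirm commands can be run inside the container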
Manual data migration of the etcd cluster
If the migration fails for some reason, it can be finished manually. Depending on the point at which the migration failed, the following commands are needed (followed by all the remaining commands):
Before the etcd migration started:
In this case it is recommended to re-run the migration playbook. The cluster can end up with the master services stopped; they must be started and running before the migration is repeated.
Before the first member was migrated:
Before the command is run, the etcd service must be stopped.
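The command that belongs here is not captured above. As a hedged sketch of what the v2-to-v3 data migration of a member typically involves (the data directory path and the use of etcdctl migrate are assumptions based on standard etcd 3.x tooling; confirm against the migration playbook for your release):

# systemctl stop etcd
# ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd

The remaining steps of the documented procedure then follow as usual.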
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "creationTimestamp": null
  },
  "objects": [
    {
      "kind": "ClusterRole",
      "apiVersion": "v1",