Check out the current deployment, then scale the cluster-version-operator down to zero replicas...
[core@ip-10-0-7-170 ~]$ oc get deployments --namespace=openshift-cluster-version
[core@ip-10-0-7-170 ~]$ oc scale deployment cluster-version-operator --replicas=0 --namespace=openshift-cluster-version
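To double-check that it actually scaled down (just a sanity check, same namespace as above), list the pods again; with --replicas=0 there should be no cluster-version-operator pod left:
[core@ip-10-0-7-170 ~]$ oc get pods --namespace=openshift-cluster-version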
Spin up Kubernetes without a network, then... install weave per the installation guide
In theory, before the CNI plugin is installed, the nodes should be NotReady and the CNI net.d folder (/etc/cni/net.d) should be empty...
[centos@kube-nonetwork-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-nonetwork-master NotReady master 58s v1.13.3
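As a sketch of those checks plus the install step (assuming the default CNI config directory, and using the standard apply URL from the Weave Net installation guide at the time):
[centos@kube-nonetwork-master ~]$ ls /etc/cni/net.d/
[centos@kube-nonetwork-master ~]$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Once the weave pods come up, the node should flip from NotReady to Ready.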
Prerequisites: Multus + NetworkAttachmentDefinition CRD installed.
In order to have DHCP working as an IPAM plugin, you'll need the DHCP CNI binary running in daemon mode on each node. In this example, we'll run it (as a daemonset) from the dougbtv/dhcp image, which is based on Tomo's Dockerfile.
About my setup: I use the macvlan plugin for the secondary interface with Multus, on an upstream Kubernetes cluster running on KVM guests. The master device for macvlan is eth0, which is connected to a bridge on the 192.168.122.0/24 network that already has a running DHCP server.
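As a rough sketch of the net-attach-def for that setup (the name macvlan-dhcp is just an example I'm assuming here), the macvlan config uses eth0 as its master and the dhcp type for IPAM:
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-dhcp
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "dhcp"
      }
    }'
EOF
A pod can then attach to it with the annotation k8s.v1.cni.cncf.io/networks: macvlan-dhcp, and the dhcp daemon handles the lease requests against the 192.168.122.0/24 DHCP server.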
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: virt-device-plugin
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: DaemonSet
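Assuming the full manifest lives in a file like virt-device-plugin.yml (the filename is just an example), applying it and checking that the daemonset's pods land on each node looks like:
kubectl create -f virt-device-plugin.yml
kubectl get daemonset --namespace=kube-system
kubectl get pods --namespace=kube-system -o wide | grep virt-device-plugin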
It turned out that I needed to set a custom version of Kubernetes during the build so that my build would work with kubeadm. Otherwise, kubeadm would complain about a mismatched version, or it wouldn't create the RBACs correctly.
Making with a custom version...
From @dims on #sig-release on kubernetes.slack.com:
@dougbtv looking at code in the scripts that build the version number, looks like you can set
KUBE_GIT_VERSION_FILE
to a file and the file can have the format and you can set it to anything you wish. (Though to be honest i haven’t tried quick-release with it)
Then I made a file like so:
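A sketch of the format those build scripts accept (the values here are placeholders, not the ones from my actual file; set whatever version you need). It's just a set of KUBE_GIT_* shell variables:
# example contents only; adjust the version/commit to suit your build
KUBE_GIT_COMMIT='0000000000000000000000000000000000000000'
KUBE_GIT_TREE_STATE='clean'
KUBE_GIT_VERSION='v1.11.1-custom.0'
KUBE_GIT_MAJOR='1'
KUBE_GIT_MINOR='11'
Then point the build at it with export KUBE_GIT_VERSION_FILE=/path/to/that/file before running the release build.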
From Kural on intel-corp-team.slack.com: how to clean up CNI IPs that no longer point to a live container, using swiftmedical/cni-cleanup.
We prepare a systemd service file with the path to execute cni-cleanup:
# cat /lib/systemd/system/cni-cleanup.service
[Unit]
Description=CNI-cleanup
[Service]
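The service file above is cut off before the [Service] body; a fuller sketch might look like the following, where the ExecStart path is an assumption (point it at wherever the cni-cleanup script/binary is installed):
# cat /lib/systemd/system/cni-cleanup.service
[Unit]
Description=CNI-cleanup

[Service]
Type=oneshot
# assumed install path for cni-cleanup
ExecStart=/usr/local/bin/cni-cleanup

[Install]
WantedBy=multi-user.target
Then reload and enable it: systemctl daemon-reload && systemctl enable --now cni-cleanup.service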
Make sure you're on the latest commit (or at this one) of kube-ansible:
$ git log -1 --stat
commit ca535bcf8ea5e0fb3b99c80205f9bb8563497aee (HEAD -> master, origin/master)
Merge: 01a8cb6 c79d47f
Author: Doug Smith <[email protected]>
Date: Tue Jul 10 15:08:19 2018 -0400
Merge pull request #239 from redhat-nfvpe/multus_crd_update
Ansible bootstrap cheatsheet.
ansible-playbook -i inventory/doug.openshift310.yml playbooks/vm-teardown.yml && \
ansible-playbook -i inventory/doug.openshift310.yml playbooks/virt-host-setup.yml && \
ansible-playbook -i inventory/vms.local.generated -e "host_type=atomic" playbooks/bootstrap.yml
and... now including the centos/tools image on the cluster hosts...
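If you want to use that image to poke around on a host, a quick sketch of how I'd run a tools container for host debugging (the flags are just my usual choice, nothing kube-ansible specific):
docker run --rm -it --net=host --pid=host --privileged -v /:/host centos/tools bash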
Install Helm with a specific service account for tiller...
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 0700 get_helm.sh
./get_helm.sh
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
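To sanity-check it, make sure the tiller pod is running, that the deployment picked up the tiller service account, and that helm can reach it:
kubectl get pods --namespace kube-system | grep tiller
kubectl get deploy tiller-deploy --namespace kube-system -o jsonpath='{.spec.template.spec.serviceAccountName}'
helm version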
Assuming you've already run the virt-host-setup playbook, you'll first need to remove (or move) the existing CentOS image so that a new one gets downloaded. (Or change the variables to put it somewhere else, but this is how I did it.) I also ran the vm-teardown.yml playbook to remove the existing hosts.
Go ahead and move the CentOS cloud image...
$ cd /home/images/
$ ls -lh CentOS-7-x86_64-GenericCloud.qcow2
-rw-r--r--. 1 root root 838M Feb 6 19:35 CentOS-7-x86_64-GenericCloud.qcow2
$ mv CentOS-7-x86_64-GenericCloud.qcow2 not.atomic.CentOS-7-x86_64-GenericCloud.qcow2
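With the old image moved out of the way, re-running the virt-host-setup playbook (same invocation as in the cheatsheet above, with whatever inventory you're using) should download a fresh image in its place:
ansible-playbook -i inventory/doug.openshift310.yml playbooks/virt-host-setup.yml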