First of all, become root for all of the following steps:
sudo su
Set the hostname:
hostnamectl set-hostname 'k8s-master'
Install the KVM and libvirt packages:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
Start and enable the libvirtd service:
systemctl start libvirtd
systemctl enable libvirtd
Verify that the KVM kernel modules are loaded:
lsmod | grep kvm
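If virtualization is supported, the output should list the kvm modules. On an Intel machine it typically looks similar to this (module sizes will differ, and on AMD you would see kvm_amd instead of kvm_intel):
kvm_intel             170086  0
kvm                   566340  1 kvm_intel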
If needed, install X Windows so you can use the graphical virt-manager:
sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
reboot
Before we start creating VMs, let's first create the bridge interface. A bridge interface is required if you want to access the virtual machines from outside of the hypervisor network.
cd /etc/sysconfig/network-scripts/
cp ifcfg-eno1 ifcfg-br0
Edit the interface file (replace eno1 with the name of your physical interface if it differs) and set the following:
[root@ network-scripts]# vi ifcfg-eno1
TYPE=Ethernet
BOOTPROTO=static
DEVICE=eno1
ONBOOT=yes
BRIDGE=br0
Edit the bridge file (ifcfg-br0) and set the following:
[root@ network-scripts]# vi ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
ONBOOT=yes
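For a static setup, also add the address, gateway and DNS entries to ifcfg-br0; the values below are only placeholders for illustration:
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1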
Replace the IP address and DNS server details as per your setup.
Restart the network service to enable the bridge interface:
systemctl restart network
Check the bridge interface using the command below:
ip addr show br0
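The bridge should now hold the IP address, with the physical interface enslaved to it. The output should look roughly like this (addresses, MTU and interface numbers will differ on your machine):
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    inet 192.168.1.10/24 brd 192.168.1.255 scope global br0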
First we need to disable both SELinux and swap. Issue the following commands:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Next, disable swap with the following command:
swapoff -a
We must also ensure that swap isn't re-enabled after a reboot on each server. Open /etc/fstab:
vi /etc/fstab
and comment out the swap entry like this:
# /dev/mapper/centos-swap swap swap defaults 0 0
For now you can simply stop the firewall with:
systemctl stop firewalld
Or, instead of stopping it, open the ports required by the Kubernetes master in the firewall:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
For Kubernetes worker nodes, open these ports in the firewall instead:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --reload
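If you want to double-check which ports are now open in the active zone, you can list them with:
firewall-cmd --list-ports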
Enable the br_netfilter kernel module. This is done with the following commands:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
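The echo above only changes the running kernel; to make the setting survive a reboot you can also drop it into a sysctl configuration file (the file name k8s.conf is just a convention):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system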
Install the Docker-ce dependencies with the following command:
yum install -y yum-utils device-mapper-persistent-data lvm2
Next, add the Docker-ce repository with the command:
yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/docker-ce.repo
yum update
Install Docker-ce with the command:
yum install -y docker-ce
Start Docker now and enable it to start automatically on boot:
systemctl start docker
systemctl enable docker
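To verify that Docker came up correctly, you can for example check:
systemctl status docker
docker version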
First we need to create a Kubernetes repository entry for yum. To do this, issue the following command:
yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/kubernetes.repo
yum update
Install Kubernetes with the command:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
and enable and start the kubelet daemon with:
systemctl enable kubelet
systemctl start kubelet
Once this part of the installation completes, you could reboot the machine (TODO: verify whether this is actually needed).
Now we need to ensure that Docker-ce and Kubernetes use the same control group (cgroup) driver. By default, Docker should already use cgroupfs (you can check this with the command docker info | grep -i cgroup). To make the kubelet use cgroupfs as well, issue the command:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Restart the systemd daemon and the kubelet service with the commands:
systemctl daemon-reload
systemctl restart kubelet
We're now ready to initialize the Kubernetes cluster. This is done on the master node (and only on that machine). On the master, issue the command (again, adjusting the IP addresses to fit your needs):
kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=<POD_NETWORK>/<POD_NETWORK_SUBNET_BITS>
With flannel as CNI plugin:
kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=10.244.0.0/16
When this completes (it'll take anywhere from 30 seconds to 5 minutes), the output should include the joining command for your nodes.
Once that completes, head over to each worker node and issue the join command (adjusting the IP address and token values to fit your needs):
kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<DISCOVERY_TOKEN_HASH>
where TOKEN and DISCOVERY_TOKEN_HASH are the values displayed after the initialization command completes. Note that the token is only valid for 24 hours!
If you do not have the token, you can get it by running the following command on the master node:
kubeadm token list
The output is similar to this:
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h   2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:
kubeadm token create
The output is similar to this:
5didvk.d09sbcov8ph2amjw
If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the master node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
The output is similar to this:
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
Before Kubernetes can be used, we must take care of a bit of configuration. Exit the root shell and issue the following three commands as your regular user (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
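To verify that kubectl can now reach the cluster, you can for example run:
kubectl cluster-info
kubectl get nodes
At this point the master will usually still show as NotReady, because no pod network has been deployed yet.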
First, make sure bridged IPv4 traffic is passed to iptables' chains:
sysctl net.bridge.bridge-nf-call-iptables=1
Now we must deploy the flannel network to the cluster with the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
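You can follow the rollout and check that the flannel and DNS pods reach the Running state with (it may take a minute or two):
kubectl get pods --all-namespaces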
By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
With output looking something like:
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
Once the join commands on the worker nodes complete, you should be able to see all nodes on the master by issuing the command:
kubectl get nodes
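The output should look something like this (node names, ages and versions below are just examples):
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    12m       v1.11.0
k8s-node1    Ready     <none>    2m        v1.11.0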
Congratulations, you now have a Kubernetes cluster ready for pods.
If you ever want to undo the cluster setup on a node and start over, you can reset it with:
sudo kubeadm reset