@danielepolencic
Last active October 27, 2024 08:34
Create a 3-node Kubernetes cluster locally with Vagrant

A Kubernetes cluster on 3 virtual machines

Dependencies

You should install VirtualBox and Vagrant before you start.
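
On a Debian/Ubuntu host, a minimal sketch of installing both dependencies from the distribution repositories (assuming the packaged versions are recent enough for your purposes; otherwise use the installers from the VirtualBox and Vagrant websites) is:

$ sudo apt-get update
$ sudo apt-get install -y virtualbox vagrant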

Creating the cluster

You should create a Vagrantfile in an empty directory with the following content:

Vagrant.configure("2") do |config|
  config.vm.provider :virtualbox do |v|
    v.memory = 1024
    v.cpus = 1
  end

  config.vm.provision :shell, privileged: true, inline: $install_common_tools

  config.vm.define :master do |master|
    master.vm.box = "ubuntu/xenial64"
    master.vm.hostname = "master"
    master.vm.network :private_network, ip: "10.0.0.10"
    master.vm.provision :shell, privileged: false, inline: $provision_master_node
  end

  %w{worker1 worker2}.each_with_index do |name, i|
    config.vm.define name do |worker|
      worker.vm.box = "ubuntu/xenial64"
      worker.vm.hostname = name
      worker.vm.network :private_network, ip: "10.0.0.#{i + 11}"
      worker.vm.provision :shell, privileged: false, inline: <<-SHELL
sudo /vagrant/join.sh
echo 'Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.#{i + 11}"' | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet
SHELL
    end
  end

  config.vm.provision "shell", inline: $install_multicast
end


$install_common_tools = <<-SCRIPT
# bridged traffic to iptables is enabled for kube-router.
cat >> /etc/ufw/sysctl.conf <<EOF
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF

# disable swap
swapoff -a
sed -i '/swap/d' /etc/fstab

# Install kubeadm, kubectl and kubelet
export DEBIAN_FRONTEND=noninteractive
apt-get -qq install ebtables ethtool
apt-get -qq update
apt-get -qq install -y docker.io apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get -qq update
apt-get -qq install -y kubelet kubeadm kubectl
SCRIPT

$provision_master_node = <<-SHELL
OUTPUT_FILE=/vagrant/join.sh
rm -rf $OUTPUT_FILE

# Start cluster
sudo kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=10.244.0.0/16 | grep "kubeadm join" > ${OUTPUT_FILE}
chmod +x $OUTPUT_FILE

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Fix kubelet IP
echo 'Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.10"' | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Configure flannel
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
sed -i.bak 's|"/opt/bin/flanneld",|"/opt/bin/flanneld", "--iface=enp0s8",|' kube-flannel.yml
kubectl create -f kube-flannel.yml

sudo systemctl daemon-reload
sudo systemctl restart kubelet
SHELL

$install_multicast = <<-SHELL
apt-get -qq install -y avahi-daemon libnss-mdns
SHELL

Starting the cluster

You can create the cluster with:

$ vagrant up
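
Provisioning all three machines takes several minutes. Once vagrant up finishes, a quick sanity check (run from the same directory, using vagrant ssh as shown in the comments below) is to list the nodes from the master; all three should eventually report Ready:

$ vagrant ssh master -c "kubectl get nodes"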

Clean up

You can delete the cluster with:

$ vagrant destroy -f
@danielepolencic
Author

Yes

@chidambaranathan-r

Generally, in this setup, a storage class will not be available. How do we set this up in our laptop-based infra? I would like to try out dynamic storage provisioning.

@danielepolencic
Author

I think you need to install a storage provisioner such as https://github.com/kubevirt/hostpath-provisioner and create a StorageClass with it: https://kubernetes.io/docs/concepts/storage/storage-classes/
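
For example, once the provisioner is deployed, a StorageClass along these lines could be created from the master node. This is a minimal sketch: the provisioner name kubevirt.io/hostpath-provisioner and the default-class annotation are assumptions to adapt to whatever the provisioner you install actually registers.

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF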

@chidambaranathan-r

chidambaranathan-r commented Aug 17, 2020 via email

@yasahmed

I got this error:
master: W0920 17:30:57.290880 16768 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
master: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
master: error execution phase preflight: [preflight] Some fatal errors occurred:
master: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
master: [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
master: To see the stack trace of this error execute with --v=5 or higher
master: cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
master: chown: cannot access '/home/vagrant/.kube/config': No such file or directory

@danielepolencic
Author

@yasahmed

master: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

change this to v.cpus = 2:

  config.vm.provider :virtualbox do |v|
    v.memory = 1024
    v.cpus = 2
  end

@Harinath120

Hi, my name is Harinath, from India. I tried to install the cluster but it shows this error:

configmap/kube-flannel-cfg created
master: error: unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

@llinuxde

llinuxde commented Dec 3, 2020

@Harinath120
you need to change the flannel URL in the Vagrantfile before you run vagrant up:

curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

For more info see:
kubernetes/website#16441 (comment)

@llinuxde

llinuxde commented Dec 4, 2020

Hi @danielepolencic
I had the following errors from your script:

1-
master: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

2-
master: error: unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

3-
worker1: discovery.bootstrapToken: Invalid value: "": using token-based discovery without caCertHashes can be unsafe. Set unsafeSkipCAVerification as true in your kubeadm config file or pass --discovery-token-unsafe-skip-ca-verification flag to continue
worker1: To see the stack trace of this error execute with --v=5 or higher

and I've changed your Vagrantfile to this, and it works fine for me:
https://gist.github.com/llinuxde/e1ae26f8be0a8579dac372cd3fe99acd

vagrant@master:~$ date && kubectl get nodes
Fri Dec 4 09:22:52 UTC 2020
NAME      STATUS   ROLES    AGE     VERSION
master    Ready    master   9m16s   v1.19.4
worker1   Ready    <none>   5m23s   v1.19.4
worker2   Ready    <none>   50s     v1.19.4

@gnaneethi81

Thanks so much! I modified it a little in a repo I just made today, with links to you and your gist as credit for this great work. I added disk size, creation and copying of the SSH key from the master to the nodes, an alias, and maybe some other stuff too. I would like to have a central config file that's easy to modify for the size and spec of the cluster later on. One issue was changing the grep command to get the full multi-line join: sudo kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=10.244.0.0/16 | grep -Ei "kubeadm join|discovery-token-ca-cert-hash" > ${OUTPUT_FILE}

https://github.com/LocusInnovations/k8s-vagrant-virtualbox
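
For reference, that modified capture replaces the original kubeadm init line in the $provision_master_node script; the -Ei pattern keeps both the kubeadm join line and the continuation line carrying --discovery-token-ca-cert-hash, so join.sh ends up with the complete multi-line command (a sketch based on the comment above, not tested here):

# Start cluster and capture the full join command, including the CA cert hash line
sudo kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=10.244.0.0/16 | grep -Ei "kubeadm join|discovery-token-ca-cert-hash" > ${OUTPUT_FILE}
chmod +x $OUTPUT_FILE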

Thank you, it works nicely.

@MichaelLeeHobbs

Super easy! Barely an inconvenience.

@nambyats

What is the username and password of this VM box?

@Yunir

Yunir commented May 29, 2021

@nambyats

What is the username and password of this VM box?

you can use vagrant ssh to enter the virtual machines:

vagrant ssh master
vagrant ssh worker1
vagrant ssh worker2

@Yunir

Yunir commented May 29, 2021

It is not working in the current state.

Changes needed:

  1. memory to 2048
  2. CPUs to 2
  3. kube-flannel version from v0.9.1 to master inside the URL
  4. append the --discovery-token-unsafe-skip-ca-verification flag to the join command after the join.sh script is created (see the sketch below)
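
Items 1 and 2 are the same v.memory / v.cpus edit shown earlier in the thread (with v.memory = 2048). For items 3 and 4, a sketch of the corresponding changes to the $provision_master_node script, assuming join.sh still holds the join command on a single line as captured by the original grep:

# 3. fetch the current flannel manifest instead of the pinned v0.9.1 one
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# 4. append the CA-verification skip flag to the generated join command
sed -i '1s/$/ --discovery-token-unsafe-skip-ca-verification/' ${OUTPUT_FILE}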

@anilgidla

How do I log in to the cluster? Please share instructions.

@anilgidla

vagrant@master:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
vagrant@master:~$

@anilgidla

The API server container and other Kubernetes containers didn't start automatically after deployment.

@kumarabhi4

kumarabhi4 commented Aug 28, 2021

Due to a difference in the cgroup driver used by Docker and kubeadm, kubeadm init fails. Since kubeadm uses the systemd cgroup driver by default, the same should be configured for Docker. The solution is documented on kubernetes.io: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

I modified "install_common_tools" like below, which worked for me:

$install_common_tools = <<-SCRIPT
# bridged traffic to iptables is enabled for kube-router.
cat >> /etc/ufw/sysctl.conf <<EOF
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF

# disable swap
swapoff -a
sed -i '/swap/d' /etc/fstab

# Install kubeadm, kubectl and kubelet
export DEBIAN_FRONTEND=noninteractive
apt-get -qq install ebtables ethtool
apt-get -qq update
apt-get -qq install -y docker.io apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get -qq update
apt-get -qq install -y kubelet kubeadm kubectl
# included for mismatch in cgroup between docker and kubelet
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
SCRIPT

@shivanshuraj1333

I'm facing an error while trying to set up Kubernetes locally to practice Kubernetes concepts.
I also tried using https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/vagrant/Vagrantfile and still got the same error.
Kindly guide me in the right direction. I'm not sure what the problem is; I have tried changing Vagrant and VirtualBox versions, but it is still not working.
By the way, basic Vagrantfiles like the one generated from vagrant init hashicorp/bionic64 work fine.

Error I'm getting:

==> master: Importing base box 'ubuntu/xenial64'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'ubuntu/xenial64' version '20211001.0.0' is up to date...
==> master: Setting the name of the VM: testVagrant_master_1640650019434_42678
==> master: Clearing any previously set network interfaces...
The IP address configured for the host-only network is not within the
allowed ranges. Please update the address used to be within the allowed
ranges and run the command again.

  Address: 10.0.0.10
  Ranges:

Valid ranges can be modified in the /etc/vbox/networks.conf file. For
more information including valid format see:

  https://www.virtualbox.org/manual/ch06.html#network_hostonly
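
This restriction comes from VirtualBox 6.1.28 and later, where host-only networks are limited to 192.168.56.0/21 unless other ranges are explicitly allowed. A common workaround on a Linux host (a sketch; the file may not exist yet) is to allow the 10.0.0.0/8 range and re-run vagrant up:

$ sudo mkdir -p /etc/vbox
$ echo "* 10.0.0.0/8" | sudo tee -a /etc/vbox/networks.conf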

@hethkar

hethkar commented Sep 1, 2022

@danielepolencic How can I use this to spin up the latest Kubernetes version on Ubuntu 22.04?
