Assumptions:
- You're running on an Ubuntu 20.04 VM or physical server
- Your IP address is 10.198.117.162
- The hostname of the machine is k8s-test
- The Windows node we'll be adding runs Windows Server 2022 (ltsc2022)

You will obviously need to adjust these assumptions to your environment, but for clarity we define them here.
Let's set up some environment variables:
export MASTER_IP="10.198.117.162"

We need to install Go as well as Docker. This will allow us to build all the needed bits of kubetest, Kubernetes and containerd.
snap install --classic go
snap install docker
apt-get install -y build-essential libbtrfs-dev socat conntrack pkg-config libseccomp-dev jq

Docker is installed from snaps. This gives us a convenient way to run the needed Docker dependency in parallel to our test setup.
Install docker buildx:
mkdir -p $HOME/snap/docker/current/.docker/cli-plugins
wget https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64 -O $HOME/snap/docker/current/.docker/cli-plugins/docker-buildx
chmod +x $HOME/snap/docker/current/.docker/cli-plugins/docker-buildx

Set up environment variables:
echo 'export GOPATH=$(go env GOPATH)' >> ~/.bashrc
echo 'export PATH=$GOPATH/bin:$PATH' >> ~/.bashrc
mkdir -p $GOPATH/bin

Clone needed repositories:
mkdir -p $GOPATH/src/{k8s.io,github.com}
mkdir $GOPATH/src/github.com/containerd
mkdir -p $GOPATH/src/github.com/kubernetes-sigs
mkdir -p $GOPATH/src/github.com/opencontainers
git clone https://github.com/kubernetes/test-infra $GOPATH/src/k8s.io/test-infra
git clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes
git clone https://github.com/containerd/containerd $GOPATH/src/github.com/containerd/containerd
git clone https://github.com/kubernetes-sigs/cri-tools.git $GOPATH/src/github.com/kubernetes-sigs/cri-tools
git clone https://github.com/opencontainers/runc $GOPATH/src/github.com/opencontainers/runc

Install kubetest:
cd $GOPATH/src/k8s.io/test-infra
go install ./kubetest

Install crictl:
cd $GOPATH/src/github.com/kubernetes-sigs/cri-tools
go build -o /usr/bin/crictl ./cmd/crictl/

Install runc:
cd $GOPATH/src/github.com/opencontainers/runc
make && make install

Generate a certificate. You can use any method you wish to generate this certificate. This will be used to serve the registry over TLS.
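As a minimal sketch of "any method", the snippet below produces a throwaway self-signed certificate with SANs via openssl and then verifies the hosts made it in (assumes OpenSSL 1.1.1+ for `-addext`). The gen_certs helper fetched next does the equivalent, plus a separate CA (ca-pub.pem), which is what we later distribute to Docker and the Windows node.

```shell
# Sketch only: self-signed cert with SANs; gen_certs below is what the guide actually uses.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/srv-key.pem -out /tmp/srv-pub.pem \
    -subj "/CN=localhost" \
    -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
# Verify the hosts actually ended up in the certificate:
openssl x509 -in /tmp/srv-pub.pem -noout -ext subjectAltName
```

Whatever tool you use, make sure every name and IP you'll reach the registry by appears in the SAN list; modern Docker clients ignore the CN.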
wget https://gist.github.com/gabriel-samfira/61663ec3c07652d4deeeccfdec319d64/raw/ba1a37dedeb224516b0c44fb4c171ac4c8ed1f10/gen_certs.go -O /tmp/gen_certs.go
go build -o $GOPATH/bin/gen_certs /tmp/gen_certs.go
rm /tmp/gen_certs.go

Create certificates:
mkdir -p /var/snap/docker/common/registry/etc/certs
gen_certs \
-output-dir /var/snap/docker/common/registry/etc/certs \
-certificate-hosts localhost,$(hostname),$(hostname -f),127.0.0.1

Create registry config:
mkdir -p /var/snap/docker/common/registry/etc
cat << EOF > /var/snap/docker/common/registry/etc/config.yaml
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  # Set this to whichever port you want.
  addr: 0.0.0.0:443
  net: tcp
  host: https://10.198.117.162
  headers:
    X-Content-Type-Options: [nosniff]
  tls:
    certificate: /certs/srv-pub.pem
    key: /certs/srv-key.pem
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
EOF

Copy the CA certificate to a location Docker can load it from. This will allow us to pull images from that registry without having to set the insecure registry flag.
mkdir -p /var/snap/docker/current/etc/docker/certs.d/10.198.117.162/
cp /var/snap/docker/common/registry/etc/certs/ca-pub.pem /var/snap/docker/current/etc/docker/certs.d/10.198.117.162/cacert.crt
sudo snap restart docker

Also add the CA cert as a trusted CA on the system:
mkdir /usr/local/share/ca-certificates/extra
cp /var/snap/docker/common/registry/etc/certs/ca-pub.pem /usr/local/share/ca-certificates/extra/capub.crt
sudo update-ca-certificates

Create the registry:
docker run -d \
-v /var/snap/docker/common/registry/etc/config.yaml:/etc/docker/registry/config.yml:ro \
-v /var/snap/docker/common/registry/etc/certs:/certs:ro \
-p 443:443 --name registry registry

Test that the registry works:
# docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
registry latest b8604a3fe854 2 months ago 26.2MB
# docker tag b8604a3fe854 10.198.117.162/registry:latest
# docker push 10.198.117.162/registry:latest
The push refers to repository [10.198.117.162/registry]
aeccf26589a7: Pushed
f640be0d5aad: Pushed
aa4330046b37: Pushed
ad10b481abe7: Pushed
69715584ec78: Pushed
latest: digest: sha256:36cb5b157911061fb610d8884dc09e0b0300a767a350563cbfd88b4b85324ce4 size: 1363

This registry will be used to host the Kubernetes images we'll build next, and will be the image source when we deploy via kubeadm.
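Beyond pushing an image, the registry's HTTP API offers a quick smoke test: a healthy registry answers GET /v2/_catalog with the repositories it holds. On the host that would be `curl --cacert /var/snap/docker/common/registry/etc/certs/ca-pub.pem https://10.198.117.162/v2/_catalog`; the snippet below only shows how to read the response with jq (canned here, since the actual output depends on what you've pushed):

```shell
# Canned /v2/_catalog response; on the host, pipe the curl output into jq instead.
response='{"repositories":["registry"]}'
echo "$response" | jq -r '.repositories[]'
# -> registry
```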
We'll be testing Windows, so we'll build the Linux containers and binaries for the control plane and the Windows bits that will run a node.
cd $GOPATH/src/k8s.io/kubernetes
export KUBE_DOCKER_REGISTRY=10.198.117.162
export KUBE_BUILD_PLATFORMS="linux/amd64 windows/amd64"
make quick-release

Create symlinks somewhere in your path for each of the Kubernetes binaries:
cd $GOPATH/src/k8s.io/kubernetes
for binary in kube-log-runner kube-proxy kubeadm kubectl kubectl-convert kubelet
do
ln -s $GOPATH/src/k8s.io/kubernetes/_output/release-stage/node/linux-amd64/kubernetes/node/bin/$binary /usr/bin/$binary
done

This will make sure your kubelet is the same version as the kubeadm binary you're running.
We should now also have part of the needed images in docker:
# docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
10.198.117.162/kube-apiserver-amd64 v1.24.0-alpha.1.724_c175418281a607 99267b3ea478 About a minute ago 135MB
10.198.117.162/kube-proxy-amd64 v1.24.0-alpha.1.724_c175418281a607 cb2c1271024a About a minute ago 112MB
10.198.117.162/kube-scheduler-amd64 v1.24.0-alpha.1.724_c175418281a607 7d52db163cb5 About a minute ago 53.5MB
10.198.117.162/kube-controller-manager-amd64 v1.24.0-alpha.1.724_c175418281a607 70e6da0c15c0 About a minute ago 125MB
kube-build build-38a203ad82-5-v1.24.0-go1.17.6-bullseye.0 5de2ca4c4c9f 14 minutes ago 7.49GB

If you plan on adding a Windows node, you'll also need to build the kube-proxy and flannel containers for Windows. These images are maintained by sig-windows-tools. We'll be using their Dockerfile to generate the images, with slight changes to account for the kube-proxy binary we've just built.
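A note on the version string: kubeadm reports a semver build like v1.24.0-alpha.1.724+c175418281a607, but Docker image tags cannot contain `+`, which is why the export below maps it to `_` (matching the tags make quick-release produced). A quick illustration:

```shell
# Example value only; your build will report its own version.
raw="v1.24.0-alpha.1.724+c175418281a607"
echo "$raw" | sed 's/+/_/g'
# -> v1.24.0-alpha.1.724_c175418281a607
```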
export KUBE_VERSION=$(kubeadm version -o short | sed 's/+/_/g')
git clone https://github.com/kubernetes-sigs/sig-windows-tools $HOME/sig-windows-tools
cd $HOME/sig-windows-tools/hostprocess/flannel/
cp $GOPATH/src/k8s.io/kubernetes/_output/release-stage/node/windows-amd64/kubernetes/node/bin/kube-proxy.exe kube-proxy/
sed -i 's/^RUN curl.*kube-proxy.exe/ADD kube-proxy.exe ./g' ./kube-proxy/Dockerfile

At the time of this writing, the kube-proxy start script attempts to set the IPv6DualStack=false feature gate. This gate has been locked to true, and attempting to set it during startup results in an error. We remove that option here:
sed -i 's/,IPv6DualStack=false//g' kube-proxy/start.ps1

These images leverage host process containers, which should be enabled in the current main branch of containerd and Kubernetes.
We will need a buildkit configuration that includes our registry CA cert:
# The cert here was generated earlier.
cat > buildkit.toml << EOF
[registry."10.198.117.162"]
ca=["/var/snap/docker/common/registry/etc/certs/ca-pub.pem"]
EOF

Now create a new buildx builder that uses this config:
# Pass in the above config.
docker buildx create --config=$PWD/buildkit.toml --name img-builder --use

Add the following to your $HOME/snap/docker/current/.docker/config.json:
{
"allow-nondistributable-artifacts": ["10.198.117.162"]
}

Replace the registry IP with your own.
Without this, you may get an error when running build in the next step. The reason is that the Windows base container images are not distributable, but we don't really care in a local test env. We won't be distributing it anywhere. We'll simply be using it locally.
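If your config.json already has content (credentials and so on), it's safer to merge the key in with jq than to hand-edit. A sketch against a stand-in file:

```shell
# Stand-in for $HOME/snap/docker/current/.docker/config.json.
echo '{"auths":{}}' > /tmp/docker-config.json
# jq merges the new key while keeping whatever was already there.
jq '. + {"allow-nondistributable-artifacts": ["10.198.117.162"]}' /tmp/docker-config.json
```

Redirect the jq output to a temporary file and move it over the original; jq cannot edit in place.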
We can now build our images:
./build.sh --flannelVersion v0.16.1 --proxyVersion $KUBE_VERSION --repository 10.198.117.162 -a

Push the rest of the images and manifests to the registry:
# These match the values used above; adjust the registry IP to your own.
REGISTRY="10.198.117.162"
ARCH="amd64"
VERSION=$KUBE_VERSION
for img in kube-apiserver kube-scheduler kube-controller-manager
do
docker push $REGISTRY/$img-$ARCH:$VERSION
docker manifest create $REGISTRY/$img:$VERSION --amend $REGISTRY/$img-$ARCH:$VERSION
docker manifest push $REGISTRY/$img:$VERSION
done

Verify we have the manifests uploaded:
# docker manifest inspect 10.198.117.162/kube-apiserver:v1.24.0-alpha.1.724_c175418281a607
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 949,
"digest": "sha256:db9a1a0aa4b846e5df1e18d07dde87294425b875f5c194b6a07ca429d0166844",
"platform": {
"architecture": "amd64",
"os": "linux"
}
}
]
}

Images and manifests have been uploaded. We still need to fetch a few extra images and upload them to our registry. We can see the needed images by running:
# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/kube-controller-manager:v1.23.2
k8s.gcr.io/kube-scheduler:v1.23.2
k8s.gcr.io/kube-proxy:v1.23.2
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

We have the first 4; we also need to fetch the last 3:
docker pull k8s.gcr.io/pause:3.6
docker pull k8s.gcr.io/etcd:3.5.1-0
docker pull k8s.gcr.io/coredns/coredns:v1.8.6
docker image list | grep k8s.gcr.io
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 2 months ago 293MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 3 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 4 months ago 683kB

We need to tag them and push them:
docker tag 25f8c7f3da61 10.198.117.162/etcd:3.5.1-0
docker tag a4ca41631cc7 10.198.117.162/coredns:v1.8.6
docker tag 6270bb605e12 10.198.117.162/pause:3.6
# and push them
docker push 10.198.117.162/etcd:3.5.1-0
docker push 10.198.117.162/coredns:v1.8.6
docker push 10.198.117.162/pause:3.6

If you're not interested in testing Kubernetes against the latest HEAD of containerd, you can install it from the apt repo. But building from source is more flexible, since you can switch to a stable or unstable branch at any time.
cd $GOPATH/src/github.com/containerd/containerd
go install ./cmd/...
for binary in containerd containerd-shim containerd-shim-runc-v1 containerd-shim-runc-v2 containerd-stress ctr gen-manpages protoc-gen-gogoctrd
do
ln -s $GOPATH/bin/$binary /usr/bin/$binary
done

mkdir -p /etc/containerd
containerd config default > /etc/containerd/containerd.toml
sed -i 's/snapshotter = "overlayfs"/snapshotter = "native"/g' /etc/containerd/containerd.toml
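The sed above flips the CRI snapshotter. A stub shows the effect (the line shape matches the default config; whether you need native instead of overlayfs depends on your root filesystem):

```shell
# Minimal stub of the relevant containerd.toml line.
cat > /tmp/containerd-snippet.toml << 'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
EOF
sed -i 's/snapshotter = "overlayfs"/snapshotter = "native"/g' /tmp/containerd-snippet.toml
grep snapshotter /tmp/containerd-snippet.toml
# the line now reads: snapshotter = "native"
```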
cat << EOF > /etc/systemd/system/containerd.service
[Unit]
Description=Containerd
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/containerd --config="/etc/containerd/containerd.toml"
Restart=always
RestartSec=5s
# Change this to the user you want the containerd
# daemon to run under
User=root
[Install]
WantedBy=multi-user.target
EOF

Reload systemd and enable containerd:
systemctl daemon-reload
systemctl enable containerd.service
systemctl start containerd.service

Install the containerd CNI binaries:
wget -O /tmp/cni-plugins-linux-amd64-v1.0.1.tgz \
https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz
mkdir -p /opt/cni/bin
tar xf /tmp/cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin
rm -f /tmp/cni-plugins-linux-amd64-v1.0.1.tgz

Restart containerd:
systemctl restart containerd

Create the kubelet systemd service:
cat << EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime=remote
Restart=always
StartLimitInterval=0
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Enable the service:
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet

We need to turn off swap:
swapoff -a

Generate the default config:
kubeadm config print init-defaults > $HOME/kubeadm.yaml

The config will look something like this:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

We'll change the following settings:
- advertiseAddress
- criSocket
- name
- kubernetesVersion
- imageRepository
- podSubnet
VERSION=$(kubeadm version -o short)
IPADDR="10.198.117.162"
SOCKET="unix:///run/containerd/containerd.sock"
sed -i "s/^kubernetesVersion:.*/kubernetesVersion: $VERSION/g" $HOME/kubeadm.yaml
sed -i "s/name: node/name: $HOSTNAME/g" $HOME/kubeadm.yaml
sed -i "s|criSocket:.*|criSocket: $SOCKET|g" $HOME/kubeadm.yaml
sed -i "s/advertiseAddress:.*/advertiseAddress: $IPADDR/g" $HOME/kubeadm.yaml
sed -i "s/^imageRepository:.*/imageRepository: $IPADDR/g" $HOME/kubeadm.yaml

You will also need to add podSubnet under networking.
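The podSubnet addition can be scripted too. A sketch using GNU sed's append command, shown against a stub of the networking block (10.244.0.0/16 matches the flannel manifest used later in this guide):

```shell
# Stub of the networking section of kubeadm.yaml.
cat > /tmp/networking-snippet.yaml << 'EOF'
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
EOF
# Append podSubnet right after the networking: key (GNU sed).
sed -i '/^networking:/a\  podSubnet: 10.244.0.0/16' /tmp/networking-snippet.yaml
cat /tmp/networking-snippet.yaml
```

Run the same sed against $HOME/kubeadm.yaml on the host; check the indentation afterwards, since YAML is whitespace-sensitive.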
The end result should look like this:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.198.117.162
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-test
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 10.198.117.162
kind: ClusterConfiguration
kubernetesVersion: v1.24.0-alpha.1.724_c175418281a607
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Time to init the cluster:
kubeadm init --config $HOME/kubeadm.yaml

Copy the configs:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Remove taint from master:
kubectl taint nodes --all node-role.kubernetes.io/master-

Most of these steps are taken from the official documentation.
You can choose any of the supported CNIs, just make sure that it's supported by all operating systems you plan to include in the cluster as nodes. Windows supports flannel, so we'll be using that.
It's recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. We'll need to make some changes on the Linux nodes.
Add br_netfilter to /etc/modules
echo 'br_netfilter' >> /etc/modules
modprobe br_netfilter

Enable bridged IPv4 traffic to iptables chains when using Flannel:
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.d/99-bridge-nf-call-iptables.conf
sysctl --system

Download the most recent flannel manifest:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -O $HOME/kube-flannel.ymlModify the net-conf.json section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI": 4096,
        "Port": 4789
      }
    }

This is needed for flannel on Linux to interoperate with flannel on Windows. If you don't plan on adding a Windows node, you can skip this step.
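Since the edit happens inside a YAML block scalar, it's easy to break the JSON. A quick way to validate what you ended up with (stub shown here, assuming python3 is present, as it is on Ubuntu 20.04):

```shell
# Stub of the edited net-conf.json payload; extract yours from the manifest to check it.
cat > /tmp/net-conf.json << 'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "VNI": 4096,
    "Port": 4789
  }
}
EOF
python3 -m json.tool /tmp/net-conf.json > /dev/null && echo "net-conf.json parses"
```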
Apply the manifest:
kubectl apply -f $HOME/kube-flannel.yml

You should now be able to see your pods and node:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-test Ready control-plane,master 7m54s v1.24.0-alpha.1.724+c175418281a607
# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-769cd67fd6-lfj2j 1/1 Running 0 4m16s
kube-system coredns-769cd67fd6-n88l9 1/1 Running 0 3m9s
kube-system etcd-k8s-test 1/1 Running 5 8m18s
kube-system kube-apiserver-k8s-test 1/1 Running 5 8m18s
kube-system kube-controller-manager-k8s-test 1/1 Running 0 8m15s
kube-system kube-flannel-ds-qp5qg 1/1 Running 0 7m53s
kube-system kube-proxy-7hbcq 1/1 Running 0 8m1s
kube-system kube-scheduler-k8s-test 1/1 Running 5 8m15s

We'll need to point the manifests we're about to deploy at the images we've built above.
sed -i "s|sigwindowstools/kube-proxy:VERSION-nanoserver|10.198.117.162/kube-proxy:$KUBE_VERSION-flannel-hostprocess|g" $HOME/sig-windows-tools/hostprocess/flannel/kube-proxy/kube-proxy.yml

Apply the manifest:
kubectl apply -f $HOME/sig-windows-tools/hostprocess/flannel/kube-proxy/kube-proxy.yml

Do the same for the flannel overlay manifest:
# These match the registry and the flannel version passed to build.sh above.
REGISTRY="10.198.117.162"
FLANNEL_VERSION="0.16.1"
FLANNEL_IMG="$REGISTRY/flannel:v$FLANNEL_VERSION-hostprocess"
sed -i "s|image: sigwindowstools.*flannel:.*|image: $FLANNEL_IMG|g" $HOME/sig-windows-tools/hostprocess/flannel/flanneld/flannel-overlay.yml

Apply the Windows flannel overlay manifests:
kubectl apply -f $HOME/sig-windows-tools/hostprocess/flannel/flanneld/flannel-overlay.yml

On the Windows side, we'll need to install a few features, set up containerd from source and configure at least one CNI for containerd. After that, we'll copy the needed Kubernetes binaries from our Linux box and set up the kubelet service. Finally, we'll join the node to the already deployed cluster. The kube-proxy and flannel containers will then be pulled and deployed, making this node a functional member of the cluster.
Copy the cert from the master node:
$MASTER_IP = "10.198.117.162"
scp.exe root@${MASTER_IP}:/var/snap/docker/common/registry/etc/certs/ca-pub.pem $HOME\ca-pub.crt
Import-Certificate -FilePath $HOME\ca-pub.crt -CertStoreLocation Cert:\LocalMachine\Root\

This guide only focuses on process isolation containers. If you plan on trying out Hyper-V isolation containers, you'll need to install that feature as well.
Note: Installing the Containers feature will reboot your machine.
Install-WindowsFeature -Name Containers -Restart

Most of these steps are taken from the Install-Containerd.ps1 and PrepareNode.ps1 scripts. The steps below are adapted to use binaries we've built from main.
Once the machine has rebooted, we need to install the needed build dependencies, and build the containerd binaries. This can be easily done by using scripts made available by the containerd team. We'll use a slightly modified version of what is already present in the containerd repository, hosted in this gist:
wget -UseBasicParsing `
-OutFile "$HOME/setup_env.ps1" `
https://gist.github.com/gabriel-samfira/7b3b519a6a55303329f9278933f7e014/raw/310118c71c52e2cae04b29b15970600186d7e008/setup_env.ps1
& "$HOME/setup_env.ps1"

You should now have the needed containerd binaries in C:/containerd/bin. We need to generate a containerd config, make some changes and install the containerd service.
Create the containerd config dir:
$ConainterDPath = "$env:ProgramFiles\containerd"
mkdir $ConainterDPath

Generate the config:
containerd.exe config default | Out-File "$ConainterDPath\config.toml" -Encoding ascii

Set proper paths for the CNIs:
$config = Get-Content "$ConainterDPath\config.toml"
$config = $config -replace "bin_dir = (.)*$", "bin_dir = `"c:/opt/cni/bin`""
$config = $config -replace "conf_dir = (.)*$", "conf_dir = `"c:/etc/cni/net.d`""
$config | Set-Content "$ConainterDPath\config.toml" -Force
mkdir -Force c:\opt\cni\bin | Out-Null
mkdir -Force c:\etc\cni\net.d | Out-Null

Fetch the windows-container-networking repo:
git clone https://github.com/microsoft/windows-container-networking $HOME\windows-container-networking

Build the CNI binaries:
cd $HOME\windows-container-networking
git checkout master
go build -o C:\opt\cni\bin\nat.exe -mod=vendor .\plugins\nat\
go build -o C:\opt\cni\bin\sdnoverlay.exe -mod=vendor .\plugins\sdnoverlay\
go build -o C:\opt\cni\bin\sdnbridge.exe -mod=vendor .\plugins\sdnbridge\

We need to create a CNI config for containerd. This will ensure that any container spun up will have functioning networking:
@"
{
    "cniVersion": "0.2.0",
    "name": "nat",
    "type": "nat",
    "master": "Ethernet",
    "ipam": {
        "subnet": "172.21.208.0/12",
        "routes": [
            {
                "GW": "172.21.208.1"
            }
        ]
    },
    "capabilities": {
        "portMappings": true,
        "dns": true
    }
}
"@ | Set-Content "c:\etc\cni\net.d\nat.json" -Force

You can choose any IP range you wish, as long as it does not conflict with any locally configured subnets. Make sure to set the master interface name to the name of your network adapter.
Remove any HNS networks that may exist. The nat CNI will automatically create one if needed:
Import-Module HostNetworkingService
Get-HnsNetwork | Remove-HnsNetwork

Install the containerd service and enable it on startup:
mkdir C:/var/log
C:/containerd/bin/containerd.exe --register-service --log-level=debug --log-file=C:/var/log/containerd.log --config "$ConainterDPath\config.toml"
Set-Service containerd -StartupType Automatic
Start-Service containerd

Copy the needed binaries from the master node:
mkdir -Force /k/bin | Out-Null
scp.exe root@${MASTER_IP}:~/go/src/k8s.io/kubernetes/_output/release-stage/node/windows-amd64/kubernetes/node/bin/kubelet.exe /k/bin/
scp.exe root@${MASTER_IP}:~/go/src/k8s.io/kubernetes/_output/release-stage/node/windows-amd64/kubernetes/node/bin/kubeadm.exe /k/bin/

Create needed folders:
mkdir -force C:\var\log\kubelet
mkdir -force C:\var\lib\kubelet\etc\kubernetes
mkdir -force C:\etc\kubernetes\pki
New-Item -path C:\var\lib\kubelet\etc\kubernetes\pki -type SymbolicLink -value C:\etc\kubernetes\pki\

Create the kubelet startup script:
@"
`$FileContent = Get-Content -Path "/var/lib/kubelet/kubeadm-flags.env"
`$global:KubeletArgs = `$FileContent.TrimStart('KUBELET_KUBEADM_ARGS=').Trim('"')
`$global:containerRuntime = "containerd"
if (`$global:containerRuntime -eq "Docker") {
`$netId = docker network ls -f name=host --format "{{ .ID }}"
if (`$netId.Length -lt 1) {
docker network create -d nat host
}
}
`$cmd = "C:\k\bin\kubelet.exe `$global:KubeletArgs --cert-dir=`$env:SYSTEMDRIVE\var\lib\kubelet\pki --config=/var/lib/kubelet/config.yaml --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override=`$(hostname) --pod-infra-container-image=`"mcr.microsoft.com/oss/kubernetes/pause:1.4.1`" --enable-debugging-handlers --cgroups-per-qos=false --enforce-node-allocatable=`"`" --network-plugin=cni --resolv-conf=`"`" --log-dir=/var/log/kubelet"
Invoke-Expression `$cmd
"@ | Set-Content -Path C:\k\StartKubelet.ps1

Set the k8s bin dir in the system PATH:
$currentPath = [Environment]::GetEnvironmentVariable("PATH", [EnvironmentVariableTarget]::Machine)
$currentPath += ';C:\k\bin'
[Environment]::SetEnvironmentVariable("PATH", $currentPath, [EnvironmentVariableTarget]::Machine)

Register the kubelet service:
$global:Powershell = (Get-Command powershell).Source
$global:PowershellArgs = "-ExecutionPolicy Bypass -NoProfile"
nssm install kubelet $global:Powershell $global:PowershellArgs C:\k\StartKubelet.ps1
nssm set kubelet DependOnService containerd

Open the kubelet port:
New-NetFirewallRule -Name kubelet -DisplayName 'kubelet' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 10250

We should now be ready to add this node to the cluster. To get the join command, run the following command on the master node:
~$ kubeadm token create --print-join-command
kubeadm join 10.198.117.162:6443 --token 96luu9.i93zi1c8ab602ipa --discovery-token-ca-cert-hash sha256:fe76cf309f51d65461cb8e83e38380d5907ee38163ad2cc205d51daece7612cf

To this command we also need to add the --cri-socket argument and run it on our Windows node:
kubeadm join 10.198.117.162:6443 --token 96luu9.i93zi1c8ab602ipa --discovery-token-ca-cert-hash sha256:fe76cf309f51d65461cb8e83e38380d5907ee38163ad2cc205d51daece7612cf --cri-socket "npipe:////./pipe/containerd-containerd"

On the master node, check that the Windows node has joined:
$ kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
k8s-test Ready control-plane,master 2d21h v1.24.0-alpha.1.724+c175418281a607
win-sur6h4cvh75 Ready <none> 2d21h v1.24.0-alpha.1.724+c175418281a607

Check that flannel and kube-proxy have been set up correctly on Windows:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-769cd67fd6-45jrw 1/1 Running 0 2d21h
coredns-769cd67fd6-r9488 1/1 Running 0 2d21h
etcd-k8s-test 1/1 Running 7 2d21h
kube-apiserver-k8s-test 1/1 Running 7 2d21h
kube-controller-manager-k8s-test 1/1 Running 2 2d21h
kube-flannel-ds-9mr79 1/1 Running 0 2d21h
kube-flannel-ds-windows-amd64-92lmf 1/1 Running 0 2d21h
kube-proxy-nwfb9 1/1 Running 0 2d21h
kube-proxy-windows-dvftk 1/1 Running 0 2d21h
kube-scheduler-k8s-test 1/1 Running 7 2d21h

We now have a working cluster with Windows as a node.