After failing to get e2e working against a preexisting cluster, à la kubetest, I learned that this isn't the method most folks use, despite it being all over the Kubernetes documentation. I was pointed instead to e2e-k8s.sh
from the KIND repo.
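If you do want to go that route, the script lives under hack/ci/ in the kind repo and drives the whole build-and-test cycle. Roughly, something like the following; the path and the FOCUS environment variable are my assumptions about how the script is driven, so check the script itself before relying on them:

# run from the root of your k/k tree, with kind cloned into your GOPATH (assumed layout)
FOCUS="PodOverhead" $(go env GOPATH)/src/github.com/kubernetes-sigs/kind/hack/ci/e2e-k8s.sh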
To get started:
go get -u github.com/kubernetes-sigs/kind
go get k8s.io/kubernetes
KIND can start a cluster from sources, assuming you are calling kind from the root of your k/k tree.
KIND has a script for running the entire test suite, but to start I found it easier to bring up the kind cluster first and then run the e2e tests against it.
kind build node-image # this builds a node image from my k/k tree
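By default the result should be tagged kindest/node:latest (that default is my recollection of kind's behavior; verify with kind build node-image --help). A quick check that the build actually produced an image:

docker images kindest/node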
For our testing, set up a single control plane node with two workers. The config file (from the root of k/k):
$ cat _artifacts/kind-config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "feature-gates": "PodOverhead=true"
  scheduler:
    extraArgs:
      "feature-gates": "PodOverhead=true"
- |
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  metadata:
    name: config
  featureGates:
    PodOverhead: true
networking:
  ipFamily: ipv4
nodes:
- role: control-plane
- role: worker
- role: worker
Create the actual cluster:
kind create cluster --image kindest/node:latest -v=3 --config=_artifacts/kind-config.yaml
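Depending on your kind release, you may need to export the generated kubeconfig yourself (older releases) or it gets merged into ~/.kube/config automatically. A quick sanity check; the kubeconfig-path command is an assumption about older kind versions:

export KUBECONFIG="$(kind get kubeconfig-path)"  # only needed on older kind releases
kubectl get nodes -o wide                        # expect one control-plane and two workers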
From the top of k/k, build the e2e test binary:
make WHAT=test/e2e/e2e.test
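ginkgo-e2e.sh also expects a ginkgo binary from the same build tree; if it complains about a missing ginkgo, building it the same way should sort that out (this is my understanding of the k/k build targets of that era, not something the original run required):

make WHAT=vendor/github.com/onsi/ginkgo/ginkgo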
From the top of k/k, with the cluster up and the kubeconfig set up:
./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus="PodOverhead" --report-dir=./_artifacts --disable-log-dump=true
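The JUnit results land under the --report-dir you passed (./_artifacts here). To widen or narrow the run you can tweak the ginkgo selectors, for example skipping anything tagged Serial while keeping the PodOverhead focus; the skip regex below is just an illustration:

./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus="PodOverhead" --ginkgo.skip="\[Serial\]" --report-dir=./_artifacts --disable-log-dump=true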