This example uses the latest clusterctl (v1.0.0) and the latest CAPG release that supports v1beta1 (v1.0.0).
Steps to get a running workload cluster, for testing/development purposes.
This is a quick overview; for a more in-depth guide see https://cluster-api.sigs.k8s.io/user/quick-start.html
- Create a kind cluster
$ kind create cluster --image kindest/node:v1.22.1 --wait 5m
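You can verify the management cluster is reachable before moving on (kind-kind is the context name kind creates by default):
$ kubectl cluster-info --context kind-kind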
- Export the required variables
$ export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
$ export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
$ export GCP_NODE_MACHINE_TYPE=n1-standard-2
$ export GCP_PROJECT=<YOUR GCP PROJECT>
$ export GCP_REGION=us-east4
$ export IMAGE_ID=<YOUR IMAGE>
$ export GCP_NETWORK_NAME=default
$ export CLUSTER_NAME=test # you can choose any name; this is the one used throughout this example
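If you have already built a CAPI-compatible image with image-builder, you can look up a value for IMAGE_ID with something like the following (--no-standard-images hides the public image projects):
$ gcloud compute images list --project="${GCP_PROJECT}" --no-standard-images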
- Set up the network. In this example we are using the default network, so we create a Cloud Router and Cloud NAT to give the workload cluster internet access.
$ gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"
$ gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
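A quick sanity check that the router was created:
$ gcloud compute routers list --project="${GCP_PROJECT}" --regions="${GCP_REGION}"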
- Deploy CAPI/CAPG
$ clusterctl init --infrastructure gcp
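You can check that the provider controllers are up before continuing (these namespaces are created by clusterctl init):
$ kubectl get pods -n capi-system
$ kubectl get pods -n capg-system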
- Generate the workload cluster config and apply it
$ clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.22.3 > workload-test.yaml
$ kubectl apply -f workload-test.yaml
- You can check the CAPG manager logs or watch the GCP console; the control plane VM should be up and running soon.
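For example, to follow the CAPG manager logs (capg-controller-manager is the default deployment name installed by clusterctl init):
$ kubectl logs -n capg-system deployment/capg-controller-manager -f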
- Checks
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                            READY  SEVERITY  REASON                 SINCE  MESSAGE
/test                                                           False  Info      WaitingForKubeadmInit  5s
├─ClusterInfrastructure - GCPCluster/test
└─ControlPlane - KubeadmControlPlane/test-control-plane         False  Info      WaitingForKubeadmInit  5s
  └─Machine/test-control-plane-x57zs                            True                                    31s
      └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER  INITIALIZED  API SERVER AVAILABLE  REPLICAS  READY  UPDATED  UNAVAILABLE  AGE    VERSION
test-control-plane   test                                        1                1        1            2m9s   v1.22.3
- Get the kubeconfig for the workload cluster
$ clusterctl get kubeconfig $CLUSTER_NAME
$ clusterctl get kubeconfig $CLUSTER_NAME > workload-test.kubeconfig
- Apply the CNI (nodes in the workload cluster will not become Ready until a CNI is installed)
$ kubectl --kubeconfig=./workload-test.kubeconfig \
apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
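You can watch the Calico pods come up in the workload cluster:
$ kubectl --kubeconfig=./workload-test.kubeconfig get pods -n kube-system -w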
- Wait a bit and you should see the following when you get the kubeadmcontrolplane
$ kubectl get kubeadmcontrolplane
NAME                 CLUSTER  INITIALIZED  API SERVER AVAILABLE  REPLICAS  READY  UPDATED  UNAVAILABLE  AGE     VERSION
test-control-plane   test     true         true                  1         1      1        0            6m33s   v1.22.3
$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   62s   v1.22.3
- Edit the MachineDeployment in workload-test.yaml: it has 0 replicas by default, so set replicas to the number of worker nodes you want (in this example we used 2) and apply workload-test.yaml again, as shown below.
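The relevant fragment of the generated manifest should end up looking roughly like this (test-md-0 is the MachineDeployment name generated for this example; yours may differ):

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
  namespace: default
spec:
  clusterName: test
  replicas: 2   # changed from 0: the number of worker nodes you want

$ kubectl apply -f workload-test.yaml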
- After a few minutes everything should be up and running.
$ clusterctl describe cluster $CLUSTER_NAME
NAME                                                            READY  SEVERITY  REASON  SINCE  MESSAGE
/test                                                           True                     15m
├─ClusterInfrastructure - GCPCluster/test
├─ControlPlane - KubeadmControlPlane/test-control-plane         True                     15m
│ └─Machine/test-control-plane-x57zs                            True                     19m
│     └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
└─Workers
    └─MachineDeployment/test-md-0                               True                     10m
        └─2 Machines...                                         True                     13m    See test-md-0-68bd55744b-qpk67, test-md-0-68bd55744b-tsgf6
$ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
NAME                       STATUS   ROLES                  AGE   VERSION
test-control-plane-7xzw2   Ready    control-plane,master   21m   v1.22.3
test-md-0-b7766            Ready    <none>                 17m   v1.22.3
test-md-0-wsgpj            Ready    <none>                 17m   v1.22.3
- This is a regular Kubernetes cluster; you can deploy your apps and anything else you want to it.
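For example, a quick smoke test (a hypothetical nginx deployment, just to confirm the cluster is usable):
$ kubectl --kubeconfig=./workload-test.kubeconfig create deployment nginx --image=nginx
$ kubectl --kubeconfig=./workload-test.kubeconfig get pods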
- To delete the workload cluster
$ kubectl delete cluster $CLUSTER_NAME
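Deletion is asynchronous; you can watch the Cluster object (and the GCP resources it owns) go away with:
$ kubectl get clusters -w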
- Delete the router/NAT
$ gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
--router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter"
$ gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
--region="${GCP_REGION}"
- Delete the kind cluster
$ kind delete cluster