- https://nip.io/
- https://github.com/authelia/authelia
- https://docs.cilium.io/en/stable/gettingstarted/istio/
This conversion only works on a single-document file, for use with the terraform-provider-kubernetes provider:
echo 'yamldecode(file("test.yaml"))' | terraform console
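For illustration, a minimal single-document test.yaml (hypothetical content):

```yaml
# test.yaml -- hypothetical single-document manifest
apiVersion: v1
kind: Namespace
metadata:
  name: demo
```

terraform console then prints the equivalent HCL object (`{ "apiVersion" = "v1", "kind" = "Namespace", "metadata" = { "name" = "demo" } }`), which can be pasted into a resource body.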
One can overcome the multi-kind (multi-document) YAML limitation by slicing the file first with kubectl-slice:
curl -sL https://github.com/patrickdappollonio/kubectl-slice/releases/download/v1.2.1/kubectl-slice_1.2.1_linux_x86_64.tar.gz | tar -xvzf -;
rm -rf slices hcl;
./kubectl-slice -f document.yaml -o slices 2>&1 | grep  -oP "Wrote \K.+yaml" | while read yamlfile; do echo 'yamldecode(file("'$yamlfile'"))' | terraform console >>hcl; done;
cat hcl
Even after this, the kubernetes_manifest resource only takes a single resource description (an array does not work), so it is a pain to convert these without further coding to wrap each object in a pseudo-object of
resource "kubernetes_manifest" "crd-custom-name-for-each" {
  provider = kubernetes
  manifest = {$HCL-OBJECT_HERE}
}
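One way around the single-object limitation, sketched here and untested: skip the manual conversion and feed the sliced files straight into a for_each (assumes the slices produced by kubectl-slice live in ./slices and each contain exactly one document):

```hcl
# Sketch: one kubernetes_manifest per sliced file.
resource "kubernetes_manifest" "sliced" {
  for_each = fileset(path.module, "slices/*.yaml")
  provider = kubernetes
  manifest = yamldecode(file("${path.module}/${each.value}"))
}
```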
Given this issue as well, https://medium.com/@danieljimgarcia/dont-use-the-terraform-kubernetes-manifest-resource-6c7ff4fe629a, it may be better to drop the provider entirely and just use Helm or a custom flow at the end; this is too much effort for not much benefit beyond having a uniform config language.
- https://learn.hashicorp.com/tutorials/terraform/gke?in=terraform/kubernetes
- https://cloud.google.com/sdk/docs/uninstall-cloud-sdk
- https://cloud.google.com/sdk/docs/install#deb
- steps 1 and 2 for the apt source and signing key
 
sudo su -
apt-get update
apt-get dist-upgrade
apt autoremove
apt-get install apt-transport-https ca-certificates gnupg terraform kubectl google-cloud-sdk
gcloud auth application-default login --no-browser
git clone https://github.com/Neutrollized/free-tier-gke.git
- https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform
- JSON credentials files as above saved as ~/.config/...json
 
The commands below:
- Enable the necessary APIs
- IAM config: bind the account from the credentials file ([email protected]) to roles/resourcemanager.projectIamAdmin; the roles below are taken from variables.tf and will be granted to the new/different cluster account:
- roles/monitoring.viewer
- roles/monitoring.metricWriter
- roles/stackdriver.resourceMetadata.writer
- roles/logging.logWriter
 
gcloud services enable --async compute.googleapis.com
gcloud services enable --async container.googleapis.com
gcloud services enable --async cloudresourcemanager.googleapis.com
gcloud services enable --async iam.googleapis.com
gcloud projects add-iam-policy-binding immerspring --member='serviceAccount:[email protected]' --role='roles/resourcemanager.projectIamAdmin'
These avoid the errors below:
Error: Request
Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\""returned error: Batch request and retried single request "Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\""" both failed. Final error: Error applying IAM policy for project "immerspring": Error setting IAM policy for project "immerspring": googleapi: Error 403: Policy update access denied., forbidden
Error: Error creating service account: googleapi: Error 403: Identity and Access Management (IAM) API has not been used in project 351847295691 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/iam.googleapis.com/overview?project=351847295691 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Error: Request
Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\""returned error: Batch request and retried single request "Set IAM Binding for role "roles/stackdriver.resourceMetadata.writer" on "project \"immerspring\""" both failed. Final error: Error retrieving IAM policy for project "immerspring": googleapi: Error 403: Cloud Resource Manager API has not been used in project 351847295691 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=351847295691 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
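The "wait a few minutes ... then retry" class of 403s above is eventual consistency after enabling an API. A small retry wrapper (a sketch; the gcloud call shown in the comment is the binding command from above) avoids babysitting it:

```shell
#!/usr/bin/env bash
# Sketch: retry a command with linear backoff, for eventually-consistent
# gcloud calls. Usage: retry <attempts> <command...>
retry() {
  local attempts=$1; shift
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "retry: giving up after $n attempt(s)" >&2
      return 1
    fi
    echo "retry: attempt $n failed, backing off $((n * 10))s" >&2
    sleep $((n * 10))
    n=$((n + 1))
  done
}

# Hypothetical use against the failing binding:
# retry 5 gcloud projects add-iam-policy-binding immerspring \
#   --member='serviceAccount:[email protected]' \
#   --role='roles/resourcemanager.projectIamAdmin'
retry 3 true && echo "ok"
```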
- https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference
- https://github.com/Neutrollized/free-tier-gke/blob/master/terraform.tfvars.sample
vi variables.tf
# see diff below
❯ git commit -a
[master 45f9b8f] Hedgehog v1
 1 file changed, 14 insertions(+), 9 deletions(-)
~/terra/free-tier-gke master ⇡ 10s
diff --git a/variables.tf b/variables.tf
index 7e2050e..e76bbce 100644
--- a/variables.tf
+++ b/variables.tf
@@ -1,16 +1,20 @@
 #-----------------------
 # provider variables
 #-----------------------
-variable "project_id" {}
+variable "project_id" {
+  default = "immerspring"
+}
-variable "credentials_file_path" {}
+variable "credentials_file_path" {
+  default = "/home/sub/.config/immerspring-7d908732db98.json"
+}
 variable "region" {
-  default = "us-central1"
+  default = "australia-southeast1"
 }
 variable "zone" {
-  default = "us-central1-c"
+  default = "australia-southeast1-a"
 }
 #------------------------------------------------
@@ -69,7 +73,9 @@ variable "iam_roles_list" {
 # GKE Cluster
 #-----------------------------
-variable "gke_cluster_name" {}
+variable "gke_cluster_name" {
+  default = "hedgehog"
+}
 variable "regional" {
   description = "Is this cluster regional or zonal? Regional clusters aren't covered by Google's Always Free tier."
@@ -113,7 +119,7 @@ variable "master_authorized_network_cidr" {
 variable "master_ipv4_cidr_block" {
   description = "CIDR of the master network.  Range must not overlap with any other ranges in use within the cluster's network."
-  default     = ""
+  default     = "172.20.1.0/28"
 }
 variable "network_policy_enabled" {
@@ -156,8 +162,7 @@ variable "confidential_nodes_enabled" {
 #-----------------------------
 variable "machine_type" {
-  default = "n2d-standard-2"
-  #  default = "e2-small"
+  default = "e2-small"
 }
 variable "preemptible" {
To avoid this:
Error: Error waiting for creating GKE cluster: Invalid master authorized networks: network "0.0.0.0/0" is not a reserved network, which is required for private endpoints.\
- enable_private_endpoint -> false
 variable "enable_private_endpoint" {
   description = "When true public access to cluster (master) endpoint is disabled.  When false, it can be accessed both publicly and privately."
-  default     = "true"
+  default     = "false"
 }
- alternatively master_authorized_network_cidr -> 192.168.100.0/24
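In variables.tf terms, the alternative would look like this (a sketch; keeps the private endpoint but whitelists a reserved range instead of 0.0.0.0/0):

```hcl
variable "enable_private_endpoint" {
  default = "true"
}
variable "master_authorized_network_cidr" {
  default = "192.168.100.0/24"
}
```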
~/terra/free-tier-gke master*
❯ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/google versions matching "~> 4.0"...
- Finding hashicorp/google-beta versions matching "~> 4.0"...
- Installing hashicorp/google v4.15.0...
- Installed hashicorp/google v4.15.0 (signed by HashiCorp)
- Installing hashicorp/google-beta v4.15.0...
- Installed hashicorp/google-beta v4.15.0 (signed by HashiCorp)
[..]
Terraform has been successfully initialized!
[..]
❯ terraform apply
> google_project_iam_binding.gke_sa_iam_binding[0]: Creating...
 
> google_container_cluster.primary: Creating...
> google_container_cluster.primary: Creation complete after 7m56s [id=projects/immerspring/locations/australia-southeast1-a/clusters/hedgehog]
> google_container_node_pool.primary_preemptible_nodes: Still creating... [2m20s elapsed]
> google_container_node_pool.primary_preemptible_nodes: Creation complete after 6m26s [id=projects/immerspring/locations/australia-southeast1-a/clusters/hedgehog/nodePools/preempt-pool]
Apply complete! Resources: 12 added, 0 changed, 0 destroyed.
Outputs:
connect_to_zonal_cluster = "gcloud container clusters get-credentials hedgehog --zone australia-southeast1-a --project immerspring"
gcloud container clusters resize hedgehog --node-pool preempt-pool --num-nodes 3 --zone australia-southeast1-a
Somehow the node pool was not created by Terraform; backfill manually:
❯ gcloud container node-pools create preempt-pool \
… ❯   --cluster hedgehog \
… ❯   --zone australia-southeast1-a \
… ❯   --enable-autoupgrade \
… ❯   --preemptible \
… ❯   --num-nodes 1 --machine-type e2-medium \
… ❯   --enable-autoscaling --min-nodes=1 --max-nodes=4
Creating node pool preempt-pool...done.
Created [https://container.googleapis.com/v1/projects/immerspring/zones/australia-southeast1-a/clusters/hedgehog/nodePools/preempt-pool].
NAME          MACHINE_TYPE  DISK_SIZE_GB  NODE_VERSION
preempt-pool  e2-medium     100           1.21.9-gke.1002
The v1.0.7 image tag failed with ImagePullBackOff; the N-1 version and tag updates on the images did not help. The same deploy worked on an Autopilot cluster, so carried over some of its network config, such as disabling dnscache, private nodes + enable_intranode_visibility. https://learnk8s.io/a/a-visual-guide-on-troubleshooting-kubernetes-deployments/troubleshooting-kubernetes.en_en.v2.pdf
Apply dash-user.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kubernetes-dashboard
Alternative:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
Finally it runs:
❯ kubectl get pods -o wide  --namespace=kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE                                      NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-c45b7869d-fs5s4   1/1     Running   0          34m   10.0.0.125   gke-hedgehog-preempt-pool-d7617843-7gd1   <none>           <none>
kubernetes-dashboard-764b4dd7-zrnzx         1/1     Running   0          34m   10.0.0.127   gke-hedgehog-preempt-pool-d7617843-7gd1   <none>           <none>
Proxy up:
kubectl proxy &
alias kdash-token='kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"'
Then get the token with (needed every so often, hence the alias):
kdash-token
This works because of https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
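Following that doc's scheme, the proxy URL for the dashboard can be constructed by hand (names assume the kubernetes-dashboard install above; an empty port means the service's default port):

```shell
#!/usr/bin/env bash
# Pattern: /api/v1/namespaces/<namespace>/services/<scheme>:<service>:<port>/proxy/
ns="kubernetes-dashboard"
svc="kubernetes-dashboard"
echo "http://localhost:8001/api/v1/namespaces/${ns}/services/https:${svc}:/proxy/"
```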
kubectl port-forward pod/$(kubectl get pods --selector app=prometheus --namespace=istio-system -o jsonpath="{.items[0].metadata.name}") -n istio-system 9090 &
kubectl port-forward svc/kiali 20001:20001 -n istio-system
Quickstart from https://github.com/graalvm/mandrel/releases