- Run the bastion VM in your data center or in the cloud, with connectivity (usually a VPN) set up to the on-prem data center.
- Install tinyproxy on the bastion VM and set it up as a systemd service, following https://nxnjz.net/2019/10/how-to-setup-a-simple-proxy-server-with-tinyproxy-debian-10-buster/. Pick a random port rather than the default 8888, which would be too easy a target for spam bots (see the config sketch after this step). I don't use any user authentication on the proxy; instead I locked the firewall rules down to my laptop's IP/32. Make sure it works by validating with
curl --proxy http://127.0.0.1:<tinyproxy-port> https://httpbin.org/ip
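For reference, the non-default tinyproxy settings might look like the sketch below; the port number is a placeholder, and the Allow directive is an optional extra restriction on top of the cloud firewall rule.
# /etc/tinyproxy/tinyproxy.conf (relevant lines only)
# a random non-default port (placeholder value)
Port 28888
# optional: also restrict clients at the proxy itself
Allow <your-laptop-ip>/32
Restart the service after editing: sudo systemctl restart tinyproxy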
- Download the kubeconfig file for the k8s cluster to your laptop
- From your laptop, run
HTTPS_PROXY=<bastion-external-ip>:<tinyproxy-port> KUBECONFIG=my-kubeconfig kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node-0 Ready control-plane,master 32h v1.20.4
k8s-node-1 Ready <none> 32h v1.20.4
k8s-node-2 Ready <none> 32h v1.20.4
k8s-node-3 Ready <none> 32h v1.20.4
k8s-node-4 Ready <none> 32h v1.20.4
k8s-node-5 Ready <none> 32h v1.20.4
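If you don't want to export HTTPS_PROXY for every invocation, recent kubectl versions (v1.19+) also honor a proxy-url field in the cluster entry of the kubeconfig. A minimal sketch, assuming the cluster entry is named my-cluster and the server/CA values come from the downloaded kubeconfig:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<k8s-api-server>:6443
    certificate-authority-data: <from the downloaded kubeconfig>
    proxy-url: http://<bastion-external-ip>:<tinyproxy-port>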
Proxy-based options:
- tinyproxy with a bastion VM
- privoxy in the cluster
Given a private GKE cluster with public endpoint access disabled, here is one hack I did with Cloud IAP SSH forwarding via an internal bastion VM. This workaround uses no HTTP proxy and no external IP address in the user's VPC. It works well for one cluster, but for more than one cluster I would aim for deploying tinyproxy, as it is a cleaner solution that avoids dealing with the TLS SANs.
Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp for the latest info. For example, create the cluster with a private-only control-plane endpoint:
gcloud container clusters create "$CLUSTER_NAME" \
--region ${REGION} \
--network ${NETWORK} \
--subnetwork ${SUBNET} \
--cluster-version "$GKE_VERSION" \
--machine-type "${GKE_NODE_TYPE}" \
--num-nodes=1 \
--enable-autoupgrade \
--enable-autorepair \
--preemptible \
--enable-ip-alias \
--cluster-secondary-range-name=pod-range \
--services-secondary-range-name=service-range \
--enable-private-nodes \
--enable-private-endpoint \
--enable-master-authorized-networks \
--master-ipv4-cidr=172.16.0.64/28
# Get the kubectl credentials for the GKE cluster.
KUBECONFIG=~/.kube/dev gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION"
Create a bastion VM in the same VPC, with only an internal IP (no external IP).
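A minimal sketch of creating such a VM; the machine type is just an example, and --no-address keeps it internal-only:
gcloud compute instances create bastion \
  --project ${PROJECT} \
  --zone ${ZONE} \
  --machine-type e2-small \
  --network ${NETWORK} \
  --subnet ${SUBNET} \
  --no-address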
In the GCP console (or via gcloud), grant the users/groups that need to reach the private instance from the last step permission to tunnel to it through IAP.
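For example, granting the IAP tunnel role at the project level (the member below is a placeholder; it can also be scoped to just the bastion instance from the IAP page in the console):
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member="user:alice@example.com" \
  --role="roles/iap.tunnelResourceAccessor"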
e.g. 172.16.0.66 is the private master endpoint. The SSH traffic is tunnelled over TLS via Cloud IAP and then port-forwarded to the k8s master API endpoint:
gcloud beta compute --project ${PROJECT} ssh --zone ${ZONE} "bastion" --tunnel-through-iap --ssh-flag="-L 8443:172.16.0.66:443"
kubernetes.default and kubernetes are names allowed by the API server's TLS certificate, so point the kubeconfig server entry at one of them on the forwarded port:
server: https://kubernetes.default:8443
Then append the following line to /etc/hosts so that the name resolves to the local tunnel:
127.0.0.1 kubernetes kubernetes.default
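Equivalently, from a shell (the cluster entry name is a placeholder; for GKE credentials it usually looks like gke_<project>_<region>_<cluster>):
echo "127.0.0.1 kubernetes kubernetes.default" | sudo tee -a /etc/hosts
KUBECONFIG=~/.kube/dev kubectl config set-cluster <cluster-entry-name> --server=https://kubernetes.default:8443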
With the IAP tunnel open, kubectl now works through it:
KUBECONFIG=~/.kube/dev kubectl get po --all-namespaces