#!/bin/bash
AUTH_NAME="auth2kube"
NEW_KUBECONFIG="newkubeconfig"

echo "create a certificate request for system:admin user"
openssl req -new -newkey rsa:4096 -nodes -keyout $AUTH_NAME.key -out $AUTH_NAME.csr -subj "/CN=system:admin"

echo "create signing request resource definition"
oc delete csr $AUTH_NAME-access # delete any old CSR with the same name
cat << EOF > $AUTH_NAME-csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $AUTH_NAME-access
spec:
  signerName: kubernetes.io/kube-apiserver-client
  groups:
  - system:authenticated
  request: $(cat $AUTH_NAME.csr | base64 | tr -d '\n')
  usages:
  - client auth
EOF
oc create -f $AUTH_NAME-csr.yaml

echo "approve csr and extract client cert"
oc get csr
oc adm certificate approve $AUTH_NAME-access
oc get csr $AUTH_NAME-access -o jsonpath='{.status.certificate}' | base64 -d > $AUTH_NAME-access.crt

echo "add system:admin credentials, context to the kubeconfig"
oc config set-credentials system:admin --client-certificate=$AUTH_NAME-access.crt \
  --client-key=$AUTH_NAME.key --embed-certs --kubeconfig=/tmp/$NEW_KUBECONFIG

echo "create context for the system:admin"
oc config set-context system:admin --cluster=$(oc config view -o jsonpath='{.clusters[0].name}') \
  --namespace=default --user=system:admin --kubeconfig=/tmp/$NEW_KUBECONFIG

echo "extract certificate authority"
oc -n openshift-authentication rsh $(oc get pods -n openshift-authentication -o name | head -1) \
  cat /run/secrets/kubernetes.io/serviceaccount/ca.crt > ingress-ca.crt

echo "set certificate authority data"
oc config set-cluster $(oc config view -o jsonpath='{.clusters[0].name}') \
  --server=$(oc config view -o jsonpath='{.clusters[0].cluster.server}') \
  --certificate-authority=ingress-ca.crt --embed-certs --kubeconfig=/tmp/$NEW_KUBECONFIG

echo "set current context to system:admin"
oc config use-context system:admin --kubeconfig=/tmp/$NEW_KUBECONFIG

echo "test client certificate authentication with system:admin"
export KUBECONFIG=/tmp/$NEW_KUBECONFIG
oc login -u system:admin
oc get pod -n openshift-console
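A note on the `request:` field in the CSR YAML above: the value must be the PEM-encoded CSR base64-encoded into a single line, which is what the `base64 | tr -d '\n'` pipeline does. A standalone sketch of just that encoding, runnable without any cluster access (the `demo.key`/`demo.csr` file names are illustrative):

```shell
# Generate a throwaway key and CSR, mirroring the script's openssl step
openssl req -new -newkey rsa:2048 -nodes -keyout demo.key -out demo.csr \
  -subj "/CN=system:admin" 2>/dev/null

# Encode the PEM CSR as a single base64 line with no embedded newlines,
# as required by the CertificateSigningRequest 'request:' field
REQUEST=$(cat demo.csr | base64 | tr -d '\n')

# Round-trip check: decoding the one-line value restores the PEM CSR
echo "$REQUEST" | base64 -d | grep "CERTIFICATE REQUEST"
```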
That's awesome. Thanks for creating this.
glad that helped @lousyd! :)
Hi Roberto, it worked flawlessly for me yesterday on a 4.8 cluster. Then I tried to create a second kubeconfig file for a different cluster (same OCP, same version, etc.), but it failed. After several hours, I tried with a fresh terminal, and then it worked. I'm posting this here in case it happens to anybody else.
Thank you so much, what a great piece of work, congratulations!
@rodolof Thanks for the information and for the comment! Also happy that this script helped :)
Thank you too! Best regards!!
Updated to also support OpenShift 4.9
Does the KUBECONFIG env variable need to be set before running all these steps? I got the following error without setting KUBECONFIG:
[root@dstrlaae9201 auth]# oc create -f auth2kube-csr.yaml
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
The problem is we don't have a working kubeconfig. Any suggestions?
@voyasas Yes, a valid kubeconfig needs to be available in order to run this series of commands, either in ~/.kube/config or specified via the KUBECONFIG env variable.
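In other words, the script bootstraps the new kubeconfig from an existing, working session, so `oc` must already be able to reach the cluster. A minimal pre-flight sketch (the path below is illustrative, adjust it to wherever your admin kubeconfig actually lives):

```shell
# Point oc at an existing, valid kubeconfig before running the script.
# The path is illustrative; any working admin kubeconfig will do.
export KUBECONFIG="$HOME/.kube/config"

# Sanity check: warn early if the file is missing
if [ ! -f "$KUBECONFIG" ]; then
  echo "No kubeconfig found at $KUBECONFIG; log in first (oc login ...)" >&2
fi
```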
@rcarrata these kubeconfig files are still time-bound. Do you have a method of regenerating a kubeconfig that doesn't expire? I lost my original.
Tested on OCP 4.4.17, but valid on all OCP 4 (and, I guess, OCP 3) clusters: