Create a Cloud9 jumpbox using Steps 01-03 here. This box will need sufficient AWS privileges, for example for EC2 and Route53.
Inspired by Installing a cluster quickly on AWS
base=~/environment # appropriate for Cloud9, change to suit
# version=4.12.0-0.okd-2023-02-18-033438
version=4.13.0-0.okd-2023-08-18-135805
mkdir ${base}/downloads && cd $_
wget https://github.com/okd-project/okd/releases/download/${version}/openshift-install-linux-${version}.tar.gz \
-O openshift-install-linux.tar.gz
tar -xvf openshift-install-linux.tar.gz
wget https://github.com/okd-project/okd/releases/download/${version}/openshift-client-linux-${version}.tar.gz \
-O openshift-client-linux.tar.gz
tar -xvf openshift-client-linux.tar.gz
wget https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 \
-O odo
chmod +x odo
sudo mv openshift-install oc kubectl odo /usr/local/bin
The following SSH key is a requirement for cluster installation (used later).
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# IAM User credentials - TODO why won't EC2 instance profile suffice?
export AWS_ACCESS_KEY_ID=<ID>
export AWS_SECRET_ACCESS_KEY=<KEY>
action=create # or destroy
openshift-install ${action} cluster --dir ${base}/openshift --log-level=info
When prompted, select the following options.
? SSH Public Key
> ~/.ssh/id_ed25519
? Platform
> aws
? Region
> <wherever your C9 (jumpbox) instance resides>
? Base Domain
> <an existing Route53 hosted zone e.g venafi.mcginlay.net>
? Cluster Name
> <something to match the jumpbox name e.g. okd-230320>
? Pull Secret
> <check https://console.redhat.com/openshift/install/pull-secret>
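If you prefer to review the answers before the (destructive) create step, the prompt answers can be captured up front in an install-config.yaml. A hedged sketch follows; the domain, cluster name and region are placeholders matching the example prompts above, and only a subset of the schema is shown.

```shell
# Sketch only: a partial install-config.yaml mirroring the prompt answers.
# baseDomain, metadata.name and region are placeholders - substitute your own.
cat > install-config-example.yaml << 'EOF'
apiVersion: v1
baseDomain: venafi.mcginlay.net   # existing Route53 hosted zone
metadata:
  name: okd-230320                # cluster name, matching the jumpbox name
platform:
  aws:
    region: us-east-1             # region where your jumpbox resides
EOF
```

Alternatively, `openshift-install create install-config --dir ${base}/openshift` walks the same prompts and writes the full file, which can then be edited before running `create cluster` against the same directory.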
After the install is complete, the kubeadmin password can be viewed as follows.
cat ${base}/openshift/auth/kubeadmin-password
cat > ~/.env << EOF
export KUBECONFIG=${base}/openshift/auth/kubeconfig
source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
EOF
echo "source ~/.env" >> ~/.bashrc
source ~/.env
kubectl cluster-info
oc get routes -n openshift-console | grep 'console-openshift'
Navigate to https://your-console/oauth/token/request, click "Display Token" and copy the oc login command.
There is a known issue on macOS which can be circumvented using --insecure-skip-tls-verify=true.
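The copied command has the following shape (the token and server below are placeholders, echoed rather than executed):

```shell
# Placeholders only: the real token comes from the "Display Token" page and
# expires after a short period; the server is your cluster's API endpoint.
TOKEN="sha256~EXAMPLE"
SERVER="https://api.okd-230320.venafi.mcginlay.net:6443"
echo "oc login --token=${TOKEN} --server=${SERVER}"
```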
Some of the OperatorHub sources may not be available by default, meaning the NGINX Ingress Operator may appear to be unavailable. The following patch ensures Operators from all the default sources are shown.
oc patch OperatorHub cluster --type json \
-p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": false}]'
The following commands address a pair of failure scenarios.
oc -n nginx-ingress adm policy add-scc-to-user -z nginx-ingress anyuid
oc -n nginx-ingress adm policy add-scc-to-user -z nginx-ingress privileged
With these protections removed, you can deploy your first NGINX Ingress controller instance.
This script will STOP all EC2 instances in the current cluster. Track the instance IDs if you intend to restart them later.
for node in $(kubectl get nodes -ocustom-columns=Name:metadata.name --no-headers); do
for instance in $( \
aws ec2 describe-instances \
--query 'Reservations[].Instances[].InstanceId' \
--filters "Name=private-dns-name,Values=${node}" \
--output text \
); do
aws ec2 stop-instances --instance-ids ${instance}
done
done
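The stop loop above can be mirrored for restarts without tracking IDs by hand. A hedged sketch, assuming jq is installed and that the installer's metadata.json (written to the `--dir` used during install) is still available; openshift-install tags every instance it creates with `kubernetes.io/cluster/<infra-id>`.

```shell
# Sketch: start every stopped EC2 instance carrying this cluster's tag.
# Assumes ${base} is set as earlier in this document and jq is available.
restart_cluster() {
  local infra_id
  # infraID is recorded by openshift-install at create time
  infra_id=$(jq -r .infraID "${base}/openshift/metadata.json")
  aws ec2 start-instances --instance-ids $( \
    aws ec2 describe-instances \
      --filters "Name=tag-key,Values=kubernetes.io/cluster/${infra_id}" \
                "Name=instance-state-name,Values=stopped" \
      --query 'Reservations[].Instances[].InstanceId' \
      --output text)
}
# Invoke explicitly when ready: restart_cluster
```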
Check out OpenShift with NGINX Ingress Operator and cert-manager