@anjannath
Last active July 4, 2025 11:34
Script to start the self-sufficient bundle on Linux using `virt-install`
#!/bin/bash
set -euo pipefail
# Default passwords; can be overridden via environment variables
PASS_DEVELOPER="${PASS_DEVELOPER:-P@ssd3v3loper}"
PASS_KUBEADMIN="${PASS_KUBEADMIN:-P@sskub3admin}"
CRC_BUNDLE_PATH="${CRC_BUNDLE_PATH:-$HOME/.crc/cache/crc_libvirt_4.19.0_amd64.crcbundle}"

# Validate required env vars before using them (set -u would otherwise
# abort with an unhelpful "unbound variable" error)
if [[ "${PULL_SECRET_PATH:-}" == "" ]]; then
    echo "Path to pull secret file needs to be set using the PULL_SECRET_PATH env variable"
    exit 1
fi
if [[ "${PUB_KEY_PATH:-}" == "" ]]; then
    echo "Path to the SSH public key needs to be set using the PUB_KEY_PATH env variable"
    exit 1
fi
if [[ "${CRC_BUNDLE_PATH}" == "" ]]; then
    echo "Path to a CRC bundle needs to be set using the CRC_BUNDLE_PATH env variable"
    exit 1
fi

# The private key path is derived from the public key path by stripping the .pub extension
SSH="ssh -oUserKnownHostsFile=/dev/null -oStrictHostKeyChecking=no -i ${PUB_KEY_PATH%.*}"
SCP="scp -oUserKnownHostsFile=/dev/null -oStrictHostKeyChecking=no -i ${PUB_KEY_PATH%.*}"

PULL_SECRET=$(cat "${PULL_SECRET_PATH}")
PUB_KEY=$(cat "${PUB_KEY_PATH}")
function gen_cloud_init() {
    echo -n "Generating cloud-init user-data..."
    rm -rf seed.iso
    cat <<EOF > user-data
#cloud-config
runcmd:
  - systemctl enable --now kubelet
write_files:
  - path: /home/core/.ssh/authorized_keys
    content: '$PUB_KEY'
    owner: core
    permissions: '0600'
  - path: /opt/crc/id_rsa.pub
    content: '$PUB_KEY'
    owner: root:root
    permissions: '0644'
  - path: /etc/sysconfig/crc-env
    content: |
      CRC_CLOUD=1
      CRC_NETWORK_MODE_USER=0
    owner: root:root
    permissions: '0644'
  - path: /usr/local/bin/crc-check-cloud-env.sh
    content: |
      #!/bin/bash
      exit 0
    owner: root:root
    permissions: '0777'
  - path: /opt/crc/pull-secret
    content: |
      $PULL_SECRET
    permissions: '0644'
  - path: /opt/crc/pass_kubeadmin
    content: '$PASS_KUBEADMIN'
    permissions: '0644'
  - path: /opt/crc/pass_developer
    content: '$PASS_DEVELOPER'
    permissions: '0644'
  - path: /opt/crc/ocp-custom-domain.service.done
    permissions: '0644'
EOF
    # A seed ISO is not needed since user-data is passed directly to
    # virt-install; kept here for reference:
    # touch meta-data
    # mkisofs -output seed.iso -volid cidata -joliet -rock user-data meta-data
    # macOS: hdiutil makehybrid -o seed.iso -hfs -joliet -iso -default-volume-name cidata seedconfig/
}
function extract_disk_img() {
    echo -n "Extracting VM image from CRC bundle ..."
    # The bundle is a zstd-compressed tarball; extract only the qcow2 disk image
    zstd -d --format=zstd -o bundle.tar "${CRC_BUNDLE_PATH}"
    bundle_name=$(basename "${CRC_BUNDLE_PATH}")
    tar -O -xvf bundle.tar "${bundle_name%.*}"/crc.qcow2 > crc.qcow2
    rm -rf bundle.tar
}
function create_libvirt_vm() {
    crc_disk_path="$(pwd)/crc.qcow2"
    vm_name=${1}
    # sudo chown qemu:qemu ${crc_disk_path}
    # sudo chown qemu:qemu ${cloud_init_iso}
    echo -n "Creating VM..."
    # user-data is injected directly; disable=on stops cloud-init from
    # re-running on subsequent boots
    sudo virt-install \
        --name "${vm_name}" \
        --vcpus 4 \
        --memory 14000 \
        --disk path="${crc_disk_path}",format=qcow2,bus=virtio \
        --import \
        --os-variant=generic \
        --nographics \
        --cloud-init disable=on,user-data=./user-data \
        --noautoconsole
}
function get_kubeconfig() {
    echo -n "Waiting 3 minutes for VM to start ..."
    sleep 180
    vm_name=${1}
    VM_IP=$(sudo virsh domifaddr "${vm_name}" | tail -2 | head -1 | awk '{print $4}' | cut -d/ -f1)
    while ! ${SSH} core@${VM_IP} -- exit 0; do
        sleep 5
        echo -n "Waiting for SSH to be available ..."
    done
    echo -n "VM is running ..."
    # The node resource becoming available implies kubelet is up and the CA rotation is done
    while ! ${SSH} core@${VM_IP} -- 'sudo oc get node --kubeconfig /opt/crc/kubeconfig --context system:admin'; do
        sleep 30
        echo -n "Waiting for CA to be rotated ..."
    done
    ${SCP} core@${VM_IP}:/opt/kubeconfig .
    # Point the copied kubeconfig at the VM's IP address
    oc config set clusters.api-crc-testing:6443.server https://${VM_IP}:6443 --config ./kubeconfig
    oc config set clusters.crc.server https://${VM_IP}:6443 --config ./kubeconfig
}
gen_cloud_init
extract_disk_img
create_libvirt_vm crc-ng
get_kubeconfig crc-ng
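
A minimal invocation sketch (the script name and paths below are placeholders; adjust to your environment):

# hypothetical paths; the matching private key is expected next to the
# public key, minus the .pub extension (here: $HOME/.ssh/id_rsa)
export PULL_SECRET_PATH="$HOME/pull-secret.json"
export PUB_KEY_PATH="$HOME/.ssh/id_rsa.pub"
./start-bundle.sh   # hypothetical script name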
@anjannath
Author

Yes, that can be added. Currently we have the following check, which verifies that the node resource is available and is therefore directly dependent on kubelet.service being active:

while ! ${SSH} core@${VM_IP} -- 'sudo oc get node --kubeconfig /opt/crc/kubeconfig --context system:admin'; do
        sleep 30
        echo -n "Waiting for CA to be rotated ..."
done
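
A lighter-weight variant of that gate could poll kubelet directly before the API check; a sketch (untested, reusing the script's `${SSH}` and `${VM_IP}`):

while ! ${SSH} core@${VM_IP} -- 'systemctl is-active --quiet kubelet'; do
        sleep 5
        echo -n "Waiting for kubelet to become active ..."
done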

@cfergeau

cfergeau commented Jul 3, 2025

for what it's worth, there's a dedicated ssh module for cloud-init https://cloudinit.readthedocs.io/en/latest/reference/modules.html#ssh

Do you know how the cloud-init run is ordered compared to systemd units? are we guaranteed cloud-init completes before systemd units are started? or is there a cloud-init-complete systemd unit we can use to delay the startup of the kubelet unit?

@praveenkumar

for what it's worth, there's a dedicated ssh module for cloud-init https://cloudinit.readthedocs.io/en/latest/reference/modules.html#ssh

This is much better than what we are currently doing manually with the following:

- path: /home/core/.ssh/authorized_keys
  content: '$PUB_KEY'
  owner: core
  permissions: '0600'
- path: /opt/crc/id_rsa.pub
  content: '$PUB_KEY'
  owner: root:root
  permissions: '0644'
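
For reference, a minimal sketch of the ssh-module version (untested; assumes `core` is the image's default cloud-init user, so the key lands in its authorized_keys; the `/opt/crc/id_rsa.pub` copy would still need its write_files entry):

cat <<EOF > user-data
#cloud-config
ssh_authorized_keys:
  - $PUB_KEY
EOF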

Do you know how the cloud-init run is ordered compared to systemd units? are we guaranteed cloud-init completes before systemd units are started? or is there a cloud-init-complete systemd unit we can use to delay the startup of the kubelet unit?

https://cloudinit.readthedocs.io/en/latest/explanation/boot.html has details around it.

Summary (by ChatGPT):

Cloud-init runs in four stages, mapped to systemd services:

1. cloud-init-local.service: runs very early, before networking is up; responsible for fetching early metadata (the init-local stage).
2. cloud-init.service: runs after networking is available; handles instance initialization (user-data fetching, etc.).
3. cloud-config.service: runs the config modules.
4. cloud-final.service: final stage; runs runcmd, bootcmd, and other user commands.
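
Given that ordering, runcmd only executes in cloud-final.service, so anything that must wait for the user-data to be fully applied can block on cloud-init completion. For example, from inside the guest:

# Blocks until all cloud-init stages (including cloud-final.service)
# have finished; exits non-zero if cloud-init reported an error.
cloud-init status --wait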

@praveenkumar

I want to avoid making multiple ssh calls if possible. So rather than checking that the kubelet service is up and running with `${SSH} core@${VM_IP} -- 'systemctl is-active kubelet'`, it is better to just scp the kubeconfig to the host and then check whether the resource is available:

function get_kubeconfig() {
    vm_name=${1}

    VM_IP=$(sudo virsh domifaddr ${vm_name} | tail -2 | head -1 | awk '{print $4}' | cut -d/ -f1)

    while ! ${SSH} core@${VM_IP} -- exit 0; do
        sleep 5
        echo -n "Waiting for SSH to be available ..."
    done

    echo -n "VM is running ..."

    ${SCP} core@${VM_IP}:/opt/kubeconfig .
    oc config set clusters.api-crc-testing:6443.server https://${VM_IP}:6443 --config ./kubeconfig
    oc config set clusters.crc.server https://${VM_IP}:6443 --config ./kubeconfig

    while ! oc get node --kubeconfig ./kubeconfig --context system:admin; do
        sleep 30
        echo -n "Waiting for apiserver ..."
    done
}

@cfergeau

cfergeau commented Jul 4, 2025

If we order kubelet.service after cloud-final.service, it might be possible to always enable the kubelet service and remove `runcmd: systemctl enable --now kubelet` from the cloud-init file.
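
For illustration, a sketch of such a drop-in (hypothetical file name; it would have to be baked into the bundle image rather than created at run time):

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-after-cloud-init.conf <<'EOF'
[Unit]
After=cloud-final.service
EOF
sudo systemctl daemon-reload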

@praveenkumar

@cfergeau this means adding a drop-in file for the kubelet service and changing the way the current bundle works locally. As of now the bundles used with crc do not depend on cloud-init, so I am not sure whether this change would cause issues for those scenarios.

@cfergeau

cfergeau commented Jul 4, 2025

Yes, with the way we currently start the bundle, it's not possible, but when we fully switch to the self-sufficient bundle, then we can consider it.
