I was playing around with kubevirt.io (v1.2.0) on a Radxa ROCK 5 Model B. When I tried to boot a VM, I just had the qemu-kvm
process eating 100% CPU with no output to the console.
I built an alternative setup based on Ubuntu 22.04, and there QEMU worked with KVM without any problems. After some investigation I suspected the (U)EFI firmware. I transferred the `/usr/share/AAVMF/AAVMF_CODE.fd` file from the 22.04 setup into the KubeVirt compute container, started an additional qemu-kvm with `-bios AAVMF/AAVMF_CODE.fd`, and voilà - the VM booted correctly.
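For reference, the manual check went roughly like this; the namespace, pod name, target path and memory size are placeholders, not the exact values I used, and the QEMU binary name inside the container may differ (qemu-kvm vs. qemu-system-aarch64):
# Copy the known-good 22.04 firmware into the virt-launcher compute container
kubectl cp /usr/share/AAVMF <namespace>/<virt-launcher-pod>:/tmp/AAVMF -c compute
# Boot a throwaway guest against that firmware inside the compute container;
# with the 22.04 AAVMF_CODE.fd the firmware drops you into the UEFI shell
# instead of spinning at 100% CPU.
kubectl exec -it -n <namespace> <virt-launcher-pod> -c compute -- \
  qemu-kvm -M virt -enable-kvm -cpu host -m 512M \
    -bios /tmp/AAVMF/AAVMF_CODE.fd -nographic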
My findings so far:
- Ubuntu 22.04: Works 'out of the box'. QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.17). EFI '2022.02' from https://packages.ubuntu.com/jammy/qemu-efi-aarch64
- Using it with a later UEFI - e.g. '2022.11' from https://packages.ubuntu.com/lunar/qemu-efi-aarch64 - results in 100% CPU load without any indication of a booting VM.
- In emulation mode everything works - but it is slow ;)
- In tianocore/edk2 there are two further releases between the two Ubuntu versions: [2022.05](https://github.com/tianocore/edk2/releases/tag/edk2-stable202205) and [2022.08](https://github.com/tianocore/edk2/releases/tag/edk2-stable202208) - I haven't had the time to build and test them.
- Also tried Alpine 3.19.1 (QEMU 8.1.5) with EDK2 based on '2023.08' - also 100% CPU / not working.
My solution is to use a KubeVirt hook sidecar to patch the domain configuration so it uses the EFI firmware from Ubuntu 22.04. The 'key' snippet is this annotation:
hooks.kubevirt.io/hookSidecars: >
  [
    {
      "args": ["--version", "v1alpha3"],
      "image": "quay.io/kubevirt/sidecar-shim:v1.2.0",
      "pvc": {"name": "kubevirt-qemu-uefi", "volumePath": "/qemu-efi", "sharedComputePath": "/var/run/qemu-efi"},
      "configMap": {"name": "efi-patcher-config-map", "key": "my_script.sh", "hookPath": "/usr/bin/onDefineDomain"}
    }
  ]
It mounts a PVC named `kubevirt-qemu-uefi` into `/var/run/qemu-efi`, where I placed the `AAVMF` folder from 22.04. I'm using k3s with the local driver and a fixed `hostPath` folder, so this worked fine in my scenario. If you have a more complex storage situation, the "PVC population container" idea from the QEMU strace post could be a good alternative.
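Populating that volume is then just a matter of copying the firmware onto the node backing the `hostPath` PV defined further down; the source path of the 22.04 files is a placeholder:
# On the k3s node that backs the hostPath PV: place the Ubuntu 22.04 firmware
# in an AAVMF/ subfolder, matching the path the hook script rewrites to.
# AAVMF_VARS.fd is included in case the domain also references the NVRAM template.
mkdir -p /data/kube-virt/kubevirt-qemu-uefi/AAVMF
cp /path/to/ubuntu-22.04/usr/share/AAVMF/AAVMF_CODE.fd \
   /path/to/ubuntu-22.04/usr/share/AAVMF/AAVMF_VARS.fd \
   /data/kube-virt/kubevirt-qemu-uefi/AAVMF/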
The `my_script.sh` just rewrites the directory of the EFI files in the domain XML; the shim passes the domain XML as the fourth argument, and whatever the script prints to stdout becomes the patched definition:
apiVersion: v1
kind: ConfigMap
metadata:
  name: efi-patcher-config-map
  namespace: virtual-machines
data:
  my_script.sh: |
    #!/bin/sh
    tempFile=`mktemp --dry-run`
    echo $4 > $tempFile
    sed -i "s|/usr/share/AAVMF/AAVMF|/var/run/qemu-efi/AAVMF/AAVMF|" $tempFile
    cat $tempFile
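As far as I can tell the shim invokes the script as `onDefineDomain --vmi <vmi-json> --domain <domain-xml>`, which is why the script reads `$4`. You can dry-run it locally; the XML fragment below is made up and only there to show the path rewrite:
# Dry-run of my_script.sh outside the cluster. Only $4 (the domain XML) matters;
# the fragment is illustrative, not a full KubeVirt domain definition.
sh my_script.sh --vmi '{}' --domain \
  "<os><loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader></os>"
# Expected output: the same fragment, now pointing at /var/run/qemu-efi/AAVMF/AAVMF_CODE.fd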
PS: don't forget to enable the `Sidecar` feature gate, otherwise none of this will take effect.
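With an operator-based install that can be done with a merge patch on the KubeVirt CR; this assumes the default resource name `kubevirt` in the `kubevirt` namespace, and note that a merge patch replaces the whole featureGates list, so include any gates you already use:
# Enable the Sidecar feature gate on the KubeVirt CR
kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec":{"configuration":{"developerConfiguration":{"featureGates":["Sidecar"]}}}}'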
Here is my complete example:
# PV for the EFI files
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-virt-kubevirt-qemu-uefi
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  capacity:
    storage: 500Mi
  hostPath:
    path: /data/kube-virt/kubevirt-qemu-uefi
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
# PVC for the EFI files
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubevirt-qemu-uefi
  namespace: virtual-machines
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Mi
  storageClassName: local-storage
  volumeMode: Filesystem
  volumeName: kube-virt-kubevirt-qemu-uefi
# ConfigMap & script for the sidecar
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efi-patcher-config-map
  namespace: virtual-machines
data:
  my_script.sh: |
    #!/bin/sh
    tempFile=`mktemp --dry-run`
    echo $4 > $tempFile
    sed -i "s|/usr/share/AAVMF/AAVMF|/var/run/qemu-efi/AAVMF/AAVMF|" $tempFile
    cat $tempFile
# The virtual machine.
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
  namespace: virtual-machines
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
      annotations:
        hooks.kubevirt.io/hookSidecars: >
          [
            {
              "args": ["--version", "v1alpha3"],
              "image": "quay.io/kubevirt/sidecar-shim:v1.2.0",
              "pvc": {"name": "kubevirt-qemu-uefi", "volumePath": "/qemu-efi", "sharedComputePath": "/var/run/qemu-efi"},
              "configMap": {"name": "efi-patcher-config-map", "key": "my_script.sh", "hookPath": "/usr/bin/onDefineDomain"}
            }
          ]
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            # 256M is the minimum for aarch64
            memory: 256M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo:20240323_9bd334045-arm64
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
And you should be able to see the VM booting up:
# start the vm
virtctl start -n virtual-machines testvm
# attach to the console
virtctl console -n virtual-machines testvm
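To double-check that the hook really swapped the firmware, you can dump the generated libvirt domain from the compute container. KubeVirt names the domain `<namespace>_<vm-name>`, and the label selector below relies on the `kubevirt.io/domain` label from the VM template being propagated to the virt-launcher pod - adjust if your setup differs:
# Find the virt-launcher pod of the VM and look at the firmware paths in the
# live domain definition - they should now point to /var/run/qemu-efi.
POD=$(kubectl get pods -n virtual-machines -l kubevirt.io/domain=testvm -o name | head -n1)
kubectl exec -n virtual-machines "$POD" -c compute -- \
  virsh dumpxml virtual-machines_testvm | grep AAVMF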