IMPORTANT: Be sure to complete Installing LVM on SNO (OpenShift v4.17.x) before starting this guide.
Deploy the OpenShift Virtualization operator with the following command.

```shell
oc apply -f - <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.17.1
  channel: "stable"
EOF
```
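Before moving on, it's worth confirming that OLM has finished rolling out the operator. The following is a hedged sketch: the CSV name is taken from the `startingCSV` in the Subscription above, and the timeout value is an arbitrary choice.

```shell
# List the ClusterServiceVersions in the openshift-cnv namespace;
# the install is done when the CSV PHASE column reads "Succeeded"
oc get csv -n openshift-cnv

# Or block until the install completes
# (CSV name matches the startingCSV from the Subscription above)
oc wait csv/kubevirt-hyperconverged-operator.v4.17.1 \
  -n openshift-cnv \
  --for=jsonpath='{.status.phase}'=Succeeded \
  --timeout=300s
```

If the CSV never reaches `Succeeded`, check the Subscription's `status.conditions` for catalog or dependency resolution errors before applying the HyperConverged CR.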
Next, apply the following `HyperConverged` CR to your cluster to install OpenShift Virtualization.

```shell
oc apply -f - <<EOF
kind: HyperConverged
apiVersion: hco.kubevirt.io/v1beta1
metadata:
  annotations:
    deployOVS: 'false'
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
EOF
```
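Once the HyperConverged CR is applied, the operator deploys its component workloads (virt-controller, virt-handler, CDI, and so on) into `openshift-cnv`. A quick sanity check, as a sketch:

```shell
# The HyperConverged CR reports Available=True once all components converge
oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'

# All pods in the namespace should eventually reach Running/Completed
oc get pods -n openshift-cnv
```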
NOTICE: The following information is only relevant if you've deployed LVM Storage (typically for SNO deployments). ODF does not require the following changes.
Have you noticed that VMs start slowly when using OpenShift Virtualization (OCPV) on SNO deployments? The following section will help! Modifying the `StorageProfile` will improve the overall speed of starting virtual machines on OCPV. By default, OpenShift Data Foundation (ODF) creates a `StorageProfile` with the `cloneStrategy` key set to `snapshot`, but the LVM operator sets this value to `copy` by default. You will want to change this when using the LVM operator. The process is very simple, and is covered below.
You will have one (1) `StorageProfile` for each `StorageClass` in your OpenShift cluster. Like `StorageClass` objects, these objects are not namespaced. You can view the `StorageProfile` objects with the following commands.

```shell
❯ oc get storageprofile
NAME                 AGE
lvms-vg1             6d5h
lvms-vg1-immediate   6d5h
nfs-openshift        6d2h

❯ oc get storageclass
NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
lvms-vg1                       topolvm.io       Delete          WaitForFirstConsumer   true                   12d
lvms-vg1-immediate (default)   topolvm.io       Delete          Immediate              true                   12d
nfs-openshift                  nfs.csi.k8s.io   Delete          Immediate              false                  6d2h
```
You can view the YAML for a `StorageProfile` by using the following command (we will use `lvms-vg1-immediate` as an example). Pay attention to the `spec` and `status` sections.

```shell
oc get storageprofile lvms-vg1-immediate -o yaml
```
It will likely look like the sample below. By default, you can see that the `cloneStrategy` is set to `copy` (look at the `status` section). We don't want this, and would prefer `snapshot`.

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  creationTimestamp: "2024-08-30T15:16:03Z"
  generation: 3
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.16.1
    cdi.kubevirt.io: ""
  name: lvms-vg1-immediate
  ownerReferences:
    - apiVersion: cdi.kubevirt.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: CDI
      name: cdi-kubevirt-hyperconverged
      uid: 2825a572-ca0c-4fa2-bf30-9f05b0100e92
  resourceVersion: "13429408"
  uid: 94530d4b-99f1-4616-add9-b078264a8c05
spec: {}
status:
  claimPropertySets:
    - accessModes:
        - ReadWriteOnce
      volumeMode: Block
    - accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
  cloneStrategy: copy
  dataImportCronSourceFormat: pvc
  provisioner: topolvm.io
  snapshotClass: lvms-vg1
  storageClass: lvms-vg1-immediate
```
If your `StorageProfile` looks like the one above (set to `copy` rather than `snapshot`), then you can use the following patch command to change the `cloneStrategy`.

IMPORTANT: The following command will NOT work if you didn't first follow the instructions for installing the LVM `StorageClass` exactly like I suggested HERE.

```shell
oc patch storageprofile lvms-vg1-immediate -p '{"spec":{"cloneStrategy": "snapshot"}}' --type=merge
```
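After patching, you can confirm the change took effect. A quick sketch: the `spec` value is your explicit override, and the `status` value is what CDI will actually use.

```shell
# Both fields should now print "snapshot"
oc get storageprofile lvms-vg1-immediate \
  -o jsonpath='{.spec.cloneStrategy}{"\n"}{.status.cloneStrategy}{"\n"}'
```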
Below is a sample virtual machine that can be used during a proof of concept (PoC), which puts together a few concepts that have been used in several other sections.
```yaml
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-devel-001
  namespace: jinkit-vms
  labels:
    app: fedora-devel-001
    kubevirt.io/dynamic-credentials-support: 'true'
    vm.kubevirt.io/template: fedora-server-small
    vm.kubevirt.io/template.namespace: openshift
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.31.1
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        creationTimestamp: null
        name: fedora-devel-001
      spec:
        sourceRef:
          kind: DataSource
          name: fedora
          namespace: openshift-virtualization-os-images
        storage:
          resources:
            requests:
              storage: 75Gi
  running: true
  template:
    metadata:
      annotations:
        vm.kubevirt.io/flavor: small
        vm.kubevirt.io/os: fedora
        vm.kubevirt.io/workload: server
      creationTimestamp: null
      labels:
        kubevirt.io/domain: fedora-devel-001
        kubevirt.io/size: small
        network.kubevirt.io/headlessService: headless
    spec:
      accessCredentials:
        - sshPublicKey:
            propagationMethod:
              noCloud: {}
            source:
              secret:
                secretName: ssh-user-bjozsa
      architecture: amd64
      domain:
        cpu:
          cores: 1
          sockets: 4
          threads: 1
        devices:
          autoattachPodInterface: false
          disks:
            - disk:
                bus: virtio
              name: rootdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
          interfaces:
            - bridge: {}
              macAddress: '02:f4:20:00:00:0e'
              model: virtio
              name: fedora-devel-001-eth0
          logSerialConsole: false
          rng: {}
        features:
          acpi: {}
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        machine:
          type: pc-q35-rhel9.4.0
        memory:
          guest: 8Gi
        resources: {}
      networks:
        - multus:
            networkName: v0004-ens8f1-access
          name: fedora-devel-001-eth0
      terminationGracePeriodSeconds: 180
      volumes:
        - dataVolume:
            name: fedora-devel-001
          name: rootdisk
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  addresses:
                    - 192.168.6.35/22
                  gateway4: 192.168.4.1
                  nameservers:
                    search:
                      - jinkit.com
                    addresses:
                      - 192.168.3.5
                  ntp:
                    servers:
                      - 0.pool.ntp.org
            userData: |
              #cloud-config
              user: fedora
              password: fedora
              chpasswd:
                expire: false
              ssh_authorized_keys:
                - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDE1F7Fz3MGgOzst9h/2+5/pbeqCfFFhLfaS0Iu4Bhsr7RenaTdzVpbT+9WpSrrjdxDK9P3KProPwY2njgItOEgfJO6MnRLE9dQDzOUIQ8caIH7olzxy60dblonP5A82EuVUnZ0IGmAWSzUWsKef793tWjlRxl27eS1Bn8zbiI+m91Q8ypkLYSB9MMxQehupfzNzJpjVfA5dncZ2S7C8TFIPFtwBe9ITEb+w2phWvAE0SRjU3rLXwCOWHT+7NRwkFfhK/moalPGDIyMjATPOJrtKKQtzSdyHeh9WyKOjJu8tXiM/4jFpOYmg/aMJeGrO/9fdxPe+zPismC/FaLuv0OACgJ5b13tIfwD02OfB2J4+qXtTz2geJVirxzkoo/6cKtblcN/JjrYjwhfXR/dTehY59srgmQ5V1hzbUx1e4lMs+yZ78Xrf2QO+7BikKJsy4CDHqvRdcLlpRq1pe3R9oODRdoFZhkKWywFCpi52ioR4CVbc/tCewzMzNSKZ/3P0OItBi5IA5ex23dEVO/Mz1uyPrjgVx/U2N8J6yo9OOzX/Gftv/e3RKwGIUPpqZpzIUH/NOdeTtpoSIaL5t8Ki8d3eZuiLZJY5gan7tKUWDAL0JvJK+EEzs1YziBh91Dx1Yit0YeD+ztq/jOl0S8d0G3Q9BhwklILT6PuBI2nAEOS0Q== [email protected]
              write_files:
                - path: /etc/environment
                  content: |
                    http_proxy=http://proxy.example.com:3128
                    https_proxy=http://proxy.example.com:3128
                    no_proxy=localhost,127.0.0.1,.example.com,192.168.0.0/16
              runcmd:
                - echo "export http_proxy=http://proxy.example.com:3128" >> /etc/profile.d/proxy.sh
                - echo "export https_proxy=http://proxy.example.com:3128" >> /etc/profile.d/proxy.sh
                - echo "export no_proxy=localhost,127.0.0.1,.example.com,192.168.0.0/16" >> /etc/profile.d/proxy.sh
          name: cloudinitdisk
```
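Once the VirtualMachine manifest above has been applied, you can watch it boot and connect to it. This sketch assumes the `virtctl` client is installed (it is available from the OpenShift Virtualization console's command-line tools download page), and uses the names and cloud-init credentials from the sample manifest.

```shell
# Check the VM and its running VirtualMachineInstance
oc get vm,vmi -n jinkit-vms

# Attach to the serial console (log in with the fedora/fedora
# credentials set via cloud-init above)
virtctl console fedora-devel-001 -n jinkit-vms

# Or open an SSH session using the injected public key
virtctl ssh fedora@fedora-devel-001 -n jinkit-vms
```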