Good/relevant links: https://kubevirt.io/2019/How-To-Import-VM-into-Kubevirt.html
(it does not matter whether it is minikube or a real cluster)
minikube ip
no_proxy="127.0.0.1,192.168.39.157"
kubectl get pods
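If the cluster sits behind a proxy, traffic to the minikube IP must bypass it; a hedged one-liner combining the two commands above (adjust to your shell and proxy setup):
export no_proxy="127.0.0.1,$(minikube ip)"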
https://kubevirt.io/user-guide/docs/latest/administration/intro.html
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.24.0/kubevirt-operator.yaml
kubectl get pods -n kubevirt
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.24.0/kubevirt-cr.yaml
(virtctl can be installed as a standalone binary - https://kubevirt.io/quickstart_minikube/ - or via Krew as below)
kubectl get pods -n kubevirt
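Optionally wait until the operator reports the deployment as ready; a hedged check based on the KubeVirt quickstart (the kv shortname may vary by version):
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=5m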
( set -x; cd "$(mktemp -d)" && curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}" && tar zxvf krew.tar.gz && KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" && "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz && "$KREW" update; )
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
kubectl krew
kubectl krew install virt
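Sanity-check the plugin; the Krew virt plugin is just a packaged virtctl, so this should print the usual virtctl usage (a hedged example):
kubectl virt help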
https://kubevirt.io/2018/containerized-data-importer.html https://kubevirt.io/labs/kubernetes/lab2.html
export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
kubectl get pods -n kubevirt
kubectl get pods -n cdi
kubectl get service -n cdi
(there were probably many earlier approaches as well, e.g. https://github.com/fabiand/kubectl-plugin-pvc/raw/master/install.sh)
1. Use the importer by giving the HTTP path of the OS image as an annotation on the PersistentVolumeClaim. CDI will bring up an importer pod to fetch the image into the volume (a sketch follows after the example links below).
Example https://kubevirt.io/labs/kubernetes/lab2.html
Better example - https://kubevirt.io/2018/containerized-data-importer.html (check golden-pvc.yaml)
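A minimal sketch of such a PVC in the style of golden-pvc.yaml, assuming the annotation name used by recent CDI releases (cdi.kubevirt.io/storage.import.endpoint; the older post uses kubevirt.io/storage.import.endpoint) and a hypothetical 5Gi size:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: golden-pvc
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
CDI watches for this annotation and spawns an importer pod that downloads the image into the PVC.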
2. Download an image to a local path and use virtctl image-upload, via the cdi-uploadproxy service, to upload the local image to the PV. This is what we are doing below.
(the local image was downloaded from https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img; either raw img or qcow2 format works)
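To check which format a downloaded image is in, a hedged example using qemu-img (assumes qemu-utils is installed locally):
qemu-img info ~/Downloads/cirros-0.4.0-x86_64-disk.img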
For this we need to expose cdi-uploadproxy as a NodePort.
kubectl describe service cdi-uploadproxy -n cdi
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cdi-uploadproxy-nodeport
  namespace: cdi
  labels:
    cdi.kubevirt.io: "cdi-uploadproxy"
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
      protocol: TCP
  selector:
    cdi.kubevirt.io: cdi-uploadproxy
EOF
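A quick check that the NodePort service landed as expected:
kubectl get service cdi-uploadproxy-nodeport -n cdi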
Get the IP through which the service can be accessed. For a NodePort service this is usually the node IP; since K8s is running in minikube, it is the minikube IP.
minikube service cdi-uploadproxy-nodeport --url -n cdi
That's it - now use the --insecure option to upload (else you will get a certificate error)
kubectl virt image-upload --pvc-name=cirros-vm-disk2 --pvc-size=500Mi --image-path=/home/alex/Downloads/cirros-0.4.0-x86_64-disk.img --uploadproxy-url=https://192.168.39.157:31001 --insecure
Output:
Using existing PVC default/cirros-vm-disk2
Uploading data to https://192.168.39.157:31001
12.13 MiB / 12.13 MiB [======================================================================================================================================] 100.00% 1s
Uploading /home/alex/Downloads/cirros-0.4.0-x86_64-disk.img completed successful
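Before creating the VMI, it is worth confirming that the PVC is Bound and holds the uploaded disk, e.g.:
kubectl get pvc cirros-vm-disk2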
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: cirros-vm
spec:
  domain:
    devices:
      disks:
        - disk:
            bus: virtio
          name: pvcdisk
    machine:
      type: ""
    resources:
      requests:
        memory: 64M
  terminationGracePeriodSeconds: 0
  volumes:
    - name: pvcdisk
      persistentVolumeClaim:
        claimName: cirros-vm-disk2
status: {}
EOF
That's it folks!
alex@drone-OMEN:/home$ kubectl get vmi
NAME        AGE   PHASE     IP            NODENAME
cirros-vm   96m   Running   172.17.0.15   minikube
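To log in to the guest (cirros prints its default credentials on the serial console), you can attach via the virt plugin; a hedged example:
kubectl virt console cirros-vm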
Note - If you are doing this in Minikube, make sure the Minikube VM has sufficient memory for the guest VM, else it may not come up.
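A hedged example of starting minikube with more headroom (flag names vary across minikube versions and drivers):
minikube start --memory=4096 --cpus=2 --vm-driver=kvm2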