This is a small code lab for running SEV-SNP Confidential VMs (CVMs) on a local single-node k8s cluster. It assumes you have an SEV-SNP-capable machine and a recent Linux distro with kernel 6.11+ (e.g., CentOS Stream 10, Debian Trixie, or Ubuntu 24.04.2).
- Install Docker - https://docs.docker.com/engine/install/
- Install Kind - https://kind.sigs.k8s.io/ (or `apt install kind`)
- Install kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-linux
- Install virtctl - https://kubevirt.io/user-guide/user_workloads/virtctl_client_tool
- Install istioctl - https://istio.io/latest/docs/setup/install/istioctl
- Make `/dev/sev` writable by the `kvm` group; to persist the change across reboots, add `KERNEL=="sev", GROUP="kvm", MODE="0660"` to `/etc/udev/rules.d/01-sev.rules`
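The `/dev/sev` permission step above can be scripted; a minimal sketch, assuming the host exposes `/dev/sev` and your login user is in the `kvm` group:

```shell
# One-off, non-persistent: let the kvm group read/write the SEV device
sudo chgrp kvm /dev/sev
sudo chmod 0660 /dev/sev

# Persistent across reboots: install the udev rule, then reload and re-trigger
echo 'KERNEL=="sev", GROUP="kvm", MODE="0660"' | sudo tee /etc/udev/rules.d/01-sev.rules
sudo udevadm control --reload
sudo udevadm trigger --name-match=sev
```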
This command creates a local Kind cluster named "meat" and forwards 127.0.0.1:8080 on the host to NodePort 30950 inside the cluster.
$ kind create cluster --name meat --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950
    hostPort: 8080
    listenAddress: "127.0.0.1"
EOF
Install the Kubevirt operator and custom resources. The commands below download and install my custom build of Kubevirt, which supports SEV-SNP (kubevirt/kubevirt#13755). In particular, `kubevirt-cr.yaml` enables three feature gates: `Sidecar`, `VSOCK`, and `WorkloadEncryptionSEV`.
$ kubectl create -f https://gist.githubusercontent.com/ymjing/bda9b3bf4042e40b21394637eb7b5f8c/raw/3c525d3ad5a6cc852083a5c8e4c30f4d2df1b38e/kubevirt-operator.yaml
$ kubectl create -f https://gist.githubusercontent.com/ymjing/bda9b3bf4042e40b21394637eb7b5f8c/raw/3c525d3ad5a6cc852083a5c8e4c30f4d2df1b38e/kubevirt-cr.yaml
Wait a few minutes until the Kubevirt components are running and ready. `kubectl get pods -n kubevirt` should return 6 pods in the `kubevirt` namespace.
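Instead of polling the pod list, you can block on the KubeVirt CR's Available condition (in this install the object is named `kubevirt` in the `kubevirt` namespace):

```shell
# Returns once virt-api, virt-controller, and virt-handler are deployed and ready
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```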
Install the Istio control plane and enable auto Envoy sidecar injection for the default namespace.
$ istioctl install
$ kubectl label namespace default istio-injection=enabled
Install the k8s Gateway APIs.
$ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
{ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.2.0" | kubectl apply -f -; }
This command installs a NodePort Service, a VirtualMachineInstanceReplicaSet, and a HorizontalPodAutoscaler that together manage and scale between 3 and 5 CVM instances. Each CVM has 1 vCPU and 2 GiB RAM, and runs the CentOS Stream 10 cloud image. The cloud-init script sets the root password to "centos" and sets up the Nginx HTTP server as a systemd service.
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sfe-nodeport-service
spec:
  selector:
    app: sfe
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30950
  type: NodePort
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: sfe-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sfe
  template:
    metadata:
      name: sfe-vmi-nginx-
      labels:
        app: sfe
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      subdomain: "sfe"
      terminationGracePeriodSeconds: 0
      domain:
        launchSecurity:
          sev:
            policy:
              secureNestedPaging: true
        firmware:
          bootloader:
            efi:
              secureBoot: false
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: default
            masquerade: {}
          autoattachGraphicsDevice: false
          autoattachSerialConsole: true
          autoattachPodInterface: true
          autoattachVSOCK: true
          disableHotplug: true
        resources:
          requests:
            memory: 2Gi
            cpu: "1"
      networks:
      - name: default
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: bincat01/centos-stream-10:snp
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #cloud-config
            chpasswd:
              list: |
                root:centos
              expire: False
            packages:
            - nginx
            runcmd:
            - [ systemctl, enable, nginx ]
            - [ systemctl, start, nginx ]
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: sfe-scaler
spec:
  scaleTargetRef:
    kind: VirtualMachineInstanceReplicaSet
    name: sfe-replicaset
    apiVersion: kubevirt.io/v1
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
EOF
Wait a few minutes for the virt-launcher-* pods to be 4/4 ready. `curl http://localhost:8080` should return Nginx's default welcome page. `kubectl get vmi` returns the names of the 3 CVMs. `virtctl console sfe-vmi-nginx-*****` opens a console for logging in to a CVM; the username is `root` and the password is `centos`, as specified in the cloud-init.
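Once logged in, you can sanity-check that the guest really booted under SEV-SNP; on recent kernels the boot log reports the active memory-encryption features (exact wording varies by kernel version):

```shell
# Inside the CVM console: look for the memory-encryption banner, e.g.
# "Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP"
dmesg | grep -i sev
```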
Once you are done, tear everything down:
$ istioctl uninstall --purge
$ kind delete cluster --name meat
The cluster is not secure, for multiple reasons:
- Measured boot (`kernel-hashes=on`, ID block, ID auth) is not enabled for the guest CVM;
- QEMU does not measure the ACPI tables (see AMD-SB-3012);
- mTLS is terminated in the Envoy sidecar, not inside the CVM, and it uses certificates issued by the untrusted Istio CA.
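For reference, closing the first gap would mean launching the guest with QEMU's `sev-snp-guest` object configured for measured direct boot. A hypothetical invocation, for illustration only: the flag values are placeholders, and this lab's Kubevirt build does not wire these options through.

```shell
# Measured direct boot of an SNP guest: kernel-hashes=on folds the
# kernel/initrd/cmdline digests into the launch measurement; id-block and
# id-auth (base64-encoded) let the PSP verify the expected measurement at launch
qemu-system-x86_64 \
  -machine q35,confidential-guest-support=sev0 \
  -object sev-snp-guest,id=sev0,cbitpos=51,reduced-phys-bits=1,kernel-hashes=on,id-block=<base64>,id-auth=<base64> \
  -kernel vmlinuz -initrd initrd.img -append "console=ttyS0"
```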