A trivial guide to setting up cloud images (incl. Ubuntu Core) for local development in a VM. It is assumed that the images are configured to run cloud-init.
Example cloud configuration file:
#cloud-config
datasource_list: [ NoCloud, None ]
# default user password, does not work?
password: guest
chpasswd:
  expire: False
users:
  - name: guest
    # 'guest', does not work?
    passwd: $1$xyz$NupBwZXNoMXD8NQwzjRW/0
    # for sudo
    groups: wheel
    # sudo without password
    sudo: ALL=(ALL) NOPASSWD:ALL
    # explicitly set the shell
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1LCRbWnZ/GmM+LZex06HjNyw0aixYbD3P8mfsNHcBV6/LNk7vxw7+5nhooaBlkv1X/hpc/q3BnGy2W4goJ8aEL3JliJl/+4ijdSEwYXKZj0PpFPY8ir4VOFzVlPIX4SoHgheSo5it7zFRBpHMtDqSmgoWmzFKLX3qg144Cv9Lxkqpkx0ndpflYsLz8hH9WT85OjNvVI51lxoTq86XmU0rSzQkT0vpNwGRHSs0HS197d4ym9f1dGFTflWaKmhrquvBWTztHvfhxgZz6OdEeyywAdQBBi3sbWkRuwjZ/aX5K3obwIJL0iJN8hwf64Wt5plYQVhIrNlqVCU8ZRLuzLAGw== maciek@corsair
# for Ubuntu Core
snap:
  commands:
    # force add the user
    00: snap create-user --sudoer --force-managed [email protected]
# get a debug shell
bootcmd:
  - systemctl enable --now debug-shell.service
Build the seed image from the cloud-init user data (the config above, saved here as uci-data-guest):
$ cloud-localds uci-data-guest.img uci-data-guest
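cloud-localds also accepts an optional meta-data file as a third argument if you want to set the instance id or hostname; a minimal sketch, with the file contents being assumptions:
$ cat > meta-data <<'EOF'
instance-id: uci-guest-001
local-hostname: uci-guest
EOF
$ cloud-localds uci-data-guest.img uci-data-guest meta-data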
Grab the model definition (the UC20 model requires snap 2.43+):
$ snap known --remote model model=ubuntu-core-20-amd64 series=16 brand-id=canonical > pc-20.model
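A quick sanity check that the assertion actually came back (just grepping a few of its headers, nothing UC-specific assumed):
$ grep -E '^(type|series|brand-id|model):' pc-20.model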
$ ubuntu-image snap pc-20.model -w $PWD/image-home [--extra-snaps <mysnap>.snap] [--image-size=10G]
⚠️ For Ubuntu Core 20, the image must be at least 10GB (see --image-size=10G).
The output image and some intermediate data end up in $PWD/image-home.
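To double-check what was produced before booting it, something like this should do (pc.img is the image name used in the qemu command below):
$ ls -lh image-home/
$ qemu-img info image-home/pc.img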
Example UC20 VM:
$ qemu-system-x86_64 -enable-kvm \
-snapshot \
-m 2048 \
-smp 4 \
-device virtio-net-pci,netdev=mynet0 -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:59444-:22 \
-serial telnet:127.0.0.1:59483,server,nowait \
-monitor telnet:127.0.0.1:59426,server,nowait \
-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 \
-drive file=./image-home/pc.img,if=virtio,index=0 \
-drive file=uci-data-guest.img,if=virtio,index=1,snapshot=on \
-bios /usr/share/ovmf/x64/OVMF_CODE.fd
Command details:
- 4 CPUs, 2GB RAM
- snapshot mode (no permanent changes to disk images)
- SSH port forwarded to local 59444
- serial on port 59483
- qemu monitor on port 59426
- virtio RNG
- virtio network interfaces
- virtio disk access
- optional UEFI support
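Once the VM is up, the forwarded ports above can be used from the host; a sketch, where the user name depends on which of the cloud-init / snap create-user paths actually took effect:
$ ssh -p 59444 guest@127.0.0.1      # SSH through the forwarded port
$ telnet 127.0.0.1 59483            # serial console
$ telnet 127.0.0.1 59426            # qemu monitor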
In snapshot mode, qemu uses $TMPDIR (usually /tmp) to create a temporary file for a new disk image, and then unlinks it. If your $TMPDIR is on tmpfs, you may run out of RAM.
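A simple workaround is to point qemu at a disk-backed directory for that run, e.g.:
$ TMPDIR=/var/tmp qemu-system-x86_64 -enable-kvm -snapshot ...   # rest of the options as above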
The instructions should work for RHEL and Fedora too.
CentOS, RHEL, and Fedora use Anaconda for installation; you can feed it a custom kickstart file named ks.cfg on a volume labeled OEMDRV.
Grab the CentOS cloud SIG image build tools:
$ git clone https://github.com/CentOS/sig-cloud-instance-build
$ cd sig-cloud-instance-build/cloudimg
Use one of the existing kickstart files and edit it to your liking:
$ mkdir floppy
$ cp CentOS-8-x86_64-hvm.ks floppy/ks.cfg
$ vim floppy/ks.cfg
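For example, a passwordless sudo user could be added as an extra %post section; this is purely an illustration (the guest user and the sudoers drop-in are assumptions, not part of the SIG kickstart):
$ cat >> floppy/ks.cfg <<'EOF'
%post
useradd -m guest
echo 'guest ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/guest
chmod 440 /etc/sudoers.d/guest
%end
EOF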
Build the disk image:
$ mkfs.ext2 -d floppy -L OEMDRV floppy.img
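If floppy.img does not already exist, mke2fs needs either the file pre-created or an explicit size argument; a sketch for preparing and inspecting the volume (4M is an arbitrary choice):
$ truncate -s 4M floppy.img
$ mkfs.ext2 -d floppy -L OEMDRV floppy.img
$ debugfs -R 'ls -l /' floppy.img     # ks.cfg should be listed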
Prepare the image file:
$ qemu-img create -f qcow2 centos-cloud.img 20G
Grab the network install CD of CentOS. Run the VM and wait for it to power down:
$ qemu-system-x86_64 -enable-kvm \
-m 2048 \
-smp 4 \
-device virtio-net-pci,netdev=mynet0 -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:59444-:22 \
-serial telnet:127.0.0.1:59483,server,nowait \
-monitor telnet:127.0.0.1:59426,server,nowait \
-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 \
-drive file=centos-cloud.img,if=virtio,index=0 \
-drive file=floppy.img,if=virtio,index=1,snapshot=on \
-cdrom CentOS-8-x86_64-1905-boot.iso
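After the installer powers the VM down, the freshly installed centos-cloud.img can be booted on its own, reusing the NoCloud seed built earlier if the kickstart installed cloud-init (a sketch, same options as above minus the installer media):
$ qemu-system-x86_64 -enable-kvm \
-m 2048 \
-smp 4 \
-device virtio-net-pci,netdev=mynet0 -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:59444-:22 \
-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 \
-drive file=centos-cloud.img,if=virtio,index=0 \
-drive file=uci-data-guest.img,if=virtio,index=1,snapshot=on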