Using the Rescue environment, we have full access to the VM disk: we can mount it, repartition it, or overwrite it entirely.
Let's say the VM disk is /dev/vda. It's recommended to wipe the old filesystem signatures before continuing: wipefs -a /dev/vda
Rescue environments usually have limited disk space, so we stream the disk image over SSH
and use dd to write it directly to the VM disk.
The cloud image doesn't have a root password set by default, which prevents us from logging in. Therefore, we first prepare the image locally: mount it, chroot into it, and set a password.
wget https://fastly.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-cloudimg.qcow2 arch.img
lodev=$(sudo losetup --find --show --partscan arch.img)
echo $lodev
sudo fdisk -l $lodev
mkdir archtmp
sudo mount "${lodev}p3" archtmp # partition 3 holds the root filesystem
sudo chroot archtmp
# inside chroot: passwd
# inside chroot: exit
sudo umount archtmp
sudo losetup -d $lodev
rm -r archtmp
cat arch.img | ssh root@<vm-ip> "dd of=/dev/vda bs=4M"
After copying the image, you can simply restart the VM and switch from rescue to booting from disk.
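Before doing so, you can optionally read the partition table back from the rescue system to confirm the image was written (assuming fdisk is available there, which it usually is):
ssh root@<vm-ip> "fdisk -l /dev/vda"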
Initially, the root partition will only be about 1.5 GB.
To fix this, we need to expand the partition with growpart and then grow the filesystem itself
(note: at the time of writing, the image uses a btrfs filesystem).
growpart /dev/vda 3
btrfs filesystem resize max / # or resize2fs /dev/vda3 for ext4
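To confirm that the partition and the filesystem now span the whole disk:
lsblk /dev/vda
df -h /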
dhclient -v # or ip addr add x dev eth0 => ip link set eth0 up => ip r a default via x dev eth0
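If DHCP isn't available, the static alternative spelled out in full might look like this (the interface name, address, and gateway below are placeholders; check the interface with ip link and use the values assigned to your VM):
ip link set eth0 up
ip addr add 203.0.113.10/24 dev eth0           # placeholder address/prefix
ip route add default via 203.0.113.1 dev eth0  # placeholder gateway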
pacman -Syu
pacman -S openssh vi vim nano
# /etc/ssh/sshd_config => PermitRootLogin yes
systemctl enable sshd
systemctl restart sshd
# copy your ssh key from your local machine: ssh-copy-id root@<vm-ip>
# /etc/ssh/sshd_config => PermitRootLogin prohibit-password ; PasswordAuthentication no ; UseDNS no
systemctl restart sshd
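If you prefer to script the sshd_config edits instead of opening an editor, something along these lines should work, assuming the stock config still contains the (commented) defaults for these options; verify the resulting file before restarting sshd:
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config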
pacman -S bash-completion less curl net-tools htop btop iftop tcpdump unzip mtr mc fail2ban man-db
. /usr/share/bash-completion/bash_completion
systemctl enable fail2ban
echo -e "[sshd]\nenabled = true" > /etc/fail2ban/jail.d/sshd.conf
systemctl restart fail2ban
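The iptables and ip6tables services load their rules from /etc/iptables/iptables.rules and /etc/iptables/ip6tables.rules. If you don't have a ruleset yet, a minimal sketch that accepts loopback, established connections, ICMP, and SSH and drops everything else inbound could look like this (adapt it to your own policy; the IPv6 file needs the same treatment and must additionally accept ICMPv6 for neighbor discovery):
cat > /etc/iptables/iptables.rules << 'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF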
systemctl enable {iptables,ip6tables}
reboot
iptables-restore < /etc/iptables/iptables.rules
ip6tables-restore < /etc/iptables/ip6tables.rules
Finally, configure a small zram swap device:
echo "zram" > /etc/modules-load.d/zram.conf
echo 'ACTION=="add", KERNEL=="zram0", ATTR{initstate}=="0", ATTR{comp_algorithm}="zstd", ATTR{disksize}="1G", TAG+="systemd"' > /etc/udev/rules.d/99-zram.rules
grep -q "/dev/zram0" /etc/fstab || echo "/dev/zram0 none swap defaults,discard,pri=100,x-systemd.makefs 0 0" >> /etc/fstab