@tlhakhan
Last active December 21, 2023 15:50
iPXE script for deploying Ubuntu 20.04 autoinstall nocloud-net method
#!ipxe
# ubuntu focal 20.04
# $seedfrom is used by cloud-init's nocloud-net provider to find the user-data and meta-data files.
# The trailing slash is important: cloud-init appends 'meta-data' or 'user-data' directly to it, without prepending a forward slash.
set seedfrom http://repo/files/ubuntu2004/
# $base is the URL where the vmlinuz and initrd live.
# They were fished out of the live-server ISO file; the ISO itself is also in this directory.
set base http://repo/files/ubuntu2004
kernel ${base}/vmlinuz initrd=initrd autoinstall url=${base}/ubuntu-20.04-live-server-amd64.iso net.ifnames=0 biosdevname=0 ip=dhcp ds=nocloud-net;s=${seedfrom}
initrd ${base}/initrd
boot
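
For context, here is a minimal sketch of how the files referenced by $base and $seedfrom could be staged on the web server. The web-root path and mount point are assumptions, not taken from the gist; the casper/vmlinuz and casper/initrd locations inside the ISO match what is shown later in this thread.

#!/bin/sh
# Sketch: stage the artifacts referenced by $base and $seedfrom above.
# WEBROOT is an assumed path; adjust to wherever http://repo/files/ubuntu2004 is served from.
WEBROOT=/var/www/files/ubuntu2004
ISO=ubuntu-20.04-live-server-amd64.iso

mkdir -p "$WEBROOT" /mnt/iso
mount -o loop,ro "$ISO" /mnt/iso

# "Fish out" the kernel and initrd from the live-server ISO.
cp /mnt/iso/casper/vmlinuz /mnt/iso/casper/initrd "$WEBROOT"/
cp "$ISO" "$WEBROOT"/

# nocloud-net seed: cloud-init fetches ${seedfrom}user-data and ${seedfrom}meta-data,
# hence the trailing slash on $seedfrom.
cp user-data "$WEBROOT"/user-data
touch "$WEBROOT"/meta-data   # may be empty, but must exist

umount /mnt/iso
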
tlhakhan (Author) commented Mar 4, 2023

@artworkk, here is the iPXE script I use in my deployment. I believe the one posted by @cristgal should also work.

The environment I use this in is virtual, specifically an ESX vSphere host. To get an iPXE shell on my virtual machine, I use an iPXE ISO I built here: https://github.com/tlhakhan/ipxe-iso. See the releases section: https://github.com/tlhakhan/ipxe-iso/releases.

I pop this CD into the VM, and it drops me into an iPXE shell where I can run dhcp && chain http://192.168.200.109:33009/templates/ipxe.sh. This eliminates a whole lot of the headache of setting up a DHCP+TFTP server just to kick off iPXE.

Here is an example of an Ubuntu focal ipxe.sh I generated. Source: https://github.com/tenzin-io/vmware-builder/blob/main/installers/ubuntu/focal/templates/ipxe.sh.

#!ipxe

#
# Ubuntu Installer
#

#
# The $seed_url is used by cloud-init's nocloud-net provider to find the user-data and meta-data files. The trailing slash is important: the cloud-init process appends 'meta-data' or 'user-data' directly, without prepending a forward slash to the file name.
set seed_url http://192.168.200.109:33009/templates/

#
# The $vmlinuz_url and $initrd_url files can be found in the ISO contents
set vmlinuz_url http://192.168.200.109:33009/files/iso_contents/casper/vmlinuz
set initrd_url http://192.168.200.109:33009/files/iso_contents/casper/initrd

#
# The $iso_url points to the live-server iso file
set iso_url http://192.168.200.109:33009/files/ubuntu-20.04.5-live-server-amd64.iso

kernel ${vmlinuz_url} autoinstall url=${iso_url} net.ifnames=0 biosdevname=0 ip=::::my-ubuntu-server-00::dhcp ds=nocloud-net;s=${seed_url}
initrd ${initrd_url}
boot
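
As a quick sanity check (just a sketch, assuming the seed files are already in place on that server), the two URLs cloud-init will actually request can be probed directly; they follow from $seed_url above:

# HEAD requests against the exact URLs cloud-init's nocloud-net provider will fetch.
curl -fI http://192.168.200.109:33009/templates/user-data
curl -fI http://192.168.200.109:33009/templates/meta-data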

Notes:

If you don't set a hostname in the ip= parameter (the my-ubuntu-server-00 field above), the installer always identifies itself to DHCP as ubuntu-server, which can collide temporarily if you run parallel installations. This initial installation phase is very short, but setting the hostname keeps your DHCP leases clean.
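
For reference, the kernel's ip= parameter is positional; this field layout (from the kernel's nfsroot documentation) shows why the hostname sits in the fifth slot:

# ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>
# So ip=::::my-ubuntu-server-00::dhcp leaves all addressing to DHCP but presets
# the hostname that gets sent along with the DHCP request.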

Below is my user-data file. The username is sysuser, password is password.

#cloud-config
autoinstall:
  version: 1

  early-commands:
    - systemctl stop ssh # otherwise packer tries to connect and exceeds max attempts
    - hostnamectl set-hostname my-ubuntu-server-00 # update hostname quickly
    - dhclient # re-register the updated hostname

  network:
    version: 2
    ethernets:
      eth0:
        dhcp4: yes

  ssh:
    install-server: yes

  identity:
    hostname: my-ubuntu-server-00
    password: $2a$10$y0Vh5GC2mi9NoKYz.K251uW06.6u.7mtrHDvA0YXeq0TnIqH96JOm
    username: sysuser # root doesn't work

  storage:
    layout:
      name: lvm

  packages:
    - open-vm-tools

  user-data:
    disable_root: false 

  late-commands:
    - echo 'sysuser ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/sysuser # allow sudo without password
    - curtin in-target --target /target -- sed -ie 's/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"/' /etc/default/grub
    - curtin in-target --target /target update-grub2
    - curtin in-target --target /target -- apt-get install -y ansible
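
If you want a different password, the identity.password value is a crypt(3)-style hash. Here is a sketch of generating one; the $2a$ hash above is bcrypt, but a SHA-512 crypt hash is the commonly documented choice (mkpasswd ships in the whois package):

# SHA-512 crypt hash, e.g. for identity.password:
openssl passwd -6 'password'
# or:
mkpasswd --method=SHA-512 'password'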

iPXE tangent with initrd

Just leaving this here as an iPXE "whoa, that's cool" finding.

With iPXE there is something called "magic initrd". See Notes section here: https://ipxe.org/cmd/imgfetch.

In essence, you can download an entire file from a webserver somewhere, stuff it inside the initrd, and have it accessible to whatever is being deployed in that environment.

Bad use-case example

I did something very hacky with the NixOS LiveCD, because I really wanted to use it for iPXE netbooting. I eventually realized this was a bad idea and threw away the script; below is something I dug out of my commit history.

#!ipxe

#
# NixOS 22.11
#

set vmlinuz_url http://{{.HTTPAddress}}/files/iso_contents/boot/bzImage
set initrd_url http://{{.HTTPAddress}}/files/iso_contents/boot/initrd
set squashfs_url http://{{.HTTPAddress}}/files/iso_contents/nix-store.squashfs
set iso_url http://{{.HTTPAddress}}/files/latest-nixos-minimal-x86_64-linux.iso

kernel ${vmlinuz_url} initrd=initrd.magic nohibernate loglevel=4 boot.shell_on_fail init=nix/store/ydvcwi28lglmjzq5nk4cn2af9ncir3l3-nixos-system-nixos-22.11.1459.8c03897e262/init root=/latest-nixos-minimal-x86_64-linux.iso live.nixos.passwd={{.GuestPassword}}
initrd ${initrd_url}
initrd ${iso_url} /latest-nixos-minimal-x86_64-linux.iso
boot

What's happening:

  • The above script is a combination of iPXE variable templating ${} and Go templating {{ }}.
  • I set initrd=initrd.magic.
  • I stuff the real initrd and the ISO file into the magic initrd, basically coalescing them.
  • I boot off the kernel bzImage and set my init to be something inside the magic initrd filesystem.
  • I set my root to be the ISO file that I stuffed inside the magic initrd.
  • This let me use the LiveCD as a netboot image.
  • I soon realized this is more complicated than it's worth and most likely not what the NixOS people intended; the NixOS netboot installation method doesn't fit cleanly with my iPXE approach.

bgbaroo commented Mar 5, 2023

@tlhakhan @cristgal Thank you!! You guys are awesome and very helpful. Now it boots to Ubuntu and no longer complains about being unable to mount the root VFS. I guess the part that made the difference was the iPXE image.

I did the following:

Although it boots almost all the way to Ubuntu, it couldn't progress to the autoinstall phase. I guess the problem is that it currently defaults root to some RAM device, hence the No space left on device error (my lab VPS only has 1 GB RAM, but the live server image is 1.4 GB). Do you guys know of any workaround for this, i.e. to first download the image to disk or to an NFS server and then use it as root? Or, if this is a bad idea, is there any standard way to pull this off on machines with little memory?


Again, thanks a lot guys! You guys are very helpful. I was a fool for not reading up on the PXE image (and the rest of the environment initialization) well enough, so I was blindly debugging the kernel boot parameters for days. I trusted the netboot.xyz image too much, because when I was manually installing Ubuntu with a custom disk configuration, using the netboot.xyz image and choosing Ubuntu 20.04 from the menu worked fine, so I thought I might have just missed some kernel parameters.

tlhakhan (Author) commented Mar 5, 2023

I was not able to get the Ubuntu installer working with less than 5 GiB of memory on my VM.

At about 4 GiB of memory, things start up, but I noticed that it OOM-kills the cloud-init service and then it doesn't grab the user-data file. At about 5 GiB everything works.

I think it may be possible to install without downloading the ISO, but I haven't explored it. All the Ubuntu docs I've seen so far use url or cdrom in their boot params.

tlhakhan (Author) commented Mar 5, 2023

I tried to find where the heck the url and other details are picked up from the boot command line.

I eventually found this, which may help find a way to reduce the memory requirements.

Here is a snippet from the file ./scripts/casper inside the initrd file.

parse_cmdline() {
    for x in $(cat /proc/cmdline); do
        case $x in
            showmounts|show-cow)
                export SHOWMOUNTS='Yes' ;;
            persistent)
                export PERSISTENT="Yes" ;;
            nopersistent)
                export PERSISTENT="" NOPERSISTENT="Yes" ;;
            persistent-path=*)
                export PERSISTENT_PATH="${x#persistent-path=}" ;;
            ip=*)
                STATICIP=${x#ip=}
                if [ "${STATICIP}" = "" ]; then
                    STATICIP="frommedia"
                fi
                export STATICIP ;;
            url=*.iso)
                export NETBOOT=url
                export URL="${x#url=}" ;;
            uuid=*)
                UUID=${x#uuid=} ;;
            ignore_uuid)
                UUID="" ;;
            live-media=*)
                LIVEMEDIA="${x#live-media=}"
                export LIVEMEDIA
                echo "export LIVEMEDIA=\"$LIVEMEDIA\"" >> /etc/casper.conf ;;
            live-media-path=*)
                LIVE_MEDIA_PATH="${x#live-media-path=}"
                export LIVE_MEDIA_PATH
                echo "export LIVE_MEDIA_PATH=\"$LIVE_MEDIA_PATH\"" >> /etc/casper.conf ;;
            layerfs-path=*)
                export LAYERFS_PATH="${x#layerfs-path=}"
                echo "export LAYERFS_PATH=\"$LAYERFS_PATH\"" >> /etc/casper.conf ;;
            nfsroot=*)
                export NFSROOT="${x#nfsroot=}" ;;
            netboot=*)
                export NETBOOT="${x#netboot=}" ;;
            toram)
                export TORAM="Yes" ;;
            todisk=*)
                export TODISK="${x#todisk=}" ;;
            hostname=*)
                export CMD_HOST="${x#hostname=}" ;;
            userfullname=*)
                export CMD_USERFULLNAME="${x#userfullname=}" ;;
            username=*)
                export CMD_USERNAME="${x#username=}" ;;
        esac
    done
}
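
To see what casper then does with $URL and $NETBOOT, one way (just a sketch) is to grep the rest of the scripts in the unpacked initrd, using the initrd-fs directory produced by the extraction shown next:

# Follow the variables set by parse_cmdline through the rest of the casper scripts.
grep -rn 'NETBOOT\|URL=' initrd-fs/scripts/ | head -n 20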

Peeking inside the initrd file

# Copy the initrd from the ISO mount to a writeable area
╭─root@console ~/investigate
╰─# cp /root/vmware-builder/installers/ubuntu/focal/files/iso_contents/casper/initrd .

# Do a binwalk on the initrd to find the offset
╭─root@console ~/investigate
╰─# binwalk initrd > initrd.binlist

# Find the LZ4 section
╭─root@console ~/investigate
╰─# head -n20 initrd.binlist

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             ASCII cpio archive (SVR4 with no CRC), file name: ".", file name length: "0x00000002", file size: "0x00000000"
112           0x70            ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000"
232           0xE8            ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000"
356           0x164           ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000"
488           0x1E8           ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/AuthenticAMD.bin", file name length: "0x00000026", file size: "0x00007752"
31184         0x79D0          ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000"
31744         0x7C00          ASCII cpio archive (SVR4 with no CRC), file name: "kernel", file name length: "0x00000007", file size: "0x00000000"
31864         0x7C78          ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86", file name length: "0x0000000B", file size: "0x00000000"
31988         0x7CF4          ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode", file name length: "0x00000015", file size: "0x00000000"
32120         0x7D78          ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/.enuineIntel.align.0123456789abc", file name length: "0x00000036", file size: "0x00000000"
32284         0x7E1C          ASCII cpio archive (SVR4 with no CRC), file name: "kernel/x86/microcode/GenuineIntel.bin", file name length: "0x00000026", file size: "0x004C8000"
5045936       0x4CFEB0        ASCII cpio archive (SVR4 with no CRC), file name: "TRAILER!!!", file name length: "0x0000000B", file size: "0x00000000"
5046272       0x4D0000        LZ4 compressed data, legacy
5611686       0x55A0A6        SHA256 hash constants, little endian
5694260       0x56E334        LUKS_MAGIC
5805252       0x5894C4        mcrypt 2.2 encrypted data, algorithm: RC2, mode: CBC, keymode: 8bit
5806942       0x589B5E        xz compressed data

# Get the lz4 archive start and unlz4 it
╭─root@console ~/investigate
╰─# dd if=initrd bs=5046272 skip=1 | unlz4 - initrd.cpio
16+1 records in
16+1 records out
81402340 bytes (81 MB, 78 MiB) copied, 0.573341 s, 142 MB/s
stdin                : decoded 233871872 bytes

# Extract the cpio archive
╭─root@console ~/investigate
╰─# cpio -d -D initrd-fs -i < initrd.cpio
cpio: fs: Cannot change ownership to uid 0, gid 0: No such file or directory
cpio: fs: Cannot change mode to rwxr-xr-x: No such file or directory
456781 blocks

# Take a look at the casper file
╭─root@console ~/investigate
╰─# find initrd-fs -type f -name "casper"                                                                                                                                                                       
initrd-fs/scripts/casper
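
On Ubuntu, unmkinitramfs (from the initramfs-tools-core package) does the multi-segment unpacking in one step, so the binwalk/dd/unlz4 dance above isn't strictly needed; a sketch:

# Unpack all segments of the initrd (the early microcode cpio archives plus the
# compressed main archive) into separate subdirectories.
unmkinitramfs initrd initrd-unpacked
ls initrd-unpacked
# typically: early  early2  main   (main/ holds scripts/casper)
find initrd-unpacked -type f -name casper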

tlhakhan (Author) commented Mar 5, 2023

I found this, which describes the options better 🤣:
https://manpages.ubuntu.com/manpages/focal/man7/casper.7.html
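
For the memory question above, the relevant knobs from that manpage (also visible in parse_cmdline earlier) boil down to roughly the following; this is a summary sketch, not exhaustive:

# url=http://.../file.iso            casper downloads the ISO (into RAM) and uses it as the live media
# netboot=nfs nfsroot=server:/path   mount the live media over NFS instead of downloading it
# toram                              copy the live filesystem into RAM (needs even more memory)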

bgbaroo commented Mar 9, 2023

Oh man @tlhakhan, thanks a lot for your contribution!
Now I feel bad for not responding to your replies soon enough (I was working on something else). I'm very grateful for your work - you are truly the most helpful person I've come across in open-source communities.

After reading through the Ubuntu manpage for casper, I feel like I can't use the url boot option on low-RAM machines, because that parameter downloads the ISO into memory and mounts it as the live root. We can't do that because the ISO is larger than the RAM size.

I guess that leaves us with netboot=nfs? IMO NFS boot is a pain to set up because of security concerns like allowed IPs and such. Not only that, it seems like I'll have to bind mount the ISO image (which is read-only, IIRC) into the NFS share, and because it will be mounted read-only, I'm not sure it can be used as the live root.

Or did I miss something?

Anyway, I recently learned QEMU and am now using it to debug the boot options. This is much better than my previous method of spinning up a new, real VPS to test iPXE/autoinstall lol.

I tried using the kernel and initrd from this repo with your boot parameters, and it seems to work (it boots without complaining about No space left on device) on QEMU. But this boot does not seem to do autoinstall. Very strange indeed lol.

tlhakhan (Author) commented Mar 9, 2023

@artworkk, ha ☺️! Yup, no problem. I'm doing this in my free time for fun; I definitely enjoy it, and if it's a question I can quickly answer or figure out, I don't mind.

I tried using the kernel and initrd from this repo with your boot parameters, and it seems to work (it boots without complaining about No space left on device) on QEMU. But this boot does not seem to do autoinstall. Very strange indeed lol.

I tried that too and ended up at a dead end; I think that is for the preseed version only. I also saw that they completely removed the legacy-images folder in the latest Jammy release.

If Ubuntu's server images are too big, maybe one of their other flavors could work? I haven't tried this on an x86 VM, but maybe Ubuntu Core? https://ubuntu.com/core.

bgbaroo commented Mar 10, 2023

@tlhakhan I just did some digging, and it seems the legacy image used Kickstart for cloud-init, while the newer image uses Subiquity. This explains why the legacy image only processed the preseed.

As for Ubuntu Core, thanks! I'll try that out. I have always avoided Ubuntu, so I don't know a thing about their available flavors lol. What I'm doing here is for my work, hence why I'm working with Ubuntu and iPXE for the first time. I'll try Ubuntu Core and report back whether it worked or not.

PS. Actually my work uses bare-metal machines with 128 GB RAM, but I want to make the flow work with all my personal VPSes (which are all 1 GB) too.

tlhakhan (Author) commented:

@artworkk, good luck 🙌 🖖! When I have some free time, I will try out Ubuntu Core as well; hopefully the installation is lighter and allows for easy iPXE scripting.

jamaya77 commented:

Hi Tenzin,
Thank you very much for sharing. Would you happen to have a similar deployment solution for Ubuntu 18.04 Desktop LTS?
Thank you.

Ah my bad 😗, I didn't see this earlier question. I think there was a time when GitHub wasn't sending any emails for replies on gists. Hopefully you have gotten to a solution. 🤞

Hi Tenzin,

Thank you for the follow-up. I was deploying 18.04 via netboot, but I still think the live CD is better. Do you have a similar solution for 18.04 Desktop? Your suggestion would be much appreciated. Thank you.

tlhakhan (Author) commented:

@jamaya77, I haven't done any automated installs of Ubuntu Desktop, only the server versions. However, with the new autoinstall method, I think it should be possible.

From some quick googling, I found this repo, which looks like a starting point 🤞: https://github.com/canonical/autoinstall-desktop
