Raspberry Pi 4B: Install Proxmox/Pimox, Home Assistant, VM, LXC Containers, Docker

Hello Pi

This is a write-up of how I installed my Raspberry Pi(s).

The base is Proxmox, so I don't have to worry about reformatting the SD card every time I want to try out something new: once it's configured properly I can start VMs and LXC containers, as well as Docker containers.

Note: These files are versioned, so you can always see what changed over time.


Run

  • sudo raspi-config
  • Select 8 Update (Update this tool to the latest version)
    • Let it run.
  • Select 5 Localisation Options (Configure language and regional settings)
    • Select L1 Locale (Configure language and regional settings)
    • Go down and deselect [ ] the en_GB.UTF-8 UTF-8 (or whatever other locale is enabled) with the Spacebar key.
    • Further down, select [*] the en_US.UTF-8 UTF-8 with the Spacebar key.
    • Press Enter to confirm.
    • Set Default locale for the system environment: en_US.UTF-8
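  • If you prefer scripting this, the locale change can likely be done non-interactively as well; a minimal sketch, assuming your raspi-config version ships the nonint do_change_locale function:

    # hedged sketch: non-interactive locale change via raspi-config's "nonint" mode
    # check that your raspi-config build supports do_change_locale before relying on it
    sudo raspi-config nonint do_change_locale en_US.UTF-8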
  • Install Pimox, currently by following the official README.md in the Pimox7 repo:
    • Flash and boot the latest image from https://downloads.raspberrypi.org/raspios_arm64/.

    • Set up your SSH key (if you didn't already do so with the flashing tool) and disable password login. If not, at least change the default password.

    • sudo -s

    • curl https://raw.githubusercontent.com/pimox/pimox7/master/RPiOS64-IA-Install.sh > RPiOS64-IA-Install.sh

    • chmod +x RPiOS64-IA-Install.sh

    • ./RPiOS64-IA-Install.sh

    • Follow the prompts

      • Enter new hostname e.g. RPi4-01-PVE : pimox
      • Enter new static IP and NETMASK e.g. 192.168.0.100/24 : 192.168.178.30/24
      • Is 192.168.178.1 the correct gateway ? y / n : y
      • #########################################################################################
        =========================================================================================
        THE NEW HOSTNAME WILL BE: pimox-wg
        =========================================================================================
        THE DHCP SERVER ( dhcpcd5 ) WILL BE  REMOVED  !!!
        =========================================================================================
        THE PIMOX REPO WILL BE ADDED IN :  /etc/apt/sources.list.d/pimox.list  CONFIGURATION :
        # Pimox 7 Development Repo
        deb https://raw.githubusercontent.com/pimox/pimox7/master/ dev/
        =========================================================================================
        THE NETWORK CONFIGURATION IN :  /etc/network/interfaces  WILL BE  CHANGED  !!! TO :
        auto lo
        iface lo inet loopback
        iface eth0 inet manual
        auto vmbr0
        iface vmbr0 inet static
                address  192.168.178.30/24
                gateway  192.168.178.1
                bridge-ports eth0
                bridge-stp off
                bridge-fd 0
        =========================================================================================
        THE HOSTNAMES IN :  /etc/hosts  WILL BE  OVERWRITTEN  !!! WITH :
        127.0.0.1	localhost
        192.168.178.30	pimox-wg
        =========================================================================================
        THESE STATEMENTS WILL BE  ADDED  TO THE  /boot/cmdline.txt  IF NONE EXISTENT :
        cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
        =========================================================================================
        #########################################################################################
        
      • YOU ARE OKAY WITH THESE CHANGES ? YOUR DECLARATIONS ARE CORRECT ? CONTINUE ? y / n : y
      • =========================================================================================
                                   ! SETUP NEW ROOT PASSWORD !
        =========================================================================================
        
      • New password: •••••••••••
      • Retype new password: •••••••••••
    • Work around the issues:

      • a) Apply ToSMaverick's header install trick, downgrading the kernel:

        • This one is simpler, but may pose a problem, as upgrading the kernel later on won't work.
        • I used it for the first install without any problems, but for the second install I used the b) way.
        • wget http://archive.raspberrypi.org/debian/pool/main/r/raspberrypi-firmware/raspberrypi-kernel_1.20220120-1_arm64.deb
          wget http://archive.raspberrypi.org/debian/pool/main/r/raspberrypi-firmware/raspberrypi-kernel-headers_1.20220120-1_arm64.deb
          
          sudo dpkg -i raspberrypi-kernel_1.20220120-1_arm64.deb raspberrypi-kernel-headers_1.20220120-1_arm64.deb
          
          sudo apt-mark hold raspberrypi-kernel raspberrypi-kernel-headers
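          After a reboot, you can verify that the pinned kernel is active and that the hold is in place:

          uname -r           # should show the 5.10-series kernel from the 1.20220120 firmware release
          apt-mark showhold  # should list raspberrypi-kernel and raspberrypi-kernel-headers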
      • b) Apply rgsilva's compile trick, and install ceph-dkms from scratch, which is surprisingly fast on the Pi 4.

        Note: currently not working.
        • Install ceph-dkms
          1. sudo -i

          2. apt install git debhelper (ignore the errors, we are working on resolving those right now.)

          3. First you need to clone RPi's Kernel source code.

            git clone https://github.com/raspberrypi/linux.git --depth=1
            
          4. Then copy all ceph files into a new default-src directory.

            Note: the third line needs to copy to default-src/…, not sdefault-srcrc/… as in the original instructions. This is corrected below.

            mkdir -p default-src/drivers/block
            # typo from the original instructions corrected here
            cp -r linux/drivers/block/{rbd.c,rbd_types.h} default-src/drivers/block/
            grep -E "^#|^ccflags|rbd.o" linux/drivers/block/Makefile > default-src/drivers/block/Makefile
            mkdir -p default-src/fs/ceph
            cp linux/fs/ceph/* default-src/fs/ceph/
            mkdir -p default-src/net/ceph
            cp -r linux/net/ceph/* default-src/net/ceph/
            
          5. Now you need to clone the original repository

            git clone https://github.com/pimox/ceph-dkms.git
            
          6. Then you need to replace the default-src directory with the one you created in the step before

            rm -rf ceph-dkms/src/default-src
            cp -r default-src ceph-dkms/src/default-src
            
          7. To make sure the deb won't (easily) get replaced, we can increase its minor version. This step is optional and can cause issues with future updates, but I recommend it.

            Maybe it's better not to do this and instead pin the version at the end. (That's what I'm attempting.)

            (
              cd ceph-dkms/
              sed -i "s/0\\.0\\.2/0.0.2-1/g" debian/changelog
              sed -i "s/0\\.0\\.2/0.0.2-1/g" debian/dkms
            )
            
          8. Now you need to regenerate the .deb

            (cd ceph-dkms && make)
            
          9. You can finally install it using dpkg. Note: if you didn't change the version before, you need to update the package version below.

            Because I didn't modify the version, instead of

            (
              cd ceph-dkms
              sudo dpkg -i ceph-dkms_0.0.2-1_all.deb
            )
            

            I did run

            (
              cd ceph-dkms
              sudo dpkg -i ceph-dkms_0.0.2_all.deb
            )
            
          10. Make sure it's locked so it won't get updated:

            sudo apt-mark hold ceph-dkms
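            To confirm the module actually built for the running kernel, a quick check:

            dkms status | grep ceph   # should report the ceph module as installed for your kernel
            modinfo rbd | head -n 3   # the RBD block driver built by the package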
            
        • Install zfs-dkms

          0. For the zfs-dkms, you can just follow OpenZFS's instructions for installing on Debian Bullseye.
          1. echo 'deb http://deb.debian.org/debian bullseye-backports main contrib' > /etc/apt/sources.list.d/bullseye-backports.list
            echo 'deb-src http://deb.debian.org/debian bullseye-backports main contrib' >> /etc/apt/sources.list.d/bullseye-backports.list
            
          2. echo 'Package: libnvpair1linux libnvpair3linux libuutil1linux libuutil3linux libzfs2linux libzfs4linux libzpool2linux libzpool4linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed' > /etc/apt/preferences.d/90_zfs
            echo 'Pin: release n=bullseye-backports' >> /etc/apt/preferences.d/90_zfs
            echo 'Pin-Priority: 990' >> /etc/apt/preferences.d/90_zfs
            
          3. apt update
            apt install dpkg-dev
            
            • We didn't install linux-headers-$(uname -r) because of Couldn't find any package by glob 'linux-headers-5.15.32-v8'
            • We didn't install linux-image-amd64 as that would be the wrong architecture, and linux-image-aarch64 was not available.
          4. DEBIAN_FRONTEND=noninteractive  apt install zfs-dkms zfsutils-linux
            
        • Check that the ceph-dkms and zfs-dkms packages exist.

          > As of today, at least, this seems to be fully working. Just tested on both nodes on my cluster and they all seem pretty happy with the new packages:
          >
          >     $ apt list --installed | grep -E "(ceph|zfs)-dkms"
          >     ceph-dkms/now 0.0.2-1 all [installed,local]
          >     zfs-dkms/bullseye-backports,bullseye-backports,now 2.1.4-1~bpo11+1 all [installed,automatic]
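          Besides the package check, it's worth confirming the ZFS module actually builds and loads before creating any pool:

          sudo modprobe zfs   # errors here mean the dkms build failed
          zfs version         # prints userland and kernel module versions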

Install the Home Assistant OS (HAOS) VM

  • Use the generic aarch64 image (generic-aarch64) instead of the Pi one.
  • Follow luckydonald's post (thanks!):
  1. Download the generic-aarch64 image from the Home Assistant release page
    wget https://github.com/home-assistant/operating-system/releases/download/8.0.rc4/haos_generic-aarch64-8.0.rc4.img.xz
    
  2. xz --decompress haos_generic-aarch64-*.img.xz
    
  3. mv haos_generic-aarch64-8.0.rc4.img /var/lib/vz/template/iso/haos_generic-aarch64-8.0.rc4.img
    
  4. # this needs sudo; it also worked in the shell on the Proxmox web interface.
    qm create 100 --bios ovmf --cores 2 --memory 4096 --scsihw virtio-scsi-pci -net0 virtio,bridge=vmbr0
    
  5. # this needs sudo; it also worked in the shell on the Proxmox web interface.
    qm importdisk 100 /var/lib/vz/template/iso/haos_generic-aarch64-*.img local
    
    which ends with
    Successfully imported disk as 'unused0:local:100/vm-100-disk-0.raw'
    
  6. On the Proxmox web interface highlight your new VM.
  7. Attach created disk
    1. Select Hardware Tab
    2. Select the new, unassigned disk
    3. click Edit
    4. in the Bus/Device drop down select SATA.
    5. Click Add button.
  8. In the Options Tab:
    1. Name: home-assistant
    2. Start at Boot: Check yes.
    3. Boot Order: Set the boot disk to the newly added one.
  9. Click Start
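If you'd rather stay in the shell, steps 6 to 9 should also be doable with qm set; a sketch, assuming the disk was imported as local:100/vm-100-disk-0.raw as in the output above:

qm set 100 --name home-assistant                 # Options tab: Name
qm set 100 --onboot 1                            # Options tab: Start at Boot
qm set 100 --sata0 local:100/vm-100-disk-0.raw   # attach the imported disk via SATA
qm set 100 --boot order=sata0                    # boot from the newly added disk
qm start 100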

Clean restart (for VM configuration changes)

  • Instead of just issuing a shutdown in Proxmox,
  • go to your home assistant's page (e.g. http://homeassistant.local:8123)
  • Settings → System → Hardware (http://homeassistant.local:8123/config/hardware)
  • Three Dots menu on the top left
  • Click SHUTDOWN HOST
  • Start the VM again in Proxmox.
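The Proxmox side of this also works from the host shell; qm shutdown sends an ACPI shutdown, which HAOS handles gracefully (assuming VM ID 100):

qm shutdown 100   # graceful ACPI shutdown, handled by HAOS
qm start 100      # start the VM again afterwards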

Quick restart (for home assistant config file changes only)

Pass a USB device through to the VM

Following pimox/pimox7#48 (comment)

1. Take note of your USB IDs

  1. Connect the USB to the Raspberry Pi
    • Yes, I did forget this in the past, and spent 20 minutes trying to fix it.
  2. In the Proxmox UI, go to the Hardware page of your VM
  3. Click Add USB Device:
  4. Select your connected USB device/port
    • Either select Use USB Vendor/Device ID,
      if you want this exact device to be shared, independent of where you plug it in.
      • You can resize the columns of that table if the ID is truncated with an ellipsis (…).
      • Note down the value in the Device column, in this case that's 10c4:ea60.
    • Or select Use USB Port,
      if you want this USB port to be shared,
      or if you have to add more than one device of the same model (= same device ID).
      • You can resize the columns of that table if the ID is truncated with an ellipsis (…).
      • Note down the value in the Port column, in this case that's 1-1.3.
    • Note down the value in the first column (Device or Port) when you have selected your device.
      • In this case that's either 10c4:ea60 for the Device ID, or 1-1.3 if you went for the Port.
    • Don't actually add the device; press the x button.
  5. In case you already added the device, take note of its ID and delete it with the Remove button at the top.
  Alternatively, you can find the IDs via terminal on the host:

  1. Run lsusb without your stick plugged in:

    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

  2. Plug in your USB drive.
  3. Run lsusb with your stick plugged in:

    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 003: ID 10c4:ea60 Silicon Labs CP210x UART Bridge
    Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

  4. The newly added entry is your device:

    Bus 001 Device 003: ID 10c4:ea60 Silicon Labs CP210x UART Bridge

  5. Those lines have the format Bus AAA Device BBB: ID CCCC:DDDD <description>, so

    • your Device ID is CCCC:DDDD, in this case 10c4:ea60.
    • your Port is (probably) A-A.B, in this case 1-1.3.
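To avoid eyeballing the two lists, you can also diff them:

lsusb | sort > /tmp/usb-before.txt            # with the stick unplugged
# ... plug the stick in ...
lsusb | sort > /tmp/usb-after.txt
diff /tmp/usb-before.txt /tmp/usb-after.txt   # the line marked ">" is your device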

2. Figure out the ID of your VM

  • Use the GUI to read the ID.
  • Usually they start at 100.
  1. From the previous step we know the number of the VM we want to edit, in our case 100.
  2. Stop the VM
  3. Open the VM config in a terminal editor, using the number of your VM.
    • The file is located at /etc/pve/qemu-server/<VM number>.conf
    • Edit it, e.g. sudo pico /etc/pve/qemu-server/100.conf
  4. Find the line starting with args:
    • If that line does not exist, create it as the first line of the file and add the parts described below; also see the Examples further down.
    1. It should contain -device qemu-xhci, which provides the USB controller needed by the remote console (emulated mouse and keyboard).
    2. Add your device as well, one of these for each of them:
      • Device ID AAAA:BBBB: Add -device usb-host,vendorid=0x<AAAA>,productid=0x<BBBB>
        • in this case for Device ID 10c4:ea60 it would be -device usb-host,vendorid=0x10c4,productid=0xea60.
        • This is recommended, because no matter to which USB port you plug this exact device, it will always be picked up.
      • Port A-A.B: Add -device usb-host,hostbus=<A>,hostport=<A.B>
        • in this case for Port 1-1.3 it would be -device usb-host,hostbus=1,hostport=1.3.
        • this makes sense if you want to share exactly this slot, e.g. a slot reserved for backup flash drives, where you swap in different drives but always use the same USB port.
  5. Start the VM again

Examples

args: -device qemu-xhci -device usb-host,vendorid=<0xVendorid>,productid=<0xProductid>

In my case for the SONOFF Zigbee stick:

args: -device qemu-xhci -device usb-host,vendorid=0x10c4,productid=0xea60

For having the same stick in there twice (i.e. same vendor + product ID), you have to use the port numbers.

args: -device qemu-xhci -device usb-host,hostbus=1,hostport=1.2 -device usb-host,hostbus=1,hostport=1.3

First you need a 64-bit ARM template.

This is also known as a Root FS: basically a bunch of files that make the chosen Linux flavor special.

If you are coming from the Docker world, this is basically the FROM debian step.

About VM Ids

For clarity's sake, even if the numerical IDs for CTs share the starting point of 100 with the VMs, I will use 100-199 for VMs exclusively, and 200-299 for LXC containers. Should I ever breach that limit, I will keep adding blocks in an alternating pattern, i.e. a block belongs to VMs if the hundreds digit is odd, and to CTs if it's even. This also means my first CT will have the ID 200 in this guide.

Install LXC container in Pimox

Now we are ready to go

Get an arm64 rootfs

The Raspberry Pi still uses ARM, but thanks to the Pi 4B being 64-bit capable you can use the 64-bit version, hence arm64.

  1. Choose your linux flavor

  2. Find a rootfs archive file

    • This is usually a file called rootfs.zip, rootfs.tar, rootfs.tar.gz, rootfs.tar.xz or similar.
    • For example, I found rootfs.tar.xz in /images/debian/bullseye/arm64/default/20220505_06:03.
      • Note: new images are built every day, so the URL also only lasts for a day or two. Just grab a recent one from a few directories up.
  3. Download the rootfs archive, …and give it a proper name:
    You need to modify the $DATE variable with an existing date from the URL above.
    Note, if you picked ubuntu, your $ROOTFS_DOWNLOAD_URL needs to be adapted as well.

    DATE="20220616_05:25"
    TEMPLATE_FILE_NAME="debian-bullseye-arm64_${DATE}_rootfs.tar.xz"
    ROOTFS_DOWNLOAD_URL="https://uk.lxd.images.canonical.com/images/debian/bullseye/arm64/default/${DATE}/rootfs.tar.xz"
    
    wget "${ROOTFS_DOWNLOAD_URL}" -O "${TEMPLATE_FILE_NAME}"

    If the server serves them from different paths in the future, you need to adapt $ROOTFS_DOWNLOAD_URL accordingly.
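
    A quick sanity check that the download is a real xz tarball (and not an HTML error page from an expired URL) can save some head-scratching:

    file "${TEMPLATE_FILE_NAME}"                   # should say: XZ compressed data
    tar -tJf "${TEMPLATE_FILE_NAME}" | head -n 5   # should list top-level rootfs dirs like bin/ and etc/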

  4. Move it to the CT (Container Template) directory

    • mv ${TEMPLATE_FILE_NAME} /var/lib/vz/template/cache/${TEMPLATE_FILE_NAME}
  5. Confirm its existence in the GUI

    • in the tree view on the left click on Datacenter, your node (e.g. pimox), local, then in the middle CT Images.
    • The newly downloaded image should now be in that list.
  6. Create a CT, either via GUI or via terminal

  • pct create 200 /var/lib/vz/template/cache/${TEMPLATE_FILE_NAME} --arch arm64 --features nesting=1 --hostname docker --ostype debian --net0 name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,ip6=dhcp --password='much $ecure passw0rd' --tags docker-host
    • pct create The command to create a CT container
    • 200 : (1 - N) The (unique) ID of the VM. See About VM Ids.
    • /var/lib/vz/template/cache/${TEMPLATE_FILE_NAME} The OS template or backup file.
    • --arch arm64 Architecture for the Pi.
    • --features nesting=1 for docker.
    • --hostname docker hostname and how it will be called in the local network.
    • --ostype debian That's what we choose above.
    • --net0 name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,ip6=dhcp
    • // --ssh-public-keys ~/.ssh/id_rsa.pub the root user of the Pimox/Proxmox install.
    • // --ssh-public-keys ~/.ssh/remote_id_rsa.pub my own one, for my laptop to connect.
    • --password='much $ecure passw0rd' as an alternative to the ssh-public-keys option above, which unfortunately seems to be failing.
    • --tags docker-host Tags of the Container. This is only meta information.
  • Note, most of those options can be edited/added with the CLI later as well, e.g.:

    • pct set 200 -net0 name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,ip6=dhcp
  • The net0 one seems to fail with unable to open file '/etc/network/interfaces.tmp.205501' - No such file or directory

    • In that case, create it via GUI.
    • If that also fails, check out this comment by @AubsUK (thanks!)
  • pct shutdown 200

  • // echo 'lxc.apparmor.profile: lxc-default-with-nesting' | sudo tee -a /etc/pve/lxc/200.conf

  • pct start 200

  • pct pull 200 /root/.ssh/authorized_keys authorized_keys.200.bak If there's no such file, that's okay.

  • pct push 200 /root/.ssh/authorized_keys /root/.ssh/authorized_keys Copy our authorized file to the CT.

  • Verbose start for debugging: lxc-start -n 200 -F -l DEBUG -o /tmp/lxc-200.log
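  • Once the CT is up, a quick check from the host that it is running and really is arm64 (assuming CT ID 200):

    pct status 200             # should report: status: running
    pct exec 200 -- uname -m   # should print: aarch64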





Docker

  • For now I have to use a VM, it seems; the CT thing isn't starting as soon as I attach any network.

Debian

  1. Download a Debian ISO
    wget https://cdimage.debian.org/debian-cd/11.4.0/arm64/iso-dvd/debian-11.4.0-arm64-DVD-1.iso
  2. Move it to the correct folder
    mv debian-11.4.0-arm64-DVD-1.iso /var/lib/vz/template/iso/debian-11.4.0-arm64-DVD-1.iso
@nistvan86

There doesn't seem to be too much info about this on the net so forgive me if this is the wrong place to ask (and delete if you like).

How does the performance compare, e.g. to a native Raspbian + Docker setup, when running containers on top of Pimox (which I think uses KVM) + Debian + Docker? Is it usable? I'm thinking about migrating my current Raspbian + dockerized HA to a Pimox + HAOS + Debian/Docker (for the rest of the containers) setup, but I'm unsure if it's a wise idea to do it or not.

@luckydonald
Author

luckydonald commented Jun 3, 2023

@nistvan86 I don't really have good experience or benchmarks, because I only tried this route and never any other. HA isn't very demanding, I think (I don't use cameras etc., just simple Zigbee), so it doesn't matter much.
With the snapshot functionality of Proxmox NOT working, however, I don't see a big argument for Proxmox if you only run HA - like I do.

@nistvan86

@luckydonald thanks!

@AubsUK

AubsUK commented Jun 16, 2023

I too had the "unable to open file '/etc/network/interfaces.tmp.xxxx' - No such file or directory" issue.

I've just figured out how to get the latest Cloud image of Debian 12 Bookworm working on PiMox 7:

https://uk.lxd.images.canonical.com/images/debian/bookworm/arm64/cloud/20230614_05:24/rootfs.tar.xz

When configuring the CT, leave IP = Static. Do not enter any IP addresses anywhere.

Boot the CT, enter the CTs console.

Install Nano (you can use VI/VIM, but I'm not a fan of them)
apt install nano

Edit the network configuration file
nano /etc/systemd/network/eth0.network

Change:

[Network]
DHCP=yes

to

[Network]
Address=192.168.1.101/24
Gateway=192.168.1.1

Restart Systemd Networkd
systemctl restart systemd-networkd

Check the static IP is set (you'll probably also see the DHCP IP is still there)
ip -c a

Reboot
reboot now

You've now got the static IP set up

@luckydonald
Author

luckydonald commented Jun 21, 2023

@AubsUK thanks! I linked to that in the section above.

@jackpassgfw

> (quotes @AubsUK's comment above in full)

Why did I lose the static IP after I rebooted my container? The static IP doesn't seem to be saved. It comes back again after running
systemctl restart systemd-networkd

@remkohat

Maybe a bit late to the party...

Yes, for Debian CTs you need to set up your network manually within the CT, and yes, you lose your network connectivity after a reboot.
That's because systemd-networkd isn't started automatically.

There are 2 options.

Option 1: Run systemctl enable systemd-networkd to have network after a reboot.

Option 2: Make sure you have network connectivity and then run apt install ifupdown2.
After that you'll be able to set up your network in the CT properties, as you can with Ubuntu CTs.
