This is a write-up of how I installed my Raspberry Pi(s).
The base is Proxmox, so I don't have to worry about reformatting the SD card every time I wanna try out something new: I can start VMs and LXC containers, as well as Docker containers once it's configured properly.
Note: these files are versioned, so you can always look at what changed from time to time.
Running the Pimox installer script asks a few questions and prints a summary of the changes it will make:
Enter new static IP and NETMASK e.g. 192.168.0.100/24 : 192.168.178.30/24
Is 192.168.178.1 the correct gateway ? y / n : y
#########################################################################################
=========================================================================================
THE NEW HOSTNAME WILL BE: pimox-wg
=========================================================================================
THE DHCP SERVER ( dhcpcd5 ) WILL BE REMOVED !!!
=========================================================================================
THE PIMOX REPO WILL BE ADDED IN : /etc/apt/sources.list.d/pimox.list CONFIGURATION :
# Pimox 7 Development Repo
deb https://raw.githubusercontent.com/pimox/pimox7/master/ dev/
=========================================================================================
THE NETWORK CONFIGURATION IN : /etc/network/interfaces WILL BE CHANGED !!! TO :
auto lo
iface lo inet loopback
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.178.30/24
gateway 192.168.178.1
bridge-ports eth0
bridge-stp off
bridge-fd 0
=========================================================================================
THE HOSTNAMES IN : /etc/hosts WILL BE OVERWRITTEN !!! WITH :
127.0.0.1 localhost
192.168.178.30 pimox-wg
=========================================================================================
THESE STATEMENTS WILL BE ADDED TO THE /boot/cmdline.txt IF NONE EXISTENT :
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
=========================================================================================
#########################################################################################
YOU ARE OKAY WITH THESE CHANGES ? YOUR DECLARATIONS ARE CORRECT ? CONTINUE ? y / n : y
=========================================================================================
! SETUP NEW ROOT PASSWORD !
=========================================================================================
To make sure the deb won't get (easily) replaced, we can increase its minor version.
This step is optional and can cause issues with future updates, but I recommend it.
Maybe it's better not to do this and instead pin the version at the end. (That's what I'm attempting.)
(
cd ceph-dkms/
# bump the package version from 0.0.2 to 0.0.2-1 in the packaging metadata
sed -i 's/0\.0\.2/0.0.2-1/g' debian/changelog
sed -i 's/0\.0\.2/0.0.2-1/g' debian/dkms
)
Now you need to regenerate the .deb
(cd ceph-dkms && make)
You can finally install it using dpkg.
Note: if you didn't change the version before, you need to adjust the package file name below accordingly.
Because I didn't modify the version, instead of
(
cd ceph-dkms
sudo dpkg -i ceph-dkms_0.0.2-1_all.deb
)
I ran
(
cd ceph-dkms
sudo dpkg -i ceph-dkms_0.0.2_all.deb
)
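Since I kept the version at 0.0.2, pinning is what keeps apt from later replacing the locally built package. A minimal sketch of that pin; apt-mark hold is the simplest form of it, applied to the package we just installed:

```sh
# stop apt from upgrading/replacing the locally built package
sudo apt-mark hold ceph-dkms
```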
Check that the ceph-dkms and zfs-dkms packages exist.
> As of today, at least, this seems to be fully working. Just tested on both nodes on my cluster and they all seem pretty happy with the new packages:
>
> ```
> $ apt list --installed | grep -E "(ceph|zfs)-dkms"
> ceph-dkms/now 0.0.2-1 all [installed,local]
> zfs-dkms/bullseye-backports,bullseye-backports,now 2.1.4-1~bpo11+1 all [installed,automatic]
> ```
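The following qm commands create the Home Assistant OS VM. They assume the HAOS image has already been downloaded and unpacked into /var/lib/vz/template/iso/; a rough sketch of that step (the release number and exact URL here are assumptions, pick a current aarch64 build from the HAOS releases page):

```sh
cd /var/lib/vz/template/iso
# 11.1 is only an example release; check the releases page for a current one
wget https://github.com/home-assistant/operating-system/releases/download/11.1/haos_generic-aarch64-11.1.img.xz
xz -d haos_generic-aarch64-11.1.img.xz
```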
# this needs sudo; it works as-is in the shell on the web interface (which runs as root)
qm create 100 --bios ovmf --cores 2 --memory 4096 --scsihw virtio-scsi-pci --net0 virtio,bridge=vmbr0
# likewise: sudo needed unless you're in the web shell
qm importdisk 100 /var/lib/vz/template/iso/haos_generic-aarch64-*.img local
which ends with
Successfully imported disk as 'unused0:local:100/vm-100-disk-0.raw'
On the Proxmox web interface, highlight your new VM.

Attach the created disk:
- Select the Hardware tab.
- Select the new, unassigned disk.
- Click Edit.
- In the Bus/Device drop-down, select SATA.
- Click the Add button.

In the Options tab:
- Name: home-assistant
- Start at Boot: check yes.
- Boot Order: set the boot disk to the newly added one.
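If you prefer the CLI over the web interface, the same attach/boot/options steps could look roughly like this (a sketch; the disk name is taken from the importdisk output above):

```sh
# attach the imported disk on the SATA bus, make it the boot disk,
# start the VM at boot, and give it a name
qm set 100 --sata0 local:100/vm-100-disk-0.raw
qm set 100 --boot order=sata0 --onboot 1 --name home-assistant
```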
First, run lsusb without the stick plugged in:
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Plug in your USB drive.
Run lsusb again with your stick plugged in:
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 10c4:ea60 Silicon Labs CP210x UART Bridge
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
The newly added entry is your device:
Bus 001 Device 003: ID 10c4:ea60 Silicon Labs CP210x UART Bridge
Those lines have the format Bus AAA Device BBB: ID CCCC:DDDD description, so:
your Device ID is CCCC:DDDD, in this case 10c4:ea60.
your Port is (probably) A-A.B, in this case 1-1.3 (lsusb -t shows the exact port path if in doubt).
In this case, for Port 1-1.3, the QEMU argument would be -device usb-host,hostbus=1,hostport=1.3.
Passing through by port makes sense if you want to share exactly that physical slot. That could be useful for a flash drive used for backups: you may want to use different flash drives over time, but always the same USB slot.
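You don't have to hand-craft the QEMU argument, though; qm set can add the passthrough for you. A minimal sketch using the values from the lsusb output above (VM ID 100 and the usb0/usb1 slots are my choices):

```sh
# pass through by device ID: follows the device to whichever port it's plugged into
qm set 100 -usb0 host=10c4:ea60
# or pass through by port: always that physical slot, whatever is plugged in
qm set 100 -usb1 host=1-1.3
```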
This is also known as the Root FS.
Basically, it's the bunch of files that make the chosen Linux flavor special.
If you are coming from the Docker world, it's basically the FROM debian step.
About VM IDs
For clarity's sake, even if the numerical IDs for CTs share the starting point of 100 with the VMs, I will use 100-199 exclusively for VMs, and 200-299 for LXC containers. Should I ever breach that limit, I will always add blocks in an alternating pattern, i.e. a block belongs to VMs if its hundreds digit is odd, and to CTs if it's even.
This also means my first CT will have the ID 200 in this guide.
Install an LXC container in Pimox
Now we are ready to go.
Get an arm64 rootfs
Raspberry Pi still uses ARM, but thanks to the Pi 4B you can use the 64-bit version, hence arm64.
I will be using Debian in this guide, as I only want to run Docker in it, so I don't need cutting-edge system packages. Docker ships its own packages from a separate package repository anyway, so those will be fresh. Everything else being stale-but-stable packages is not a bad thing here.
Find a rootfs archive file
This is usually a file called rootfs.zip, rootfs.tar, rootfs.tar.gz, rootfs.tar.xz or similar.
Note: new images are built every day, so the URL also only lasts for a day or two. Just grab a recent one from a few directories up.
Download the rootfs archive,
…and give it a proper name:
You need to replace the $DATE variable with an existing date from the URL above.
Note: if you picked Ubuntu, your $ROOTFS_DOWNLOAD_URL needs to be adapted as well.
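A minimal sketch of the download, assuming the images.linuxcontainers.org directory layout ($DATE and the exact path here are placeholders; pick values from a listing that currently exists):

```sh
# pick a build date from the directory listing first
DATE="20240101_05:24"
ROOTFS_DOWNLOAD_URL="https://images.linuxcontainers.org/images/debian/bullseye/arm64/default/${DATE}/rootfs.tar.xz"
wget "$ROOTFS_DOWNLOAD_URL" -O debian-bullseye-arm64-rootfs.tar.xz
```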
There doesn't seem to be too much info about this on the net so forgive me if this is the wrong place to ask (and delete if you like).
How does the performance compare, e.g. to a native Raspbian + Docker setup, when running containers on top of Pimox (which I think uses KVM) + Debian + Docker? Is it usable? I'm thinking about migrating my current Raspbian + dockerized HA to a Pimox + HAOS + Debian/Docker (for the rest of the containers) setup, but I'm unsure if it's a wise idea to do it or not.
@nistvan86 I don't really have good experience or benchmarks, because I only tried this route and never any other. HA isn't very demanding, I think (I don't use cameras etc., just simple Zigbee), so it doesn't matter much.
With the snapshot functionality of Proxmox NOT working, however, I don't see a big argument for Proxmox if you only run HA, like I do.
Check the static IP is set (you'll probably also see the DHCP IP is still there): ip -c a
Reboot: reboot now
You've now got the static IP set up.

Why did I lose the static IP after I rebooted my container? The static IP doesn't seem to be saved. It comes back again after running
systemctl restart systemd-networkd
Yes, for Debian CTs you need to set up your network manually within the CT, and yes, you lose your network connectivity after a reboot.
That's because systemd-networkd isn't started automatically.
There are two options.
Option 1: Run systemctl enable systemd-networkd to have network after a reboot (the config file this relies on is sketched below).
Option 2: Make sure you have network connectivity and then run apt install ifupdown2.
After that you'll be able to set up your network in the CT properties as you can with Ubuntu CTs.
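For reference, a minimal sketch of the manual setup Option 1 relies on, inside the CT (the file name and addresses are assumptions; use your own network's values):

```
# /etc/systemd/network/10-eth0.network (file name is an assumption)
[Match]
Name=eth0

[Network]
Address=192.168.178.40/24
Gateway=192.168.178.1
DNS=192.168.178.1
```

Then enable the service so the address survives reboots: systemctl enable --now systemd-networkd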