Proxmox-VE is great. You can spin up a Debian guest in a trice, and then spend endless hours tinkering away getting everything working just the way you want. You declare yourself happy and move on to other things.
Until…
Days, weeks or months later you hit a problem when your guest OS runs out of disk space. You think back to the moment when you picked the guest's disk size. And you realise that you - to borrow the words of the Grail Knight in Indiana Jones and the Last Crusade - "chose poorly".
You think, "surely auto-resizing should be possible". You Google. You find some hints that ZFS might've been a solution but, like your original choice of disk size, that, too, would've needed you to "choose wisely" in advance.
You also find information about using Proxmox-VE to resize the disk. You try that. It looks like it works but your Debian guest clings stubbornly to its original size. You give up and set about rebuilding, this time with wiser choices.
The reason why Proxmox-VE disk-size changes don't make it through to the Debian guest is simple. Let's assume the starting position is a 32GB "disk" allocated to the guest. What Proxmox-VE sees is something like this:
$ sudo lvs | grep "\bvm-«vmid»-disk-0\b"
vm-«vmid»-disk-0 pve Vwi-aotz-- 32.00g
What Debian sees is:
$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 32G 0 disk
├─sda1 8:1 0 31G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 975M 0 part [SWAP]
When you ask Proxmox-VE to add space (eg doubling to 64GB), the effect is:
$ sudo lvs | grep "\bvm-«vmid»-disk-0\b"
vm-«vmid»-disk-0 pve Vwi-aotz-- 64.00g
The change Debian sees is:
$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 31G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 975M 0 part [SWAP]
In short, both Proxmox-VE and Debian are aware that the disk is 64GB in size. It's just that `/dev/sda1` (the root partition) isn't expanding to take up the extra space.
If you've been playing with Linux for a while, your first thought is likely to be:
$ sudo resize2fs /dev/sda1
The filesystem is already 8138240 (4k) blocks long. Nothing to do!
Computer says "no". 👎
Why? Here's what's going on:
*(figure: Partition Tables)*
Before you started on this journey, the `sda1` partition had about 31GB out of the 32GB on the `sda` disk, while that final 1GB was devoted to swap.

When you expanded the disk at the Proxmox-VE level, another 32GB was added to the end. But the `sda1` partition can't expand into that space because the `sda2`/`sda5` swap partitions are behaving like a bit of petrified chewing gum on the road to progress.

What we need to do is move the swap partitions to the end of the disk. Then `sda1` will be able to expand.
The usual caveats apply here:
- You'll be fiddling with partition tables. That's low-level. It's easy to stuff things up so don't rush. Be careful and cautious, and double-check everything as you go;
- You follow these instructions entirely at your own risk. Please don't blame me if you wind up with a busted system; and
- You should take a backup of, or make a clone of, your guest before you start (see the sketch just below this list).
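For example, working from the Proxmox-VE host shell, you could take a backup with `vzdump` or clone the guest outright. This is a sketch only - substitute your own «vmid», target storage and new VM ID:

$ sudo vzdump «vmid» --mode snapshot --storage local
$ sudo qm clone «vmid» «newid» --name pre-resize-clone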
There's a chance you might be reading this gist for complete instructions, including the step of using Proxmox-VE to resize the disk. So let's include that too.
It's safe to expand the disk while the guest is running but, if that makes you uncomfortable, you can always shut down first:
$ sudo shutdown -h now
Then, in the Proxmox-VE GUI:
*(figure: Expanding a disk)*
- In "Pool" view 🄰, select "Datacenter" 🄱 and click the disclosure triangle ﹥ to expand the group if necessary.
- Select the guest 🄲 whose disk you want to expand.
- In the middle panel, click "Hardware" 🄳.
- In the adjacent panel, select the "Hard Disk" row 🄴 (near the bottom).
- Click Disk Action 🄵 and choose "Resize" 🄶.
- Type a value in GiB into the "Size Increment" field 🄷 and/or use the up and down buttons. In this example, I'm adding 32GiB to get to a total of 64GiB for the disk.
- Click Resize 🄸.
- If you shut down the guest, click Start 🄹.
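If you'd rather skip the GUI, the resize can also be done from the Proxmox-VE host shell with `qm`. A sketch, assuming the guest's disk shows up as `scsi0` (check the Hardware list and adjust):

$ sudo qm resize «vmid» scsi0 +32G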
Wait until the guest comes up and then connect to it. You can do everything that follows from the "Console" window in the Proxmox-VE interface but I reckon it's always easier to use SSH.
A long UUID is involved. You don't want to be re-typing it by hand. You want copy-and-paste to work. In my experience, that rules out the "Console" window. But, hey, if it works for you…
Confirm that the disk has been expanded:
$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 31G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 975M 0 part [SWAP]
The disk (`sda`) is 64GB so we're good.
You can't delete or move partitions while they're in use, so we need to disable swap first.
You'd be wise to stop any production services that are RAM-intensive and likely to cause swapping. You want a minimally-active system for this.
$ sudo swapoff -a
Confirm that swap has been disabled:
$ free -m
total used free shared buff/cache available
Mem: 3915 393 3459 0 272 3521
Swap: 0 0 0
All zeroes in the Swap line. We're good to go.
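An equivalent check, if you prefer it, is `swapon --show`, which prints nothing at all when no swap is active:

$ swapon --show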
You're going to need the `parted` disk-partitioning utility so make sure that's installed:
$ sudo apt update && sudo apt install -y parted
Launch the utility:
$ sudo parted /dev/sda
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
It's always a good idea to pass the disk you want to manipulate to `parted`. It avoids "guessing poorly".
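If you're not sure which device to pass, `parted -l` surveys every disk it can see:

$ sudo parted -l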
Ask the utility to report on the status quo. This is our baseline and it's a handy reference as we move along:
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 33.3GB 33.3GB primary ext4 boot
2 33.3GB 34.4GB 1022MB extended
5 33.3GB 34.4GB 1022MB logical linux-swap(v1) swap
We need to clobber partitions 2 and 5. Partition 5 is a logical partition inside the extended partition 2, so getting rid of 2 takes 5 along for the ride:
(parted) rm 2
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 33.3GB 33.3GB primary ext4 boot
Next, we need to re-create partition 2 but, this time, at the end of the disk. Seeing as I'm here, I'm also taking the opportunity to increase swap space to 4GB.
Opinions vary on how much swap is good. It's really up to you.
Heads up! We're about to use some negative numbers:
- `-4GB` means "4GB from the end of the disk"; while
- `-1s` is a magic incantation that means "the last sector of the disk".
(parted) mkpart extended -4GB -1s
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 33.3GB 33.3GB primary ext4 boot
2 64.7GB 68.7GB 4000MB extended lba
Next, we need to re-create the logical partition within partition 2, using the same `Start` and `End` numbers as partition 2:
(parted) mkpart logical linux-swap(v1) 64.7GB 68.7GB
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 33.3GB 33.3GB primary ext4 boot
2      64.7GB  68.7GB  4000MB  extended                   lba
5      64.7GB  68.7GB  3999MB  logical   linux-swap(v1)   swap, lba
If you compare that with the baseline, you'll see partition 2 didn't have any flags while partition 5 only had a `swap` flag. Here, both have gained the `lba` flag, which stands for Logical Block Addressing. I have no idea whether that's appropriate in this situation. I assume the people who set up Debian Bookworm know a lot more than I do and had good reasons for avoiding `lba`, so I'm going to follow their lead by toggling the flag off for both partitions:
(parted) toggle 2 lba
(parted) toggle 5 lba
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 33.3GB 33.3GB primary ext4 boot
2 64.7GB 68.7GB 4000MB extended
5 64.7GB 68.7GB 3999MB logical linux-swap(v1) swap
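If the rounded `GB` figures make you nervous, you can ask `parted` for exact sector numbers at any time:

(parted) unit s
(parted) print

That reports `Start` and `End` in sectors, which is handy for double-checking that boundaries line up; switch back with `unit GB` when you're done.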
The last step is to change the end of partition 1 to be the same as the start of partition 2. The way you read those columns is:
- `Start` is "from and including"; while
- `End` is "up to but not including".

Thus we want the `End` of partition 1 to be the same as the `Start` of partition 2. Here we go:
(parted) resizepart 1 64.7GB
Warning: Partition /dev/sda1 is being used. Are you sure you want to continue?
Yes/No? yes
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 64.7GB 64.7GB primary ext4 boot
2 64.7GB 68.7GB 4000MB extended
5 64.7GB 68.7GB 3999MB logical swap
Our work with `parted` is done:
(parted) quit
Information: You may need to update /etc/fstab.
That's good advice about `fstab`, and we'll get to that in a moment.
Take a peek at what Debian thinks now:
$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 60.3G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 3.7G 0 part
We have a 64GB disk with a ~60GB root partition, and ~4GB in `sda5` which is intended for swap.
If you're wondering about the numeric rubberiness on display in these commands, it is the result of some calculations being done in gigabytes (GB = 10⁹ bytes) while others are done in gibibytes (GiB = 2³⁰ bytes).
If you re-run the `lsblk` command with the `-b` flag ("display in bytes"), the `sda5` partition is 3,999,268,864 bytes. From there it's easy to see that the `3999MB` displayed by `parted` is the result of shifting the decimal point six places to the left (ie a "divide by powers of 10" operation) and, accordingly, that the GB and MB being employed by `parted` are the formal SI gigabyte and megabyte units.

If, instead, you divide the byte count by 2³⁰ and round to one decimal place, the answer is 3.7, which implies `lsblk` is reporting in powers of 2 and using G as an abbreviation for the IEC gibibyte unit GiB.
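You can check that arithmetic for yourself with `bc` (assuming it's installed; `sudo apt install -y bc` if not):

$ lsblk -bno SIZE /dev/sda5
3999268864
$ echo 'scale=1; 3999268864 / 2^30' | bc
3.7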
There is, however, still one fly in the ointment:
$ df -h /dev/sda1
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 31G 3.2G 26G 11% /
In other words, although the partition has been expanded, the file system still isn't using all of it. Fix that:
$ sudo resize2fs /dev/sda1
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 8
The filesystem on /dev/sda1 is now 15795642 (4k) blocks long.
$ df -h /dev/sda1
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 60G 3.2G 54G 6% /
Ripper! We have finally increased the size of the root file system.
Not quite done, though. We have some housekeeping to do.
We need to re-enable swapping:
$ sudo mkswap /dev/sda5
Setting up swapspace version 1, size = 3.7 GiB (3999264768 bytes)
no label, UUID=d30f841a-116f-4de6-a4fe-e453ad2b2a06
$ sudo swapon /dev/sda5
Notice the UUID that came back from the `mkswap` command. Copy that to the clipboard.
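If the UUID has already scrolled off your screen, `blkid` can retrieve it:

$ sudo blkid -s UUID -o value /dev/sda5
d30f841a-116f-4de6-a4fe-e453ad2b2a06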
Earlier, `parted` reminded us to fix up `/etc/fstab`. That contains the "glue" which mounts the swap partition at boot time. You need to use `sudo` and your favourite text editor to edit the relevant line in that file. For example:
$ sudo vi /etc/fstab
Find the line that looks like this:
UUID=8c028e96-913a-4037-a586-7d4e8b6a2bbb none swap sw 0 0
The actual UUID on your system will be different. The `swap` in the third field (the File System Type) is the clue.
Edit that line to replace the UUID with the one that came back from `mkswap`, which should now be on your clipboard. In this example, the result would be:
UUID=d30f841a-116f-4de6-a4fe-e453ad2b2a06 none swap sw 0 0
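If you'd rather not fire up an editor at all, a `sed` one-liner can make the same substitution. A sketch using this example's UUIDs (substitute your own old and new values):

$ sudo sed -i 's/8c028e96-913a-4037-a586-7d4e8b6a2bbb/d30f841a-116f-4de6-a4fe-e453ad2b2a06/' /etc/fstab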
The Initial RAM Filesystem (`initramfs`) still knows about the old swap space on the old `sda5` partition, so it needs a kick in the pants:
$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-6.1.0-32-amd64
W: initramfs-tools configuration sets RESUME=UUID=8c028e96-913a-4037-a586-7d4e8b6a2bbb
W: but no matching swap device is available.
I: The initramfs will attempt to resume from /dev/sda5
I: (UUID=d30f841a-116f-4de6-a4fe-e453ad2b2a06)
I: Set the RESUME variable to override this.
You can see both the original UUID that was in `/etc/fstab` and its replacement.
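Those warnings are harmless for our purposes but, if you want the next run to be clean, the `RESUME` setting on a typical Debian system lives in `/etc/initramfs-tools/conf.d/resume` (an assumption - confirm the path on your own system before overwriting anything). Point it at the new UUID and regenerate:

$ echo 'RESUME=UUID=d30f841a-116f-4de6-a4fe-e453ad2b2a06' | sudo tee /etc/initramfs-tools/conf.d/resume
$ sudo update-initramfs -u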
We've also been futzing about in GRUB's domain so:
$ sudo update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.1.0-32-amd64
Found initrd image: /boot/initrd.img-6.1.0-32-amd64
Found linux image: /boot/vmlinuz-6.1.0-30-amd64
Found initrd image: /boot/initrd.img-6.1.0-30-amd64
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
Now for the acid test:
$ sudo reboot
After the reboot:
$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 60.3G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 3.7G 0 part [SWAP]
Enlarged root partition, and `/dev/sda5` is flagged as `[SWAP]`. Excellent!
$ df -h /dev/sda1
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 60G 3.2G 54G 6% /
The root file system is using all of the `sda1` partition!
$ free -m
total used free shared buff/cache available
Mem: 3915 398 3455 0 271 3517
Swap: 3813 0 3813
Swap is on and has also increased!
Cookin' with photons! 💡
It should be apparent that this gist also answers the question:
How can I change the swap size in a Debian guest?
Say it's at the default of 1GB but you want 4GB. You:
- Tell Proxmox-VE to add 3GB to the guest's disk.
- In the guest, you:
  - disable swap;
  - remove, then re-create the `sda2` and `sda5` partitions;
  - re-enable swap; and
  - do the housekeeping.