@dergachev
Last active September 2, 2024 23:28
# Copied from http://ttaportal.org/wp-content/uploads/2012/10/7-Reallocation-using-LVM.pdf
##
## Showing the problem: need to reallocate 32GB from /dev/mapper/pve-data to /dev/mapper/pve-root
##
df -h
# Filesystem Size Used Avail Use% Mounted on
# /dev/mapper/pve-root 37G 37G 0 100% /
# tmpfs 2.0G 0 2.0G 0% /lib/init/rw
# udev 10M 548K 9.5M 6% /dev
# tmpfs 2.0G 0 2.0G 0% /dev/shm
# /dev/mapper/pve-data 102G 19G 84G 19% /var/lib/vz
# /dev/sda1 504M 31M 448M 7% /boot
##
## shrinking /dev/mapper/pve-data
##
# unmount the file system from mount point /var/lib/vz
umount /var/lib/vz
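# note (not from the original guide): if the umount fails with "target is busy",
# something still has open files under /var/lib/vz; fuser (from the psmisc
# package) can show what, e.g.
#   fuser -vm /var/lib/vz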
# check the file system for errors before shrinking it
e2fsck -f /dev/mapper/pve-data
# e2fsck 1.41.3 (12-Oct-2008)
# Pass 1: Checking inodes, blocks, and sizes
# Pass 2: Checking directory structure
# Pass 3: Checking directory connectivity
# Pass 4: Checking reference counts
# Pass 5: Checking group summary information
# /dev/mapper/pve-data: 20/4587520 files (0.0% non-contiguous), 333981/18350080 blocks
# shrink the file system from 102G to 70G
resize2fs /dev/mapper/pve-data 70G
# Resizing the filesystem on /dev/mapper/pve-data to 18350080 (4k) blocks
# The filesystem on /dev/mapper/pve-data is now 18350080 blocks long
# reduce the logical volume /dev/mapper/pve-data by 32 GB (102 - 32 = 70 GB)
lvreduce -L-32G /dev/mapper/pve-data
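# optional sanity check (not part of the original guide): confirm the logical
# volume really is 70G before remounting; the volume group name "pve" is
# inferred from the device paths above
lvs pve
# a more forgiving variant of the shrink is to resize the filesystem slightly
# below the target, reduce the LV to the exact target size, then let resize2fs
# grow the filesystem back to fill it:
#   resize2fs /dev/mapper/pve-data 68G
#   lvreduce -L 70G /dev/mapper/pve-data
#   resize2fs /dev/mapper/pve-data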
# remount the file system on /var/lib/vz
mount /var/lib/vz
##
## extend /dev/mapper/pve-root
##
# extend the logical volume to fill 100% of the free space
lvextend -l +100%FREE /dev/mapper/pve-root
# grow the filesystem to fill the resized volume (resize2fs does this on-line, with the root filesystem still mounted)
resize2fs /dev/mapper/pve-root
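# note (not from the original guide): newer LVM releases can combine the two
# steps above; the --resizefs flag runs the filesystem resize after growing
# the LV:
#   lvextend -l +100%FREE --resizefs /dev/mapper/pve-root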
##
## checking the output
##
df -h
# Filesystem Size Used Avail Use% Mounted on
# /dev/mapper/pve-root 73G 31G 39G 45% /
# tmpfs 2.0G 0 2.0G 0% /lib/init/rw
# udev 10M 548K 9.5M 6% /dev
# tmpfs 2.0G 0 2.0G 0% /dev/shm
# /dev/sda1 504M 31M 448M 7% /boot
# /dev/mapper/pve-data 69G 404M 69G 1% /var/lib/vz
@HillLiu commented Jan 11, 2019

For Proxmox VE version 5.3-5:

  • check size
    • lvs
  • remove
    • lvremove /dev/pve/data
    • lvremove /dev/pve/swap (see the note after this list)
  • resize
    • lvresize -l +100%FREE /dev/pve/root
    • resize2fs /dev/mapper/pve-root

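A note on removing /dev/pve/swap (not part of the list above; paths assume the default Proxmox layout): the LV has to be out of use before lvremove will succeed, so deactivate swap first and drop its /etc/fstab entry.

swapoff -a
# then delete or comment out the /dev/pve/swap (or /dev/mapper/pve-swap) entry in /etc/fstab
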
@gregdotca commented Sep 14, 2019

This doesn't seem to work with Proxmox v6.

When I run df -h I have no /dev/mapper/pve-data mounted to /var/lib/vz, so umount /var/lib/vz fails because nothing is mounted.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  8.8G     0  8.8G   0% /dev
tmpfs                 1.8G   11M  1.8G   1% /run
/dev/mapper/pve-root   94G   68G   22G  76% /
tmpfs                 8.9G   43M  8.8G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 8.9G     0  8.9G   0% /sys/fs/cgroup
/dev/sdf1             7.3T  4.2T  3.0T  59% /drives/media
/dev/fuse              30M   20K   30M   1% /etc/pve
tmpfs                 1.8G     0  1.8G   0% /run/user/1000

And then when I try to run the e2fsck command I get:

# e2fsck -f /dev/mapper/pve-data
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/mapper/pve-data

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

I didn't run any of the other commands because I didn't want to mess anything up.
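
The "Bad magic number" result above is what you would see if pve-data is an LVM-thin pool rather than an ext4 filesystem, which is the default storage layout on recent Proxmox installs; a quick check (assuming the standard "pve" volume group) is:

lvs pve
# a thin pool shows "t" as the first character of its Attr column (e.g. twi-aotz--)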

@doouz commented Dec 11, 2019

I need to have all the virtual machines off before doing this, right?

@Brandin commented Apr 25, 2020

@chetcuti Experiencing the same issue on 6.0 upgraded to 6.1. Did you ever find the solution to this?

@gregdotca

@brandinarsenault, it turns out that it was actually an issue with how I set up my containers and VMs. I used Proxmox's storage defaults when I first set everything up, and apparently that's what ended up giving me the above issue. This thread should give you everything you need, provided it's actually the same issue I was having.
