Last active November 10, 2022
Restore GRUB in UEFI + LVM + LUKS setup (Manjaro)
Setup: UEFI, LVM + LUKS encrypted drive
Bootloader: GRUB
Links:
- https://wiki.manjaro.org/index.php/Restore_the_GRUB_Bootloader

Restore GRUB (boot into a live environment):

# find the encrypted partition (FSTYPE crypto_LUKS)
lsblk -f
cryptsetup open --type luks /dev/sda2 lvm
mount /dev/mapper/cryptVG-root /mnt
mount /dev/sda1 /mnt/boot   # boot partition
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
modprobe efivarfs
chroot /mnt
# if you get "EFI variables are not supported on this system.", mount efivarfs:
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
pacman -S mtools os-prober   # probably not needed
# set use_lvmetad = 0 in /etc/lvm/lvm.conf
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=manjaro_grub --recheck
update-grub
# restore use_lvmetad to its previous value in /etc/lvm/lvm.conf

Reboot; GRUB should now work.

Note: if the initial ramdisk needs to be rebuilt and boot is still unsuccessful:
- chroot as above, but don't mount the boot partition
- make sure use_lvmetad is set correctly
- run: mkinitcpio -p linux41
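The use_lvmetad toggle mentioned above can be scripted with sed rather than edited by hand. A minimal sketch, run here against a scratch copy (the demo filename is hypothetical) instead of the real /etc/lvm/lvm.conf:

```shell
# Illustration only: work on a scratch copy so nothing real is touched.
# Inside the chroot you would edit /etc/lvm/lvm.conf itself.
printf 'use_lvmetad = 1\n' > /tmp/lvm.conf.demo

# Disable lvmetad before running grub-install...
sed -i 's/use_lvmetad *= *1/use_lvmetad = 0/' /tmp/lvm.conf.demo
grep 'use_lvmetad' /tmp/lvm.conf.demo

# ...then flip it back to the previous value after update-grub:
sed -i 's/use_lvmetad *= *0/use_lvmetad = 1/' /tmp/lvm.conf.demo
```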
You saved me an hour or more. Thanks.
Here, try this strategy on for size... I had a similar problem with ParrotOS; it's pretty common with their distro, I've noticed, that you have to do a little triage work just to get it up and running. Typically it's the UEFI GRUB bootloader that doesn't get loaded. It's a pain in the ass... especially when it seems like something breaks on a daily basis and it's just easier to reinstall... at least until it becomes easier to figure out what you broke about 20 issues ago, instead of reinstalling... LOL
======================================================================================================
via ChRoot
- Boot into the live installer environment; a CD or USB will do just fine.
(Possibly irrelevant commentary: not sure how your system is set up, but for me I need either two USB sticks or one CD and one USB, both of which have to be 32/64-bit UEFI capable, or at least one of them must be 64-bit UEFI, just to activate the UEFI boot. It's a Dell... LOL)
Next, depending on your filesystem schema — RAID (mdadm, dmraid), LVM, or bcache — run one of the following command sets in your shell.
If using mdadm:
1. sudo apt-get install mdadm
2. Then assemble the arrays: sudo mdadm --assemble --scan
If using LVM:
1. sudo sh -ec 'apt-get install lvm2; vgchange -ay'
If using bcache:
1. sudo sh -ec 'apt-get install software-properties-common; add-apt-repository ppa:g2p/storage; apt-get update; apt-get install bcache-tools'
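The branching above can be sketched as a small script. This is a dry run only: it echoes the preparation commands for a chosen schema instead of executing them, and "lvm" is just the example input (the PPA step for bcache is omitted for brevity):

```shell
# Dry run: print (not execute) the preparation commands for a schema.
schema=lvm
case "$schema" in
  mdadm)  cmd="apt-get install mdadm && mdadm --assemble --scan" ;;
  lvm)    cmd="apt-get install lvm2 && vgchange -ay" ;;
  bcache) cmd="apt-get install bcache-tools" ;;
  *)      cmd="unknown schema: $schema" ;;
esac
echo "$cmd"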
Determine your normal system partition. The following commands may be helpful. The fdisk switch is a lowercase "L".
1. sudo fdisk -l
2. sudo blkid
3. df -Th
Mount your normal system partition, where X is the drive letter and Y is the partition number (substitute the correct partition: sda1, sdb5, etc.):
1. sudo mount /dev/sdXY /mnt
Example 1: sudo mount /dev/sda1 /mnt
Example 2: sudo mount /dev/md1 /mnt
Only if you have a separate boot partition (where sdXY is the /boot partition designation):
1. sudo mount /dev/sdXY /mnt/boot
Example 1: sudo mount /dev/sdb6 /mnt/boot
Example 2: sudo mount /dev/md0 /mnt/boot
Only if (some) of the system partitions are on a software RAID (otherwise skip this step): make sure the output of mdadm --examine --scan agrees with the array definitions in /etc/mdadm/mdadm.conf.
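One way to make that comparison is to diff the ARRAY lines from both sources. The sketch below uses stand-in ARRAY lines (hypothetical device and UUID) so it runs without a real RAID; on a live system the two files would come from the commands shown in the comments:

```shell
# On a real system you would compare:
#   mdadm --examine --scan                 (arrays detected on disk)
#   grep '^ARRAY' /etc/mdadm/mdadm.conf    (arrays the config declares)
# Stand-in data (hypothetical UUID) keeps this runnable without RAID:
printf 'ARRAY /dev/md0 UUID=1234\n' > /tmp/detected.demo
printf 'ARRAY /dev/md0 UUID=1234\n' > /tmp/declared.demo
if diff /tmp/detected.demo /tmp/declared.demo > /dev/null; then
  echo "mdadm.conf agrees with detected arrays"
else
  echo "MISMATCH: fix /etc/mdadm/mdadm.conf before chrooting"
fi
```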
Mount the critical virtual filesystems. Run the following as a single command:
1. for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
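If you want to see exactly what that loop will do before running it for real, a dry run with echo in place of mount prints each bind command (the temp filename below is just for the demo):

```shell
# Dry run of the bind-mount loop: echo instead of mount shows exactly
# which commands would be executed.
for i in /dev /dev/pts /proc /sys /run; do
  echo "sudo mount -B $i /mnt$i"
done > /tmp/binds.demo
cat /tmp/binds.demo
```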
Finally (well, almost finally), chroot into your normal system:
1. sudo chroot /mnt
Reinstall GRUB 2 (substitute the correct device: sda, sdb, etc.; do not specify a partition number):
1. grub-install /dev/sdX
If the system partitions are on a software RAID install GRUB 2 on all disks in the RAID. Example (software RAID using /dev/sda and /dev/sdb):
1. grub-install /dev/sda
2. grub-install /dev/sdb
Recreate the GRUB 2 menu file (grub.cfg)
1. update-grub
Exit chroot: CTRL-D on keyboard
And voilà! Reboot and you are good to go.