@umpirsky
Last active September 13, 2024 22:26
Install Ubuntu on RAID 0 and UEFI/GPT system
# http://askubuntu.com/questions/505446/how-to-install-ubuntu-14-04-with-raid-1-using-desktop-installer
# http://askubuntu.com/questions/660023/how-to-install-ubuntu-14-04-64-bit-with-a-dual-boot-raid-1-partition-on-an-uefi
sudo -s
apt-get -y install mdadm
apt-get -y install grub-efi-amd64
# Wipe both disks and create identical GPT layouts:
# ESP (100M), RAID member for swap (8G), RAID member for root (rest)
sgdisk -z /dev/sda
sgdisk -z /dev/sdb
sgdisk -n 1:0:+100M -t 1:ef00 -c 1:"EFI System" /dev/sda
sgdisk -n 2:0:+8G -t 2:fd00 -c 2:"Linux RAID" /dev/sda
sgdisk -n 3:0:0 -t 3:fd00 -c 3:"Linux RAID" /dev/sda
# Replicate the layout onto /dev/sdb and randomize its GUIDs
sgdisk /dev/sda -R /dev/sdb -G
mkfs.fat -F 32 /dev/sda1
mkdir /tmp/sda1
mount /dev/sda1 /tmp/sda1
mkdir /tmp/sda1/EFI
umount /dev/sda1
# Build two RAID 0 arrays: md0 (swap) from sda2+sdb2, md1 (root) from sda3+sdb3
mdadm --create /dev/md0 --level=0 --raid-disks=2 /dev/sd[ab]2
mdadm --create /dev/md1 --level=0 --raid-disks=2 /dev/sd[ab]3
sgdisk -z /dev/md0
sgdisk -z /dev/md1
sgdisk -N 1 -t 1:8200 -c 1:"Linux swap" /dev/md0
sgdisk -N 1 -t 1:8300 -c 1:"Linux filesystem" /dev/md1
# Run the desktop installer with -b (skip bootloader installation);
# put swap on md0p1 and / on md1p1, and do not reboot when it finishes
ubiquity -b
# Chroot into the freshly installed system to finish GRUB setup
mount /dev/md1p1 /mnt
mount -o bind /dev /mnt/dev
mount -o bind /dev/pts /mnt/dev/pts
mount -o bind /sys /mnt/sys
mount -o bind /proc /mnt/proc
cat /etc/resolv.conf >> /mnt/etc/resolv.conf
chroot /mnt
nano /etc/grub.d/10_linux
# change quick_boot and quiet_boot to 0
apt-get install -y grub-efi-amd64
apt-get install -y mdadm
nano /etc/mdadm/mdadm.conf
# in each ARRAY line, remove the metadata= and name= fields
update-grub
mount /dev/sda1 /boot/efi
grub-install --boot-directory=/boot --bootloader-id=Ubuntu --target=x86_64-efi --efi-directory=/boot/efi --recheck
update-grub
umount /dev/sda1
# Clone the ESP to the second disk and register a UEFI boot entry for it,
# so the system can still boot from the second disk
dd if=/dev/sda1 of=/dev/sdb1
efibootmgr -c -g -d /dev/sdb -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi'
exit # from chroot
exit # from sudo -s
reboot
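After rebooting, a few read-only checks can confirm the setup; this is a sketch, and device/array names may differ on your system:

```shell
# Confirm both RAID 0 arrays (md0 and md1) are assembled and active
cat /proc/mdstat

# Show the block-device tree: sda2+sdb2 should back md0, sda3+sdb3 md1
lsblk

# List UEFI boot entries; both "Ubuntu" and "Ubuntu #2" should appear
efibootmgr -v
```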
@ssybesma

ssybesma commented Sep 18, 2020

Here is a picture of my Dell BIOS F12 boot menu. I want to purge both "Ubuntu 2" entries from it, but I don't know how yet, short of starting from scratch. At this point I'd rather not do that if I can learn to solve it without the nuclear option:

[photo 20200917_004849: Dell BIOS F12 boot menu]

@ssybesma

I should look before I ask questions...I think I solved it here:

https://askubuntu.com/questions/921046/how-to-remove-ubuntu-from-boot-menu-after-deleting-ubuntu-partition-in-windows

[Screenshot from 2020-09-18 14-26-32]

Rebooting and will upload resulting screenshot of boot menu to prove this worked.

@ssybesma

ssybesma commented Sep 18, 2020

It worked, with a side effect: Boot0013* (UEFI: ADATA SX8100NP) automatically got added to the list. It wasn't there before, and I can't figure out whether it's needed, or how to suppress/remove it if it's not. That may be why the extra "Ubuntu 2" line was there, but that's a kludgy solution IMHO:

[photo 20200918_143305_resized]

[Screenshot from 2020-09-18 14-46-01]

My effort to remove that seems successful...

[Screenshot from 2020-09-18 14-52-07]

...wonder if I'll be able to reboot and get back in or if it will magically appear again...stay tuned!

@ssybesma

ssybesma commented Sep 18, 2020

I rebooted, but the extra entry in the Dell BIOS F12 boot menu magically appeared again exactly as before, and the efibootmgr output is exactly the same (it contains Boot0013* as before), so no change. For kicks I decided to try booting from that 2nd entry, and it works (like one of the "Ubuntu 2" entries used to). SO, I guess someone just masked "UEFI: ADATA SX8100NP" (or whatever it would be called on his machine) in the original instructions by simply renaming it "Ubuntu 2".

Now I just have to figure out why that 2nd entry is forced to show up. My GUESS is that it's because the first partitions (named "EFI System") on two of the NVMe devices are both mounted and have a filesystem on them, which probably causes the locked one on the 3rd NVMe device to produce the Boot0013* entry. The question of the day is whether that 3rd one should ever have been mounted and given a filesystem. I suspect for BOOT SPEED reasons it SHOULD, which means the 2nd device's 1st partition should ideally be identical to those of the 1st and 3rd devices as well, and then we will have to suppress two entries in the Dell BIOS F12 boot menu instead of just one.

I must say this boots amazingly FAST after I got rid of that duplicate "Ubuntu 2" (which actually caused it to stall). It boots in only a few seconds; I never saw anything like that before!!!

To summarize, the ideal situation, and the way to correct ALL the oddball issues, is to figure out how to make all three 1st partitions identical "EFI System" partitions (with the original instructions being for only two devices, I THINK that was the original intent), and then how to suppress the Dell BIOS F12 boot menu entries for the 2nd and 3rd NVMe devices. Doing that would achieve perfection with nothing else left to do, and would help my machine boot that tiny bit faster with all three devices contributing to the boot speed.

@ssybesma

ssybesma commented Sep 19, 2020

I think the problems I'm running into, the inconsistent 1st partitions on the NVMe devices and the odd efibootmgr/Dell BIOS F12 boot menu entries, are confined to the section below. It somehow has to be straightened out so that using this with 3 NVMe devices not only works right but looks right. I haven't figured it out yet; I struggled with it yesterday and had to wipe and start over twice. The last two lines seem to have the most effect on this issue.

mount /dev/nvme0n1p1 /boot/efi
grub-install --boot-directory=/boot --bootloader-id=Ubuntu --target=x86_64-efi --efi-directory=/boot/efi --recheck <--IS ENTIRE LINE OPTIMAL?
update-grub
umount /dev/nvme0n1p1

dd if=/dev/nvme0n1p1 of=/dev/nvme1n1p1 <--CLONING 1ST TO 2ND NVME...PART OF ORIGINAL SCRIPT
dd if=/dev/nvme0n1p1 of=/dev/nvme2n1p1 <--CLONING 1ST TO 3RD NVME SEEMS REQUIRED IF MAX BOOT SPEED DESIRED

efibootmgr -c -g -d /dev/nvme1n1 -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi' <--THIS SEEMS TO NEED ADJUSTMENT

The 'dd' line seems like it's begging for the 3rd NVMe to be addressed by cloning, since that's what it's doing with the 2nd NVMe.
The most suspect line of all is the last one, for efibootmgr. I think that line has to be adjusted, not added to; I've done nothing but make the problem worse by adding another line or two. At one point I had SIX separate boot entries for Ubuntu, and one of them didn't even work. Is there any reason the efibootmgr line has to be there, or any reason it cannot just mention the 1st NVMe rather than the 2nd one?
So, starting from scratch again.
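For reference, a stray entry can usually be deleted with efibootmgr itself; this is only a sketch, and 0013 is just the entry number mentioned above. Note that many firmwares automatically re-create an entry for any disk carrying a valid ESP, which may explain an entry reappearing after deletion:

```shell
# List current entries with their Boot#### numbers
efibootmgr

# Select entry Boot0013 with -b and delete it with -B
efibootmgr -b 0013 -B
```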

@ssybesma

ssybesma commented Sep 20, 2020

Since I tinkered with this, things got worse and worse, to the point that I can't even get it to boot anymore, and I cannot figure out what broke it. I emailed Rod at Rod's Books with a $25 donation to see if he can help with the grub and efibootmgr issues I'm having. Linux is horrendously touchy if you do one thing slightly out of order or make what seems like one minor change.

Update:

FIXED. The problem? NVRAM in my BIOS was locking it up and preventing boot. I cleared it out by switching between Legacy and UEFI: I went to Legacy, rebooted, and then went back to UEFI. If you ever get stuck on the 2nd line of the boot sequence, "Loading initial ramdisk ...", try it and see if that gets you past the obstacle. I'm back in, better than ever now.

@retserj-jrester

Hey, I am very impressed by how simple this installation can be. Good job mate.

But I wonder, do you have such a simple script for a legacy BIOS machine? Sadly, EFI doesn't work on my PC.

Thank you for your response!

@ssybesma

ssybesma commented Oct 11, 2020 via email

@retserj-jrester

Thank you for this amazingly fast response.

For my part, setting up the working RAID partition is no problem. My software RAID0 over two HDDs works fine.
My only problem is a working bootloader for legacy BIOS that is able to work with the RAID as such.
I am pretty sure it will work somehow, but like you, I am clueless atm.

@retserj-jrester

Guys!
I did it. Ubuntu Desktop 20.04 up and running on software RAID 0.
Everything works and boots correctly.

@ffrogurt

If anyone is trying this with NVMe, keep in mind that sda1 becomes nvme0n1p1; the partition number needs a "p" prefix.

sdb2 would be nvme1n1p2, and so on. If you get "is in use/busy" during the mkfs.fat formatting, it might be due to a previous RAID still being active on the disk; check the mdadm commands to stop and remove a specific RAID, then try again.
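The renaming rule can be sketched as a small (hypothetical) shell helper, assuming the usual mapping sda→nvme0n1, sdb→nvme1n1, and so on:

```shell
# sd_to_nvme: translate an sdXN device name (e.g. sda1) into its NVMe
# equivalent (e.g. nvme0n1p1); helper name and mapping are assumptions
sd_to_nvme() {
  local dev=$1
  local letter=${dev:2:1}                          # 'a' from 'sda1'
  local part=${dev:3}                              # '1' from 'sda1'
  local idx=$(( $(printf '%d' "'$letter") - 97 ))  # a=0, b=1, ...
  local out="nvme${idx}n1"
  [ -n "$part" ] && out="${out}p${part}"
  printf '%s\n' "$out"
}

sd_to_nvme sda1   # → nvme0n1p1
sd_to_nvme sdb2   # → nvme1n1p2
```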

@swatwithmk3

Hey,

I really appreciate the work and wanted to thank you for the code. I also wanted to warn people that I had an issue with the array being locked after the OS installed; it was listed as "Encrypted" despite me not encrypting it. In this case I rebooted into the live USB again, reassembled the array, and proceeded from there. I also wanted to clarify, for inexperienced users like myself, that at line 29 you need to mount the partition where the OS was installed to /mnt, which in my case was /dev/md1p2 and not /dev/md1p1. I've also mentioned these issues in my fork of this repo, which also modifies the commands for a 4-disk RAID 0 array.
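The recovery path described above can be sketched roughly as follows from the live USB; the array and partition names are examples and may differ on your system:

```shell
# Install mdadm in the live session and assemble any existing arrays
sudo apt-get -y install mdadm
sudo mdadm --assemble --scan

# Check what was found, then mount the root partition (md1p2 here)
cat /proc/mdstat
sudo mount /dev/md1p2 /mnt
```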

@ssybesma

ssybesma commented Jan 25, 2022 via email

@swatwithmk3

swatwithmk3 commented Jan 25, 2022

Hi ssybesma,

Given that the EFI partition is copied to the second disk in the original script, I believe the intent is for all disks to be visible so that no matter which one you choose, you boot into the RAID array; that is why in my 4-disk script the EFI is copied to the other three disks. At line 55, the command gives a name you specify to the disk, and this name appears in the BIOS boot menu. Upon further inspection of my BIOS, it seems that one of the disks was named incorrectly, and I am not sure if this is an issue with the motherboard or with the OS. If you want each disk to be identified with a unique name in the BIOS, use the command at line 55 once for each disk except the first; in your case that is two times, since you have 3 disks. Make sure the correct /dev/nvme is set at the end of the command, and change the name from "Ubuntu #2" to whatever you want; that should appear in your BIOS. You can also try skipping the command altogether, which should leave the names in the BIOS as the manufacturer set them.

PS: I have just tested this to be sure, and choosing any of my array disks at bootup still loads Ubuntu normally. The fact that choosing the wrong one did not boot Ubuntu for you means that you either skipped copying the EFI partition to your other 2 SSDs or the copy of the partition was not successful. I hope I managed to clear things up and help :)

If you're still having trouble, a video recording of the installation you're doing might help with discovering the source; the next best thing would be a doc with every command you used for the array creation and OS installation.
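The per-disk EFI copy and naming described above can be sketched as a loop; the nvme0n1..nvme3n1 names are assumptions for a 4-disk NVMe system:

```shell
# Clone the ESP from the first disk to the other three and register a
# uniquely named UEFI boot entry for each, so any member disk can boot
for i in 1 2 3; do
  dd if=/dev/nvme0n1p1 of=/dev/nvme${i}n1p1
  efibootmgr -c -g -d /dev/nvme${i}n1 -p 1 \
    -L "Ubuntu #$((i + 1))" -l '\EFI\Ubuntu\grubx64.efi'
done
```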

@pdxwebdev

Confirmed this works for Ubuntu 22.04

@sparks79

sparks79 commented Nov 6, 2022

I've been looking for how to install RAID on Linux for a few years now ( with the OS as part of the bootable array ).
I tested Ubuntu around version 18 or thereabouts, and with the Server version it's possible to do it; you can install the desktop later to get a more user-friendly system.
I've also used Fedora 36, and it is by far the easiest version to set up RAID with ( so far, from my observations ).
It's a pity one can't swap Ubuntu into Fedora to get their easy RAID setup.
The way that ( umpirsky ) has done it looks interesting.
And I'm sure he's put a lot of time into it ( congrats ).
But it's a very long procedure ( 59 steps, to be precise ).
And am I right in guessing that each of the 59 has to be entered line by line?
That's a long process.
I would be happy to use Fedora because of its easy RAID setup.
But I'm not very conversant with Linux, and I just find Fedora too hard to use.
I have been an avid Windows user for the last 27 years.
And at the age of 74, I really need a Linux system in RAID 0 that is similar to Windows.
Or at least a bit easier to use.
Has anyone got any suggestions?

@JorgeBasaure

I would like you to explain those commands in detail. This is because I need the following:

  • Install the most recent version of Ubuntu on the 4 × 240 GB SSDs, in RAID 0, WITHOUT SWAP, since it makes no sense to put swap on the SSDs (my motherboard has Intel Rapid Storage to set up the RAID 0 from the BIOS (MSI Z97 MPOWER), and Ubuntu does not trigger the warning that it cannot be installed with IRST)
  • Leave the 4 × 2 TB HDDs configured in RAID 0, for storage
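Adapting the partitioning steps from the script above to four SSDs with no swap partition might look like this; it is only a sketch, and the NVMe device names are assumptions:

```shell
# One small ESP plus one RAID member per disk; no swap partition at all
for d in nvme0n1 nvme1n1 nvme2n1 nvme3n1; do
  sgdisk -z /dev/$d
  sgdisk -n 1:0:+100M -t 1:ef00 -c 1:"EFI System" /dev/$d
  sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/$d
done

# A single RAID 0 array across all four second partitions for /
mdadm --create /dev/md0 --level=0 --raid-disks=4 /dev/nvme[0-3]n1p2
```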
