[opensuse] UEFI Recommendations?
Building a new mini-server on a Gigabyte motherboard (GA-J1900N-D3V) that has both Legacy and UEFI support. It will maybe have software RAID mirrored drives and two network interfaces, and will handle mail and firewall/routing. I did a test install in Legacy mode to find out whether all the hardware works, and am satisfied that it does what I need it to do.

Before I get too committed to this machine I might want to install in UEFI mode for whatever protections it might offer against getting hacked. Which would be wiser: stick with old Legacy, or install in UEFI mode?

Any experiences and recommendations? Is openSUSE handling all aspects of UEFI well these days? Are upgrades a pain?

--
After all is said and done, more is said than done.
On 07/16/2016 04:57 PM, John Andersen wrote:
Before I get too committed to this machine I might want to install in UEFI mode for whatever protections it might offer against getting hacked.
Which would be wiser: stick with old Legacy, or install in UEFI mode?
Go with UEFI. I doubt that it significantly affects the chances of being hacked. However, GPT partitioning is better than legacy partitioning, and UEFI booting is more flexible than legacy booting.
Any experiences and recommendations? Is openSUSE handling all aspects of UEFI well these days? Are upgrades a pain?
Yes, openSUSE is handling it well, unless it is 32-bit UEFI (which shows up on some systems with Intel Atom processors). Upgrading is pretty much the same as a legacy upgrade.
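If you want to confirm which flavor of firmware the board actually presents before committing, one quick check from a UEFI-booted live system or test install (assuming a reasonably recent kernel) is:

  # Prints 64 or 32, i.e. the bitness of the UEFI firmware interface.
  # The /sys/firmware/efi directory only exists when booted in UEFI mode;
  # if it is missing, the system was booted via legacy BIOS/CSM.
  cat /sys/firmware/efi/fw_platform_size

Bay Trail desktop boards generally ship 64-bit firmware, but it is worth verifying rather than assuming.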
On Sat, Jul 16, 2016 at 3:57 PM, John Andersen <jsamyth@gmail.com> wrote:
Building a new mini-server on a Gigabyte motherboard (GA-J1900N-D3V) that has both Legacy and UEFI support. It will maybe have software RAID mirrored drives and two network interfaces, and will handle mail and firewall/routing.
OK, small problem because you want to do mirrored drives:

UEFI puts the bootloader on a FAT32 volume called the EFI System partition (ESP). The problem is that, other than firmware RAID, there is no way right now to properly keep the ESP on all member drives in sync as bootloader configuration stuff changes.

What *should* be true, and someone ought to check since I'm too lazy to do it right now:

- Each ESP should be created on each selected device for installation automatically (I think it's perverse to ask the user to create bootloader volumes).

- Each ESP should be populated with all necessary bootloader files such as shim and grubx64.efi.

- Each ESP should have a static, immutable grub.cfg whose sole purpose is to find the "real" grub.cfg. It can do this by using mduuid= to find and assemble the RAID, and then searching for a volume UUID to find an ext4 or XFS file system where the grub.cfg is found. On Btrfs, GRUB understands RAID levels 0, 1 and 10 by the Btrfs volume UUID, so nothing special is needed to find the real grub.cfg. That 2nd grub.cfg is the one that's modified whenever bootloader configuration changes happen, like kernel additions/deletions. That way there is a single real grub.cfg, which is replicated by software RAID, no tricks. And it means we don't need fancy ways to keep every grub.cfg on the ESP modified. This is the way Ubuntu does it, last time I checked. It is not the way Fedora does it on UEFI; there it's broken because the real grub.cfg only exists on the ESP and it is never synced.

- Do away with this idiocy of persistently mounting the EFI System partition. It shouldn't be mounted, let alone at /boot/efi. No other modern OS keeps the bootloader volume mounted all the time. There's no good reason it should be done on Linux either.

Anyway, where openSUSE on UEFI puts the grub.cfg determines whether this works out of the box or not. I suspect openSUSE does this per upstream GRUB, where grub.cfg is always at /boot/grub (or /boot/grub2 depending on distro specific naming) rather than on the ESP.

Note for systemd-boot/gummiboot users: upstream considers the only supportable way to do RAID booting is with proprietary firmware RAID, which is rather annoying. I'd really rather see the various file system GRUB modules put into EFI file system wrappers, so any EFI boot loader (really it's a boot manager, but...) can just understand any file system that GRUB already supports. And GRUB, as overly complicated as it can be, is pretty badass when it comes to recognizing almost anything: it'll even find grub.cfg, the kernel, and the initramfs on a *degraded* raid6, so long as the firmware recognizes all the remaining drives in the pre-boot environment. It's pretty remarkable how well supported these things are.

Not really related to the UEFI part, but to the mirroring part: if you want something that's stable and mature, pick mdadm RAID, or as a 2nd option LVM RAID. Don't do Btrfs RAID and expect it to work degraded for rootfs, because it doesn't. That work isn't done yet, and while your data is safe, you will not be able to boot without a lot of esoteric knowledge.

--
Chris Murphy
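As a rough illustration, the static stub grub.cfg described above (not something any installer generates today) could look like the following sketch on each ESP; the file system UUID is a placeholder, the /boot/grub2 prefix assumes the real grub.cfg sits on the mirrored root or /boot file system, and the module list depends on the md metadata version and file system actually used:

  # Stub grub.cfg on the ESP -- written once at install time, never edited.
  # mdraid1x handles 1.x metadata arrays; use mdraid09 for 0.90 metadata.
  insmod part_gpt
  insmod diskfilter
  insmod mdraid1x
  insmod ext2

  # Locate the file system holding the real grub.cfg by its UUID
  # (placeholder UUID shown) and hand control over to it.
  search --no-floppy --fs-uuid --set=root 1234abcd-5678-90ef-1234-567890abcdef
  set prefix=($root)/boot/grub2
  configfile $prefix/grub.cfg

Since the real grub.cfg lives on the mirrored file system, md keeps it identical on both drives and the stubs themselves never need to change.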
On 7/20/2016 3:34 PM, Chris Murphy wrote:
On Sat, Jul 16, 2016 at 3:57 PM, John Andersen <jsamyth@gmail.com> wrote:
Building a new mini-server on a Gigabyte motherboard (GA-J1900N-D3V) that has both Legacy and UEFI support. It will maybe have software RAID mirrored drives and two network interfaces, and will handle mail and firewall/routing.
OK small problem because you want to do mirrored drives:
UEFI puts the bootloader on a FAT32 volume called the EFI System partition. The problem is that other than firmware RAID, there is no way right now to properly keep the ESP on all member drives in sync as bootloader configuration stuff changes.
What *should* be true, and someone ought to check since I'm too lazy to do it right now:
- Each ESP should be created on each selected device for installation automatically (I think it's perverse to ask the user to create bootloader volumes)
- Each ESP should be populated with all necessary bootloader files such as shim and grubx64.efi.
- Each ESP should have a static, immutable, grub.cfg whose sole purpose is to find the "real" grub.cfg. It can do this by using mduuid= to find and assemble the RAID, and then search for a volume UUID to find an ext4 or XFS file system, where the grub.cfg is found. On Btrfs, GRUB understands raid levels 0, 1 and 10 by the Btrfs volume UUID, nothing special is needed to find the real grub.cfg. That 2nd grub.cfg is the one that's modified whenever bootloader configuration changes happen like kernel additions/deletions. That way there is a single real grub.cfg, which is replicated by software RAID, no tricks. And it means we don't need fancy ways to keep every grub.cfg on the ESP modified.
This is the way Ubuntu does it, last time I checked. It is not the way Fedora does it on UEFI; there it's broken because the real grub.cfg only exists on the ESP and it is never synced.
- Do away with this idiocy of persistently mounting the EFI system partition. It shouldn't be mounted let alone at /boot/efi. No other modern OS keeps the bootloader volume mounted all the time. There's no good reason it should be done on Linux either.
Anyway, where openSUSE on UEFI puts the grub.cfg determines whether this works out of the box or not. I suspect openSUSE does this per upstream GRUB, where grub.cfg is always at /boot/grub (or /boot/grub2 depending on distro specific naming) rather than on the ESP.
Note for systemd-boot/gummiboot users: upstream considers the only supportable way to do RAID booting is with proprietary firmware RAID, which is rather annoying. I'd really rather see the various file system GRUB modules put into EFI file system wrappers, so any EFI boot loader (really it's a boot manager, but...) can just understand any file system that GRUB already supports. And GRUB, as overly complicated as it can be, is pretty badass when it comes to recognizing almost anything: it'll even find grub.cfg, the kernel, and the initramfs on a *degraded* raid6, so long as the firmware recognizes all the remaining drives in the pre-boot environment. It's pretty remarkable how well supported these things are.
Not really related to the UEFI part, but the mirroring part, if you want something that's stable and mature, pick mdadm RAID, or as a 2nd option LVM RAID. Don't do Btrfs RAID and expect it to work degraded for rootfs, because it doesn't. That work isn't done yet, and while your data is safe, you will not be able to boot without a lot of esoteric knowledge.
Thanks Chris. You'd be surprised how hard it is to dig up this sort of information.

I've been running mdadm RAID since the 90s, although often I have used an additional device to boot from (since booting from mdadm was problematic in those early days). I might just stick with Legacy, as nobody has made a convincing case that I have anything to gain by doing it over.

Thx.

--
_____________________________________
---This space for rent---
On 21.07.2016 01:34, Chris Murphy wrote:
On Sat, Jul 16, 2016 at 3:57 PM, John Andersen <jsamyth@gmail.com> wrote:
Building a new mini-server on a Gigabyte motherboard (GA-J1900N-D3V) that has both Legacy and UEFI support. It will maybe have software RAID mirrored drives and two network interfaces, and will handle mail and firewall/routing.
OK small problem because you want to do mirrored drives:
UEFI puts the bootloader on a FAT32 volume called the EFI System partition. The problem is that other than firmware RAID, there is no way right now to properly keep the ESP on all member drives in sync as bootloader configuration stuff changes.
What *should* be true, and someone ought to check since I'm too lazy to do it right now:
The last step is to create EFI firmware boot manager entries for each ESP instance.
- Each ESP should be created on each selected device for installation automatically (I think it's perverse to ask the user to create bootloader volumes)
- Each ESP should be populated with all necessary bootloader files such as shim and grubx64.efi.
- Each ESP should have a static, immutable, grub.cfg whose sole purpose is to find the "real" grub.cfg. It can do this by using mduuid= to find and assemble the RAID, and then search for a volume UUID to find an ext4 or XFS file system, where the grub.cfg is found. On Btrfs, GRUB understands raid levels 0, 1 and 10 by the Btrfs volume UUID, nothing special is needed to find the real grub.cfg. That 2nd grub.cfg is the one that's modified whenever bootloader configuration changes happen like kernel additions/deletions. That way there is a single real grub.cfg, which is replicated by software RAID, no tricks. And it means we don't need fancy ways to keep every grub.cfg on the ESP modified.
This is the way Ubuntu does it, last time I checked. It is not the way Fedora does it on UEFI; there it's broken because the real grub.cfg only exists on the ESP and it is never synced.
Does it also support multiple ESPs and populate each one with bootloader files when the bootloader is updated? What about EFI boot menu entries?
- Do away with this idiocy of persistently mounting the EFI system partition. It shouldn't be mounted let alone at /boot/efi. No other modern OS keeps the bootloader volume mounted all the time. There's no good reason it should be done on Linux either.
Recent systemd does it by default: it configures the ESP for automount. But again, only for the very first ESP it finds; there is no provision for any automatic redundancy (nor am I sure how that could be done automatically).
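For anyone curious whether their systemd version is doing this, a quick check on a running system (unit names and the mount point vary, so this is only a starting point) is:

  # Any automount units systemd has set up, including one for the ESP
  # if the GPT auto-generator is handling it.
  systemctl list-units --type=automount

  # What, if anything, is currently mounted at the conventional location
  # used by openSUSE/GRUB installs (the path is a convention, not a rule).
  findmnt /boot/efi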
Anyway, where openSUSE on UEFI puts the grub.cfg determines whether this works out of the box or not. I suspect openSUSE does this per upstream GRUB, where grub.cfg is always at /boot/grub (or /boot/grub2 depending on distro specific naming) rather than on the ESP.
Correct. On non-Secure Boot EFI there is no grub.cfg on the ESP at all. On Secure Boot EFI, where we install a pre-built signed GRUB image, we do exactly as you describe: a minimal grub.cfg stub to find the real $prefix. Which arguably creates a security issue, because this stub itself is not signed.
Note for systemd-boot/gummiboot users: upstream considers the only supportable way to do RAID booting is with proprietary firmware RAID, which is rather annoying. I'd really rather see the various file system GRUB modules put into EFI file system wrappers, so any EFI boot loader (really it's a boot manager, but...) can just understand any file system that GRUB already supports. And GRUB, as overly complicated as it can be, is pretty badass when it comes to recognizing almost anything: it'll even find grub.cfg, the kernel, and the initramfs on a *degraded* raid6, so long as the firmware recognizes all the remaining drives in the pre-boot environment. It's pretty remarkable how well supported these things are.
Not really related to the UEFI part, but the mirroring part, if you want something that's stable and mature, pick mdadm RAID, or as a 2nd option LVM RAID. Don't do Btrfs RAID and expect it to work degraded for rootfs, because it doesn't. That work isn't done yet, and while your data is safe, you will not be able to boot without a lot of esoteric knowledge.
On Wed, Jul 20, 2016 at 9:29 PM, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
On 21.07.2016 01:34, Chris Murphy wrote:
What *should* be true, and someone ought to check since I'm too lazy to do it right now:
The last step is to create EFI firmware boot manager entries for each ESP instance.
Fedora's installer does do this, with the caveat I mention below...
This is the way Ubuntu does it, last time I checked. It is not the way Fedora does it on UEFI; there it's broken because the real grub.cfg only exists on the ESP and it is never synced.
Does it also support multiple ESPs and populate each one with bootloader files when the bootloader is updated? What about EFI boot menu entries?
Good questions. Fedora doesn't support multiple ESPs except indirectly, by mdadm RAID 1 using 0.90 metadata, which means gdisk type code FD00 is used for these ESPs rather than the proper EF00 type code. It seems most firmware doesn't actually care whether the ESP has type code EF00; it'll try to read the file specified by NVRAM. And each member device is added to NVRAM. So on the one hand they're individuals, on the other hand they're part of a logical array. Messy. I'm not sure about Ubuntu and openSUSE.

What's needed is a daemon that makes sure the proper NVRAM entries exist, otherwise there most likely won't be a proper fallback to a working drive; and this daemon can also make sure the ESPs are synced, including when the bootloader is updated (including shim).
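Until such a daemon exists, the two jobs can be done by hand. A rough sketch, assuming the ESPs are the first partition on /dev/sda and /dev/sdb and that the signed loader lives at the distribution's usual path (adjust both to the actual layout):

  # Register a firmware boot entry pointing at each drive's ESP, so there
  # is a fallback entry if the first drive dies (labels are arbitrary).
  efibootmgr -c -d /dev/sda -p 1 -L "opensuse"          -l '\EFI\opensuse\shim.efi'
  efibootmgr -c -d /dev/sdb -p 1 -L "opensuse-fallback" -l '\EFI\opensuse\shim.efi'

  # Re-copy the first ESP onto the second whenever shim, grubx64.efi or the
  # stub grub.cfg changes; neither ESP has to stay mounted afterwards.
  mkdir -p /mnt/esp1 /mnt/esp2
  mount /dev/sda1 /mnt/esp1
  mount /dev/sdb1 /mnt/esp2
  rsync -a --delete /mnt/esp1/EFI/ /mnt/esp2/EFI/
  umount /mnt/esp1 /mnt/esp2

It's crude compared to a proper daemon that reacts to bootloader updates, but it covers the two failure modes mentioned above: missing NVRAM entries and stale copies of the bootloader.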
Anyway, where openSUSE on UEFI puts the grub.cfg determines whether this works out of the box or not. I suspect openSUSE does this per upstream GRUB, where grub.cfg is always at /boot/grub (or /boot/grub2 depending on distro specific naming) rather than on the ESP.
Correct. On non-Secure Boot EFI there is no grub.cfg on the ESP at all. On Secure Boot EFI, where we install a pre-built signed GRUB image, we do exactly as you describe: a minimal grub.cfg stub to find the real $prefix. Which arguably creates a security issue, because this stub itself is not signed.
Right. I guess you have to trust something. And whatever modifies (or creates a new) grub.cfg should be capable of signing it, and GRUB proper should then verify the configuration file before trusting it. Another way of doing this is including it in measured boot.

--
Chris Murphy
I'm trying this now with Leap 42.2 alpha3 (openSUSE-Leap-42.2-DVD-x86_64-Build0109-Media.iso). YaST is driving me crazy; I actually can't figure out how to make a totally mirrored setup, it's completely non-obvious. It easily allows me to create a setup that's not bootable degraded.

When I try to manually create an ESP on the 2nd drive (the ESP on the 1st drive is set up automatically) in the expert partitioner, it complains "This mount point is already in use. Select a different one." OK, fine, so I change this to "Do not mount partition". I set the rest of the free space to become md members, set to RAID 1, and start the installation. This fails with an rpm-related input/output error which I didn't save, and the installation can't proceed; I'm dropped to a text UI where I choose the power-off option.

And now I've run into a whole new universe, where only one of the backing qcow2s I created was written to, so apparently the md RAID setup didn't really happen, and that qcow2 is 37 petabytes in size. No kidding. So... I guess today is a Chris Murphy extra special bug magnet day. This might take a while to sift through; there are several problems going on here, maybe even some of them are bugs.

qemu crashing: https://bugzilla.redhat.com/show_bug.cgi?id=1359324
the 37P file possibly resulting from or maybe causing the crash: https://bugzilla.redhat.com/show_bug.cgi?id=13593245
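For what it's worth, when a test install like this does finish, a few standard commands make it easy to see whether the installer really produced a fully mirrored, dual-ESP layout (nothing openSUSE-specific here):

  # Array state and member devices.
  cat /proc/mdstat

  # Partition, RAID and file system layout at a glance.
  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

  # Firmware boot entries -- there should be one per ESP if fallback
  # booting from the second drive is supposed to work.
  efibootmgr -v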
participants (4)
- Andrei Borzenkov
- Chris Murphy
- John Andersen
- Neil Rickert