alerighi 12 days ago

It's something you've been able to do for many years. I used to do it 10 years ago, when I got my first motherboard with UEFI. But is it useful? It saves a minimal amount of time in the boot sequence, but at what cost?

The bootloader (be it grub, or something simpler such as systemd-boot) is useful to me for a few reasons:

- it allows dual-booting with Windows easily: the motherboard boot menu is often not easy to access (you need to press some key combination within a short window), and modern bootloaders save the last boot option so that if Windows reboots for an update, Linux does not start

- it allows editing the kernel cmdline to recover a system that does not boot, e.g. starting in single-user mode. That can really save your day if you don't have a USB stick on hand and another PC to flash it

- it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes

- it has a voice for entering the UEFI setup menu: again, on most modern systems entering the UEFI setup with a keyboard combination is unnecessarily difficult and the timeout is too short

- it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmwares don't have a menu to do so.

  • yjftsjthsd-h 12 days ago

    If I'm understanding correctly, it might help to point out that in spite of the title they are proposing a bootloader, which can still let you modify the cmdline, boot to other OSs, etc. It's just that the bootloader is itself using the Linux kernel so it can do things like read all Linux filesystems for "free" without having to rewrite filesystem drivers.

    • kragen 12 days ago

      you seem to be saying that they are using two separate kernels, one for the bootloader and one for the final boot target

      the title text says 'Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line, contain everything they need to reach the final boot target' which sounds like they're not talking about using two separate kernels, one for the bootloader and one for the final boot target, but rather only one single kernel. possibly that is not the case because the actual information is hidden in a video i haven't watched

      https://news.ycombinator.com/item?id=40909165 seems to confirm that they are indeed not saying what you thought

      edit: they're proposing both configurations

      • comex 12 days ago

        I watched the video. They have two different configurations, one where there’s only one kernel, one where there are indeed two separate kernels with one kexec’ing to the other.

        • kragen 12 days ago

          thank you for your sacrifice and for the resulting correction to my error

      • thom 11 days ago

        To be clear: the win here is that there's no longer duplicated (or worse - less capable and outdated) code to do the same things in both the bootloader and the kernel, however the two versions of that code might be deployed.

      • samatman 12 days ago

        > It's just that the bootloader is itself using the Linux kernel

        This sentence does not say "the bootloader is itself another, separate, Linux kernel", so I'm not seeing him saying what you're saying he seems to be saying.

      • nmstoker 12 days ago

        >> you seem to be saying that they are using two separate kernels, one for the bootloader and one for the final boot target

        This doesn't make sense. There's nothing in the post you responded to which could realistically be interpreted as making that point. And there haven't been any edits, which might have explained your confusion.

        • kragen 12 days ago

          the comment says 'they are proposing a bootloader, which can still let you modify the cmdline, (...) the bootloader is itself using the Linux kernel'

          possibly you don't know this, but in order to run a kernel with a modified command line, the bootloader-kernel would need to run a second kernel, for example using kexec; linux doesn't have a useful way to modify the command line of the running kernel. that's why i interpreted the comment as saying that they are proposing using two separate kernels. in https://news.ycombinator.com/item?id=40910796 comex clarifies that they are in fact proposing using two separate kernels; the reason i was confused is that that's not the only configuration they're proposing

          • nmstoker 12 days ago

            What I know or don't know is irrelevant, because what matters is that your statement rests on bringing in external knowledge/assumptions, so it's clearly not what the commenter is saying (alone).

            • Dylan16807 12 days ago

              Using external knowledge to interpret the meaning of sentences is how every communication works.

              • nmstoker 11 days ago

                Indeed, but accusing someone of saying something based on unstated external knowledge/assumptions is the original problem here. They just needed to say words to the effect of "taken with point X what you say implies Y" and it would be fine and much less accusatory.

                • Dylan16807 11 days ago

                  I don't find "it sounds like you're saying" on a rather neutral technical topic to be very accusatory, personally.

                  • nmstoker 11 days ago

                    Fair point. I should perhaps have said putting words in someone's mouth. Anyway far too much on this side point, I'll bow out here.

    • garaetjjte 12 days ago

      It could kexec other kernels but probably won't be able to jump to other OS bootloaders after it already called ExitBootServices.

      • derefr 12 days ago

        The sibling comments who think you need to jump back to EFI to solve this, are thinking in layer-ossified terms. This is Redhat proposing this, and they're perfectly confident in upstreaming kernel patches to make this happen.

        I would assume that in their proposed solution, the kernel would have logic to check for a CMDLINE flag (or rather, the lack of any CMDLINE flags!) to indicate that it's operating in bootloader mode; and if it decides that it is, then it never calls ExitBootServices. All the EFI stuff stays mapped for the whole lifetime of the kernel.

        (Also, given that they call this a "unified kernel image", I presume that in the case where the kernel decides to boot the same kernel image that's already loaded in memory as the bootloader, then nothing like a kexec needs to occur — rather, that's the point at which the kernel calls ExitBootServices (basically to say "I'm done with caring about being able to potentially boot into something else now"), and transitions from "phase 1 initramfs for running bootload-time logic" into "phase 2 initramfs to bootstrap a multi-user userland.")

        • garaetjjte 12 days ago

          >and if it decides that it is, then it never calls ExitBootServices

          That's unlikely; I think that would mean you cannot use native drivers, at which point you're just writing another bootloader. I suspect they're only planning to kexec into the target kernel, not to chainload other EFI bootloaders.

          • drewdevault 11 days ago

            Something that hasn't been addressed by comments here yet is that you could implement EFI boot services in the Linux kernel and essentially turn Linux into a firmware interface. Though note that I generally shy away from any attempts to make the kernel into a really fat bootloader.

            • derefr 11 days ago

              I mean, you can and you can't.

              AFAIK, the UEFI spec imposes no requirement that (non-hotplug) devices be re-initializable after you've already initialized them once. Devices are free to take the "ExitBootServices has been called" signal from EFI and use it to latch a mask over their ACPI initialization endpoints, and then depend on the device's physical reset line going low to unmask these (as the device would start off in this unmasked state on first power-on.)

              Devices are also free to have an "EFI-app support mode" they enter on power-on, and which they can't enter again once they are told to leave that mode (except by being physically reset.) For example, a USB controller's PS2 legacy keyboard emulation, or a modern GPU's VGA emulation, could both be one-way transitions like this, as only EFI apps (like BIOS setup programs) use these modes any more.

              Of course, presuming we're talking about a device that exists on a bus that was designed to support hotplug, the ability to "logically" power the device off and on — essentially, a software-controlled reset line — is part of the abstraction, something the OS kernel necessarily has access to. So devices on such busses can be put back in whatever their power-on state is quite easily.

              But for non-hotplug busses (e.g. the bus between the CPU and DRAM), bringing the bus's reset line low is something that the board itself can do; and something that the CPU can do in "System Management Mode", using special board-specific knowledge burned into the board's EFI firmware (which is how EFI bring-up and EFI ResetSystem manage to do it); but which the OS kernel has no access to.

              So while a Linux kernel could in theory call ExitBootServices and then virtualize the API of EFI boot services, the kernel wouldn't be guaranteed to be able to actually do what EFI boot services does, in terms of getting the hardware back into its on-boot EFI-support state.

              The kernel could emulate these states, by having its native drivers for these devices configure the hardware into states approximating their on-boot EFI-support states; but it would just be an emulation at best. And some devices wouldn't have any kind of runtime state approximating their on-boot state (e.g. the CPU in protected mode doesn't have any state it can enter that approximates real mode.)

          • derefr 12 days ago

            You're right (as I saw another comment cite the primary-source for); but I'm still curious now, whether there'd be a way to pull this off.

            > I think that would mean you cannot use native drivers

            Yes, that's right.

            > at which point you're just writing another bootloader

            But that's not necessarily true.

            Even if you could only use EFI boot+runtime services until you call ExitBootServices, in theory, an OS kernel could have a HAL for which many different pieces of hardware have an "EFI boot services driver" as well as a native driver; and where the active driver for a given piece of discovered hardware could be hotswapped "under" the HAL abstraction, atomically, without live HAL-intermediated kernel handles going bad — as long as the kernel includes a driver-to-driver state-translation function for the two implementations.

            So you could "bring up" a kernel and userland while riding on EFI boot services; and then the kernel would snap its fingers at some critical point, and it'd suddenly all be native drivers.

            Of course, Linux is not architected in a way that even comes close to allowing something like this. (Windows might be, maybe?)

            ---

            I think a more interesting idea, though, would come from slightly extending the UEFI spec. Imagine two calls: PauseBootServices and ResumeBootServices.

            PauseBootServices would stop all management of devices by the EFI (so, as with ExitBootServices, you'd have to be ready to take over such management) — but crucially, it would leave all the stuff that EFI had discovered+computed+mapped into memory during early boot, mapped into memory (and these pages would be read-only and would be locked at ring-negative-3 or something, so the kernel wouldn't have permission to unmap them.)

            If this existed, then at any time (even in the middle of running a multi-user OS!), the running kernel that had previously called PauseBootServices, could call ResumeBootServices — basically "relinquishing back" control over the hardware to EFI.

            EFI would then go about reinitializing all hardware other than the CPU and memory, taking over the CPU for a while the same way peripheral bring-up logic does at early boot. But when it's done with getting all the peripherals into known-good states, it would then return control to the caller[1] of ResumeBootServices, with the kernel now having transitioned into being an EFI app again.

            [1] ...through a vector of the caller's choice. To get those drivers back into being EFI boot services drivers before the kernel tries using them again, naturally.

            It's a dumb idea, mostly useless, thoroughly impractical to implement given how ossified EFI already is — but it'd "work" ;)

            • Joker_vD 11 days ago

              Giving "the control of hardware back" is going to be extremely difficult. Just look at the mess that ACPI is: there are lots of notebooks that Linux can not put into/back from hibernation, and here we're talking simply about pausing/resuming devices themselves. What you are proposing means that an OS would have to revert the hardware back to the state that would be compatible with its state at the moment of booting, so that UEFI could manage it correctly. I don't think that's gonna happen.

      • yjftsjthsd-h 12 days ago

        This is being discussed more extensively in other comment threads but it sounds like maybe there's a way for it to just reboot but set a flag so the firmware boots into a different .efi next time (once).

        • p_l 11 days ago

          You can set BootNext variable to number of BootXXX variable you want to use once for next boot.
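
          For example, with efibootmgr (the entry number is illustrative and has to match an existing BootXXXX variable):

            sudo efibootmgr --bootnext 0003   # boot Boot0003 once, on the next reboot only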

      • TylerE 12 days ago

        Theoretically, couldn't it just write to a "boot this image next time" field (is the legacy MBR area available?) and trigger a reboot?

        • adtac 12 days ago

          The target image would need to reset that field so that a second reboot puts you back into the bootloader because otherwise you'll be stuck booting that image forever.

          • rcxdude 12 days ago

            The image doesn't need to do it, that's how UEFI bootnext works: the firmware resets the flag before it loads the image.

        • garaetjjte 12 days ago

          Well you could change default boot entry in efivars, but if you're relying on firmware for that why not use firmware provided boot menu anyway?

        • Arch-TK 11 days ago

          The boot disk isn't guaranteed to be writable.

          • TylerE 11 days ago

            Even after you’ve already installed a custom boot loader to it? I mean, I agree with you in principle, but we already have the chicken - can’t existence of the egg be assumed?

            • Arch-TK 10 days ago

              Aside from the DVD issue mentioned in the other person's comment: I have a design for a SED OPAL-based encryption setup where the system boots with a read-only boot partition, and it only becomes RW as part of the initramfs running (although optionally you can just keep it RO until you need to write to it, but this requires buy-in from the package manager).

              I think network booting with EFI would also suffer from a similar problem.

            • yjftsjthsd-h 10 days ago

              Consider a DVD that's EFI bootable; we can have whatever bootloader we want on the disc but it is not physically writable

    • cool_beanz 12 days ago

      You can have command line parameters baked into the EFISTUB. I also have two kernels, so there's two UKIs on /efi, and I have both added as separate boot options in BIOS.
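
      If anyone wants to reproduce this, the cmdline gets baked in when the UKI is built, e.g. with systemd's ukify (paths and cmdline here are illustrative, and the exact flags may vary by version):

        ukify build \
            --linux=/boot/vmlinuz-linux \
            --initrd=/boot/initramfs-linux.img \
            --cmdline='root=/dev/nvme0n1p2 rw quiet' \
            --output=/efi/EFI/Linux/arch-linux.efi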

  • ec109685 12 days ago

    Just because the boot loader is using Linux, it doesn’t prevent an alternative OS from being booted into, so there is nothing fundamentally stopping all of grub’s features from working in this new scheme.

    • jchw 12 days ago

      It is a bit more complex, though. Quoting "nmbl: we don’t need a bootloader" from last month[1]:

      > - Possibility to chainload from Linux while using Secure / Trusted boot: Dual-booting, although not supported on RHEL, is important for Fedora. While there are attempts to kexec any PE binary, our plan is to set BootNext and then reset, which will preserve the chain of trust that originates in firmware, while not interfering with other bootloaders.

      It could be seen as an advantage to do chainloading by setting BootNext and resetting. I think Windows even does this now. However, it certainly is a different approach with more moving parts (e.g. the firmware has to not interfere or do anything stupid, harder than you'd hope) and it's definitely slower. It'd be ideal if both options were on the table (being able to `kexec` arbitrary UEFI PE binaries) but I can't imagine kexec'ing random UEFI binaries will ever be ideal. It took long enough to really feel like kexec'ing other Linux kernels was somewhat reliable.

      [1]: https://fizuxchyk.wordpress.com/2024/06/13/nmbl-we-dont-need...

      • bityard 12 days ago

        Let's say I have a dual-boot system with two totally independent OSes, Systems A and B. It is powered down. I want to boot into System B but the EFI is configured to boot into System A by default.

        Am I correct in understanding that the offered solution here is to first boot into System A, find some well-hidden EFI configuration utility (which varies from OS to OS, if it even exists), and then tell EFI to boot into System B on the next reboot?

        If so, that's a pretty terrible experience.

        • jchw 12 days ago

          Sort of, except it's automated.

          Basically, System A's kernel boots. But, instead of immediately loading the System A userland, it loads a boot menu of systems that it reads from UEFI NVRAM and presents it to the user. So you select System B from the list, the menu sets BootNext in NVRAM and issues a reboot.

          In practice, the main UX difference is that it takes a bit longer and you'll see the UEFI vendor splash screen again after selecting the boot option.

          I'm not a user of Windows anymore but I seem to recall Windows doing something quite similar, where it had a boot menu that felt suspiciously like it was inside of Windows, and to actually change the boot target, it had to reboot.

          • derefr 12 days ago

            > instead of immediately loading the System A userland

            I mean, it kind of is loading the System A userland. At least the initramfs of it. AFAICT in the proposal the bootloader would now be a regular userland program living in the initramfs.

            I get the impression that the eventual goal would be to make this bootloader program into the "init(8) but for the initramfs phase of boot" — i.e. rather than there being a tool like update-grub that calls mkinitramfs, feeding it a shell-script GRUB generated (which then becomes the /init of the initramfs); instead, there'd be a tooling package you'd install that's related to the kernel itself, where you call e.g. kernel-update(8) and that would call mkinitramfs — and the /init shoved inside it would be this bootloader. This bootloader would then be running for the whole initramfs phase of boot, "owning" the whole bootstrap process.

            What the architecture is at that point, I'm less clear on. I think either way, this initramfs userland, through this bootloader program, will now handle both the cases of "acting like a bootloader" and "acting like the rest of initramfs-based boot up to pivot-root." That could mean one monolithic binary, or an init daemon and a hierarchy of services (systemd: now in your bootloader), or just a pile of shell scripts like GRUB gives you, just now written by Redhat.

            • jchw 12 days ago

              Yes of course. I really mean to say, before/instead of pivoting to the OS root. It sounds like this will synergize well with the UKI effort too, at least from a Secure Boot perspective.

          • gray_-_wolf 11 days ago

            I wonder if I have ever had a laptop where the UEFI worked correctly and without bugs. It always required some workaround somewhere to get stuff working.

        • superb_dev 12 days ago

          Presumably nmbl would show you a menu to select which OS to start if you’re dual booting. You wouldn’t have to manually set some UEFI variable.

      • DEADMINCE 10 days ago

        > I think Windows even does this now.

        Why? What advantage is there for Windows to do this?

        • jchw 10 days ago

          I'm not entirely sure, to be honest. If you google something like "windows 11 advanced startup settings" you'll see what I mean, though: the boot menu is now in Windows.

          I guess it allows the bootloader to be much simpler, at least in theory.

  • throwway120385 12 days ago

    If you embed an x86 system somewhere then you might find yourself not wanting to use GRUB because you don't want to display any boot options anywhere other than the Linux kernel. The EFI stub is really handy for this use case. And on platforms where UBoot is common UBoot supports EFI which makes GRUB superfluous in those cases.

    Many of the Linux systems I support don't have displays and EFI is supported through UBoot. In those cases you're using a character-based console of some sort like RS232.

    A lot of those GRUB options could also be solved by embedding a simple pre-boot system in an initial ramdisk to display options, which maintains all of the advantages of not using GRUB and also gives you the ability to make your boot selection. The only thing GRUB is doing here is allowing you to select which kernel to chain-load, and you can probably do the same thing in initramfs too through some kind of kernel API that is disabled after pivot root.
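
    Roughly, that chain-load step from the initramfs would just be kexec (a sketch; kernel paths and cmdline are illustrative):

      # stage the chosen kernel and initrd, then jump into them
      kexec -l /boot/vmlinuz-5.15.0 \
            --initrd=/boot/initramfs-5.15.0.img \
            --command-line="root=/dev/mmcblk0p2 ro console=ttyS0,115200"
      kexec -e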

    • Sesse__ 12 days ago

      I must admit that on U-Boot platforms, I use U-Boot EFI to load grub-efi, so that I can have a non-terrible bootloader…

    • cool_beanz 12 days ago

      I just have two kernels with two boot options in BIOS. I just hit F11 at boot time and choose a BIOS boot option for either kernel. Of course, you need to add the entries in UEFI, either from the UEFI shell or with some tool (efibootmgr). This scheme also supports secure booting and silent booting. The stubs are signed after being generated.

  • ziml77 12 days ago

    Does Windows not ensure that the UEFI boots back into Windows when it does an auto-reboot for updates? There's a UEFI variable called BootNext which Windows already knows how to use since the advanced startup options must be setting it to allow rebooting directly to the UEFI settings.

    Given that Windows tries to restore open windows to make it look like it didn't even reboot, I'm surprised they wouldn't make sure that the reboot actually goes back into Windows.

    • ale42 11 days ago

      No, it doesn't. Even a sysprepped image of Windows (which thus runs Setup to install drivers and finalize the installation) doesn't change the boot order on UEFI machines. I think just the installer does this when you first install Windows.

      • ziml77 11 days ago

        That's so weird. Normally I don't want my OS changing what is booted into on a whim, but going back into the same OS for cases like these just seems like sane behavior to me.

        • DEADMINCE 10 days ago

          There's good reason you might not want that behavior, and no reason to enforce it. Booting an alternate OS doesn't interrupt Windows update operations.

    • joe5150 12 days ago

      Not in my experience. For my typical dual boot situation where Grub is installed as the bootloader, I have to update the Grub settings like so to allow Windows updates to go smoothly:

        GRUB_DEFAULT=saved
        GRUB_SAVEDEFAULT=true
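
      With those set (and grub.cfg regenerated), you can also switch the saved entry from the Linux side; the entry name is illustrative, and some distros prefix these commands with grub2-:

        sudo grub-reboot "Windows Boot Manager"       # next boot only
        sudo grub-set-default "Windows Boot Manager"  # until changed again
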
      • lproven 11 days ago

        I am not certain about this, but I think that these options no longer work on UEFI machines. GRUB does not have control over what options are presented if GRUB isn't the selected bootloader. This stuff is BIOS-only.

        • ndiddy 11 days ago

          I have this working on a UEFI system. You select your Linux drive in the UEFI configuration (so the computer always boots into GRUB) and then GRUB will boot into Linux or Windows depending on the last saved option.

          • lproven 11 days ago

            Sure, but whether that GRUB entry is remembered as the default is up to the UEFI, not GRUB. If you pick another entry, GRUB is powerless to affect it.

            • ndiddy 11 days ago

              The GRUB_DEFAULT and GRUB_SAVEDEFAULT settings don't affect the UEFI settings, they only affect the default boot option in GRUB's boot menu. From the UEFI configuration perspective, the boot option never changes and it's always set as the drive with GRUB installed on it.

              • lproven 4 days ago

                Yes, that is indeed what I just said. :-)

  • Denvercoder9 12 days ago

    What kind of machines are people using where entering the UEFI boot menu is difficult? On all three of mine I just press F10 during the first 5 or so seconds the vendor logo shows, and I end up in a nice menu where I can select Windows, other kernels, memtest, or the EFI shell or setup.

    • mjg59 12 days ago

      One easy way to meet Microsoft's boot time requirements is to skip input device enumeration, so there's a lot of machines meeting the Windows sticker requirements where entering the firmware either requires a bunch of failed boots or getting far enough into the boot process that you can be offered an opportunity to reboot into the setup menu.

      • Dwedit 12 days ago

        I have a system where you need to hold down power when turning on the PC to get out of "Quick Boot" mode, and get the ability to get to the bios screen. It's a Sandy-Bridge-era Intel motherboard.

      • Denvercoder9 12 days ago

        Huh, today I learned. I'll consider myself lucky I didn't come across one of these machines yet.

        • Sakos 12 days ago

          I've encountered way too many of these and I hate them with all my being.

        • p_l 10 days ago

          If you want to have (legit) "Designed for Windows" and similar certification, you need to have an option to disable "fast boot" as well as an option to enable it.

          Fast boot involves skipping a bunch of slower pathways, using saved knowledge of the minimal set of devices to bring up to boot the OS on the happy path, and only falling back to the "slow path" if that fails.

          In fast boot, you're often unable to hit the button to enter the menu, and at most you can get to it through Windows' "reboot to firmware" option.

      • account42 11 days ago

        How many of these don't have a setting to turn quick boot off?

    • pavon 12 days ago

      I was working on my Dad's Dell laptop this weekend, and no matter how quickly I spammed the correct key (F12 in this case) it would miss it and continue to a full boot about 3/4 times. I never figured out if it is just picky about timing, or if it had different types of reboots where some of them entering BIOS wasn't even an option.

      • wongarsu 12 days ago

        Newer Dell laptops have a BIOS option to artificially delay the boot process by a configurable number of seconds to give you more time to enter the menu. Which should be proof enough that the default time window is an issue.

      • LH9000 12 days ago

        I start tapping as soon as the screen blanks, probably twice a second. I find this to be best for all BIOS/UEFI interfaces.

        • vrighter 11 days ago

          Mine has a large delay between when the keypress is registered and the menu actually shows up. But, the window for pressing the key itself is quite short. Also, if you spam the key too quickly, it will hang indefinitely instead of entering the menu necessitating a hard-reboot. Good times.

    • spockz 12 days ago

      On my last two uefi boards, if I press F12 or F8 too soon after power on it either stalls the boot, or it makes it restart. When the latter happens, I’m always too careful in pressing it causing me to miss the window of opportunity and booting right to the OS. Entering the bios or choosing the boot drive regularly takes me 3 tries. (Gigabyte with Intel and Asus with AMD.)

    • Am4TIfIsER0ppos 12 days ago

      Grub is the same everywhere. Motherboard bios/uefi is not. It isn't F10 for me.

      • 8n4vidtmkvmk 12 days ago

        How many computers are you operating though? Maybe you'll have to reboot a couple times until you figure out the proper key but then you'll know it. And if you forget it, you clearly aren't doing this often enough for it to be a problem either

        • ale42 11 days ago

          It really depends on users. Personally... ~100? Servers, clients, dual-boot configurations, lost machines with PXE boot, various brands and BIOS versions, some even still boot in legacy mode because their UEFI support is bad (like PXE boot doesn't work as well as it should, and as well as it does in "BIOS" mode). So having GRUB on basically all these machines, I'm very happy.

          If I could do the same with something that is as small in terms of footprint, and is as flexible as GRUB is (we also PXE-boot into GRUB loaded from the network, both in BIOS and UEFI mode), then I'm interested.

  • prmoustache 11 days ago

    > - it allows dual-booting with Windows easily: the motherboard boot menu is often not easy to access (you need to press some key combination within a short window), and modern bootloaders save the last boot option so that if Windows reboots for an update, Linux does not start

    Do people really dual boot a lot in 2024? It was a good use case when virtualization was slow, but decades after CPUs started shipping with virtualization extensions there is virtually zero overhead in using a VM nowadays, and it is much more convenient than rebooting and losing all your open applications just to start one application on another OS.

    > - it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmwares don't have a menu to do so.

    How many times in a decade are you running memtest?

    Getting to the UEFI firmware or booting another OS/drive is just a matter of holding one key on my ThinkPad. I would simply not buy bad hardware that doesn't allow me to do that. Vote with your wallet, dammit.

    I would also argue that you can perfectly well have grub sitting alongside a direct boot to the kernel in a UEFI setup. There are many bootloaders other than grub, and users are still free to use them instead of what the distro is shipping. UEFI basically allows you to have as many bootloaders as you have space for on that small FAT partition.

    • adham-omran 11 days ago

      > Do people really dual boot a lot in 2024?

      Yes.

      > there is virtually zero overhead in using VM nowadays

      Not for real-time audio production. The state of audio plugins having Linux support from vendors like EastWest, Spitfire, Native Instruments, iZotope is abysmal and even Wine does not run them nowadays.

      Even with a virtual machine that has pinned cores and USB pass-through of a dedicated audio interface, it practically locks you to one sample rate, any change causes crackles, try to load more than one plugin and you hear crackles. There is plenty of overhead.

    • messe 11 days ago

      > Do people really dual boot a lot in 2024?

      Yes, there are still use cases for it.

      The state of GPU virtualisation, for example, is a spectrum from doesn't exist/sucks to only affordable for enterprise customers.

      So unless you have a second graphics card to do pass through with, if you want to use your GPU under both OSes then you almost always have to dual boot (yes, there are other options like running Linux headless, but it's not even remotely easier to set up than dual boot)

      • prmoustache 11 days ago

        Most mainboards come with an integrated GPU though? If you use that one for the host OS, it's easy to pass the discrete one through, no?

        • korhojoa 11 days ago

          Consumer motherboards haven't had GPUs for a while now (IPMI usually comes with one, so servers do); they're built into the CPU instead (if at all - not all CPUs have them). These can't usually be easily allocated to a VM.

          • prmoustache 11 days ago

            I clicked randomly on a number of motherboards sold by the 2 brands that came to my mind, ASRock and Gigabyte, and all of them advertised HDMI and USB-C graphics outputs, so I am surprised by your claim that consumer motherboards don't have GPUs. If I am not mistaken, on the AMD Ryzen architecture it comes down to choosing a CPU with a G or 3D suffix, which indicates it has an integrated GPU.

            • eyeris 11 days ago

              It really still is the case that most if not all consumer motherboards don’t have built-in graphics. For the most part, especially on the Intel side, they’ve relied on the iGPU in the CPU for output for probably 10 years now.

              • prmoustache 11 days ago

                Well, my case still stands that you still have integrated graphics, provided not by the motherboard but by the CPU, that you can use on the host while you dedicate a discrete card to VM passthrough.

            • happycube 11 days ago

              Desktop Ryzen 4's and newer have a very small iGPU that's just enough to put up a desktop (and presumably a framebuffer fast enough to feed a discrete card's output into)

          • redox99 11 days ago

            He's saying the opposite: Host has integrated graphics, VM has dedicated GPU.

            • sqeaky 11 days ago

              How can the host have integrated graphics, if integrated graphics don't exist?

              Per Korhojoa's comment and my personal experience, plenty of desktop CPUs simply don't have integrated GPUs. Consumer mainboards simply don't come with them at all. Consider my previous workstation CPU, top of the line a few years ago and no iGPU: https://www.amd.com/en/products/processors/desktops/ryzen/50...

              Integrated GPUs are a feature of server mainboards, so that there is something to display with for troubleshooting, but not of any retail mainboards I am aware of. They are a feature of some consumer-grade CPUs designed for either budget or low-power gaming. They simply don't exist on all CPUs: consider the AMD 5600, 5600X, and 5600G, last-gen mid-range CPUs adequate for gaming, where the X had a little more clock speed and the G had an iGPU.

              • redox99 11 days ago

                Most AM5 CPUs have integrated graphics. It's also quite common on Intel.

              • prmoustache 11 days ago

                > and the g had an iGPU.

                So you are contradicting yourself.

                • sqeaky 11 days ago

                  This is a fundamentally dishonest take. I provided three specific CPUs that varied by just the letter at the end where some had an iGPU and some didn't. I am being honest that some have it but that it isn't ubiquitous.

                  • prmoustache 10 days ago

                    Well, when you buy a desktop computer in 2024 there are usually 4 main ways:

                    - buying a ready-made computer from a brand --> it always comes with an integrated GPU. Some are even such a small form factor that you have to use an external Thunderbolt-connected GPU if you want a discrete one.

                    - you build your computer yourself from parts --> you decide your motherboard and CPU; if VM passthrough is something you want to do, you just buy the parts that fit your use case

                    - you buy a configurable prebuilt computer from an online or local vendor --> you just have to choose the right option in the configuration tool so that you get a motherboard/CPU that offers an integrated GPU.

                    - you buy second hand and you don't have an iGPU: you buy the cheapest GPU available, usually around $10 to $25, and you have your second GPU that the host can use.

                    Even when you are using a laptop, having 2 GPUs is really not complicated in 2024, especially with Thunderbolt external GPU cases/adapters.

                    Bottom line: you only have one GPU if you actively choose not to have 2.

                    • michaelmrose 10 days ago

                      The average PC is already a trade-off that costs the average user around $800, and nearly 2/3 would need a substantial RAM upgrade, a new GPU, or both to make gaming through VM passthrough a reality. Most people aren't looking to buy new hardware and learn new tech to game.

                      It sounds like a useful toy for those who already enjoy playing with their computer as much as playing the game.

                      That said wouldn't limiting the host to integrated graphics (or whatever you get for $25) be a substantial limitation compared to using wine/proton or dual booting?

                      • prmoustache 10 days ago

                        > Most people aren't looking to buy new hardware and learn new tech to game.

                        Most people don't play games.

                        Most people who play games beyond solitaire or web games just buy a PlayStation, Xbox, or Switch.

                        Only a relatively small fraction of people playing AAA games use a computer for that: the most hardcore, and the most willing to spend money on a gaming rig. And I am pretty sure most of them aren't the least bit interested in dual booting, because they would have a desktop gaming rig and a laptop for everything else anyway. Only a tiny fraction of gamers is probably interested in dual booting. You are part of that tiny group. Fine. The nmbl tool presented in this conference does not prevent dual booting anyway, so I am not even sure why people act as if they should be offended that grub might someday be replaced by something else with more capabilities.

                        • michaelmrose 10 days ago

                          Only a tiny portion of people are interested in Linux in the first place. Of those it seems like around 25% dual boot.

                          https://linux-hardware.org/?view=os_dual_boot_win

                          It doesn't make sense to ex post facto try to justify what people SHOULD do when we can look at what in fact they actually do.

                          The idea that the only people who play PC games are those who ONLY play AAA games on their souped-up rigs is also counterfactual. People play games on everything from 8-year-old laptops to $5000 custom-built rigs with RGB everything. You are oversimplifying: the universe consists of many and varied irrational individuals, not spherical cows.

                          Dual booting is simple and suitable for nearly 100% of machines running Linux.

                          Wine/Proton is suitable for nearly 100% of machines running Linux. Steam has reduced this complexity to a few clicks for the majority of titles.

                          GPU passthrough is unsuitable for 70-80% of configurations and by dint of complexity undesirable for nearly everyone which is why virtually nobody does this.

                          • prmoustache 10 days ago

                            > Wine/Proton is suitable for nearly 100% of machines running Linux. Steam has reduces this complexity to a few clicks for the majority of titles.

                            Why would one dual boot if games work so well?

                            • michaelmrose 10 days ago

                              Because people don't want to play "games" in the abstract; they often want to play a particular game, and if it doesn't work, it doesn't work. Also consider how many people are new and have an existing computer with Windows: the standard play is to dual boot first and then possibly transition to Linux only if it works well enough for their usage.

                        • sqeaky 10 days ago

                          Approximately half of gaming revenue is from PC customers. It wavers up and down depending on exactly what metric you want to use and when the last console refresh was.

                          You are correct on the complexity cost and how most people, even those with nice gaming computers, just don't want to deal with more complexity than needed. Even mandating a store app that works causes a significant hit to conversion rates. EA couldn't give away Dead Space, a previously successful AAA title, when it was bundled with their store.

                          • prmoustache 10 days ago

                            > Approximately half of gaming revenue is from PC customers

                            Whales.

                            • michaelmrose 9 days ago

                              You are thinking of pay-to-win games with microtransactions. While this trash HAS come to the PC platform, there is no reason to believe it represents any substantial portion of the revenue in PC gaming.

                            • sqeaky 10 days ago

                              I suppose everyone is entitled to their opinion, but most people base it on something. You are free to do whatever you're doing, but I hope no one takes you seriously.

    • drtgh 11 days ago

      > Do people really dual boot a lot in 2024?

      One of the big problems is with graphics cards, because the vendors block a driver feature (SR-IOV) on consumer GPUs that would allow single-GPU passthrough for VMs.

      The alternative is to leave the host headless (a reboot is needed, and the VM needs to run as root), or to use two graphics cards (wasting power, hardware resources, etc.), in which case you also need either an extra latency-adding layer inside the VM to send the graphics back to the screen, or two cable inputs to the monitor.

    • zik 11 days ago

      > Do people really dual boot a lot in 2024?

      Yes. I work on Linux and play most games on Windows. Playing games on a VM is... pretty terrible.

    • michaelmrose 10 days ago

      > Do people really dual boot a lot in 2024

      Seems the answer is yes. https://linux-hardware.org/?view=os_dual_boot_win

      > there is virtually zero overhead in using VM nowadays

      It might be more accurate to say that if you have a fast computer with lots of resources, running a basic desktop in a VM feels perceptibly native. This makes it a great answer for running Windows software that neither needs a discrete GPU nor direct access to hardware, on the minority of machines that are capable enough for this to be comfortable.

      In actuality, laptops are more common than desktops, and the majority of computers have 8GB of RAM or less (60% across all form factors, 66% of laptops). That just isn't enough to comfortably run both.

      https://linux-hardware.org/?view=memory_size&formfactor=all

      Furthermore, most Linux users are comfortable installing and running Windows and Linux, whereas they may or may not be familiar with virtualization.

      Also, probably the number one reason someone might dual boot is still gaming, which, although light years ahead of years prior, still doesn't have 100% compatibility with Windows. In theory GPU passthrough is an option, but in reality it is a complicated niche configuration unsuitable for the majority of use cases. Anyone who isn't happy with steam/proton/wine is probably more apt to dual boot rather than virtualize.

    • tengwar2 11 days ago

      Yes, people dual boot. Particularly people who are contemplating a move from Windows. I'd hate to see Linux take the "my way or the highway" attitude of Windows.

      • prmoustache 11 days ago

        My experience when I had a dual boot in the late 90's was that rebooting is such an interruption that you never become fully comfortable in either OS. You just stick to the OS you are used to and never really make the switch.

        Whereas if you don't dual boot, you can switch completely to another OS and only use a VM or remote desktop for the handful of use cases you aren't ready to migrate yet (and then end up abandoning those completely as well).

        • rty32 11 days ago

          I don't think you got the point.

          The experience of using a VM is not good, that's exactly why people are doing dual boot. They know what they are doing.

        • lolinder 11 days ago

          > the late 90's was that rebooting is such an interruption that you never become fully comfortable on one of the OS

          Keep in mind that booting takes a tiny fraction of the time today that it did in the 90s.

          • prmoustache 11 days ago

            Regardless if it takes 20 seconds or 2 minutes it is still an interruption.

      • delfinom 9 days ago

        Hilariously, the windows boot manager supports dual booting as well. So one can use it instead of grub to dual boot.

    • blincoln 11 days ago

      I dual-boot on my personal desktop. I mostly use Debian, but there's a Windows partition for games and a few other Windows-specific things. The GPU in it was way too expensive to justify buying two, and I use it under Linux for ML, hash-cracking, etc.

      My original plan was to do everything in a Windows VM, but there was too much of a performance hit for some of my purposes, and VMWare doesn't allow attaching physical disks or non-encrypted VMDKs to a Windows 11 VM, so it's actually easier to have a data drive that's accessible from both OSes with dual boot than it would be with a VM.[1] I'm still disappointed about that.

      [1] Using HGFS to map a host path through to the VM is not an option because of how slow that is, especially when accessing large numbers of files.

  • 1vuio0pswjnm7 12 days ago

    As much as I generally detest indirection, for me a bootloader is a necessity; I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility. NetBSD's bootloader is best for me. UEFI seems like an OS unto itself. A command line, some utilities and network connectivity (a UNIX-like text-mode environment) is, with few exceptions, 100% of what I need from a computer. To me, UEFI seems potentially quite useful. But not as a replacement for a bootloader.

    • cool_beanz 12 days ago

      >I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility.

      Yes it does; I use it with two kernels, with a different entry for each stub in UEFI. Whenever I want to boot the non-default kernel I just hit F11 (for the BIOS boot menu, on my motherboard) and choose the boot option. You just need to add the boot options in UEFI, pointing to the corresponding EFI files. They also have the kernel command line parameters baked into them, and you can set your desired ones (silent boot, whatever).
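
      Roughly what that looks like (disk, partition and file names are illustrative):

        sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Linux"     -l '\EFI\Linux\linux.efi'
        sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Linux LTS" -l '\EFI\Linux\linux-lts.efi'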

      • 1vuio0pswjnm7 11 days ago

        Thank you.

        • fuzzfactor 9 days ago

          You can also craft a text file named startup.nsh; if it's present in the root (or thereabouts) of the FAT32 EFI partition, its UEFI shell commands will be executed on bootup instead of the default firmware selection.

          If a motherboard doesn't have enough UEFI commands in its built-in Shell (or has no built-in Shell at all), you'll want to include your own Shell.efi file right there along with any startup.nsh you might decide to deploy.

          This can also be good for USB booting where the removable USB device is in regular MBR layout rather than GPT-style-partitioning.

          Whether or not the whole USB drive is FAT32, as long as there is a proper EFI folder in a UEFI-recognizable filesystem, you can boot any other OS on any other filesystem, depending only on the contents of the EFI folder. Unless there is a startup.nsh for the UEFI to follow instead - then you might not even need an EFI folder. As intended. Boot floppies still work as designed too. Startup.nsh is more commonly expected to contain a reference to an EFI folder present on some recognizable filesystem, rather than work as a lone soldier, though. GPT-layout partitions are not supposed to be necessary either; they're only needed when you want more partitions than legacy BIOS will handle, or partitions that are too huge for MBR representation.
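
          A minimal startup.nsh might look something like this (filesystem number and paths are illustrative):

            @echo -off
            echo Booting via startup.nsh ...
            fs0:
            \EFI\Linux\vmlinuz.efi root=/dev/sda2 rw initrd=\EFI\Linux\initramfs.img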

          Now, any alternative to GRUB would by necessity also have to perform pleasingly on legacy-compatible systems where UEFI is not enabled, or it will remain a less effective alternative.

          The better a geek can handle both BIOS & UEFI, the more I would be able to trust their UEFI solution.

    • sholladay 12 days ago

      > I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility.

      Isn’t this how Apple’s Bootcamp works (at least on Intel based Macs)?

  • eru 12 days ago

    You left out the most important reason I went back to using grub: some motherboards have dodgy UEFI support, and having an extra layer of indirection seems to be more robust sometimes, for some reason.

  • radium3d 12 days ago

    I dual boot Win/Arch easily with EFISTUB setup. It's super quick to boot to a usb stick of arch if I need to edit anything with the configuration in an "emergency" situation as well. https://wiki.archlinux.org/title/EFISTUB
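
    The NVRAM entry for that is roughly the following (disk, partition and paths are illustrative; the wiki page has the details):

      sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 \
          --label "Arch Linux" --loader /vmlinuz-linux \
          --unicode 'root=/dev/nvme0n1p2 rw initrd=\initramfs-linux.img'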

  • nerdponx 12 days ago

    rEFInd is the magic tool here.

    Personally I still use GRUB for all of the reasons you stated above. But rEFInd + kernel gets you pretty close.

    • smeg_it 10 days ago

      I've used gummiboot before systemd ate it; and I've used rEFInd. Mainly, I just followed the excellent documentation @ https://www.rodsbooks.com/; that's also how I first familiarized myself with UEFI (Thanks Rod!).

      My brain has leaked all the information I understood (unfortunately). Is rEFInd still active? Is there a gummiboot fork (besides systemd)?

      Personally, I kind of hate Redhat calling itself that now; it's IBM. You can tell because all of the online knowledge from the community on their websites is now pay-walled. RIP Redhat (CentOS) (I'll miss you)

      P.S. Thanks Rocky Linux (and others like it)

    • zekica 11 days ago

      rEFInd is great! I wish they just updated the default theme to something nicer.

  • account42 11 days ago

    > it allows dual-booting with Windows easily: the motherboard boot menu is often not easy to access (you need to press some key combination within a short window)

    Hardly a problem in my experience - just hold down the key while booting.

    And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.

    > and modern bootloaders save the last boot option so that if Windows reboots for an update, Linux does not start

    You can change the EFI boot entries, including their priority, from the OS, e.g. via efibootmgr under Linux. It should be easy to set up each OS to make itself the default on boot, if that's really what you want.
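
    For example (entry numbers are illustrative):

      efibootmgr                      # list entries and the current BootOrder
      sudo efibootmgr -o 0002,0000    # put Boot0002 first in the boot order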

    > it allows editing the kernel cmdline to recover a system that does not boot, e.g. starting in single-user mode. That can really save your day if you don't have a USB stick on hand and another PC to flash it

    All motherboards I have used had an EFI shell that you can use to run EFI programs such as the Linux kernel with efistub with whatever command-line options you want.
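
    From the shell that is roughly (filesystem number and paths are illustrative):

      Shell> fs0:
      FS0:\> \EFI\Linux\vmlinuz.efi root=/dev/nvme0n1p2 rw initrd=\EFI\Linux\initramfs.img single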

    > it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes

    EFI can have many boot entries too.

    > it has a voice for entering the UEFI setup menu

    What does "a voice" here mean? Did you mean "a choice"? Either way, same as with the boot menu, you can just hold down the key while booting, IME.

    > it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmwares don't have a menu to do so.

    In my experience the EFI shell has always been accessible without a bootloader.

    • littlecranky67 11 days ago

      > And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.

      I've been dual-booting Linux since the kernel 2.2.x era, and being able to do it was a major driver in my migration away from Windows. It is super important for onboarding new users who can't yet get rid of Windows fully - mostly because of gaming (yes, Proton is nice, but anything competitive that uses anti-cheat won't work, and that is the majority share of gaming). And that is the reason I still boot into Windows on my dual-boot machine: gaming. For me that Windows install is just a glorified bootloader into GOG or Steam, yet desperately needed, and virtualization won't solve anything here.

      • daemin 11 days ago

        Ideally rather than dual booting I would welcome something like running both OSes in sort of a virtual machine but being able to switch between them as easy as with a physical KVM.

        Having to actually restart a PC is a pain in the ass which is why I don't dual boot.

        • littlecranky67 11 days ago

          grubonce "osname" && reboot

          is a pain in the ass? All the virtualization solutions are moot for gaming due to anticheat (plus 3d graphics virtualization not really working for windows)

    • myworkinisgood 11 days ago

      I have experience with a few different kinds of laptops: 1. Dell enterprise laptops generally have a robust EFI system which allows all kinds of `.efi` files to boot from `vfat` partitions. Dell laptops also have good firmware support for things like mokutil to work, so that people can use measured boot with their own version of Linux. They also work extremely well with self-encrypting NVMe drives. 2. HP consumer laptops, which are the worst of the lot and essentially prevent you from doing anything apart from stock configurations, almost as if on purpose. 3. All other laptops, which have various levels of incompetence but seem pretty harmless.

      For all laptops apart from Dell, Grub is the bootloader that EFI could never be.

  • zozbot234 12 days ago

    > - it allows editing the kernel cmdline to recover a system that does not boot, e.g. starting in single-user mode. That can really save your day if you don't have a USB stick on hand and another PC to flash it

    You can use the UEFI shell for this. It's kind of a replacement for the old MS-DOG command line.

  • markandrewj 11 days ago

    It is bold of Red Hat to claim this is 'their solution'. UEFI has already been used for years to boot without grub. Some examples: macOS, HP-UX, or systemd-boot via UEFI.

  • dheera 11 days ago

    > it allows editing the kernel cmdline to recover a system

    Except they've made it increasingly harder to do this over the years. Nowadays you have to guess when it is on the magic 1 second of "GRUB time" before it starts loading and then smack all the F keys and ESC key and DEL key at the same time with both hands and both feet because there is nothing on the screen that tells you which key it actually is.

    All while your monitor blanks out for 3 seconds trying to figure out what HDMI mode it is using, hoping that after those 3 seconds are over that you smacked the right key at the right time.

    And then you accidentally get into the BIOS instead of the GRUB.

    It used to be a nice long 10 seconds with a selection menu and clearly indicated keyboard shortcuts at the bottom, and you could press ENTER to skip the 10 second delay. That was a much better experience. If you're in front of the computer and care about boot time, you hit enter. If you're not in front of the computer, the 10 seconds don't matter.

    I know you can add the delay back, I just wish the defaults were better.

  • Timber-6539 11 days ago

    > - it allows editing the kernel cmdline to recover a system that does not boot, e.g. starting in single-user mode. That can really save your day if you don't have a USB stick on hand and another PC to flash it

    This is an indication of bad admin choices. The kernel defaults should not break the boot process, and if you add experimental flags for testing, you ought to have a recovery mechanism in place beforehand.

  • Dalewyn 12 days ago

    >it allows dual-booting with Windows easily

    Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.

    It's a feature that goes back to Windows NT (NTLDR) supporting dual boot for Windows 9x, but it can be repurposed to boot anything you would like so long as it can execute on its own merit.

    eg: Boot into Windows Boot Manager and, instead of booting Windows, it can hand off control to GRUB or systemd-boot to boot Linux.

    • fuzzfactor 9 days ago

      >Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.

      With the NT6 bootloader this appears to be limited to operating only in BIOS mode using bootmgr.exe. The traditional chainloading is still possible by pointing to a binary file which is a copy of a valid partition bootsector, whether it is a Microsoft bootsector or not.

      The equivalent BCD for UEFI mode uses bootmgr.efi (instead of bootmgr.exe), and does not seem to be capable of chainloading even when there is an equivalent BOOTSECTOR boot entry on the NT6 multiboot menu.

      It would be good to see an example of the NT6 bootloader successfully handling UEFI multibooting which includes starting Linux from the EXTx partition it is installed on. This still works perfectly in BIOS mode, as it has since early NT, but in UEFI not so much.

  • michaelmrose 5 days ago

    The suggested system would use BootNext to allow you to boot into Windows. You could also put something in front of it, like rEFInd.

  • herewulf 11 days ago

    It allows you to enter your passphrase to unlock your Linux LUKS partition before you even get a menu to chainload Windows.

    At least this is what an Arch Linux derivative (Artix) system of mine does, amusingly. It sort of gives an observer the impression that it's an encrypted Windows system on boot.

  • notorandit 11 days ago

    Maybe it is time to re-think the entire hardware boot process and ditch the BIOS altogether.

    • lproven 11 days ago

      It probably was, but UEFI was not a good answer.

      I'd have preferred CoreBoot or OpenFirmware, but the PC industry was too slow to move and let Intel -- still smarting from Microsoft forcing it to adopt AMD's 64-bit x86 extensions -- take control of the firmware.

      • surajrmal 11 days ago

        The problem with all of the alternatives is that they aren't friendly to alternative OSes. They mostly operate on a fork model, so upstreaming support for an OS doesn't mean everyone using that bootloader will support your OS. You either need to pretend to be Linux with a sort of boot shim, or build and flash a custom bootloader with support, which might be non-trivial if you cannot get access to the forked bootloader's code.

        UEFI is just a standard interface, not an implementation of a bootloader. This enables multiple UEFI-compliant implementations, as well as an easy way for an OS to support all UEFI-based bootloaders without needing to coordinate with the owner of the bootloader. While I'm sure most would agree the UEFI interface may not be ideal, it has a lot of industry momentum and is therefore probably the best option to get behind. There are a lot of players in this space (mostly hardware vendors), and coordinating anything is very difficult and takes a very long time.

        • lproven 11 days ago

          Both the suggestions I gave were designed and built to be FOSS and work with any OS.

          UEFI is more restrictive -- and tightly controlled by large industry vendors, not the community -- than either of them.

          So, no, I totally disagree on all points.

          • p_l 10 days ago

            OpenFirmware is at a similar level of complexity to UEFI, to be quite honest, and lacks certain mechanisms that were designed into ACPI (and inherited by UEFI) precisely to support multiple different operating systems without requiring the OS to have specialty drivers for every little bit.

            Sure, in the happy path you can depend on OpenFirmware giving you parameters like locations and addresses and "this device is compatible with X so you can use its driver", but it still requires that you have the specific driver the device is compatible with. A new hardware release was often incompatible with older versions of OSes because, unlike ACPI, you can't encode information like "hey, I'm compatible with interface X version Y" => "here's limited functionality because your driver is not updated for interface X version Y+1, but the computer will work".

            Instead you had special "hardware support releases" to get the OS to boot at all.

            CoreBoot and uboot by themselves provide even less support. They might be open source, but they provide an effectively closed platform to the end user. UEFI is in practice less restrictive because I only have to program to the interface, and in the absence of gross bugs I can expect things to work - whether it's a boot-time driver to support my super-duper-special storage add-in card, or a custom OS that I want to be available for normal people to try out by running on their random home PC. Hell, if the Linux kernel people hadn't said they would no longer accept "platform definition" patches, you probably still wouldn't have FDT used on ARM with uboot.

            • lproven 4 days ago

              Nothing is perfect, and you are probably right that any firmware for potentially SMP computers with multiple types of boot device is unavoidably complex.

              However, ISTM that relying on a magic partition on a fixed disk is a poor design, and while other types of firmware are not radically simpler, there are or were alternatives, and some of them are noticeably more FOSS. UEFI is EFI for x86-64, broadly, and EFI was proprietary. That is not a good thing, in my book. Something more cross-platform and less vendor-dependent would have been preferable, even if of comparable complexity.

              • p_l 2 days ago

                I had a long dissertation here destroyed by random F12 and backspace key that I can't deal with retyping again, but I fully disagree.

                UEFI mandates certain minimums regarding the boot process that are nice for user, developer, and admin UX. You can fully expand on supported filesystems, or even boot from non-FS sources. Even from paper tape if you want to. You're not bound to a magic partition on a fixed disk[1] any more than you are with OpenFirmware (and decidedly less than with IBM PC BIOS compatibles).

                None of the comparable alternatives were really FLOSS (by the time OFW went open source, EFI was shipping on x86 and amd64[2]), and coreboot/uboot/redboot/etc. were too limited, being by themselves an e-waste framework unless paired with an upper layer that provides an open platform for users and developers.

                EFI was available, back in the 1.1 timeline, as open source code for x86 and IA-64 (the IA-64-specific bits were called "SAL" IIRC), then some bright mind at Intel decided to close it down. Fortunately they open sourced it back as TianoCore, and we now have a FLOSS solution (it's as proprietary as OFW at this point in time, and it's more of an open platform than uboot/coreboot/etc).

                The available "less proprietary" options all created closed platforms, where you need excessive porting to boot anything the vendor didn't ship for you. It's trivial to make firmware so flossy it will make RMS shed tears of nostalgia for the KA-10, but it's not going to be useful for the majority if they ever want to run something not provided by the vendor. The complex firmware monitors of minicomputers/workstations happened because diagnostics were often needed, and some required at least some compatibility with third-party hardware, but they - including the origins of OpenFirmware - implicitly accepted a closed platform where the vendor would need to ship a special "hardware enabling" OS update, or an entire OS version, to match a new platform.

                UEFI might have proprietary roots, but it (and ACPI) is designed specifically to provide for the freedom of the end owner to run whatever crap they want, including an older version of an OS they already got used to.

                [1] Unless the hardware is too cheap, like Qualcomm ARM systems with UEFI where various critical services are patched in Windows drivers to be handled through magic files on the ESP, or in permissible CHRP OpenFirmware variants where a magic partition on a fixed disk is explicitly mentioned as an option.

                [2] EFI-based firmwares started shipping in the 2005~2008 timeframe on x86 and amd64, mainly because DXE provided a way easier method to integrate 3rd-party code. It was also designed from the start to handle multiple platforms, partially thanks to having IA-32 and IA-64 code simultaneously as early as EFI 1.0, which made it the easier option for handling future 64-bit platforms.

      • p_l 10 days ago

        UEFI, btw, is a late-1990s thing, with work starting because the BIOS was an unwieldy chimera that didn't match anything in hardware, and supporting things like network booting by hooking into the "boot BASIC program from cassette" subroutine was problematic.

        • lproven 4 days ago

          UEFI is a development of EFI, the proprietary firmware for the Intel Itanium.

          https://web.archive.org/web/20100105051711/http://www.intel....

          It was originally called Intel Boot Initiative, IBI.

          https://www.afterdawn.com/glossary/term.cfm/intel_boot_initi...

          You're right, work started in the late 1990s, but AFAIK nothing shipped until the 21st century: 2001.

          • p_l 2 days ago

            It's more that Itanium was the one system where it originally shipped, and that for no obvious reasons Intel closed-sourced it at some point. The Itanium proprietary firmware was, IIRC, "SAL" (somewhat related to modern UEFI's PEI layer).

            For reference, it was possible to run it on x86 (and even the ICC with EBC was provided!) around 2001, including DUET, which I ran from a floppy. There was close to the same level of source access as today with TianoCore, though probably under a different license.

            Then someone at Intel got the bright idea to close access to the source, and until that direction got reversed (which got us TianoCore) it was fully proprietary.

            • lproven a day ago

              Very interesting. Thanks for this!

  • 29athrowaway 12 days ago

    I need a bootloader that automatically deletes Windows partitions upon detection.

    And is also themed like XBill.

drewg123 12 days ago

I personally think they're moving in the wrong direction. I'd rather have "NMIRFS" (no more initramfs). Eg, a smarter bootloader that understands all bootable filesystems and cooperates with the kernel to pre-load modules needed for boot and obviates the need for initramfs.

FreeBSD's loader does this, and it's so much easier to deal with. E.g., it understands ZFS, and can pre-load storage driver modules and zfs.ko for the kernel, so that the kernel has everything it needs to boot up. It also understands module dependencies, and will preload all modules that are needed for a module you specify (similar to modprobe).

  • aaronmdjones 12 days ago

    As other sibling comments have explained, an initramfs is usually optional for booting Linux.

    If you build the drivers for your storage media and filesystem into the kernel (not as a module), and the filesystem is visible to the kernel without any userland setup required beforehand (e.g. the filesystem is not in an LVM volume, not on an MD-RAID array, not encrypted), it is fully capable of mounting the real root filesystem and booting init directly from it.

    The only point of consideration is that it doesn't understand filesystem UUIDs or labels (this is part of libuuid which is used by userland tools like mount and blkid), so you have to specify partition UUIDs or labels instead (if you want to use UUIDs or labels). For GPT disks, this is natively available (e.g. root=PARTUUID=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709 or root=PARTLABEL=Root). For MS-DOS disks, this is emulated for UUIDs only by using the disk ID and partition number (e.g. root=PARTUUID=11223344-02).

    You can also specify the device name directly (e.g. root=/dev/sda2) or the major:minor directly (e.g. root=08:02), but this is prone to enumeration-order upsets. If you can guarantee that this is the only disk it will ever see, or that it will always see that disk first, this is often the simplest approach, but these days I use GPT partition UUIDs.
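
    If you need to find the right value to put after root=PARTUUID=, udev already exposes the mapping under /dev/disk/by-partuuid. A rough Python sketch (untested; it assumes a typical udev-managed /dev):

        #!/usr/bin/env python3
        # List PARTUUID -> block device mappings as populated by udev,
        # i.e. the values you could pass as root=PARTUUID=... on the
        # kernel command line. Assumes a typical Linux system with udev.
        import os

        BY_PARTUUID = "/dev/disk/by-partuuid"

        for entry in sorted(os.listdir(BY_PARTUUID)):
            link = os.path.join(BY_PARTUUID, entry)
            target = os.path.realpath(link)  # e.g. /dev/sda2 or /dev/nvme0n1p2
            print(f"root=PARTUUID={entry}  ->  {target}")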

    • mlyle 12 days ago

      Yes, I think he realizes it's optional for booting Linux.

      In practice, we have generic kernels which require a lot of stuff in modules for real user systems running on distributions. Instead, though, we could have a loader which doesn't require this big relatively-opaque blob and instead loads the modules necessary at boot time (and does any necessary selection of critical boot devices). i.e. like FreeBSD does.

      There are advantages each way. You can do fancier things with an initramfs than you ever could in the loader. On the other hand, with the loader approach you can change what's happening during boot (e.g. loading different drivers) without a lot of ancillary tooling to recover a system.

    • StillBored 9 days ago

      Not entirely. It is possible for simple boot setups, but to enable LUKS and various other rootfs configurations, it usually requires some userspace probing, authentication, etc. I'm not so sure it would be easy to convince the kernel maintainers to add a user prompt for a PIN code to the kernel just to avoid having an initrd.

      • aaronmdjones 8 days ago

        > it is possible for simple boot solutions, but to enable LUKS and various other rootfs, it usually requires some userspace probing, authentication, etc

        I did mention that the root filesystem had to be both not encrypted and not require any userland setup.

        > I'm not so sure it would be easy to convince the kernel maintainers to add a user prompt for pin code to the kernel

        That wouldn't help, as the password is not usually used directly as a volume encryption key. In LUKS2 for example, a memory-hard KDF (Argon2) can be used to transform the password or PIN into a key-slot encryption key which is then used to decrypt that slot to obtain the volume encryption key (which is usually completely random and not at all related to or derived from any kind of password) in order to set up the dm-crypt mapping.
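
        To illustrate the shape of it, here's a toy Python sketch -- not the real LUKS2 on-disk format; scrypt stands in for Argon2 because it's in the standard library, and XOR stands in for the real keyslot cipher:

            import hashlib, os, secrets

            # Toy model: the passphrase only derives a key-slot key via a
            # memory-hard KDF; the actual volume key is random and stored
            # encrypted in the key slot. NOT the real LUKS2 format.
            volume_key = secrets.token_bytes(32)   # random dm-crypt master key
            salt = os.urandom(16)

            def slot_key(passphrase: bytes) -> bytes:
                return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

            def xor(a: bytes, b: bytes) -> bytes:
                return bytes(x ^ y for x, y in zip(a, b))

            keyslot = xor(volume_key, slot_key(b"hunter2"))   # "enrolled" slot on disk

            # Unlock: rerun the KDF and decrypt the slot to recover the volume key.
            assert xor(keyslot, slot_key(b"hunter2")) == volume_key

        A kernel-side PIN prompt alone wouldn't remove the need for this userspace dance.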

        > just to avoid having an initrd.

        No distribution has used an initrd in the last several years; initramfs reigns supreme now. They are not the same thing; an initrd is an image of a real filesystem (e.g. squashfs or ext2) that is read-only mounted onto / by the kernel during its startup, while an initramfs is (an optionally-compressed) cpio archive that is extracted to a read-write tmpfs (or ramfs if tmpfs is not available) mounted on / by the kernel during its startup and then the memory occupied by that cpio archive is freed because it is no longer necessary. An initramfs can also be built into the kernel image; an initrd cannot, and must be provided by a bootloader.

    • the_duke 11 days ago

      I think you just reinforced the parent's point.

  • ta8645 12 days ago

    The Linux kernel does not require an initramfs. You can build a kernel with everything compiled in; with no modules needed at all. Initramfs is used for generic kernels where you don't know beforehand which features will be required. This allows you to avoid wasting RAM on features you don't use. But it is optional.

    • drewg123 12 days ago

      I realize that. But every distro I've used uses an initramfs, so unless you want to build your own kernels, you're stuck with it, and with the painfully slow initramfs updates when you update packages and dkms (or similar) rebuilds the initramfs with the newer version of your out-of-tree modules.

      • kbolino 12 days ago

        Given the reason why "out-of-tree modules" exist, there's really no way to eliminate initramfs or something like it entirely in the general case. It might be possible to speed up the process of building the image (as long as the results are not "redistributed"), but this is a licensing and legal problem, not a technical one. FreeBSD is under a much more permissive non-copyleft license and so can legally bundle things that Linux cannot.

        • FeepingCreature 11 days ago

          You could probably build a "virtual initramfs":

          - linux tells the bootloader what folder the modules live in

          - bootloader just puts them all in memory

          - linux just picks what it needs.

          That's all the initramfs is anyways. The point is there's no reason to prebuild an image from inside Linux, you can just have grub assemble a simple fs on the fly.
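
          Roughly, the "picks what it needs" step is just a walk over modules.dep, the same data modprobe uses. A sketch (untested; the usual /lib/modules layout and module suffixes are assumptions):

              #!/usr/bin/env python3
              # Resolve a module plus its transitive dependencies from
              # modules.dep -- i.e. the set of .ko files a loader would
              # have to make available for the kernel to pick from.
              import os

              release = os.uname().release
              deps = {}
              with open(f"/lib/modules/{release}/modules.dep") as f:
                  for line in f:
                      mod, _, rest = line.partition(":")
                      deps[mod.strip()] = rest.split()

              def closure(mod, seen=None):
                  seen = set() if seen is None else seen
                  if mod not in seen:
                      seen.add(mod)
                      for dep in deps.get(mod, []):
                          closure(dep, seen)
                  return seen

              # Example: everything needed to load ext4 (name may differ per distro).
              target = next((m for m in deps
                             if m.endswith(("ext4.ko", "ext4.ko.zst", "ext4.ko.xz", "ext4.ko.gz"))), None)
              if target:
                  print("\n".join(sorted(closure(target))))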

        • nwallin 12 days ago

          initramfs can be eliminated if no kernel modules are required to boot the system. In practice, this means drivers for the motherboard, drivers for the block storage system, and the filesystem have to be compiled in as opposed to being modules. Certain 'interesting' disk schemes that require userspace configuration tools aren't possible, including LVM2, dmraid, and disk encryption; /etc/fstab has to hardcode the physical path; and there are probably a dozen other things I can't think of. If you want to do PXE boot over wifi and you have out-of-tree wifi drivers, I don't think that would work, though tbh PXE over wifi sounds insane.

        • medstrom 12 days ago

          You're talking about something like ZFS, and I get that they can't just compile it in, but a distro can still ship the module, if I'm not mistaken.

          ...But to load it at boot time it absolutely must be done through an initramfs. Is that right?

          • lproven 11 days ago

            > ...But to load it at boot time it absolutely must be done through an initramfs. Is that right?

            No, not AFAICS; it is incorrect.

            On UEFI the system boots from a FAT32 partition. Put the kernel directly on that FAT32 partition, and any necessary modules such as ZFS, and the kernel can load the ZFS module from FAT32 and then mount root directly without any need for an initramfs.

            This is how systemd-boot works.

            I am not advocating systemd-boot -- I found it a pain to work with -- but the point is that it's perfectly possible and doable. The initramfs is a bodge and it's time we did away with it. It should only be needed for installation and rescue media.

            • kbolino 11 days ago

              If you can size the EFI partition yourself, or it's already big enough (e.g. you didn't install Windows first), then yes this makes more sense.

          • aaronmdjones 12 days ago

            > But to load it at boot time it absolutely must be done through an initramfs. Is that right?

            Yes, because it cannot be part of the kernel image, or it would be illegal (a violation of the GPL license) to distribute that kernel. Therefore, it must be a module, and that module has to live somewhere and be loaded by something. If root is on ZFS, this must therefore live in an initramfs and be loaded by it so that the initramfs can mount the real root filesystem on the kernel's behalf.

            • lmm 12 days ago

              One could have the equivalent of DKMS build the modules into the kernel image instead of building the initramfs. I don't know how much practical overhead there is to the initramfs and pivot_root dance, but it feels far uglier than it should need to be to just load some modules.

            • megous 12 days ago

              It doesn't have to be a module if you're building the kernel for yourself. No violation in that.

              • anticensor 11 days ago

                That would run afoul of Turkish copyright vignette laws, that have an exemption for stuff everyone can use and redistribute royalty-free but no exemption for stuff that you can use royalty free but not redistribute.

              • aaronmdjones 12 days ago

                GP was talking about distribution kernels.

            • prmoustache 11 days ago

              The distro could automate the compilation of the kernel with ZFS on the user machine. In that case no license is violated, as the kernel image is not distributed with ZFS.

              That would probably make updates a lot slower than having zfs shipped in an initramfs, though.

              • account42 11 days ago

                It doesn't really have to be slower, as all that would need to be done at installation time is the final linking step. Linking prebuilt objects into a prepared kernel image shouldn't be inherently slower than assembling modules into an initramfs.

                • pests 11 days ago

                  Isn't the program that does the linking of the prebuilt objects into the prepared kernel itself a violation?

                  • kbolino 11 days ago

                    IANAL but, for the most part, no.

                    The "problem" with the GPL here arises not when you, the end user, take a piece of GPL-licensed software and combine it with other software, as is your GPL-protected right, but when you try to redistribute the result. You see, every end user has the same right to obtain all of the source code for GPL-licensed software that they receive, and for all of that source code to be licensed in a way compatible with the GPL. Once the kernel and non-GPL-licensed modules have been combined into a single piece of software, you are free to use it locally as you wish, but you can't share it, because you would be unable to meet the obligations you owe to the person you give it to.

                    Bear in mind that the modules are meant to be combined with the kernel, and the method by which that happens isn't specified by the module authors. So, a tool which makes all of this easier for you to do isn't circumventing any restriction meant to stop you from doing this, because no such restriction exists.

                    • pests 11 days ago

                      > So, a tool which makes all of this easier for you to do isn't circumventing any restriction

                      Is this the case though? I thought there was an argument that in order to create that tool - you would need enough knowledge that it would require you to basically create a derivative work.

          • kbolino 12 days ago

            Yes, as this is the closest equivalent to "dynamic linking" that can happen at boot time.

      • markhahn 12 days ago

        why would initramfs updates be slow? do you mean that most initramfses are large? how much time are we talking about?

        • nextaccountic 11 days ago

          On Arch Linux, mkinitcpio is slow. I don't know why that is.

    • linsomniac 12 days ago

      Is anyone really wanting to get back into the business of building their own kernels? I started using Linux heavily in '92, and I've built a lot of kernels, and am quite happy to not be building them anymore.

      • megous 12 days ago

        It's easy (2-3 commands), takes like a minute on a modern machine with trimmed down kernel configuration, and you can customize the kernel to your liking (write/patch drivers, embed firmware blobs, fix things that are broken or missing). What's to hate? :)

        Though I only do it for my ARM based devices currently.

        And if you're not throwing away build artifacts after each build, then getting stable updates is just a `git pull` and incremental make, which is usually very quick.

      • ssl-3 12 days ago

        I kind of liked compiling my own kernels. I felt I was better-connected to the state of things, and it was fun to see it all evolve from the vantage point of "make menuconfig".

        But initramfs isn't so bad, and it allows things like ZFS root to have a modicum of smoothness and integration.

      • teo_zero 12 days ago

        I build my own kernel. I did invest some time to select the right configuration, but now it's just a question of copying over the old .config and running "make". What's annoying about that?

      • account42 11 days ago

        I never stopped. What's so bad about building your own kernel?

      • checkyoursudo 11 days ago

        I understand not wanting to. I have been compiling my own kernels since about 2008, I think. I have occasionally thought about switching to something else, but really it has only gotten better (faster) over time.

        When I was young and spry, I used to compile them with every new minor revision. Now, it is just maybe a couple times per year. I think that cooling it on how often I do it has helped it not become annoying.

      • lproven 11 days ago

        > am quite happy to not be building them anymore.

        Me too. I go back to within about 3 years of that.

        But I expect my distro to handle this for me now.

        If the distro compiled and updated the kernel for my hardware then there'd be no need for an initramfs.

        While initramfs was a simplistic kludge, the UKI idea does not fix it, it wraps a kludge inside a fugly ball of lack of understanding.

      • prmoustache 11 days ago

        Kernel compilation is easily automated. I don't want to do that, though, and I like the initramfs approach mostly because I can take a hard drive out of a computer and boot the system on another one in case of a hardware failure.

        That is a lot faster than recovering from a backup.

    • jolmg 12 days ago

      > Initramfs is used for generic kernels where you don't know beforehand which features will be required.

      And also for e.g. cases where you've got some custom stack of block devices that you need to set up before the root FS and other devices can be mounted. It's not just about loading kernel modules.

    • arp242 12 days ago

      What's the reason it doesn't load those modules from the regular filesystem? That's what FreeBSD does, and seems to work well enough?

      • ta8645 12 days ago

        Because there are a lot of different types of filesystems supported. And you'd have to compile them all into the kernel. Which of course you can do, that is supported by the build system today. But Distros typically prefer to keep their kernels small, and not waste the RAM that would be taken up by compiling it all into the kernel.

        • mixmastamyk 11 days ago

          It must already have vfat and the ESP, so why not just copy a basic set of modules to a subfolder there?

    • Muromec 12 days ago

      I think the idea is, since modules map to device ids statically, the bootloader could have enough information to read them from the filesystem one by one.

      I don’t see the point of doing so however.
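
      For what it's worth, the static mapping does exist: each device exports a modalias string in sysfs, and modules.alias maps glob patterns to module names, which is what udev/modprobe use. A rough sketch (untested; the usual paths are assumptions):

          #!/usr/bin/env python3
          # Match each device's modalias against the glob patterns in
          # modules.alias to find which module would drive it.
          import fnmatch, glob, os

          release = os.uname().release
          aliases = []
          with open(f"/lib/modules/{release}/modules.alias") as f:
              for line in f:
                  parts = line.split()
                  if len(parts) == 3 and parts[0] == "alias":
                      aliases.append((parts[1], parts[2]))  # (pattern, module)

          for path in glob.glob("/sys/bus/*/devices/*/modalias"):
              with open(path) as f:
                  modalias = f.read().strip()
              for pattern, module in aliases:
                  if fnmatch.fnmatchcase(modalias, pattern):
                      print(f"{modalias} -> {module}")
                      break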

  • mjg59 12 days ago

    This requires a bunch of additional logic in the bootloader (eg, providing disk encryption keys), and since you're not doing this in a fully-featured OS environment (as you are in the initramfs case) it's going to be hard providing a consistent experience through the entire boot process. Having the pre-boot environment be a (slightly) cut-down version of the actual OS means all your tooling can be shared between the two environments.

  • zauguin 12 days ago

    This is a step in that direction. What they are proposing is not so much "no bootloader" but using a small Linux as the bootloader. I've been using a similar setup for some time and it gives some of these advantages. Especially, you get support for all relevant filesystems (you can support everything Linux supports, because it is Linux), it can dynamically build a minimal initramfs with only the needed drivers if you want to, it understands module dependencies (e.g. it can just dump the list of modules it uses itself), and it is generally much more flexible.

  • zdw 12 days ago

    FWIW, Grub has a read-only ZFS implementation to allow booting: https://git.savannah.gnu.org/cgit/grub.git/tree/include/grub...

    • E39M5S62 12 days ago

      Ditch grub and use Linux to boot Linux on ZFS - https://docs.zfsbootmenu.org/en/v2.3.x/ .

      • rabf 11 days ago

        This bootloader gives you some amazing features, such as booting distros from different zfs datasets or snapshots and chrooting into your system. It really does make grub and ext4 feel like the stone age.

      • prmoustache 11 days ago

        Didn't know about that one, thanks.

    • nubinetwork 12 days ago

      Grub uses an ancient version of the zfs code; it's tied to Oracle's zfs, and they refuse to update it to current OpenZFS.

      • yjftsjthsd-h 12 days ago

        Refuse, or legally can't? Oracle doesn't own the copyrights on commits made after illumos forked from the corpse of opensolaris.

        • nubinetwork 12 days ago

          A little of both. Everyone knows about Linus' refusal to touch CDDL code, but grub isn't the kernel.

          There have been several attempts to add features to the grub zfs code over the years, but there are several maintainers of grub who happen to be employees of Oracle, and typically the attempts go nowhere.

          I personally can't recommend using grub anymore. The whole "just make 2 pools" solution is unacceptable, and until Oracle stops gatekeeping, their code becomes more obsolete in my eyes.

      • dizhn 12 days ago

        Yeah update your zfs file system to gain new features and bam, you can't boot no more.

  • Arnavion 12 days ago

    Linux has multiple choices for filesystems for root, even if you only count the most popular ones. And on top of that they could be encrypted by LUKS. Duplicating all that into the bootloader is what GRUB does, and poorly. Putting the kernel into the ESP is much better in that regard.

  • Aurornis 12 days ago

    > I'd rather have "NMIRFS" (no more initramfs).

    In many cases, you don't need initramfs. I rarely use one in embedded systems.

    • fullstop 12 days ago

      I use them in embedded systems because they allow me to mount encrypted volumes without exposing the keys.

      • account42 11 days ago

        How does that work? The keys have to be loaded from somewhere.

        • fullstop 11 days ago

          The keys to decrypt the kernel are in u-boot. u-boot's keys are in the low level boot loader, and the keys for that are sometimes burned in write-only fuses on the microcontroller itself. Other chips have OP-TEE or similar frameworks, and you just chain the keys all the way down to the initramfs and that data is wiped when you start init.

          You're reliant on the capabilities of the chip that you're working with, and a flaw in that can unravel everything that you've done. In one case, I had to disable the on-chip boot agent once things were provisioned because of flaws in their implementation.

          In short, a signed applet could be sent to the chip to do things like read/write NAND or NOR, set fuse bits, etc. When an unsigned applet was sent, it was rejected as expected but they neglected to clear the memory contents in this case. So you could send a malicious applet, let it be rejected, and then just tell it to execute. It's kind of a fascinating writeup if you want to know more [1].

          1. https://labs.withsecure.com/advisories/microchip-atsama5-soc...

  • rcxdude 12 days ago

    Does FreeBSD's loader share code with the kernel? It does seem like a lot of duplication of systems to make it work in comparison to just using the same code.

  • m463 12 days ago

    > I personally think they're moving in the wrong direction

    the other direction is to put everything in systemd. :)

    • markhahn 12 days ago

      bad thing to joke about.

  • fooker 12 days ago

    The more smartness you put here, the more it makes life difficult for non-standard operating systems.

    And if this bit is closed source, and something doesn't work, you don't have a recourse.

  • dexen 11 days ago

    Seconding this.

    Having lucked into using Lilo and no initramfs for several years now, I'm very happy with the robustness and straightforwardness of the solution.

    In contrast, on the rare occasions I've dealt with somebody else's GRUB and initramfs setups, they have turned out brittle and complex.

  • sim7c00 12 days ago

    this exactly. freebsd's loader is one of the only sane ones ive seen. grub is an amazing piece of software but its really a mess to work with.

mjg59 12 days ago

A lot of the commentary here is based on misunderstandings of the capabilities and constraints of a UEFI environment and of what the actual goals of this project are, and I think it misses the mark to a large degree. Lennart's written some more explicit criticism at https://lwn.net/Articles/981149/ and I think that's a much more interesting set of concerns.

  • cycomanic 12 days ago

    I have to say I find Lennart's arguments quite unconvincing. As another person said, the vast majority of people just want to boot the most recent kernel by default (which this proposal could do well).

    But then when it comes to the other points: yes, I want to be able to reliably boot into other systems, but both systemd-boot and grub are notoriously bad at detecting other systems on disks (both use install-time detection IIRC). The only one which does a reasonable job is rEFInd. Even better, a kernel with appropriate drivers could add kernels/systems on USB disks to the selection (why do I have to go to the UEFI menu to boot from USB?).

    The next thing he completely ignores is booting into zfs or btrfs snapshots, which is not possible using systemd-boot AFAIK, and again would be much nicer to do with a kernel.

    • Certhas 11 days ago

      Also, from what I understand after watching some of the video demonstration and the Q&A, I could just have another EFI entry pointing at the nmbl configuration with a grub-like menu, and get an exact replica of the grub experience. Having to go through the BIOS boot menu for those rare occasions where I need it is perfectly reasonable.

    • unaindz 11 days ago

      Not that it detracts from your argument, but rEFInd can handle detecting bootable USBs afaik. It's just not enabled by default.

  • rcxdude 12 days ago

    I feel like that post misses the biggest thing that pulls people to GRUB: complicated boot sources and procedures. Filesystems that UEFI doesn't understand, more complex network boot sources, all that kind of complex messiness that GRUB enables and others don't. Now, whether those are good ideas or not is a different question, but I think this is a good concept for a full replacement for GRUB, as opposed to the existing replacements, which already cover the 90% case pretty well. (And I think it's got a case for handling the other cases OK: from the sounds of it they plan to lean on UEFI and A/B images to handle fallback, and it'll basically just work as a direct UEFI boot in the common case.)

  • rcxdude 11 days ago

    Thinking about it a bit more, though, it does feel like a hybrid approach is probably better. For dual-booting off local disks and other simple cases, just having the kernel and initramfs alongside other OS options makes a lot of sense, and you can use the UEFI boot menu or something deliberately simple like systemd-boot to select between them for dual-boot or recovery. For more complex cases (where your rootfs is not just something the kernel can mount on its own), you basically just want a process for building your initramfs to do that from a config, like grub (which is already how a lot of cases like that are solved anyway), and in extreme cases where you also want to stash a kernel in some other location you can use kexec from that. But for just a boot menu (which is already the minority case, and 90% of users in that case need nothing more), it feels even heavier than grub for little benefit.

  • kasabali 11 days ago

    > completely useless if you care about Measured Boot

    I stopped reading there. All these engineers who help build and defend this draconian crap should be forced to use only an iPad for the rest of their lives.

    • mjg59 11 days ago

      Measured boot is, in itself, under user control - you can seal whatever secrets you want to any specific state and they'll only be accessible in that situation. This has obvious benefits in terms of being able to (for instance) tie disk encryption keys to a known boot state and so avoid needing to type in a decryption phrase while still preventing anyone from being able to simply modify your boot process to obtain that secret. The largest risk around this is from remote attestation, and that's simply not something where the infrastructure exists for anyone to implement any kind of user restriction (and also it's trivial to circumvent by simply tying any remote attestation to a TPM that's not present at boot time and so can be programmed as necessary - it's just not good at being useful DRM)

      • kasabali 10 days ago

        > in itself

        Unfortunately nothing is "in itself" in the real world. All these so called security features end up locking down users more and more in their own devices.

    • tpoacher 11 days ago

      Of all the horrible punishments you could have envisioned, you went full-on "I have no mouth and I must scream" there...

      • fuzzfactor 9 days ago

        Merciless life sentence without any chance of parole.

        Envious of trustees having netbooks.

saltcured 12 days ago

This reminds me of MILO for booting Linux on some (?) DEC Alpha systems back in the 90s. I don't remember much about the actual firmware anymore. Much like today with UEFI, the system had some low-level UI and built-in drivers to support diagnostics, disk and network booting, etc.

MILO could be installed as a boot entry in the firmware-level boot menu. MILO was a sort of stripped down Linux kernel that used its drivers to find and load the real kernel, ending with a kexec to hand over the system.

No matter how you slice it, I think you'll always come around to wanting this sort of intermediate bootloader that has a different software maintenance lifecycle from the actual kernel. It is a fine idea to reuse the same codebase and get all the breadth of drivers and capabilities, but you want the bootloader to have a very "stability" focused release cycle so it is highly repeatable.

And, I think you want a data-driven, menu/config layer to make it easy to add new kernels and allow rollback to prior kernels at runtime. I hope we don't see people eventually trying to push Android-style UX onto regular Linux computers, i.e. where the bootloader is mostly hidden and the kernel treated as if it is firmware, with at most some A/B boot selection option.
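
On the data-driven menu/config point, the Boot Loader Specification drop-in files are roughly that shape: a handful of key/value lines per kernel. A rough Python sketch of the parsing side (untested; the directory and field names follow the BLS convention but are assumptions here):

    #!/usr/bin/env python3
    # Parse Boot Loader Spec style entries (one small text file per kernel)
    # and list them newest-first, so adding a kernel or rolling back is just
    # adding/removing a drop-in file.
    import os

    ENTRY_DIR = "/boot/loader/entries"

    entries = []
    for name in os.listdir(ENTRY_DIR):
        if not name.endswith(".conf"):
            continue
        fields = {}
        with open(os.path.join(ENTRY_DIR, name)) as f:
            for line in f:
                parts = line.strip().split(None, 1)
                if len(parts) == 2 and not parts[0].startswith("#"):
                    fields[parts[0]] = parts[1]
        entries.append((name, fields))

    # Naive lexicographic "newest first" sort; real tools compare versions properly.
    entries.sort(key=lambda e: e[1].get("version", e[0]), reverse=True)
    for name, fields in entries:
        print(fields.get("title", name), fields.get("linux"), fields.get("options", ""))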

  • lagniappe 11 days ago

    I remember LILO, the LInux LOader.

    • slackfan 11 days ago

      LILO is still perfectly functional. Works great with my slackware install on my workstation.

samsartor 12 days ago

My previous laptop was a Chromebook running Linux+Coreboot. Unfortunately the usual TianoCore UEFI BIOS people use had some bugs in the nvme and keyboard drivers, which I gave up fixing or working around (at the time). Obviously Linux had working drivers, because that's all ChromeOS is, so we set up a minimal Linux install as the Coreboot payload in the firmware flash, and I wrote a little Rust TUI to mount all visible partitions and kexec anything that looked like a kernel image. It worked like a charm and had all kinds of cool features, like wifi and a proper terminal for debugging in the BIOS! Based on that experience I don't see any reason why we don't just use Linux directly for everything. Why duplicate all the drivers?

The code is here, although it hasn't been touched in years: https://gitlab.com/samsartor/alamode-boot

userbinator 12 days ago

Does anyone still remember when you could just dd the Linux kernel to a floppy and it would be its own bootloader?

https://yosemitefoothills.com/LinuxBoot/BD-1Disk.htm

Here's some more documentation on this: https://www.kernel.org/doc/Documentation/x86/boot.txt

What's old is new again... except 100x more complex and likely more than necessary.

  • Thoreandan 11 days ago

    Just checked and amusingly I'd forgotten that boot/root predated LILO, I must've first seen LILO when I installed Softlanding Linux. Since I didn't have any networking on my home machine, Linux was basically a "Look, run GCC on your home machine!" option for '91 that didn't involve going through DJGPP's DOS port.

  • lproven 11 days ago

    That is exactly what I thought of when I read this post, yes.

linuxrebe1 12 days ago

I'm curious whether their proposal will be capable of handling multi-OS boots. I know grub can; I can have Linux and Windows and possibly even a third OS if I want. I am concerned that Red Hat's solution, though well-intended, may be rather myopic and commercial-only. What I fail to understand is what problem this solves for systems that I probably only reboot once or twice a year (given that it only works with Linux-only systems).

  • FredFS456 12 days ago

    You can switch OS's using the UEFI menu instead. It's not always convenient, depending on your UEFI implementation, however.

  • bootsmann 11 days ago

    The issue it solves, according to the talk, is that grub presents a fairly big attack surface for something that is sparsely maintained and that could be done in the kernel, which has a lot of active devs.

  • ack_complete 12 days ago

    Yeah, look at Windows 10 if you want to see how this can be done poorly. Its boot menu works by booting Windows 10 first and then restarting the computer if you choose another OS. This includes going all the way through POST again. Took something like two minutes end-to-end to get to Windows 7.

    • zamadatix 12 days ago

      I'm not sure I experienced the same with the Windows boot loader so maybe that behavior was something case specific instead of intended?

      • ack_complete 12 days ago

        Not sure, there might have been a fast path if you were booting to another Windows 10 install. The old legacy Windows Boot Manager also doesn't have the issue since it's much simpler and it executes in faux text mode before the OS boots.

      • kasabali 11 days ago

        That's the default behavior

        • fuzzfactor 9 days ago

          In bcdedit terms, that is known as the "Standard" bootmenupolicy, which is touch-screen compatible, and it has to reboot in order to reach an alternative OS selection other than the current bootmgr default.

          If you use bcdedit to set the "Legacy" bootmenupolicy, you can select any OS from the simple non-touch, text-based NT6 multiboot menu and it will boot right away without needing to go through POST again.

dataflow 12 days ago

I've thought about something like this before, but I have so many questions on just the basic premise...

First: Linux could already be booted directly from the UEFI manager. You don't need GRUB at all. So why a new scheme - why weren't they just doing that?

Second (and third, etc.): If I have multiple Linux installations along with a Windows installation, wouldn't this mean one of them now has to be the one acting as the boot loader? Could it load the other one regardless of what distro it is, without requiring e.g. an extra reboot? And wouldn't this mean they would no longer be on equal footing, since one of them would now become the "primary" one when booting? Would its kernel have to be on the UEFI partition...?

  • rcxdude 12 days ago

    Booting linux directly just boots you into that install. It doesn't give you a boot menu or any of the other functionality GRUB provides. This project is basically proposing building that in a small initramfs userland instead (which has the advantage of requiring much less effort and code duplication). It's functionally very similar to GRUB, including with regard to your last point: generally speaking at the moment one OS needs to be managing the boot menu, and when they fight over it things go badly (see the status quo where Windows will occasionally just insert itself as the default after an update). UEFI could in principle have fixed this, but the inconsistent implementation between vendors makes it an unreliable option for OS developers.

    (And in principle this system could load other linux distros assuming there was some co-ordination in how to do so. Windows is more difficult, as is interaction with secure boot)

    • dataflow 12 days ago

      > Booting linux directly just boots you into that install. It doesn't give you a boot menu or any of the other functionality GRUB provides. This project is basically proposing building that in a small initramfs userland instead

      I indeed understood that part, but their motivation for this was security. If you want security, you should want to boot directly into the kernel. And if you're the occasional user who has multiple OSes installed in parallel... you can just add more kernels from your dual-boot installs directly to the UEFI screen; there's really no need to go through any form of intermediate stage, whether kernel-based or boot-loader-based.

      What I'm trying to say is: as cool as this is from a technical standpoint, I just don't understand the root of the premise or motivation here whose optimal solution is this approach. Whom is RedHat trying to please with this? The small fraction of users who dual-boot Linux, or the rest of the users who just have a single install? And what problem are they actually trying to solve -- security, performance, or something else? Because the optimal solution to the first two doesn't feel like this one, unless they're targeting a niche use case I'm not seeing? E.g., do they have lots of enterprise users that boot off a network, but who would rather have a local Linux install whose sole job is to boot that...?

      • rcxdude 12 days ago

        They have to cater to a pretty wide set of users, and deal with a wide array of hardware and scenarios. UEFI, especially whatever random implementations of UEFI their users have, can't cover all of it. Addressing those needs currently requires something like GRUB (or a customised initramfs, which would be my preferred solution, but it requires more know-how), but GRUB effectively has to duplicate a large subset of the work that the kernel does, and inevitably (if only due to lack of resources), does so poorly, hence their argument that this is good for security: it's better than the status quo of GRUB. Indeed, if you have a UEFI firmware and it supports your use case, and it's well implemented, then this project is of no extra use (though it seems designed to just get out of the way in that situation and more or less just boot directly), but Red Hat's userbase does not entirely consist of people who are in that situation.

E39M5S62 12 days ago

It's nice to see more people embracing the capabilities of UEFI and Linux. ZFSBootMenu has been shipping an EFI application (really, a UKI masquerading as one) for almost four years now - https://docs.zfsbootmenu.org/en/v2.3.x/ . The neat part is that the first stage kernel boots in roughly 1.5 to 2 seconds. It's not really appreciably slower than other boot methods while at the same time exposing a substantial amount of pre-boot functionality.

  • account42 11 days ago

    > The neat part is that the first stage kernel boots in roughly 1.5 to 2 seconds. It's not really appreciably slower than other boot methods while at the same time exposing a substantial amount of pre-boot functionality.

    That sounds 1.5 to 2 seconds slower than just having efistub in your main kernel image, which honestly is a LOT. Of course not possible with problematic drivers like ZFS but then you don't have to use those.

    • E39M5S62 11 days ago

      Yes, and then your main kernel image is no longer on ZFS and you lose the ability to reliably roll back your root dataset. Everything is a trade off. I reboot my workstation once a week for a kernel upgrade, so an extra 2 seconds of boot time isn't even a consideration.

201984 12 days ago

What's the point of using this over plain EFISTUB? I use it with Arch, and whenever I want to boot to Windows, I just use the BIOS menu. I don't see what benefit a Linux-based bootloader provides.

  • Delk 12 days ago

    Entering the BIOS menu takes several seconds on my ThinkPad, and getting to the EFI boot menu from there takes a few more. That's after hitting the key at the right time during the boot process, which sometimes takes guessing.

    In principle, the EFI multiboot mechanism should be the way to handle basic multiboot options. It would of course be nicer and cleaner from a design perspective not to have redundant mechanisms on top of each other. In practice, though, using the EFI boot menu can be clumsy. The real solution would be for it to not be clumsy but it doesn't look like we're necessarily there.

  • mjg59 12 days ago

    UKIs provide mechanisms for adding additional sidecar modules which can extend the initramfs, provide additional command line modifications, and so on.

  • _ache_ 12 days ago

    I do the same. The only advantage I can think of is editing kernel boot options at boot time.

    • account42 11 days ago

      If your EFI isn't shit you should be able to use the EFI shell to launch the kernel with whatever commandline you want.

  • jansommer 11 days ago

    Not that I would want to dual boot on my 64 GB Surface Go 2, but if I did, I'd need a bootloader with a menu, because there isn't one in the BIOS.

  • pzmarzly 12 days ago

    EFISTUB requires recompiling the kernel every time the initramfs, microcode, or command line changes, no? That would get annoying pretty quickly on desktop PCs, which are not that fast at recompiling and would need to do all of this quite often, e.g. on nvidia driver updates.

    • 201984 12 days ago

      I've never once had to recompile the kernel on my laptop for any reason. The kernel command line is set in the bios entry, which is somewhat tedious to change, but that's just an efibootmgr command. Initramfs gets rebuilt by pacman on larger updates, but that would happen no matter what bootloader I use.

      • mjg59 12 days ago

        Vendor support for the command line coming from the EFI boot entry is of variable quality. If it works for you that's great, but unfortunately there's a bunch of boards in the wild where it doesn't. It's not a great solution for general purpose distributions as a result.

    • LoganDark 12 days ago

      The command line can be part of the UEFI boot entry depending on your particular firmware.

      I think I can recompile the Linux kernel in around 15 minutes, but I have a 12400F. And 15 minutes is still 60 times longer than most people are willing to wait.

      • account42 11 days ago

        It wouldn't be impossible to change the default kernel commandline in the image without recompiling the whole kernel if anyone cared about making that fast.

        • LoganDark 11 days ago

          I mean, even the initramfs can be stored in the ESP, can't it? How's that work with Secure Boot? (assuming you don't just use a shim that makes the TPM happy and then proceeds to not actually verify anything afterwards.)

  • zamalek 11 days ago

    This is a porcelain for EFISTUB alongside other existing things.

vlovich123 12 days ago

I really like the idea and the approach. I’m a little concerned however about the compatibility issues with kexec. For example, here’s what Arch says about the NVidia module:

> The graphics driver needs to be unloaded before a kexec, or the next kernel will not be able to gain exclusive control of the device. This is difficult to achieve manually because any programs which need exclusive control over the GPU (Xorg, display managers) must not be running. Below is an example systemd service that will unload the KMS driver right before kexec, which requires that you use systemctl kexec.

It also talks about ACPI issues and there was a question in the presentation although it was unintelligible. More generally, I could imagine more back and forward compat issues that wouldn’t arise from a simpler bootloader that is only initializing a very constrained amount of hardware whereas the kernel will try to boot the full HW twice. I hope they figure out how to make it work, but I suspect they’ll run into pretty significant challenges running this on real “legacy” HW until this is in the ecosystem enough that HW vendors will support it better. A bonus would be that kexec will become better supported and more robust over time if there’s broader adoption.

I also wonder if there are any back/forward compat issues with kexec between very different kernel versions, but I’m guessing the kexec mechanism was intentionally designed to support that as best as it can.

https://wiki.archlinux.org/title/Kexec

  • mjg59 12 days ago

    There's no reason to load things like the nvidia driver if all you want to do is offer a choice to kexec into another kernel, which makes things easier - you can continue just using the display environment the firmware set up.

    • kbolino 12 days ago

      This is true on "IBM compatible" x86 PCs and will continue to be for the foreseeable future, but it's not the case on all platforms. Some of them require graphics drivers to show anything at all, even simple text.

    • raggi 12 days ago

      you bet though that as soon as the grub types are forced into userspace they're going to want to do fancy userspace things, like give me a fancy framebuffer driver and the ability to push a shader into the gpu to animate while the second kernel stage boots, etc etc.

      the more rope given here, the more will be taken, a rich programming environment of a whole kernel will I'm sure raise temptation to new levels of stuff here, and the natural progression from the shader framebuffer is hand-off to the next kernel stage so it can keep the animation going until wayland starts or whatever. maybe i'm paranoid.

    • vlovich123 12 days ago

      I think you’re missing the broader point I was trying to make by hyperfocusing on 1 example of an issue that can arise from kexec and is solvable in a number of ways. Ultimately the critique raised in the video about focusing on the VM and not trying this on real HW yet is a very real one and is the single hardest problem here I suspect, so punting on it can’t go on for too long.

      • mjg59 12 days ago

        My broader point is that the majority of kexec issues are associated with the difficulty in quiescing the hardware, and there's simply no need to load the majority of drivers before offering this option which constrains the problem significantly.

        • vlovich123 12 days ago

          Does the kernel actually support doing that? The pitch is that they already have all the pieces and don’t need to do any kernel work to enable this.

          • mjg59 12 days ago

            Module loading is handled by udev, so udev merely needs to support enumerating a subset of the hardware to (eg) ensure input devices are available.

            • vlovich123 12 days ago

              Again, I think you’re assuming I’m saying something which I’m not. I’m not saying it’s impossible. I’m suggesting the scope of work may be larger than they pitched, which is that they have all the pieces and don’t really need to do much other than some packaging & some EFI integration. UDEV changes and kernel patches (more than the trivial 2 they have right now) would prove that the idea requires more work than anticipated.

              • mjg59 12 days ago

                I don't see any need for kernel patches, and the udev policy is just config rather than code as far as I can tell. Bringing kexec into this is certainly more complicated than not using kexec, but I wouldn't expect (and I do have some familiarity of working with kexec) this to be a lot of engineering work.

          • ssl-3 12 days ago

            I'm more-or-less just a dumb user in these matters, but I've been using Linux to boot Linux with my semi-elaborate desktop rig because that's how ZFSBootMenu[0] do. Keeping [fairly] quiet about unnecessary hardware (like nVidia drivers) during this bootloader phase seems to be doing the trick for me.

            Or, at least: I certainly didn't have to do anything to the kernel for it to work. I'm just running whatever Void Linux is rolling with right now.

            [0]: https://docs.zfsbootmenu.org/en/v2.3.x/

peter_d_sherman 12 days ago

>"Although GRUB is quite versatile and capable, its features create complexity that is difficult to maintain, and that both duplicate and lag behind the Linux kernel while also creating numerous security holes."

We agree thus far -- that GRUB may create unnecessary complexity and security holes (Note the relationship between "complexity" and "security holes" -- where you find one, you will usually find the other... they're intertwined like Yin and Yang -- you usually don't get one aspect alone by itself -- without also getting the other...)...

>"Loaded by the EFI stub on UEFI"

"Er, begging the question Governor" but isn't EFI and UEFI also a bootloader?

Aren't those systems also complex?

I think the article is trying to make the point that a small EFI stub program which loads a larger program -- in this case, a modified version of the Linux kernel itself -- could easily be audited for security issues, and yes, that's sort of true -- but remember that the EFI loader, no matter how small, still has to run in the UEFI environment, and the entire UEFI environment is anything but simple...

Phrased another way, running a tiny open source, secure program on a gigantic, complex black-box VM with many undocumented/opaqued/"black box" parts -- may not be all that secure...

Still, any small part of a system that could be made simpler despite there being other obscure "black box" undocumented or poorly documented complex components at play -- is definitely a step in the right direction towards full future transparency and auditability...

  • mjg59 12 days ago

    The UEFI environment is a given, unless you're in a position to replace the firmware - using grub doesn't avoid it in any way. But for the most part the security properties of the underlying firmware don't matter that much if the attack surface it exposes can only be touched by trusted code, which is the case if secure boot is enabled (and if secure boot isn't enabled then there's no real reason to bother attacking the firmware, you can already just replace the OS)

    • peter_d_sherman 12 days ago

      The UEFI environment does not exist on older PC's.

      UEFI started to become mainstream around 2013 -- that is, an increasing number of PC motherboard manufacturers started to put it on motherboards (rather than the older BIOS) around this time.

      It should be pointed out that on some motherboards the UEFI software may be placed on an IC (EPROM, EEPROM (Electrically Erasable Programmable Read-Only Memory), Flash, NVRAM, ?) -- which may be writable, or writable under certain conditions (i.e., if the boot process doesn't load software which explicitly blocks this when the system starts up, or if such blocks, once existing, are bypassed, by whatever method...)

      If the UEFI-storing IC is writable (or replaceable, either via socket or solder), then the UEFI firmware is -- again, under the proper conditions -- subject to modification: modifiable, changeable, updatable, programmable, etc.; use whatever term you deem appropriate...
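
      For instance -- assuming the flash chip and chipset are among those supported -- the current image can usually be dumped for inspection or backup with flashrom:

        flashrom -p internal -r firmware-backup.bin

      ...and, where the vendor hasn't locked the flash down, a modified image can be written back the same way.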

      >"The UEFI environment is a given, unless you're in a position to replace the firmware"

      If what I've written above is the case -- then any such UEFI environment (aka "firmware") under such conditions is very much replaceable!

      And if it is replaceable, then that firmware code can be made simpler by someone "rolling their own" -- and replacing it!

      Now that I think about it, I'm going to have to do more research for the next motherboard I buy... if it has to have UEFI on it, if I am compelled to buy a UEFI motherboard, then I want that UEFI firmware to be overwritable/customizable/modifiable/auditable -- by me!

      Also -- I'd never trust "trusted" code implicitly...

      Didn't Ronald Reagan so eloquently say "Trust -- but verify?"

      It's the but verify part -- that's key!

      Anytime a security vendor or vendor (or any authority or "authority" for that matter) tells me to trust or "trust" something, my counterquestion is simply as follows:

      "Where is the proof that the thing asking for my trust is indeed trustworthy?"

      In other words,

      "How do I prove that trust to myself?"

      ?

      In other words,

      "Where is the proof?"

      ?

      And let's remember that proof by analogies (Bjarne Stroustrup) and proof by polled social approval consensuses ("4 out of 5 dentists recommend Dentyne for their patients that chew gum") -- are basically fraud...

      Anyway, your assessment, broadly speaking, is not wrong!

      It's just that there are additional "corner cases" which require some very nuanced understandings...

      Related:

      https://en.wikipedia.org/wiki/Open-source_hardware

      https://en.wikipedia.org/wiki/Right_to_repair

      https://en.wikipedia.org/wiki/Non-volatile_memory

      https://libreboot.org/

      https://www.coreboot.org/

      https://en.wikipedia.org/wiki/Open-source_firmware

      • mjg59 12 days ago

        UEFI didn't exist at all on older systems, so instead you had BIOS which provided no security assertions whatsoever and exposed an even larger runtime attack surface (UEFI at least has the boottime/runtime distinction, and after ExitBootServices() most of the firmware code is discarded - BIOS has no such distinction and the entire real-mode interface remains accessible at runtime).

        In terms of how modifiable UEFI is - this is what Boot Guard (Intel) and Platform Secure Boot (AMD) are intended to deal with. They both support verifying that the firmware is correctly signed with a vendor-owned key, which means it's not possible for an attacker to simply replace that code (at the obvious cost of also restricting the user from being able to replace it - I don't think this is a good tradeoff for most users, but it's easy to understand why the feature exists).

        If you want to be able to fully verify the trustworthiness of a system by having full source access then you're going to either be constrained to much older x86 (cases where Coreboot can do full hardware init without relying on a blob from the CPU vendor, ie anything supported by Libreboot) or a more expensive but open platform (eg, the Talos boards from Raptor). If you do that then you can build this entire chain of trust using keys that you control, and transitively anyone who trusts you can also trust that system.

        But there's no benefit in replacing all of the underlying infrastructure with code you trust if it's then used to boot something that can relatively easily be tricked into executing attacker-controlled code, which is why projects like this are attempting to replace components that have a large attack surface and a relatively poor security track record.

        • peter_d_sherman 9 days ago

          Anytime something in technology (i.e., a stack layer, software component, software layer, or blob -- in this case, hardware initialization/boot code, which began as simple BIOSes, grew into larger and more complex BIOSes, and later became the still more complex UEFI -- a much larger attack surface by size) is replaced by something new and complex which is touted as "more secure" -- it's usually less secure (bugs and attack vectors are found in hindsight), and minimally, less transparent and understood.

          Anyone interested in the subject could Google (or search on their favorite search engine) "UEFI Vulnerabilities" -- for no shortage of issues/problems/security vulnerabilities.

          Am I saying that an old BIOS is perfectly secure?

          No!

          But older BIOSes are an order of magnitude simpler, better understood, and more documented -- than UEFI is at this point.

          If UEFI morphs to something else more complex in the future, which it probably will, given the track record of hardware boot/initialization code specifically, and software generally, then my advice at that point in time (10+ years in the future) will be "go back to UEFI, it's simpler, more documented and better understood than what we have now".

          But not until that day, and then not unless every computer on the planet is, or has become absolutely incapable of initializing/booting from the older code.

          As a generalized pattern/understanding in Software Engineering, older code / older software / older codebases (of whatever form, firmware, etc. -- in this case BIOS hardware init/boot handoff code) are generally smaller, simpler (less bloat for approximately the same functionality), vanilla, spartan, better understood, and have had more of their security issues found, fixed, and solved than their present-day over-complex and bloated counterparts... (and, did I mention better documented?)

          • mjg59 9 days ago

            Older BIOSes are much simpler, and also offer no security boundary at all - nobody talks about BIOS vulnerabilities because it wouldn't give you anything you don't already have!

            • peter_d_sherman 5 days ago

              It may be argued that a Ford Model-T (one of the earliest and probably the simplest of all mass-produced vehicles in the early 20th Century: https://en.wikipedia.org/wiki/Ford_Model_T ) had no "security boundary" at all, and that conversely, the most modern vehicle with the latest radio frequency based remote lock and key (aka "Smart Key") -- is more "secure" (has more of a "security boundary")...

              ...but if so, is that asserted "security boundary" really an actual security boundary(?)

              If the security boundary or "security boundary" -- is opaque in how it functions; if it is a "black box": (https://en.wikipedia.org/wiki/Black_box); if no one (other than potentially a few people who work for the manufacturer, or exist at the company subcontracting to build their Smart Key component (if the Smart Key is subcontracted/outsourced)) understands exactly how it works, then is it really "secure"?

              (If so, then that sounds eerily similar to the "obscurity is good" (aka "transparency bad") side of the "Security Through Obscurity" debate that the Internet had, like 5, 10, 20+ years ago: https://en.wikipedia.org/wiki/Security_through_obscurity#Cri...)

              Why not read the following:

              "Gone in 20 seconds: how ‘smart keys’ have fuelled a new wave of car crime":

              https://www.theguardian.com/money/2024/feb/24/smart-keys-car...

              And you tell me?

              My conclusion:

              Perhaps less "security" (less of an asserted "security boundary") -- is actually more actual security -- at least in some cases -- at least in the case of the Ford Model-T...

gorgoiler 12 days ago

It’s a pity we aren’t really there yet with boot loading. In 2024 if I install an OS it places a boot loader in my EFI System Partition but in a way that still feels only partially complete.

What I want is for each OS to install its loader in a unique directory to that OS instance, not unique to the OS vendor. Multiple Debians etc will argue over who controls /debian. You also have to bless UEFI with magic NVRAM variables when it could just scan my EFI System Partitions for any file named “loader” and present that as a boot option.

Perhaps I should just chain from UEFI to something smarter that skips the UEFI-standard and does this smarter thing instead? Debianised GRUB tries to be smart at update-grub time in order to detect OSs but it would be neater if the loader did it.

Edit: In fact I see this is exactly the goal of rEFInd https://www.rodsbooks.com/refind/ …in particular it laments how “EFI implementations should provide boot managers [but] are often so poor as to be useless” so it tries to do a better job for you. I’ll give it a go.

  • iam-TJ 11 days ago

    A small side-note to solve your "unique [EFI-SP] directory to that OS instance":

    In each GRUBified OS instance, in /etc/default/grub (or on Debian and derivatives, to avoid altering the distro-shipped config file, /etc/default/grub.d/local.cfg ), set:

    GRUB_DISTRIBUTOR=

    This is used by grub-install.

    If calling grub-install directly one can also pass --bootloader-id=

    The value is set via efibootmgr's --label
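
    Putting it together, a per-instance setup might look like this (the ID string and paths are just examples):

      # /etc/default/grub.d/local.cfg
      GRUB_DISTRIBUTOR="debian-workstation"

      # or, when invoking grub-install directly:
      grub-install --target=x86_64-efi --efi-directory=/boot/efi \
          --bootloader-id=debian-workstation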

  • winkelmann 11 days ago

    FYI: In my experience, modern UEFI Firmware/BIOSes will scan every FAT32 partition found on attached storage devices for bootable EFI binaries; they don't even appear to care about the GUID/type marking, just that it is FAT32. I never let OSes share an ESP; each install gets its own.

  • juped 11 days ago

    refind will scan all your partitions for EFI bootable things; if you have two ext4 partitions each with a Debian on them, and each has a Linux kernel in /boot, it'll locate them both and you can boot either. Which sounds like what you want.

1oooqooq 11 days ago

I will translate the doublespeak from redhat, which is similar to how they started to push systemd (really).

> [grub] features create complexity that is difficult to maintain, and that both duplicate and lag behind the Linux kernel while also creating numerous security holes.

No mention of the alternatives. No mention of how useful those features are. Hand-waving "security" arguments.

> Loaded by the EFI stub

All the talk about booting the kernel directly is moot, because by this they mean "we will use systemd-boot" ;)

IMHO, this is part of the RH wider push for PKCS11/TPM2/FIDO2 stuff. So it is not really about fixing the boot loader so much as standardizing on their bootloader "as the correct one", while using the kernel's reputation for the doublespeak.

Just like they pushed the equivalent of https://www.tenforums.com/attachments/tutorials/195499d15314... as the interface of init. (i'm not salty on systemd, in fact i already use bootd even. but if you cannot see how systemctl is the same UX as that, you are blind)

  • 1oooqooq 11 days ago

    I should say *the joint RH/Microsoft/et al wider push for PKCS11/TPM2/FIDO2 stuff

  • rini17 11 days ago

    EFI stub is an existing kernel feature, not related to systemd-boot. Of course, everything can be wired together with systemd.

    • 1oooqooq 11 days ago

      but it doesn't cover any of the unmentioned features that are so bad on grub. they will either be implemented in the kernel or the stub, because, well, people will need dual boot, weird crypto, etc.

      exactly like happened with systemd. all the complexity was ignored.. then bolted on. do you miss crontab -e?

      • rini17 10 days ago

        I do miss the ability to fix things easily when crontab -e breaks. Oh, but systemd never breaks, okay.

        • 1oooqooq 10 days ago

          crontab -e broke? you are special :)

          my point is that the alternative to "crontab -e" is writing two files with 10~20 lines in difficult to remember paths. it's impossible to do without a web browser nearby.

  • collinmanderson 11 days ago

    systemd-boot comes up in the Q&A at 29:50. (The main problem nmbl is trying to solve is code duplication with the kernel and therefore security issue duplication; just like grub or any of the alternatives, systemd-boot duplicates code that's already in the kernel. The security holes will exist in any case, but the goal is to reduce security hole duplication by reusing as much of the kernel as possible, rather than creating something separate. They also plan on reusing grub's menu code, so it will have the exact same menu as grub.)

    > The question is: that there are CVEs everywhere, we're not unique in this sense, and whether we would use systemd-boot.

    > So, systemd-boot, it also works only on UEFI, and I believe that the plans are to keep it that way.

    > Ultimately, the thing is that the kernel CVEs will get fixed no matter what, the question is: do we want to have more work fixing more CVEs. The kernel has a lot of developers, has very high visibility, and they're able to fix the CVEs in a reasonable time period. And, those aren't going to go away, the kernel CVEs aren't going to go away, whether we do this or not.

    > Systemd-boot, any boot loader, that aims to replicate the things that the kernel does is ultimately going to run into the same problems as grub. We're going to have the font CVEs, we're going to have filesystem and storage and memory allocation bugs. All of that stuff is going to exist in whatever boot loader.

    > Again, for an individual user, if you want to install systemd-boot, great, go ahead and use it. It's good, it works. But as a general option it's just going to have the same issues, unfortunately.

    • blucaz 10 days ago

      > Systemd-boot, any boot loader, that aims to replicate the things that the kernel does is ultimately going to run into the same problems as grub. We're going to have the font CVEs, we're going to have filesystem and storage and memory allocation bugs. All of that stuff is going to exist in whatever boot loader.

      > Again, for an individual user, if you want to install systemd-boot, great, go ahead and use it. It's good, it works. But as a general option it's just going to have the same issues, unfortunately.

      This is completely wrong though - the main point of sd-boot is that it does _not_ implement any of that - no filesystems, no fonts, no themes, nothing at all, the firmware is used to do all the risky stuff via the UEFI protocols. So it is very much not reimplementing what grub or the kernel do, the exact opposite in fact, it's the number one design goal.

      • collinmanderson 10 days ago

        Ahh ok, so it sounds like systemd-boot's philosophy is keeping things simple and minimal, re-using the UEFI firmware as much as possible to minimize the risk of the Linux kernel having issues booting, at the expense of not having as many features.

        I suppose then the hope is that nmbl would basically be a general-purpose fully-featured replacement for grub, which seems to be going in the direction of being a full kernel anyway:

        > one with quite some bells and whistles, with networking, complex storage, cryptography, http client, ca store and stuff (I mean, that's how I understand it, i.e. it should be able to load kernels from sources that require all that). It hence will need require regular updating (as much as the 2nd stage kernel most likely, if not more often, since it probably needs ca store built in), and quite possibly will break every now and then nonetheless, because it's basically a full OS you are boot as first stage.

        - https://lwn.net/Articles/981149/

        It sounds like if you don't need grub's complex features, then systemd-boot is probably the safest way to go, but if you do need grub's complex features, then nbml aims to be the safest and most reliable way to get those features.

    • joveian 11 days ago

      This seems like it could be a good use of rump kernels, although I don't think anyone has made a bootloader based on NetBSD rump yet. I've heard of efforts to rumpify Linux but I don't know if they are continuing. This seems like it could offer the benefits of each approach. The rump kernel paper has this brief paragraph on bootloaders:

      > The lowest possible target for the rump kernel hypercall layer is firmware and hardware. This adaption would allow the use of anykernel drivers both in bootloaders and lightweight appliances. A typical firmware does not provide a thread scheduler, and this lack would either mandate limited driver support, i.e. running only drivers which do not create or rely on kernel threads, or the addition of a simple thread scheduler in the rump kernel hypervisor. If there is no need to run multiple isolated rump kernels, virtual memory support is not necessary.

  • abofh 11 days ago

    To be fair, it's similar because it's the problem people wanted solved - start this thing at boot, if it dies, restart it. I know rcS.d didn't handle the 'restart it', but even for the lowliest desktop user, if they've installed a daemon, and configured it to start, it more or less implies they'd like it to keep running.

    Systemctl looks a lot like a modern init on another OS because a modern init on any OS looks a lot the same. Whether it should spawn 1800 subprojects is a different debate, but I for one am much happier maintaining trivial .ini-like files than trying to teach the new engineer bash.

benstoltz 12 days ago

One can trade run-time flexibility for size, speed, and small attack surface.

Taken to the limit, Oxide Computer boots using the [Pico Host Boot Loader](https://github.com/oxidecomputer/phbl) which is probably not suitable for your personal system where you would want to boot many OS images from many devices on many different mainboards using very similar or modular boot flash images.

Phbl transfers control to a partial Unix image, also in the boot flash, which brings in the rest of the OS from a well-known boot device. There is no UEFI, CoreBoot, PXE boot etc. The AMD PSP code does run, but that's the only early external blob in the boot path. This does mean that the OS has to understand its hardware, there is minimal "free" initialization.

Aissen 12 days ago

Considering distros serious about booting are effectively shipping grub forks with tens (debian) to over a hundred (ubuntu) to hundreds (fedora) of patches on top, it might be time to invest a bit more into Open Source early-stage booting. I'm doubtful that efistub + UKIs will solve all the problems, but I'm cautiously optimistic. Wait and see!

raggi 12 days ago

An EFI stub that sets up multi-boot, kernel and initrd then jumps into it is pretty simple.

I don't know why people really need to keep putting huge intermediate loaders in every default boot path.

If you want to boot more than one OS, yes you need one of these, but if you don't then there's no need for yet another OS instance in the boot path. The mid-stage should be extremely small and simple.

There's been so much crying over the size of UEFI, well now there's an arbitrarily versioned and maintained entire Linux in there too? Mostly just to avoid some ugly UEFI APIs and a slightly different programming environment? Yuck.

  • juped 12 days ago

    >If you want to boot more than one OS, yes you need one of these

    Nope, you only need refind (a fancy menu, not a bootloader at all), and only then because of how impoverished the vendor's boot menu always is; if your configuration is simple enough you could just use that despite it sucking.

  • raggi 12 days ago

    To state this slightly differently:

    GRUB has a terrible security story, a key point in the posted presentation. GRUB is huge and has design traps which contribute to regular developer mistakes.

    Any huge solution here will suffer the same problem: the larger it is, the more likely problems become.

    You don't really need much to do work here, a UEFI program can walk through the directories in the ESP and make choices, and perform assertions, so keep your A/B/R kernel and ramfs objects in there (as UKIs, as separate files, whatever). It can make a choice and boot the thing.

    If you want user choice you could put menus into that program too, but you don't need them for most users, so leave them out, that's a ton of deps gone.

    A basic program to do this isn't more than 1000 lines, it'll be low on maintenance and exceptionally low on critical flaws.
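
    As a sketch of the shape of such a program (gnu-efi, with a hard-coded kernel path standing in for the real scan/selection logic):

      // Minimal chainloader sketch: load \EFI\myos\kernel.efi from the same
      // volume this loader was read from, then hand control to it. Error
      // handling and the actual A/B/R choice are left out.
      #include <efi.h>
      #include <efilib.h>

      EFI_STATUS efi_main(EFI_HANDLE image, EFI_SYSTEM_TABLE *systab)
      {
          InitializeLib(image, systab);

          EFI_LOADED_IMAGE *self;
          uefi_call_wrapper(BS->HandleProtocol, 3, image,
                            &LoadedImageProtocol, (void **)&self);

          // Point at a fixed kernel location on the boot volume.
          EFI_DEVICE_PATH *path =
              FileDevicePath(self->DeviceHandle, L"\\EFI\\myos\\kernel.efi");

          EFI_HANDLE kernel;
          EFI_STATUS status = uefi_call_wrapper(BS->LoadImage, 6, FALSE, image,
                                                path, NULL, 0, &kernel);
          if (EFI_ERROR(status))
              return status;

          // The kernel's own EFI stub takes over from here.
          return uefi_call_wrapper(BS->StartImage, 3, kernel, NULL, NULL);
      }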

    It's not hard writing even fairly complex things for EFI, here's Fuchsia's UEFI stage which is designed for development and has far more features (fastboot, mdns discovery, etc) than most of these things need. It's still tiny compared to the grub stuff: https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/s...

    • snvzz 12 days ago

      I'd say grub is crap, let's switch to das u-boot, which is not.

andrewstuart 12 days ago

I’m fairly technical, but I have to say grasping the field of partitions, booting, boot loaders, GRUB, UEFI, its alternatives, and the various combinations thereof in Linux defeated me.

When learning something I try to find the simple path, a reliable minimum that gets to the goal. I never found it.

Complexity is the word that comes to mind.

  • bradley13 12 days ago

    Complexity, indeed. I haven't looked into this stuff in literally decades, but: I thought the purpose of a boot loader was to pass control to code belonging to the OS - which would then be responsible for loading its own drivers, etc. This solution sounds like starting an entire OS, only to boot the next OS.

    But then I think UEFI is also stupidly complicated, and ought to be whacked down to its core functions. Dinosaur, am I.

    • ziml77 12 days ago

      I like that UEFI means I don't have to worry about bootloaders clobbering each other when multiple operating systems are installed on the same drive. They can all register into UEFI, rather than competing for the MBR.

  • Muromec 12 days ago

    Thankfully we have none of that on embedded. I have to figure it out anew for every single board, so the confusion is a purely transitive curse.

  • rcxdude 12 days ago

    Honestly I'm surprised grub is still going post-UEFI. It's now pretty much entirely unnecessary. Your simplest path is probably UEFI-stub, where there is no extra bootloader, just your BIOS loads the kernel. The main disadvantage is this is subject to the whims of your hardware manufacturer to implement it in a usable manner. If you want a nicer menu then systemd-boot is your next simplest option (despite the name, it is actually more or less separate from systemd apart from maintenance and systemd having some integration with it).

  • benwaffle 12 days ago

    Install arch with a couple of different bootloaders and disk layouts, and you'll learn it all. The simplest option is potentially systemd-boot + an unencrypted rootfs.

    • ars 12 days ago

      The simplest is LILO without an initrd

      • chefandy 12 days ago

        I actually did a ctrl-f for LILO and this was the only comment that mentioned it. Time flies.

  • andrewmcwatters 12 days ago

    It's not that it defeated you, it's literally undocumented what you're supposed to do.

cbarrick 12 days ago

I developed a tool for managing EFI boot entries for my personal use.

I've been meaning to get it ready to release publicly. It's mostly there, just a bit manual to install.

https://efiboot.cbarrick.dev

JoeAltmaier 12 days ago

Not sure why loaders are a separate beast any more.

In the bad old days, ROMs had very limited space. Lots of bootloader packages got invented, tiny things that knew just enough about ROM and the filesystem to get the 'real' code loaded, maybe un-zipped, maybe unencrypted. Later, some network-boot options which were handy.

Today? The boot flash is huge (compared to ROMs). You can put an entire OS in there! In fact, nowadays the bootloader is often a flash partition right next to other OS images.

I assert, there's nothing that a bootloader can do that an entire OS e.g. Linux image can't do. Just build a linux image, put a boot-script in there to allow network-boot or reboot-from-another-partition. And be done with it - no more u-boot, no more obscure bootloaders with limited drivers and options.

The day of the bootloader is over.

  • nickelpro 12 days ago

    The reason is very simple, you only get to call ExitBootServices() once (absent hacks that hook the function).

    If you want to be able to do anything prior to calling ExitBootServices(), such as choose what EFI application you want to use and options you want to pass it, you need a service built to provide you that interface which itself does not call ExitBootServices().

    The name of that service is the bootloader.

    • JoeAltmaier 11 days ago

      ...which should simply be another build of a real OS. Not some weird beast we inherited from the bad old days of tiny ROMs.

StillBored 9 days ago

No, please don't.

Most firmware systems don't enable interrupts, support paging, SMP, or many other things, for a long list of reasons. At the same time, in order to enable a full Linux environment, ExitBootServices must be called, which is going to make a mess anytime the booting kernel isn't the right one. Kexec breaks a fair amount of the time if the target kernel isn't exactly what was already booted (e.g., see kdump failures). Never mind if the target OS isn't Linux.

And for what? So /boot can stick around? A legacy partition left over from the days of BIOS/MBR-style systems, which had to have a small partition at the beginning of the disk formatted with something that supports Linux file permissions?

No, the right solution is to just place the kernel(s) and a small signed initrd(s) in the ESP, and nothing else besides a boot selector, like say systemd-boot, to allow the user to select the correct kernel/OS and optionally edit debug/etc. options. Then, when the correct kernel is booted, it can bring up the network, filesystem, and whatever else is needed.
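
With systemd-boot the per-OS state on the ESP is just a kernel, an initrd, and a small Boot Loader Specification entry under loader/entries/, roughly like this (paths, versions, and the UUID are placeholders):

  title   Fedora
  linux   /fedora/vmlinuz-6.9.7
  initrd  /fedora/initramfs-6.9.7.img
  options root=UUID=<root-uuid> ro quiet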

That solves an endless list of Linux and firmware interaction problems that Grub currently solves, without all the file system, network code, etc., that one finds in Grub, which mainly duplicates the functionality already provided by UEFI.

The problem with this solution is that it doesn't work on systems that aren't UEFI, to which I might suggest that those systems (cough, legacy IBM junk, or hyperscalers still using BIOS boot after it has been deprecated everywhere else) have either uboot-uefi or edk2 ported so they can conform to modern boot standards.

kevinoid 12 days ago

This approach sounds similar to Petitboot <http://www.kernel.org/pub/linux/kernel/people/geoff/petitboo...> which is a kexec-based bootloader that I used on the Playstation 3 many years ago. Apparently it now targets many other systems and there is a (dead?) fork for Coreboot <https://github.com/ArthurHeymans/petitboot_for_coreboot>.

josephcsible 12 days ago

Does "security" here mean security from the computer's owner, i.e., Treacherous Computing? If not, then what kinds of security holes are even possible at the point when GRUB is running?

  • mjg59 12 days ago

    grub consumes a bunch of untrusted material (splash pictures, fonts, filesystems, executables, and more) and parses them. grub's also written in C, which is pretty much the worst case for writing parsers. Someone able to replace any of these with something that triggers a vulnerability in grub is then able to, for instance, take control of your boot process and obtain your disk encryption key or user password or any other secrets you enter.

    (I don't want to seem like I'm picking on grub here, it wasn't written with this threat model in mind and it does a lot of things and achieving all of this stuff securely is hard)

    • josephcsible 12 days ago

      Isn't everything that GRUB reads only writable by root? Is the threat model that root is the attacker?

      • mjg59 12 days ago

        Or by anyone with physical access to your system, but also root isn't the same as the kernel - if your boot chain is fully verified then even root can't replace the component asking for your disk encryption key, and can't extract it from the kernel afterwards (assuming a secure kernel)

        • josephcsible 12 days ago

          Can't someone with physical access to my system also pull out the hard drive, edit it however they want, and change Secure Boot settings too? And I don't want there to be anything even root can't do, since then there's stuff I can't do to my own computer.

          • mjg59 12 days ago

            No, because the secure boot settings are in flash and also the firmware measures the secure boot policy when booting so TPM-backed secrets will be inaccessible if someone modifies the variable store directly.

            As a device owner you have the option to recompile your kernel to disable any of the root/kernel barriers - when we designed Shim we did so in a way that ensures that you're always able to disable secure boot. Or you can simply disable secure boot entirely (another feature offered by Shim) at which point the kernel will disable most of those features. But by default the kernel will still, for example, refuse to allow even root to mmap() address regions belonging to hardware - some of those restrictions are down to "This has a high risk of causing accidental data corruption" rather than anything nefarious.
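
            (If I'm remembering the tooling correctly, that Shim-provided switch is driven from the running OS with mokutil, e.g.

              mokutil --disable-validation

            which queues a request that MokManager then confirms with a physically present user on the next boot.)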

      • rodgerd 12 days ago

        root is not necessarily the owner of the system.

      • lern_too_spel 12 days ago

        Or potentially by another user loading that partition if you boot into another OS.

ForOldHack 10 days ago

Unix and Xenix did this. Single user mode. This is just a plea to have GNU/GRUB integrated into the kernel. I think it's a very good idea. Not original, but a seriously good idea. Add some more help and documentation. Pre-sets for most of the common scenarios. Multi-mode kernels? Kernels as a service?

And make no mistake, this is a compelling article.

Dwedit 12 days ago

You had the bootloader because first you needed executable code in the first sector of the partition, and you can't fit much in those 512 bytes. But moving to UEFI means you never execute that code anymore. Instead, you load a BOOT.EFI file off of a FAT16/FAT32 partition. If there's a restriction on size for that, then you proceed to a bootloader instead of the real kernel.

  • dvhh 12 days ago

    That's the case for the MBR partition type; I really hope we've moved to more modern alternatives.

    • creshal 11 days ago

      GPT puts no size limitations on the FAT32 EFI System Partition. Your bootloader can be as big as you want it to be, which is why just booting off of a Linux kernel image with an initrd in the same file has been a valid option for years. Not sure why Lennart feels compelled to reinvent this particular wheel again.

wkat4242 11 days ago

I use whatever i want :P that's the nice thing about FOSS. I'll move on to the new cool hipster thing when I feel like it and if I see value in it..

Though generally I prefer the approach of "extend what we have with new features" over "rip it all out and start from scratch", so I spend my time mostly with the BSDs, which don't jump full steam ahead into whatever's the new thing.

Even though something like systemd is undoubtedly better in many things, I also have to wrap my head around it, see how to get at the logs, build those unit files etc. On the BSDs I don't have to bother with that and just keep working as I was and which was not broken for me.

The point is, something new doesn't only have to offer a tangible benefit for me to be worth it. It has to be such a big benefit for me that it offsets the hassle of getting my head around all the new stuff and most of the new Linux inventions fail at this.

mmphosis 12 days ago

A removable physical key: a programmable ROM.

I have programmed the ROM to instantaneously copy my ROM to RAM and run. The entire system is running instantly as soon as I power on. There is absolutely nothing else.

Because everything else is a big mess:

Intel ME, BIOS, UEFI, kernels are signed by companies with Microsoft's blessing, EFI, FAT, TPM, anything with the word "Secure" in it, ...

  • katzenversteher 12 days ago

    Please elaborate. What kind of key are you using? What are you booting? On which architecture / machine?

    • mmphosis 9 days ago

      > What kind of key are you using?

      I want this because right now I use camera cards to boot. The key needs to be wired directly to the processor and be the first and only entry point. Instantaneous boot is what I want, not resuming from hibernation but "instant on" of the full system.

      > What are you booting?

      I am booting Linux, BSD, custom systems, lightweight bootstrap programs that do a specific task like diagnostics.

      > On which architecture / machine?

      Agnostic / various. For instance, instantaneously booting / running only on GPU(s).

nottorp 11 days ago

If I look and think that this is another move by redhat to replace a simple independently developed solution with one that's complex enough to require a red hat issued certification, am I paranoid?

Actually wait. They at least haven't proposed to replace grub with systemd. Or is that buried in one corner of the presentation?

  • mpldr 11 days ago

    UKI isn't by Red Hat but by the kernel devs, iirc. No Red Hat certs required. If you want to run with secure boot, you can use your own certs, but you can also just skip SB.

    • nottorp 11 days ago

      Not certificates, certification? As in Red Hat Valued Engineer or whatever they sell.

hackernudes 12 days ago

In the "what do we have so far?" slide they explain there are currently two variants of NMBL, one that does a switch_root (like a normal initramfs) and one that does kexec (to boot into a new kernel). It presents a menu for the user to select what to boot. It also will allow rolling back to the old version when boot fails.

I see some other comments in this thread about hypothetically supporting booting other UEFI targets and some ideas on how that would be implemented.

There is a question in the video about chainloading around 27 minutes -- https://youtu.be/ywrSDLp926M?t=1640 but the answer isn't clear to me - "setting FE variables". Is that frontend? firmware environment?

Shorel 12 days ago

Kudos to the developers involved in this functionality.

Faster boot times and more secure installations are always advantageous. I'm all rooting for this development.

I've been wondering for a while why grub is still used, given that its basic architecture is outdated.

  • shadowgovt 12 days ago

    I believe the two main reasons are

    - inertia (don't rewrite something if it works; who really wants to own responsibility for testing this thing on all architectures GRUB currently supports?)

    - multi-OS boot scenarios (I assume this new system will support that, but (a) I don't know for sure and (b) I don't really want to boot all the way into Linux just to throw Linux away and boot something else...)

eqvinox 11 days ago

relevant comments from Hector Martin over on Mastodon at https://social.treehouse.systems/@marcan/112754303893998372

> Reminder that not all platforms support or, indeed, can support kexec() sanely at all. Like ours. kexec() requires the ability to reset all peripheral state and that is impossible on Apple Silicon because firmware is loaded by earlier boot stages and cannot be re-loaded later to the reset state without a full system reboot.

  • TimGhost 11 days ago

    Sounds like Apple sorted itself out by themselves, while suffering from the limitation.

    So the correct response to this concern is "Okay. And? Apple will just sort itself out for themselves". I mean, what else can anyone do? Nothing because "but Apple?"

    • eqvinox 11 days ago

      This discussion is about how to boot Linux, not how Apple's OSes do things on their devices. Booting Linux on a Macbook is the affected scenario.

account42 11 days ago

I haven't used GRUB since my first EFI system. The EFI itself is already a bootloader, after all; why would you need another one, especially one as bloated as the new GRUB?

  • itvision 11 days ago

    To pass kernel parameters? How would you do that without a bootloader?

    • account42 11 days ago

      EFI can pass kernel parameters just fine, both in the default boot entries or when running the kernel from the EFI shell.
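
      For a persistent entry, something along these lines works from a running system (disk, partition, and paths are examples):

        efibootmgr --create --disk /dev/nvme0n1 --part 1 \
            --label "Linux direct" --loader '\vmlinuz-linux' \
            --unicode 'root=/dev/nvme0n1p2 rw initrd=\initramfs-linux.img'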

      • itvision 11 days ago

        What about doing that once without using e.g. efibootmgr?

        • qhwudbebd 11 days ago

          You can just pass the custom command line as you run the kernel from UEFI shell prompt, e.g.

            fs0:linux.efi root=/dev/nvme0n1p1 initrd=ramfs.img loglevel=2
          
          In my experience the nuisance part is creating and editing boot entries, especially if you try to set them up from the UEFI shell, so I tend to compile any initramfs and the default kernel command line into my kernel so I can drop it at /boot/efi/boot/bootx64.efi and minimise contact with the UEFI monstrosity.
qhwudbebd 11 days ago

A slight tangent, but still kind of relevant: given that we're lumbered with UEFI on x86-64, are there any active projects working on a better UEFI shell?

Every time I interact with it, I am struck by how awful it is, but the shell is just an EFI application so presumably one could replace it with something better written. Searching turns up EFI menus aplenty, but no one has (yet) taken a shot at a simpler, cleaner EFI shell from what I can see?

Animats 12 days ago

Or, really, use the ROM's boot loader.

This is getting closer to the way QNX booted decades ago. The boot image has the kernel and whatever user space programs and .so files you decide to include. For a deeply embedded system, you might not have a file system or networking. For desktop QNX (discontinued), you'd have some disk drivers, a file system driver, a network driver, and a shell, along with a startup script, to get things going.

  • bregma 12 days ago

    QNX still works that way. It's just no longer free.

    • Animats 12 days ago

      And the desktop environment is gone.

teo_zero 12 days ago

I'm a big fan of compiling my own kernel with all needed drivers compiled in, with the EFI stub compiled in, no initramfs, no grub, and a fixed cmdline that works 99% of the time.

This allows a boot to happen in less than 5 seconds.
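
For reference, the relevant bits of the .config look roughly like this (the root device is an example; every needed driver is =y rather than =m, so no initramfs is required):

  CONFIG_EFI_STUB=y
  CONFIG_CMDLINE_BOOL=y
  CONFIG_CMDLINE="root=/dev/nvme0n1p2 rw quiet"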

The 1% of times I need something different, I use the boot selector provided by the firmware to boot to grub (that's installed anyway), where I have the usual plethora of choices.

Is there a key to be pressed at the right moment? There is. I even have to enter a password. So what? I can go through such an ordeal once in a while for that 1% of "special" boots.

  • qhwudbebd 11 days ago

    I completely agree, over time I've found myself moving to doing exactly this on every system I run. On UEFI systems, I can use the UEFI shell to add kernel command line options or launch a fallback kernel if I've screwed up badly enough to break boot. I don't need yet another layer of clunky menus and indirection.

pmorici 12 days ago

Does anyone have any tips for debugging EFI_STUB kernels when they fail to boot? I've run into a BIOS before that I can't get EFI_STUB to work on, but grub works fine, and I'm not sure why or even how to go about getting any debug info since the BIOS/firmware is a black box. Is the only option to get in touch with the motherboard vendor and hope they care to look into it? It's rare, but it happens.

jagrsw 12 days ago

I get truly confused when using GRUB. Maybe it’s just me being unwilling to dive into all the details, but seriously, why are there like 30 packages starting with 'grub' under Debian? All I want is to boot my kernel under EFI, and the package choices are overwhelming.

  grub-common
  grub2
  grub2-common
  grub-efi-amd64
  grub-efi-amd64-bin
  grub-efi-amd64-signed
  grub-efi-amd64-signed-template
  grub-efi-amd64-unsigned
  grub-efi
  grub-pc
  grub-pc-bin
Do I need to mix grub2 and grub packages to get it to work? Currently I do, and I'm a bit afraid to remove one or the other :)

Usually, I end up trying things randomly (leaving some funny mess in /boot/EFI because I'm not sure if --efi-directory should contain the /boot/EFI prefix, or just /boot, or nothing), then running some semi-random grub-install command, and eventually, it starts to work. But this is far from intuitive.

  • lmm 12 days ago

    > Maybe it’s just me being unwilling to dive into all the details, but seriously, why are there like 30 packages starting with 'grub' under Debian? All I want is to boot my kernel under EFI, and the package choices are overwhelming.

    Debian packaging policy is insane, that's nothing to do with grub. On a regular distribution there is one (1) grub package (e.g. I just checked Slackware and Gentoo).

lofaszvanitt 12 days ago

Also, please create a boot process that rivals W10's almost instant boot. I just hate the current slow/non-parallel boot process.

  • bayindirh 12 days ago

    Many of our systems boot in under 10 seconds after GRUB. You can make the GRUB menu time out quickly or be completely hidden if you want.

    A modern system with a connected network can boot quite fast. Windows' instant boot is not boot actually, it's thawing hibernation.

    One of the biggest features touted by systemd is its "embarrassingly parallel" booting capabilities, which parallel SysV init already sported.

    IOW, Linux already can boot pretty quickly given no hardware device is holding it back.

    • lofaszvanitt 9 days ago

      Not fast enough.

      • bayindirh 9 days ago

        What I was talking about was not virtualized. VMs start even faster. On the other hand, if you need to restart that much, you're really doing something wrong.

        • lofaszvanitt 9 days ago

          There is always room for improvement. Like if you make the initramfs dependency-based, the whole process speeds up, but on a kernel upgrade it tends to error out. Why?

          Why enumerate usb devices on boot, when you have zero devices that need usb? Raid speed testing... and the list goes on.

          • bayindirh 9 days ago

            > Why enumerate usb devices on boot, when you have zero devices that need usb?

            Are you sure? On servers there are generally a couple of USB-based devices which handle BMC, hardware debug logging, etc. Just because you don't have physical ports doesn't mean the USB bus is empty and silent.

            When you connect to a server via BMC and request the console, you get two USB devices at least. One for mouse, one for keyboard. More devices appear if you attach virtual volumes remotely.

            > Raid speed testing...

            Because the Linux kernel is stateless. There's no guarantee that you're booting on the same processor (make, model, family, even system bus layout) as the last boot. Moreover, there's no guarantee that you're not rebooting because of an unmaskable MCE which fired because you lost half of your vector units (SSE/AVX blocks) or half of your FPU units, or any other CPU IP block (because you fried them), and you're in limp mode now...

            I have seen tons of these events and similar ones first hand. I'll prefer my systems to spend three more seconds to boot successfully so I can debug them rather than the kernel makes some assumptions and catches fire in a completely unrecoverable state leaving me stranded.

            • lofaszvanitt 9 days ago

              This isn't about your or my use case and/or preferences.

              • bayindirh 9 days ago

                > This isn't about your or my use case and/or preferences.

                Exactly! This is why the Linux kernel does and shall ship with a configuration which supports everything out of the box, even if it's slow. Because it covers everyone's use cases that way.

                If you need to trim it down to fit to your system(s), you should be able to do it. Debian has a mechanism called "Targeted Kernel" which removes the modules which won't be used on your system automatically during kernel upgrades.

                Nobody is stopping you from doing whatever you want with your system to boot it faster.

                For me, I'm fine with the Kernel as is, because some of my servers already take multiple minutes to initialize the plethora of devices on themselves. So a three second delay changes nothing on a system which is rebooted once a month at most.

                Same applies to my desktop systems, which are either on or at S3 sleep, which wake in <3 seconds anyway (I wait for the monitor to come back mostly).

                • lofaszvanitt 8 days ago

                  Sigh... we are going in circles. You like the slow boot. You like suspending things, I'm against it. I'd rather not elaborate, because you will like the opposite of it, no matter what.

                  Everything is fine...

                  • bayindirh 7 days ago

                    No, we are not. It's not a matter of slow vs. fast. It's a matter of resilient vs. fragile.

                    I prefer to have a resilient system in most cases, regardless of the form factor. If I was preparing an image for an embedded system, I'd go speed all the way down, within the limits of pragmatism.

                    I work in high performance computing. Speed is what we need, what we engineer for. However, resiliency is an equally important and valid concern. So, if I'm losing 5 seconds once a month (or once every three or four months in the case of desktop systems), that's a perfectly acceptable trade-off for a resilient system.

                    When a system tests its memory for 30 seconds, and initializes the motherboard and other devices for 3 minutes, losing 3 seconds on a bloody RAID speed test doesn't matter.

                    Oh, one of the latest servers we have exposes its management interface as a LAN port over USB, as I found out yesterday.

                    So, yeah.

  • dvhh 12 days ago

    As you might know, Windows is kind of cheating with the "instant boot" by creating a hibernation snapshot of the OS before login.

    Otherwise, while I would not describe the boot process of most of the Linux systems I own as "instant", they certainly boot quite a bit faster than Windows.

egberts1 12 days ago

I prefer my bootloader (be it GRUB, LILO, or even BusyBox) because those images go away once the kernel is started.

Nothing for a hacker to see and analyze in the bootloader, assuming you did not load a driver into the NVRAM/Flash/UEFI/EFI.

Nice security compartmentalization.

Red Hat is smothering this essential security abstraction of a 1st-stage loader: not a good security model.

  • TheDong 12 days ago

    Can you explain more what security vector you're talking about here, because I just don't see it?

    Like, as far as I can tell, grub or whatever is a bundle of filesystem and device drivers, with enough info to then execute a kernel.

    Linux also is a bundle of filesystem and device drivers, but better tested ones I think.

    To me, it seems like using the kernel's filesystem drivers, which you have to use already anyway once you've booted, means you have to trust fewer total implementations of these drivers, so it seems more secure.

    What attack or threat vector are you trying to talk about here?

    • egberts1 11 days ago

      It is the same security abstraction where you don’t allow support for network socket in process ID 1.

      (Looking at you, systemd.)

      You don’t allow access to the bootloader from any kernel, thereby affording relative security in starting the 2nd stage (kernels). One abstraction is that the TPM, et al., can lockstep assurances on each stage. At a minimum, you have a bootloader, in case of SNAFU/FOOBAR.

      Bricking (or worse, malicious kernel) seems more a possibility with upcoming Redhat design.

      • TheDong 11 days ago

        Sorry, I still don't follow.

        > You don’t allow access to the bootloader from any kernel, thereby afford a relative security in starting 2nd stage

        You install and update the bootloader and its configuration from your running linux system.

        In this new world, you would also update the kernel from your running linux system. That's the same, right? To update the kernel, you need to update bootloader configuration anyway, so it's obviously required that the running system can at least update the kernel, and that's true either way.

        > Bricking (or worse, malicious kernel) seems more a possibility with upcoming Redhat design.

        If your kernel is malicious, it's game over whether or not you're using grub, right? Like, that doesn't seem like a new threat model.

        I don't really care about bricking because, frankly, I've made my system unbootable via grub bugs more often than I have through kernel bugs, and the kernel developers seem to take these bugs more seriously, so I feel like bricking is a possibility with either design, but less likely without grub.

        Either way, I need to have a liveusb off to the side to fix these issues.

        • egberts1 11 days ago

          /boot should never be mounted.

  • cool_beanz 12 days ago

    There are kernel command line parameters that can clean it up without a bootloader.

nardi 12 days ago

Meta: Can someone with Linux/bootloader knowledge tell me whether most of these comments are as clueless as they seem?

  • juped 12 days ago

    Many seem a bit confused but I have only skimmed the comments.

    I don't understand the point of the thing described in the OP (I have not watched the talk, just skimmed the notes), myself. Linux kernels can EFI load themselves; if you want more flexibility than a precompiled kernel command line, or to load from ext4/other non-FAT filesystems, refind exists, fits on the ESP (kernel + initramfs can get big; I keep mine on the ESP but wanting to keep it on a larger ext4 filesystem is very understandable) and is very high quality.

    Bootloaders are obsolete in this sense; every OS provides an EFI stub loader, except Linux where kernels are their own EFI stub; nevertheless, distros continue to install GRUB alongside themselves on UEFI systems out of inertia. If Red Hat wants to supplant it... okay, but it can be supplanted today with very good components, even if they weren't invented there.

    • creshal 11 days ago

      If I had a nickel for every time the RedHat ecosystem overengineered itself into a corner and decided the only possible solution was more overengineering, I could probably buy IBM.

wvh 11 days ago

I've been using `systemd-boot` for many years, which comes with the system. It's a bit simpler than Grub and LILO (team 90s, represent!). Most BIOSes have variable support for booting random images, but last time I got a new system, it was confusing to use and a bit hit-and-miss.

mixmastamyk 11 days ago

I recently moved to sdboot and prefer its simplicity compared to grub. However, they still missed the mark a bit; its folder tree on the ESP is a mess.

I’ll look into this but prefer kernels managed automatically by apt/dnf etc.

  • lproven 11 days ago

    You are not clear; by "sdboot" do you mean systemd-boot?

    The abbreviation is ambiguous. There are other bootloaders called "sdboot" such as this one:

    https://sourceforge.net/projects/sdboot/

    And this one:

    https://www.reddit.com/r/WiiHacks/comments/glx4dt/sdboot_eve...

    Please try to avoid ambiguous abbreviations. If you do mean systemd-boot you only saved 5 letters and could mean at least 3 different tools, or maybe more.

    • mixmastamyk 11 days ago

      Yes, sorry -- when installing Fedora you have to pass the string sdboot to the kernel at boot. Still kinda experimental.

  • creshal 11 days ago

    sd-boot on Debian 12+ is mostly self-configuring, and the folder structure is just one folder? Not sure what's messy about that.

    • mixmastamyk 11 days ago

      On fedora it is 10+. Do a tree command on the efi partition.

      • creshal 11 days ago

        Fedora and other Redhat-related distributions are an exercise in masochism regardless of the bootloader choice.

fsniper 12 days ago

I was about to discuss/joke about the possibility of systemd absorbing this project. And lo and behold, it turns out there is already a systemd-boot project competing in this space. I was not aware of that at all.

  • Spivak 12 days ago

    systemd-boot is actually pretty great. If you're looking for a lean fast (multi-os) uefi bootloader systemd-boot is much easier to set up and less fiddly than grub. I haven't reached for grub in years.

    • zamadatix 12 days ago

      I nearly did systemd-boot last install but grub2 behaves better if the motherboard is upgraded/otherwise factory reset so I shied away.

  • Arnavion 12 days ago

    systemd-boot can be compiled and used independently of the booted OS using systemd. It also started out as gummiboot, unrelated to systemd.

techwiz137 11 days ago

How about people moving away from 16-bit real mode (obviously a change is needed in the CPUs), removing that A20-line patch forgotten from history, and actually booting like we are supposed to?

theteapot 12 days ago

Isn't one of the use cases in GRUB choosing which kernel you want to load?

  • devit 12 days ago

    You can use kexec to load a different Linux kernel from a Linux kernel.
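
    Roughly (paths and options are examples):

      kexec -l /boot/vmlinuz-6.9.7 --initrd=/boot/initramfs-6.9.7.img \
          --command-line="root=/dev/sda2 ro"
      kexec -e    # jump straight into the staged kernel, skipping firmware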

    Probably slower and perhaps less compatible than using GRUB though.

  • gjsman-1000 12 days ago

    That’s the neat part - you install GRUB if that’s something you care about. For the 98+% who will always use the newest kernel, and can tell the system to (hypothetically) use a different kernel on future reboots after the system has loaded, it won’t be an issue.

  • Shorel 12 days ago

    They address this use case in the video. Their loader can show a menu.

  • SahAssar 12 days ago

    Via EFI probably.

    • m463 12 days ago

      Does that mean the UI you will use to choose the kernel will probably be the bios?

      • Arnavion 12 days ago

        Yes. If your UEFI doesn't have a good enough interface for selecting entries or temporarily modifying a kernel bootline, you can still use a bootloader, but a minimal one like systemd-boot instead of GRUB. All it does is show the text menu and then execute the UEFI binary for that entry, which in this case is the kernel's UKI binary, so all the heavy-lifting of LUKS, filesystem drivers, password entry, etc is done by the kernel and there's no complexity or duplication in the bootloader.

nicman23 11 days ago

you do not use grub for speed. you use it for when things go wrong

retrochameleon 12 days ago

I use zfsbootmenu. It allows me to boot multiple OSes from different datasets, make and roll back snapshots, and even boot directly from a snapshot. It also has a minimal shell for ZFS tasks.

tristor 12 days ago

I do UKIs and direct boot them on Arch. Works great. Do have to recompile for every change, but it's very fast on a modern system, takes about 40 seconds on my laptop.
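
For reference, a hedged sketch of what assembling such a UKI can look like with systemd's ukify. This is only an illustration under assumed paths and cmdline, not necessarily tristor's setup, and option names follow recent systemd:

    #!/usr/bin/env python3
    # Hedged sketch: build a UKI with systemd's ukify. Kernel, initramfs,
    # cmdline, and output paths are placeholders; adjust option names for
    # older systemd versions.
    import subprocess

    subprocess.run(
        ["ukify", "build",
         "--linux=/boot/vmlinuz-linux",
         "--initrd=/boot/initramfs-linux.img",
         "--cmdline=root=/dev/nvme0n1p2 rw quiet",
         "--output=/efi/EFI/Linux/arch-linux.efi"],
        check=True,
    )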

yjftsjthsd-h 12 days ago

This kinda sounds like zfsbootmenu but without the ZFS. Which makes me wonder how hard it would be to factor out the ZFS bits and just use zfsbootmenu on other filesystems.

  • E39M5S62 12 days ago

    It'd be doable, but not without a whole lot of hacking on it. The internals of ZFSBootMenu are tied very tightly to ZFS. Though at that point you'd largely be reimplementing https://github.com/open-power/petitboot - which probably would be easier to port to x86_64 as an EFI application.

matheusmoreira 11 days ago

This is nice. I want to see Linux replace all kinds of stuff. Especially things like bootloaders, UEFI firmware and even on-device firmware.

WesolyKubeczek 11 days ago

This sounds awfully like macOS on Apple Silicon, where the "boot menu" is in fact more or less full-fat macOS with special fullscreen GUI.

michaelt 12 days ago

Ah yes, unified kernel images.

Finally, an end to the tiresome and obsolete notion of Linux running modified versions of the Linux kernel. With unified kernel images, Linux users can finally be confident knowing their kernels are signed by companies with Microsoft's blessing, such as Red Hat and Canonical - and Linux will have proper support for the use cases of companies like TiVo, who want to run Linux, but also want to ensure the device owner can't make any modifications to the software on their device.

This will be well worth it, to protect against the ever present issue of criminals breaking into my hotel room, finding my unattended laptop, and deciding not to steal it to sell on ebay - but instead to secretly modify my initramfs. I don't know about you, but I've had two covert CIA teams rappel in through my window this week alone.

  • mjg59 12 days ago

    Any signature on a UKI is only relevant if you have secure boot enabled, and if you have secure boot enabled using the generally trusted keys then you're already not able to boot unsigned kernels. If you want to run arbitrary kernels then either use keys under your control (which UKIs support) or turn off secure boot - UKIs change absolutely nothing here.
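
    For reference, a minimal sketch of the "keys under your control" route, assuming sbsigntools and a self-enrolled key pair (file names are placeholders):

        #!/usr/bin/env python3
        # Hedged sketch: sign a UKI with your own Secure Boot key using
        # sbsign (sbsigntools). File names are placeholders; the certificate
        # is assumed to already be enrolled in the firmware db (or via MOK).
        import subprocess

        subprocess.run(
            ["sbsign",
             "--key", "/etc/secureboot/db.key",      # your private key
             "--cert", "/etc/secureboot/db.crt",     # matching certificate
             "--output", "/efi/EFI/Linux/linux-signed.efi",
             "/efi/EFI/Linux/linux.efi"],            # unsigned UKI
            check=True,
        )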

bfung 12 days ago

Yo Dawg, I herd you like bootloaders, so I put a bootloader in your kernel so you can boot w/a kernel while you boot a kernel.

CodeWriter23 12 days ago

So, move all of GRUB’s complexity into the kernel

(or tell users to abandon all their use cases that led to the aforementioned complexity)

ahmetozer 12 days ago

I have used a similar approach on an embedded system to copy data to RAM and kexec a kernel there.

pmarreck 12 days ago

No thanks. NixOS lets you pick the generation at boot via GRUB. This is extremely useful.

Arnavion 12 days ago

I can't open the .odp file right now, but:

>We (Red Hat boot loader engineering) will present our solution to this problem, which is to use the Linux kernel as its own bootloader. Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line, contain everything they need to reach the final boot target. All necessary drivers, filesystem support, and networking are already built in and code duplication is avoided.

That has been doable for a few years already. What's the new part?

  • saghm 12 days ago

    Right before that paragraph, they cite issues with GRUB as a motivation for this work. What confuses me is that Red Hat already has a GRUB replacement in systemd-boot. Is this work intended to obviate that as well, or is it going to relate to it somehow? I imagine doing all this and tying it to systemd would generate some backlash like usual (although at this point, it seems unlikely that this would affect the plans given how few distros don't use systemd).

    • Arnavion 12 days ago

      >I imagine doing all this and tying it to systemd would generate some backlash like usual (although at this point, it seems unlikely that this would affect the plans given how few distros don't use systemd).

      systemd-boot is independent of systemd. It's called "systemd-" because it's under the same "group of core OS software" umbrella named "systemd", but otherwise it can be compiled independently, does not require the OS to be using systemd, etc.

      Edit: I also wrote originally that switching to systemd-boot would also require switching the kernel from vmlinuz+initramfs to a UKI, but I forgot systemd-boot does support vmlinuz+initramfs through explicit loader entries config.

      • benwaffle 12 days ago

        >To be clear, systemd-boot doesn't replace GRUB, in that systemd-boot can only boot other EFI binaries, so it still requires the kernel to be compiled as a UKI. A GRUB setup with a regular vmlinuz + separate initramfs in root partition (or boot partition that's not the ESP) can't be replaced with systemd-boot directly. You first need to switch to a UKI-in-ESP setup.

        That's wrong, my laptop right now uses systemd-boot with a vmlinuz and an initramfs, no UKI. See a configuration example here: https://wiki.archlinux.org/title/Systemd-boot#Adding_loaders
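
        For reference, a minimal sketch of such an entry, in the Boot Loader Specification format the linked Arch wiki page describes. The root UUID and file names are placeholders, and the boot partition being mounted at /boot is an assumption:

            #!/usr/bin/env python3
            # Hedged sketch: write a Boot Loader Specification "type 1"
            # entry so systemd-boot can start a plain vmlinuz + initramfs.
            # UUID and file names are placeholders.
            from pathlib import Path

            entry_lines = [
                "title   Arch Linux",
                "linux   /vmlinuz-linux",
                "initrd  /initramfs-linux.img",
                "options root=UUID=placeholder-root-uuid rw",
            ]
            Path("/boot/loader/entries/arch.conf").write_text(
                "\n".join(entry_lines) + "\n"
            )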

        • Arnavion 12 days ago

          Ah yes, I've used it with the default auto-detected UKIs for so long that I forgot about the explicit loader entries config.

      • saghm 12 days ago

        > systemd-boot is independent of systemd. It's called "systemd-" because it's under the same "group of core OS software" umbrella named "systemd", but otherwise it can be compiled independently, does not require the OS to be using systemd, etc.

        I think my confusion here is that calling something "systemd-" because it's part of the group called systemd is tautological; anything that's independent could just as easily not be included in that group and not be called that. `nmbl` sounds like a piece of "core OS software", so why couldn't it be included in that group as well? It almost sounds like the only reason not to is to avoid naming confusion between multiple things in the "systemd group of software" that are boot-related, and that seems kind of silly.

        To be clear, I'm not taking a pro- or anti-systemd stance in this thread; my concerns come from a place of pedantry around naming rather than anything technical. It just feels weird to me that the name "systemd-boot" could plausibly have been applied to either the bootloader or the "no-more-bootloader" if the other didn't exist, and I wish that things were named in a way that actually conveys useful information rather than arbitrarily attaching confusing branding.

        • SAI_Peregrinus 11 days ago

          Think of Systemd like GNU. They stick their name on all the software they make, even if it doesn't require only using their software. E.g. you can use GNU BASH without using GNU Sed. You can use Systemd-boot without using Systemd-journald.

  • oneplane 12 days ago

    I agree, I don't think this is actually 'new' at all. We have had EFI Stubs, KExec/KSplice (Heads as a loader distro for example) and non-GRUB options for a while.

    At best, this approach doesn't make the boot loader 'go away', it just moves that task to EFI. Which means you depend on EFI instead of GRUB. This isn't really different from say, U-Boot, where you have a bootrom (usually in the SoC or ROM) that does bringup, then U-boot as an intermediary, and then the Linux Kernel. Same deal with BSP and Coreboot, or Bochs or any of the other boot protocol launchers.

    Maybe if their scope is the narrowest of all the scopes (only x86 and only UEFI 2.0 and higher and only specific distros) it might make sense, just to have it be invented in-house as a fake moat. But the end-user doesn't really benefit (as there is no change), and other distros are unlikely to care. You do get a dependency on IBVs and OEMs to implement their UEFI correctly, which most have a hard time doing as it is. And you can't re-use it anywhere else, except maybe SystemReady ARM servers.

itvision 11 days ago

Is the motherboard's NVRAM supposed to be written to so often?

I'm not sure about that.

  • eqvinox 11 days ago

    It originally used to be actual RAM with a battery backup. These days it's generally NOR flash (the amount of data is small enough that NAND's cost savings don't apply, and NAND's extra complexity would instead raise the cost). NOR has quite high write-cycle tolerance/limits.

darby_nine 11 days ago

> Although GRUB is quite versatile and capable, its features create complexity that is difficult to maintain

The same is true of the kernel. Perhaps redhat should abandon linux and commit to grub, which has the potential to boot an even more interesting or useful kernel.

ale42 11 days ago

See also this project: https://github.com/zhovner/OneFileLinux

Not a bootloader, but a single-file, very light Linux image that can be loaded directly as an .EFI file. Not useful as an actual OS for daily use, but can have specialized uses (I used it to network boot a whole room of PCs to a Linux showing a slideshow on the framebuffer).

DEADMINCE 12 days ago

I have a bootloader signed with my own keys to boot my kernel. Nothing else will be able to boot the machine. I couldn't have this setup without a bootloader.

  • worthless-trash 12 days ago

    You absolutely can sign the kernel with your own keys. This would allow you to boot your machine into the first level kernel without the bootloader.

    Is this 'couldn't' a self imposed requirement or a technical one I can't think of ?

    • DEADMINCE 12 days ago

      > Is this 'couldn't' a self imposed requirement or a technical one I can't think of ?

      Probably not technical. There is another element: obtaining the HDD encryption key from the TPM. The idea is that the HDD can't be read outside of my laptop, and nothing that isn't my signed OS can boot on my laptop to read it.

      Thinking about it I probably could do everything in the kernel directly - why not? Well, because it would be extra work to write all that, but probably not a technical limitation.
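
      For reference, a hedged sketch of the TPM-bound LUKS part of such a setup, using systemd-cryptenroll; the device path and PCR choice are placeholders/assumptions, not the actual configuration described here:

          #!/usr/bin/env python3
          # Hedged sketch: bind a LUKS2 volume's key to the TPM so it only
          # unseals on this machine with the expected Secure Boot state.
          # Requires systemd-cryptenroll (systemd >= 248); the device path
          # is a placeholder.
          import subprocess

          subprocess.run(
              ["systemd-cryptenroll",
               "--tpm2-device=auto",
               "--tpm2-pcrs=7",       # PCR 7 = Secure Boot policy/keys
               "/dev/nvme0n1p2"],     # LUKS2-encrypted partition (assumed)
              check=True,
          )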

      • worthless-trash 11 days ago

        Just to be clear, this is signing for validation, not encryption of the contents.

        I wrote a guide on this topic of ensuring platform integrity at the system level (see https://wmealing.github.io/tpm-pcr07.html); it's not too hard.

        • DEADMINCE 11 days ago

          > Just to be clear, this is signing for validation

          Yup. I was just referencing wanting to obtain keys from the TPM to decrypt a partition. This is useful for me to have the following setup:

          - Laptop turned on, no keys pressed, boots into super locked down guest OS.

          - Laptop turned on, certain key pressed within 2 seconds, boot into 'hidden' OS.

          - In both cases, HDD is encrypted, decrypted automatically via retrieving keys stored in the TPM. This means the harddrive cannot be read outside of that particular laptop, unless keys are extracted from the TPM.

          - Bootloader signed with own key, any and all existing keys wiped, so laptop cannot be booted with any external OS.

          How would I recreate that setup with nmbl?

          That's a good link by the way, thanks - saved.

AtlasBarfed 12 days ago

Another red hat "improvement" that causes another decade plus of churn and documentation and support chaos?

msla 12 days ago

I can't wait until the big distros decide multi-booting is a feature "nobody uses" that "never worked" and therefore isn't going to be supported because "everyone can use VMs" and containers and whatever other solutions that do not, in point of fact, solve the problem.

https://en.wikipedia.org/wiki/Multi-booting

  • account42 11 days ago

    It does make sense for most distros to not care about that TBH. Advanced users can always put a boot menu in front of whatever their distros provide if the EFI-provided menu isn't sufficient.

andrewmcwatters 12 days ago

The Linux kernel is pretty easy to compile[1], but the first-party documentation for getting Linux to boot is total garbage. It's embarrassingly bad.

You end up reading third-party articles to figure out what the modern approach is to building your own Linux-based operating system, but even then, it's effectively undocumented how you're supposed to get out of a RAM fs from an ISO boot. You're on your own.

I completely reject the premise that Linux From Scratch is the way to do things, as walking through those steps, it's clear there are completely arbitrary steps thrown in, and as a result, it's basically its own distribution.

What I'd like to see is official documentation for:

     + Building Linux (exists)
     - Creating an initramfs image (conflicting resources; see the sketch below)
     - Making a bootable image (conflicting resources)
     - Installing the bootloader and Linux to a target disk (no resources available)
You can do the first three easily if you know exactly the right magic incantations, and use GRUB, but then you're once again on your own.
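
For the initramfs item, for instance, here is a hedged sketch of the commonly cited minimal recipe; the staging directory and names are assumptions for illustration, not from any official documentation:

     #!/usr/bin/env python3
     # Hedged sketch of the "create an initramfs" step only: pack a staging
     # directory (which must already contain an executable /init, e.g. a
     # busybox-based one) into a gzip-compressed newc cpio archive that the
     # kernel can unpack. Directory and output names are placeholders.
     import subprocess

     subprocess.run(
         "cd initramfs-root && find . -print0 "
         "| cpio --null --create --format=newc "
         "| gzip -9 > ../initramfs.img",
         shell=True, check=True,
     )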

Once you've built Linux, the experience is you get to hold it in your hand and go "this is worthless."

The Linux documentation flips between recommending initramfs and not, and also pointing you to documentation that is so old it's completely irrelevant and should be removed from kernel.org.

I am never surprised the Linux desktop experience has been bad for decades, because no one cares about creating a decent installer process to begin with. You fail right out of the gate, and it makes the entire Linux bazaar experience look like amateur hack hour.

[1]: https://github.com/andrewmcwatters/linux-workflow/blob/main/...

  • voltagex_ 12 days ago

    I'll bite.

    Official documentation from who? For which audience? For which use case?

    Making a bootable image for what kind of system? My Ryzen PC needs a very different image to my aarch64 router.

    Where have you not seen info on "installing a bootloader to a target disk"? This is what every distro installer does; it can range from putting a kernel in an EFI partition and setting a variable, to building a U-Boot image and setting variables in NVRAM.

    Lastly, what do you class as a "decent installer process"? Things have moved on from Slackware's installer. You've got everything from the Debian installer (which hasn't changed much) to Anaconda (let's run the install UI in a browser) to Ubuntu Server (everything is a container!) and many things in between.

  • rcxdude 12 days ago

    What is your actual need here? You talk about confusing documentation for a low-level process which basically no user is expected to go through if they aren't exploring the foundations of a linux system, or a developer working on their own distribution, and then complain that this is a lack of "a decent installer process".

    • andrewmcwatters 12 days ago

      It’s in the post, can’t you read? Also, something being low-level has nothing to do with documentation.

bastien2 12 days ago

Except that doesn't work in the real world, where encrypted and authenticated boot disks are increasingly common.

So you'll need a significant amount of code that isn't the permanently-resident kernel that has enough device support to access keys and decrypt and authenticate what holds the kernel that will launch the OS.

IOW, you'll just have to reinvent a bootloader anyway.

Or you can address the problems with GRUB, extend it to do what you need, and avoid doing the traditional linux folly of Yet Another Unnecessary Reinvention.

Or was systemd vendor lock-in not enough for your shareholders?

  • mjg59 12 days ago

    The EFI system partition is, by definition, either not encrypted or is unlocked by the firmware - your bootloader wouldn't work otherwise. In this setup, you just stick the UKI on the EFI system partition, and unlocking the rest of the drive is performed in the initramfs.

  • rcxdude 12 days ago

    This code already exists in UEFI in the form of secure boot. The 'bootloader' (more accurately 'boot menu' IMO) kernel and its initramfs would be authenticated and unlocked by the system firmware, and then authenticate and unlock the rootfs and (optionally) different kernel for that system. It's basically going "hey, GRUB is more or less re-inventing the linux kernel, why don't we just write a simple userland for linux that does the same job but with way less code instead?"

    • BobbyTables2 12 days ago

      Actually I don’t think UEFI firmware validates the initramfs — that is loaded by the kernel’s efi stub.

      One can make a UKI image which glues the two together in a single file along with a tiny bit of code for booting it.

  • ec109685 12 days ago

    Isn’t their argument that much of this code already exists in Linux?