Comment #59 on bug 1175626
(In reply to Martin Wilck from comment #57)
> (In reply to John Shaw from comment #43)
> 
> > This seemed to be working on my machine after the last round of grub2
> > updates went through, then this morning a new nVidia driver update came
> > (along with about 20 other updates). On reboot with SecureBoot enabled, the
> > machine hung after post with a black screen. I rebooted with SecureBoot
> > disabled (ie, selected OtherOS in the ASUS bios), and it brought up the MOK
> > Manager. However, unlike before the options are now different. There is no
> > "enrole MOK", just: Continue, Reset MOK, Delete MOK, and enrole key and
> > enrole hash from disk.
> 
> I think I understand this now, at least partially. The explanation is
> similar to what I wrote in comment 48. Can you confirm that the *version* of
> the Nvidia driver stayed the same, with only the *release* differing? Because
> the driver package was using only the *version* (and not the release) in the
> certificate file name, the package update would create both an import and a
> deletion request for the same (new!) key at the same time, rather than a
> delete request for the old and an import request for the new one. Not sure
> how MokManager would deal with that, but your comment shows that it offered
> the key only for deletion. The latest Nvidia package fixes this by adding
> the release to the certificate file name.
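> 
> A minimal sketch of how such a collision can come about in packaging
> (hypothetical scriptlet commands, paths and file names, not the actual
> Nvidia spec file):
> 
> > # Cert file named by *version* only, so e.g. releases 450.66-1 and
> > # 450.66-2 both resolve to the same file name.
> > CERT=/etc/uefi/certs/nvidia-450.66.crt   # path assumed for illustration
> > mokutil --import "$CERT"    # %post of the new release queues an import
> > mokutil --delete "$CERT"    # %postun of the old release queues a deletion
> > # Result: import and deletion requests pending for the same key at once.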
> 
> If it's only that, you should be able to recover from the problem by booting
> into a non-graphical environment, uninstalling the nvidia-gfx0$X package,
> and installing it again.
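> 
> Roughly, assuming zypper and the package name placeholder from above:
> 
> > zypper rm nvidia-gfx0$X    # scriptlets should queue a MOK deletion
> > zypper in nvidia-gfx0$X    # the reinstall queues a fresh import request
> > mokutil --list-new         # verify a key is pending enrollment for reboot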
> 
> I don't understand why any of this would cause your GRUB menu not to be
> shown. Normally these keys wouldn't be used until user space tries to load
> the nvidia driver. By that time, you should be past grub and past lots of
> boot messages from the kernel. The only explanation I have for that is that
> MokManager might crash while trying to deal with the "contradictory" MOK
> requests.
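> 
> You can check from the running system what MokManager will be asked to do
> on the next boot:
> 
> > mokutil --list-new       # keys queued for enrollment
> > mokutil --list-delete    # keys queued for deletion
> > # the same key showing up in both lists would confirm the conflict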
> 
> (In reply to John Shaw from comment #37)
> > (In reply to Martin Wilck from comment #36)
> > > The question is if the BIOS fails to verify the shim, or shim fails to
> > > verify any of the follow-up binaries. Can you check this e.g. with the UEFI
> > > shell?
> > 
> > I don't know how to check this in the UEFI shell. Can you explain?
> 
> If you don't already have a UEFI shell boot entry, grab one. See
> e.g.
> 
> > https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface#Obtaining_UEFI_Shell
> 
> Next you need to copy this to your EFI boot partition, and find a way to
> make your BIOS boot it. Either the BIOS has a way to boot a specified EFI
> binary directly, or you could use efibootmgr to craft a suitable entry for
> it.
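> 
> For the efibootmgr route, a sketch (disk, partition and path are
> assumptions, adjust them to your layout and to where you copied the shell):
> 
> > cp Shell.efi /boot/efi/EFI/tools/shellx64.efi
> > efibootmgr --create --disk /dev/sda --part 1 \
> >   --label "UEFI Shell" --loader '\EFI\tools\shellx64.efi'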
> 
> Once in the shell, you can move around and run commands a bit like on good
> old DOS. You may have to enter a "drive" first, which would look like
> "fs0:". The "map" command can be used to see drives that the shell has
> recognized. The test would look roughly like this:
> 
> > fs0:
> > cd \EFI\BOOT
> > bootx64.efi # or whatever else you want to boot
> 
> The advantage is that you would see some error messages in this environment,
> e.g. if the binary's signature can't be verified by the BIOS.

Unfortunately I can't really confirm what the exact version was for the nVidia
update that came through. I tried hunting throughout the saved snapshots, but
only succeeded in hanging the YaST snapshot tool while looking for differences
between saved pairs. This is the first time that has happened; it may be that
there were a LOT of differences in the pair I was examining. The current
version of the nVidia driver is 450.66, but I doubt that helps much.
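
For the record, snapper can compare a snapshot pair from the command line,
which might sidestep the YaST hang (the snapshot numbers below are
placeholders):

snapper list                    # list snapshots with their numbers
snapper status 42..43           # files changed between a pre/post pair
snapper status 42..43 | grep -i nvidia   # narrow this down to the driver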

I finally found a compiled EFI shell (from the Arch Linux repos) and copied it
to my UEFI partition. I should be able to get into it from the BIOS. I can try
to boot the openSUSE shim.efi file and see what happens.
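
From the shell, the test would look roughly like this (assuming openSUSE's
usual path on the EFI partition):

fs0:
cd \EFI\opensuse
shim.efi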

