[Bug 961853] New: virt-install doesn't work with option -boot uefi
http://bugzilla.suse.com/show_bug.cgi?id=961853

Bug ID: 961853
Summary: virt-install doesn't work with option -boot uefi
Classification: openSUSE
Product: openSUSE Tumbleweed
Version: 2015*
Hardware: Other
OS: Other
Status: NEW
Severity: Normal
Priority: P5 - None
Component: Virtualization:Tools
Assignee: virt-bugs@suse.de
Reporter: mchang@suse.com
QA Contact: qa-bugs@suse.de
Found By: ---
Blocker: ---

It fails with:

ERROR    Error: --boot uefi: Did not find any UEFI binary path for arch 'x86_64'

The man page says:

  --boot uefi
      Configure the VM to boot from UEFI. In order for virt-install to know
      the correct UEFI parameters, libvirt needs to be advertising known UEFI
      binaries via domcapabilities XML, so this will likely only work if
      using properly configured distro packages

I think that option should work with good distro defaults for ordinary users,
leaving "--boot loader=" to serve advanced ones. Thanks.

--
You are receiving this mail because:
You are on the CC list for the bug.
Charles Arnold
http://bugzilla.suse.com/show_bug.cgi?id=961853#c2
Charles Arnold
http://bugzilla.suse.com/show_bug.cgi?id=961853#c4
James Fehlig
Two changes are needed for this to work:

- Either rename the files in the OVMF package (libvirt hardcodes a
  .*OVMF_CODE\.fd regexp) or change the regexp
I'm not sure what the best option is here. It would be nice if these filenames
were standard, but in lieu of that a hack such as the following to virt-manager
(not libvirt) seems to work:

Index: virt-manager-1.3.2/virtinst/domcapabilities.py
===================================================================
--- virt-manager-1.3.2.orig/virtinst/domcapabilities.py
+++ virt-manager-1.3.2/virtinst/domcapabilities.py
@@ -101,6 +101,8 @@ class DomainCapabilities(XMLBuilder):
         "x86_64": [
             ".*OVMF_CODE\.fd",  # RHEL
             ".*ovmf-x64/OVMF.*\.fd",  # gerd's firmware repo
+            ".*ovmf-x86_64-.*",  # SUSE
+        ],
         "aarch64": [
             ".*AAVMF_CODE\.fd",  # RHEL
- Add path to ovmf to /etc/libvirt/qemu.conf (nvram section)
The defaults could be a bit better to avoid editing this file. I've made a
change to the libvirt spec file to specify the locations:

https://build.opensuse.org/package/rdiff/home:jfehlig:branches:Virtualization/libvirt?linkrev=base&rev=2

Note I used ovmf-x86_64-opensuse-{code,vars}.bin. I'm not sure if all the
ovmf-x86_64-*.bin files should be included in --with-loader-nvram=
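The path-matching approach the patch above extends can be sketched in plain Python. This is a hypothetical standalone sketch, not virt-manager's actual code: the dict name, helper function, and example paths are illustrative only.

```python
import re

# Sketch of an arch -> firmware-path pattern table, including the proposed
# SUSE addition. The names here are illustrative, not virt-manager's own.
UEFI_PATTERNS = {
    "x86_64": [
        r".*OVMF_CODE\.fd",        # RHEL
        r".*ovmf-x64/OVMF.*\.fd",  # gerd's firmware repo
        r".*ovmf-x86_64-.*",       # SUSE (the proposed addition)
    ],
    "aarch64": [
        r".*AAVMF_CODE\.fd",       # RHEL
    ],
}

def find_uefi_firmware(arch, advertised_paths):
    """Return the first firmware path advertised via domcapabilities that
    matches a known pattern for this arch, or None if nothing matches."""
    for pattern in UEFI_PATTERNS.get(arch, []):
        for path in advertised_paths:
            if re.match(pattern, path):
                return path
    return None

# Without the SUSE pattern this path would not be recognized, which is
# exactly the "--boot uefi" failure reported above:
print(find_uefi_firmware("x86_64", ["/usr/share/qemu/ovmf-x86_64-ms-code.bin"]))
# -> /usr/share/qemu/ovmf-x86_64-ms-code.bin
```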
Andreas Taschner
http://bugzilla.suse.com/show_bug.cgi?id=961853#c5
Michael Chang
(In reply to Fabian Vogt from comment #1)
Two changes are needed for this to work:

- Either rename the files in the OVMF package (libvirt hardcodes a
  .*OVMF_CODE\.fd regexp) or change the regexp
I'm not sure what the best option is here. It would be nice if these filenames were standard, but in lieu of that a hack such as the following to virt-manager (not libvirt) seems to work
I think Gary (maintainer of OVMF) should know best how the filename was chosen
and why it stayed that way. Btw, I don't think there is really any standard to
follow; it's up to the packager's will.
Index: virt-manager-1.3.2/virtinst/domcapabilities.py
===================================================================
--- virt-manager-1.3.2.orig/virtinst/domcapabilities.py
+++ virt-manager-1.3.2/virtinst/domcapabilities.py
@@ -101,6 +101,8 @@ class DomainCapabilities(XMLBuilder):
         "x86_64": [
             ".*OVMF_CODE\.fd",  # RHEL
             ".*ovmf-x64/OVMF.*\.fd",  # gerd's firmware repo
+            ".*ovmf-x86_64-.*",  # SUSE
+        ],
         "aarch64": [
             ".*AAVMF_CODE\.fd",  # RHEL
- Add path to ovmf to /etc/libvirt/qemu.conf (nvram section)
The defaults could be a bit better to avoid editing this file. I've made a change to the libvirt spec file to specify the locations
https://build.opensuse.org/package/rdiff/home:jfehlig:branches:Virtualization/libvirt?linkrev=base&rev=2
Note I used ovmf-x86_64-opensuse-{code,vars}.bin. I'm not sure if all the ovmf-x86_64-*.bin files should be included in --with-loader-nvram=
It seems to me sufficient to include only OVMF images with pflash (nvram)
support, but let's also check with Gary. Thanks.
http://bugzilla.suse.com/show_bug.cgi?id=961853#c6
Gary Ching-Pang Lin
(In reply to Fabian Vogt from comment #1)
Two changes are needed for this to work:

- Either rename the files in the OVMF package (libvirt hardcodes a
  .*OVMF_CODE\.fd regexp) or change the regexp
I'm not sure what the best option is here. It would be nice if these filenames were standard, but in lieu of that a hack such as the following to virt-manager (not libvirt) seems to work
Index: virt-manager-1.3.2/virtinst/domcapabilities.py
===================================================================
--- virt-manager-1.3.2.orig/virtinst/domcapabilities.py
+++ virt-manager-1.3.2/virtinst/domcapabilities.py
@@ -101,6 +101,8 @@ class DomainCapabilities(XMLBuilder):
         "x86_64": [
             ".*OVMF_CODE\.fd",  # RHEL
             ".*ovmf-x64/OVMF.*\.fd",  # gerd's firmware repo
+            ".*ovmf-x86_64-.*",  # SUSE
+        ],
         "aarch64": [
             ".*AAVMF_CODE\.fd",  # RHEL
- Add path to ovmf to /etc/libvirt/qemu.conf (nvram section)
The defaults could be a bit better to avoid editing this file. I've made a change to the libvirt spec file to specify the locations
https://build.opensuse.org/package/rdiff/home:jfehlig:branches:Virtualization/libvirt?linkrev=base&rev=2
Note I used ovmf-x86_64-opensuse-{code,vars}.bin. I'm not sure if all the ovmf-x86_64-*.bin files should be included in --with-loader-nvram=
I'm not a user of libvirt, but I can explain the usage of those files. OVMF
needs a place to store the UEFI variables, and there are two ways to store
them.

1. A fake NVRAM on the hard disk partition

If the virtual machine boots with "-bios ovmf.bin", OVMF will store the
variables in a file, NvVars, in the EFI system partition. This is more like a
workaround and is not reliable, so using "-bios" is not recommended.

2. Pflash mode in QEMU

Recent versions of QEMU support pflash mode, which means the virtual machine
can store data directly into the firmware file. To use pflash mode with OVMF,
just add this to the qemu command:

-drive if=pflash,format=raw,file=ovmf.bin

However, this raises another problem. Since the virtual machine has to write
data into the file, the user must have write access to it. Copying the whole
file to the user's home directory could work around the problem, but upgrading
the firmware would then erase everything in the firmware file. To fix this
issue, OVMF upstream separates the code part and the vars part of the firmware
file: ovmf-x86_64-*-code.bin is the code of the firmware, and
ovmf-x86_64-*-vars.bin is the storage. The user just needs a writable copy of
ovmf-x86_64-*-vars.bin, and firmware upgrades are then applied automatically
to the virtual machine without erasing its data when the system upgrades OVMF.
In this case, the qemu command would look like this:

-drive if=pflash,format=raw,readonly,file=/usr/share/qemu/ovmf-x86_64-code.bin \
-drive if=pflash,format=raw,file=ovmf-x86_64-vars.bin

Besides the code/vars split, there are several different flavors of OVMF for
secure boot. ovmf-x86_64.bin is the "pure" OVMF without any keys added.
ovmf-x86_64-{ms,suse,opensuse,opensuse-4096}.bin are almost the same as
ovmf-x86_64.bin except for the preloaded vendor keys.
ovmf-x86_64-ms.bin is closest to shipping machines since it contains the
Microsoft UEFI keys and the blacklist, so "ms" is recommended if you want to
emulate the environment of shipping machines.
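The code/vars scheme Gary describes can be sketched as a small helper. This is a hypothetical illustration, not part of any existing tool; the file names follow the SUSE packaging discussed above.

```python
import os
import shutil

def pflash_args(code_img, vars_template, vars_copy):
    """Build the qemu -drive arguments for OVMF pflash mode.

    The firmware code image is attached read-only (shared by all VMs and
    upgraded together with the OVMF package); the vars image is a per-VM
    writable copy, created from the package's template on first use.
    """
    if not os.path.exists(vars_copy):
        shutil.copyfile(vars_template, vars_copy)
    return [
        "-drive", "if=pflash,format=raw,readonly,file=" + code_img,
        "-drive", "if=pflash,format=raw,file=" + vars_copy,
    ]
```

This mirrors the two -drive lines above; the design point is that upgrading the code image never touches the per-VM vars copy, so variables survive firmware upgrades.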
http://bugzilla.suse.com/show_bug.cgi?id=961853#c7
James Fehlig
-drive if=pflash,format=raw,readonly,file=/usr/share/qemu/ovmf-x86_64-code.bin \
-drive if=pflash,format=raw,file=ovmf-x86_64-vars.bin
This is exactly how libvirt invokes qemu when the domain configuration
contains <loader> and <nvram>. E.g.

-drive file=/usr/share/qemu/ovmf-x86_64-suse-code.bin,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/var/lib/libvirt/qemu/nvram/test-uefi_VARS.fd,if=pflash,format=raw,unit=1
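For reference, a domain XML sketch that would yield roughly that command line (a hedged example: the <loader>/<nvram> element names are libvirt's, the paths are the ones from the command line above, and the machine type is an assumption):

```xml
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x86_64-suse-code.bin</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/test-uefi_VARS.fd</nvram>
</os>
```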
Besides the code/vars, there are several different flavors of ovmf for secure boot.
ovmf-x86_64.bin is the "pure" OVMF without any keys added.
ovmf-x86_64-{ms,suse,opensuse,opensuse-4096}.bin are almost the same as
ovmf-x86_64.bin except for the preloaded vendor keys. ovmf-x86_64-ms.bin is
closest to shipping machines since it contains the Microsoft UEFI keys and
the blacklist, so "ms" is recommended if you want to emulate the environment
of shipping machines.
While a good description of the flavors, it doesn't answer the question of
which flavor is the best default for openSUSE and SLE. Being openSUSE, I
*think* the opensuse flavor is an appropriate default for openSUSE. Antoine,
can you recommend a default flavor for our SLE customers?
http://bugzilla.suse.com/show_bug.cgi?id=961853#c8
James Fehlig
I think Gary (maintainer of OVMF) should know best how the filename was chosen
and why it stayed that way. Btw, I don't think there is really any standard to
follow; it's up to the packager's will.
One downside of that is the need for apps like virt-manager, libguestfs, etc.
to account for the packager's will. See the "Implementing support in
applications" section of Cole's blog post introducing UEFI support in
virt-manager and virt-install:

http://blog.wikichoon.com/2016/01/uefi-support-in-virt-install-and-virt.html

Olaf, looks like we'll need to patch libguestfs to support our ovmf path names.
http://bugzilla.suse.com/show_bug.cgi?id=961853#c9
--- Comment #9 from Gary Ching-Pang Lin
(In reply to Gary Ching-Pang Lin from comment #6)
-drive if=pflash,format=raw,readonly,file=/usr/share/qemu/ovmf-x86_64-code.bin \
-drive if=pflash,format=raw,file=ovmf-x86_64-vars.bin
This is exactly how libvirt invokes qemu when domain configuration contains <loader> and <nvram>. E.g.
-drive file=/usr/share/qemu/ovmf-x86_64-suse-code.bin,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/var/lib/libvirt/qemu/nvram/test-uefi_VARS.fd,if=pflash,format=raw,unit=1
Besides the code/vars, there are several different flavors of ovmf for secure boot.
ovmf-x86_64.bin is the "pure" OVMF without any keys added.
ovmf-x86_64-{ms,suse,opensuse,opensuse-4096}.bin are almost the same as
ovmf-x86_64.bin except for the preloaded vendor keys. ovmf-x86_64-ms.bin is
closest to shipping machines since it contains the Microsoft UEFI keys and
the blacklist, so "ms" is recommended if you want to emulate the environment
of shipping machines.
While a good description of the flavors, it doesn't answer the question of which flavor is the best default for openSUSE and SLE. Being openSUSE, I *think* the opensuse flavor is an appropriate default for openSUSE.
If you expect different distros to be installed, "ms" is actually better,
since most distros that support secure boot carry the MS/UEFI signature. If
you don't care about secure boot, the pure one should work.
Antoine, can you recommend a default flavor for our SLE customers?
http://bugzilla.suse.com/show_bug.cgi?id=961853#c10
Antoine Ginies
Besides the code/vars, there are several different flavors of ovmf for secure boot.
ovmf-x86_64.bin is the "pure" OVMF without any keys added.
ovmf-x86_64-{ms,suse,opensuse,opensuse-4096}.bin are almost the same as
ovmf-x86_64.bin except for the preloaded vendor keys. ovmf-x86_64-ms.bin is
closest to shipping machines since it contains the Microsoft UEFI keys and
the blacklist, so "ms" is recommended if you want to emulate the environment
of shipping machines.
While a good description of the flavors, it doesn't answer the question of which flavor is the best default for openSUSE and SLE. Being openSUSE, I *think* the opensuse flavor is an appropriate default for openSUSE.
If you expect different distros to be installed, "ms" is actually better,
since most distros that support secure boot carry the MS/UEFI signature.
If you don't care about secure boot, the pure one should work.
Antoine, can you recommend a default flavor for our SLE customers?
UEFI and secure boot are a major change in SLE12, so I would be in favor of
using ovmf-x86_64-ms.bin as the default for SLE. Using the others would be
more "restrictive".
http://bugzilla.suse.com/show_bug.cgi?id=961853#c11
--- Comment #11 from James Fehlig
http://bugzilla.suse.com/show_bug.cgi?id=961853#c12
--- Comment #12 from Fabian Vogt
Based on Gary and Antoine's comments, I think the 'ms' flavor should be the
out-of-the-box default for SLE and the 'pure' flavor the default for openSUSE.
Do folks listening on this bug agree?
I agree, as a VM should work like real hardware. However, is there a specific
reason to use different flavors for openSUSE and SLE?
http://bugzilla.suse.com/show_bug.cgi?id=961853#c13
--- Comment #13 from James Fehlig
However, is there a specific reason to use different flavors for openSUSE and SLE?
I suppose not. I was thinking there may be some licensing issue, but I guess
the ms flavors wouldn't be there in the first place if that were the case.
I'll change both openSUSE and SLE to use the ms flavor by default.
http://bugzilla.suse.com/show_bug.cgi?id=961853#c14
James Fehlig
http://bugzilla.suse.com/show_bug.cgi?id=961853#c15
James Fehlig
http://bugzilla.suse.com/show_bug.cgi?id=961853#c17
Gary Ching-Pang Lin
FYI, a summary of the issues related to OVMF:
1. libvirt: Use of upstream default for UEFI firmware path (/usr/share/OVMF/OVMF_CODE.fd)
2. virt-install and libguestfs: also expecting /usr/share/OVMF/OVMF_CODE*
3. ovmf: changes made to some firmware settings are not persisted
I have fixed 1 by configuring libvirt to use the 'ms' flavors from the SUSE
ovmf package as the default firmware. domcapabilities now advertises it:
<loader supported='yes'>
  <value>/usr/share/qemu/ovmf-x86_64-ms-code.bin</value>
  <enum name='type'>
    <value>rom</value>
    <value>pflash</value>
  </enum>
  <enum name='readonly'>
    <value>yes</value>
    <value>no</value>
  </enum>
</loader>
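A client can pull the advertised firmware path out of that XML with a few lines. This sketch parses just the fragment above; a real client would fetch the full XML (e.g. via virsh domcapabilities) and navigate to the loader element first, so the selector paths below are an assumption about that fuller document.

```python
import xml.etree.ElementTree as ET

# The <loader> fragment as libvirt advertises it (copied from above).
fragment = """
<loader supported='yes'>
  <value>/usr/share/qemu/ovmf-x86_64-ms-code.bin</value>
  <enum name='type'>
    <value>rom</value>
    <value>pflash</value>
  </enum>
  <enum name='readonly'>
    <value>yes</value>
    <value>no</value>
  </enum>
</loader>
"""

loader = ET.fromstring(fragment)
# Direct <value> children are the advertised firmware paths;
# <enum name='type'> lists the supported loader types.
paths = [v.text for v in loader.findall("value")]
types = [v.text for v in loader.findall("enum[@name='type']/value")]
print(paths)  # ['/usr/share/qemu/ovmf-x86_64-ms-code.bin']
print(types)  # ['rom', 'pflash']
```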
The virt-manager patch in comment #4 fixes the virt-install half of 2. I'll submit a formal patch to Virtualization/virt-manager.
I noticed 3 while testing the fixes for 1 and 2. Some settings are persisted
(e.g. Device Manager -> OVMF Platform Configuration -> Preferred Resolution),
while others are not (e.g. Boot Maintenance Manager -> Set Time Out Value).
It seems like a problem in ovmf. When saving changes in the firmware UI, the
VARS file access time is updated, so something was written to the file. But
in the case of "Time Out Value", simply going to the top-level menu and then
back to Set Time Out Value is enough to lose the setting. Power-cycling the
VM also resets the "Time Out Value" to 0. Gary, does that sound like an ovmf
issue to you?
I checked OVMF, and the timeout actually relies on fwcfg from qemu, i.e.
-boot menu=off/on, not on any UEFI variable. The Boot Maintenance Manager
menu is a general menu for different platforms, and OVMF happens to ignore
the option.
http://bugzilla.suse.com/show_bug.cgi?id=961853#c18
--- Comment #18 from Andreas Taschner
(In reply to James Fehlig from comment #14)
I noticed 3 while testing the fixes for 1 and 2. Some settings are persisted
(e.g. Device Manager -> OVMF Platform Configuration -> Preferred Resolution),
while others are not (e.g. Boot Maintenance Manager -> Set Time Out Value).
It seems like a problem in ovmf. When saving changes in the firmware UI, the
VARS file access time is updated, so something was written to the file. But
in the case of "Time Out Value", simply going to the top-level menu and then
back to Set Time Out Value is enough to lose the setting. Power-cycling the
VM also resets the "Time Out Value" to 0. Gary, does that sound like an ovmf
issue to you?
I appreciate the confirmation of my findings :-)
I checked OVMF, and the timeout actually relies on fwcfg from qemu, i.e.
-boot menu=off/on, not on any UEFI variable. The Boot Maintenance Manager
menu is a general menu for different platforms, and OVMF happens to ignore
the option.
Can you help me understand this, please? Is it so that the Time Out Value in
the Boot Maintenance Manager menu is of no use? We will (at least in case
OVMF gets supported on SLES) inevitably run into support inquiries when
customers try changing the value and observe the same as we did. In my
opinion an option should either work or not be there. At the very least the
"context-hint" text on the right side of the screen should state that the
option has no effect in virtual setups (IIUC)... which then leads to the
question of why it is there at all. Please forgive me if I am missing (parts
of) the concept, being a complete novice on UEFI.
(In reply to Gary Ching-Pang Lin from comment #17)
(In reply to James Fehlig from comment #14)
I noticed 3 while testing the fixes for 1 and 2. Some settings are persisted
(e.g. Device Manager -> OVMF Platform Configuration -> Preferred Resolution),
while others are not (e.g. Boot Maintenance Manager -> Set Time Out Value).
It seems like a problem in ovmf. When saving changes in the firmware UI, the
VARS file access time is updated, so something was written to the file. But
in the case of "Time Out Value", simply going to the top-level menu and then
back to Set Time Out Value is enough to lose the setting. Power-cycling the
VM also resets the "Time Out Value" to 0. Gary, does that sound like an ovmf
issue to you?
I appreciate the confirmation of my findings :-)
I checked OVMF, and the timeout actually relies on fwcfg from qemu, i.e.
-boot menu=off/on, not on any UEFI variable. The Boot Maintenance Manager
menu is a general menu for different platforms, and OVMF happens to ignore
the option.
Can you help me understand this, please? Is it so that the Time Out Value in
the Boot Maintenance Manager menu is of no use? We will (at least in case
OVMF gets supported on SLES) inevitably run into support inquiries when
customers try changing the value and observe the same as we did. In my
opinion an option should either work or not be there. At the very least the
"context-hint" text on the right side of the screen should state that the
option has no effect in virtual setups (IIUC)... which then leads to the
question of why it is there at all. Please forgive me if I am missing (parts
of) the concept, being a complete novice on UEFI.

OVMF is a subproject of edk2(*), which is a reference implementation of UEFI.
There are a lot of "Pkg"s in the project, and most of the Pkgs are
http://bugzilla.suse.com/show_bug.cgi?id=961853#c19
--- Comment #19 from Gary Ching-Pang Lin
http://bugzilla.suse.com/show_bug.cgi?id=961853#c20
--- Comment #20 from Andreas Taschner
http://bugzilla.suse.com/show_bug.cgi?id=961853#c21
James Fehlig
http://bugzilla.suse.com/show_bug.cgi?id=961853#c22
James Fehlig
participants (1)
-
bugzilla_noreply@novell.com