[Bug 1183767] New: libvirt segfault in qemuDomainDefAddDefaultDevices when invoked via vagrant
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767

            Bug ID: 1183767
           Summary: libvirt segfault in qemuDomainDefAddDefaultDevices
                    when invoked via vagrant
    Classification: openSUSE
           Product: openSUSE Distribution
           Version: Leap 15.2
          Hardware: Other
                OS: Other
            Status: NEW
          Severity: Normal
          Priority: P5 - None
         Component: Virtualization:Other
          Assignee: virt-bugs@suse.de
          Reporter: dcermak@suse.com
        QA Contact: qa-bugs@suse.de
          Found By: ---
           Blocker: ---

Created attachment 847447
  --> http://bugzilla.opensuse.org/attachment.cgi?id=847447&action=edit
libvirt coredump recorded by systemd-coredump

libvirt recently started segfaulting on my Leap 15.2 CI machine in OpenStack
when a new VM is started via vagrant and vagrant-libvirt.

How to reproduce:
- vagrant init opensuse/Tumbleweed.x86_64  # the actual vagrant box does not appear to matter
- vagrant up --provider libvirt
- check the system journal for a segfault

Installed components:

$ rpm -qa | grep libvirt
libvirt-libs-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-network-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-scsi-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-6.0.0-lp152.9.6.2.x86_64
ruby2.5-rubygem-ruby-libvirt-0.7.1-lp152.11.1.x86_64
libvirt-daemon-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-gluster-6.0.0-lp152.9.6.2.x86_64
libvirt-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-interface-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-core-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-disk-6.0.0-lp152.9.6.2.x86_64
ruby2.5-rubygem-fog-libvirt-0.8.0-lp152.9.1.x86_64
libvirt-daemon-driver-nodedev-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-secret-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-logical-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-config-nwfilter-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-qemu-debuginfo-6.0.0-lp152.9.6.2.x86_64
libvirt-libs-debuginfo-6.0.0-lp152.9.6.2.x86_64
libvirt-client-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-libxl-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-debuginfo-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-nwfilter-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-iscsi-6.0.0-lp152.9.6.2.x86_64
vagrant-libvirt-0.2.1-lp152.32.3.x86_64
libvirt-daemon-driver-qemu-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-config-network-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-rbd-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-lxc-6.0.0-lp152.9.6.2.x86_64
libvirt-daemon-driver-storage-mpath-6.0.0-lp152.9.6.2.x86_64

The journal contains the following stack trace(s):

Mar 19 09:16:30 dcermak-vagrant-tester-leap kernel: libvirtd[2331]: segfault at 0 ip 00007fcf90a8be2c sp 00007fcfc67fb810 error 4 in libvirt_driver_qemu.so[7fcf909f3000+19f000]
Mar 19 09:16:30 dcermak-vagrant-tester-leap kernel: Code: 00 f3 a6 0f 97 c0 1c 00 84 c0 0f 94 45 90 45 31 f6 e9 95 fe ff ff 49 8b b4 24 18 02 00 00 48 8d 3d 0b 6b 0a 00 b9 06 00 00 00 <f3> a6 0f 97 c0 1c 00 84 c0 0f 84 44 ff ff ff 4c 89 e7 e8 2d fc fa
Mar 19 09:16:30 dcermak-vagrant-tester-leap systemd[1]: Started Process Core Dump (PID 5268/UID 0).
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: libvirtd.service: Main process exited, code=killed, status=11/SEGV
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: libvirtd.service: Unit entered failed state.
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: libvirtd.service: Failed with result 'signal'.
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: libvirtd.service: Service RestartSec=100ms expired, scheduling restart.
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: Stopped Virtualization daemon.
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: Starting Virtualization daemon...
Mar 19 09:16:31 dcermak-vagrant-tester-leap systemd[1]: Started Virtualization daemon.
Mar 19 09:04:02 dcermak-vagrant-tester-leap systemd-coredump[1912]: Process 1767 (libvirtd) of user 0 dumped core.
Stack trace of thread 1777:
#0  0x00007f10dc546e2c qemuDomainDefAddDefaultDevices (libvirt_driver_qemu.so)
#1  0x00007f111157fc99 virDomainDefPostParse (libvirt.so.0)
#2  0x00007f111159bf08 virDomainDefParseNode (libvirt.so.0)
#3  0x00007f111159c013 virDomainDefParse (libvirt.so.0)
#4  0x00007f10dc5bdfbe qemuDomainDefineXMLFlags (libvirt_driver_qemu.so)
#5  0x00007f11116eabbc virDomainDefineXMLFlags (libvirt.so.0)
#6  0x000055572f7472ea remoteDispatchDomainDefineXMLFlags (libvirtd)
#7  0x00007f111161b9ed virNetServerProgramDispatchCall (libvirt.so.0)
#8  0x00007f1111620a98 virNetServerProcessMsg (libvirt.so.0)
#9  0x00007f111153c751 virThreadPoolWorker (libvirt.so.0)
#10 0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#11 0x00007f11108144f9 start_thread (libpthread.so.0)
#12 0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1770:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1786:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1773:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1775:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c798 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1783:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1774:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c798 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1785:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1793:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f10deec3894 n/a (libvirt_driver_nodedev.so)
#3  0x00007f111153bb82 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1778:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c798 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1772:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1771:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1769:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1776:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c798 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1767:
#0  0x00007f11105425fb __poll (libc.so.6)
#1  0x00007f11114de24a poll (libvirt.so.0)
#2  0x00007f11114dce01 virEventRunDefaultImpl (libvirt.so.0)
#3  0x00007f11116202cd virNetDaemonRun (libvirt.so.0)
#4  0x000055572f73b862 main (libvirtd)
#5  0x00007f111047534a __libc_start_main (libc.so.6)
#6  0x000055572f73bb3a _start (libvirtd)

Stack trace of thread 1787:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

Stack trace of thread 1784:
#0  0x00007f111081a87d pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1  0x00007f111153bd96 virCondWait (libvirt.so.0)
#2  0x00007f111153c863 virThreadPoolWorker (libvirt.so.0)
#3  0x00007f111153bb58 virThreadHelper (libvirt.so.0)
#4  0x00007f11108144f9 start_thread (libpthread.so.0)
#5  0x00007f111054cecf __clone (libc.so.6)

--
You are receiving this mail because:
You are on the CC list for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767

Dan Čermák <dcermak@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |dcermak@suse.com
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767#c1

James Fehlig <jfehlig@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(dcermak@suse.com)

--- Comment #1 from James Fehlig <jfehlig@suse.com> ---
From the core file:

#0  0x00007f10dc546e2c in qemuDomainDefAddDefaultDevices
    (qemuCaps=0x7f10d800d2e0, def=0x7f10d8009730)
    at ../../src/qemu/qemu_domain.c:4210
4210        if (STREQ(def->os.machine, "isapc")) {
(gdb) p def->os.machine
$1 = 0x0

The VM config does not explicitly specify a machine type, so one should be
provided by virQEMUCapsGetPreferredMachine(), called from
qemuDomainDefPostParse(). virQEMUCapsGetPreferredMachine() can return NULL,
and commit 67b973b510 fixes the crash by failing VM creation if
os.machine == NULL:

https://gitlab.com/libvirt/libvirt/-/commit/67b973b510ad68da06e8eb744d97b3e1...

Is this a regression for you? Were you able to previously create VMs with a
similar configuration? The config (pasted below for easy viewing) worked fine
for me on SLES15 SP2 and Leap 15.2.

What qemu packages are installed?

Do you have any capabilities files in /var/cache/libvirt/qemu/capabilities/?
If so, do they list machine types for kvm? E.g.
<machine type='kvm' name='pc-i440fx-4.2' .../>?
VM config extracted from core:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>root_default</name>
  <title></title>
  <description></description>
  <uuid></uuid>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <cpu mode='host-model'>
    <model fallback='allow'></model>
  </cpu>
  <os>
    <type>hvm</type>
    <kernel></kernel>
    <initrd></initrd>
    <cmdline></cmdline>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='default'/>
      <source file='/var/lib/libvirt/images/root_default.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' keymap='en-us'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
    </video>
  </devices>
</domain>
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767#c2

--- Comment #2 from James Fehlig <jfehlig@suse.com> ---
I've added commit 67b973b510 to the SLE15 SP2 libvirt package and queued it
for a future maintenance release.
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767#c3

Dan Čermák <dcermak@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(dcermak@suse.com) |

--- Comment #3 from Dan Čermák <dcermak@suse.com> ---
(In reply to James Fehlig from comment #1)
> Is this a regression for you? Were you able to previously create VMs with a
> similar configuration?
Yes, this is definitely a regression. I have been using vagrant to spin up
libvirt-based VMs on that machine for ~2 years now, and it only started to
fail recently.
> The config (pasted below for easy viewing) worked fine for me on SLES15 SP2
> and Leap 15.2. What qemu packages are installed?
# rpm -qa | grep qemu
qemu-ipxe-1.0.0+-lp152.9.9.2.noarch
qemu-x86-4.2.1-lp152.9.9.2.x86_64
qemu-seabios-1.12.1+-lp152.9.9.2.noarch
qemu-4.2.1-lp152.9.9.2.x86_64
libvirt-daemon-driver-qemu-debuginfo-6.0.0-lp152.9.6.2.x86_64
qemu-microvm-4.2.1-lp152.9.9.2.noarch
qemu-guest-agent-4.2.1-lp152.9.9.2.x86_64
qemu-ovmf-x86_64-201911-lp152.6.8.1.noarch
libvirt-daemon-driver-qemu-6.0.0-lp152.9.6.2.x86_64
qemu-vgabios-1.12.1+-lp152.9.9.2.noarch
qemu-tools-4.2.1-lp152.9.9.2.x86_64
qemu-kvm-4.2.1-lp152.9.9.2.x86_64
qemu-sgabios-8-lp152.9.9.2.noarch
> Do you have any capabilities files in /var/cache/libvirt/qemu/capabilities/?
Yes, there is a single file:

# ll /var/cache/libvirt/qemu/capabilities/
total 72
-rw------- 1 root root 71998 Mar 19 08:59 926803a9278e445ec919c2b6cbd8c1c449c75b26dcb1686b774314180376c725.xml
> If so, do they list machine types for kvm? E.g.
> <machine type='kvm' name='pc-i440fx-4.2' .../>?
No, unfortunately not (I have attached the file).
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767#c4

--- Comment #4 from Dan Čermák <dcermak@suse.com> ---
Created attachment 847896
  --> http://bugzilla.opensuse.org/attachment.cgi?id=847896&action=edit
taken from /var/cache/libvirt/qemu/capabilities/
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767
http://bugzilla.opensuse.org/show_bug.cgi?id=1183767#c5

James Fehlig <jfehlig@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(dcermak@suse.com)

--- Comment #5 from James Fehlig <jfehlig@suse.com> ---
(In reply to Dan Čermák from comment #3)
> Yes, this is definitely a regression. I have been using vagrant to spin up
> libvirt based VMs on that machine for ~2 years now and it just started to
> fail recently.
Something must have changed on the host to make kvm unavailable. Your
capabilities file only has cpus and machines of type tcg. If kvm was
available you would see similar lines with e.g.

<cpu type='kvm' .../>
...
<machine type='kvm' .../>

Trying to start a VM with <domain type='kvm' .../> is doomed to fail when kvm
is not available on the host. Is the kvm module loaded? Does the user:group
qemu is running under (qemu:qemu by default) have access to /dev/kvm?