[Bug 1177763] New: Cannot install an existing virtual machine: two error messages with Virtual Machine Manager
https://bugzilla.suse.com/show_bug.cgi?id=1177763

            Bug ID: 1177763
           Summary: Cannot install an existing virtual machine: two error
                    messages with Virtual Machine Manager
    Classification: openSUSE
           Product: openSUSE Tumbleweed
           Version: Current
          Hardware: x86-64
                OS: Other
            Status: NEW
          Severity: Critical
          Priority: P5 - None
         Component: Virtualization:Tools
          Assignee: virt-bugs@suse.de
          Reporter: peter.posts@gmx.net
        QA Contact: qa-bugs@suse.de
          Found By: ---
           Blocker: ---

20201010, Tumbleweed, no additional repositories, fresh install, fresh dup.

Procedure (YaST, translated from German):
  Install Hypervisor and Tools
  KVM server - KVM tools are installed
  "The KVM components are installed. KVM guest systems can now be installed
  on the host."

Start: Virtual Machine Manager

Error messages:
  The libvirtd service does not appear to be installed. Install and run the
  libvirtd service to manage virtualization on this host.

Test: systemctl status libvirtd

Result:
lux-tw:~ # systemctl status libvirtd
● libvirtd.service - Virtualization daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
     Active: inactive (dead) since Thu 2020-10-15 16:55:11 CEST; 4min 7s ago
TriggeredBy: ● libvirtd.socket
             ● libvirtd-ro.socket
             ● libvirtd-admin.socket
       Docs: man:libvirtd(8)
             https://libvirt.org
    Process: 1258 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 1258 (code=exited, status=0/SUCCESS)

Okt 15 16:53:11 lux-tw systemd[1]: Starting Virtualization daemon...
Okt 15 16:53:11 lux-tw libvirtd[1258]: libvirt version: 6.8.0
Okt 15 16:53:11 lux-tw libvirtd[1258]: hostname: lux-tw
Okt 15 16:53:11 lux-tw libvirtd[1258]: Failed to initialize libnetcontrol. Management of interface devices is d>
Okt 15 16:53:11 lux-tw systemd[1]: Started Virtualization daemon.
Okt 15 16:55:11 lux-tw systemd[1]: libvirtd.service: Succeeded.

Later I compared the installed packages to an existing, working installation
on Leap 15.2. I added the missing packages, but the error messages remained.
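For anyone reproducing the setup without the YaST UI, the "Install Hypervisor
and Tools" step corresponds roughly to installing the KVM patterns from the
command line. A minimal sketch, assuming the standard openSUSE pattern names:

  # Roughly what YaST's "Install Hypervisor and Tools" does for KVM:
  sudo zypper install -t pattern kvm_server kvm_tools

  # Afterwards the daemon and its activation sockets can be inspected with:
  systemctl status libvirtd libvirtd.socket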
https://bugzilla.suse.com/show_bug.cgi?id=1177763

Peter McDonough <peter.posts@gmx.net> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |peter.posts@gmx.net
https://bugzilla.suse.com/show_bug.cgi?id=1177763

Charles Arnold <carnold@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |carnold@suse.com
           Assignee|virt-bugs@suse.de           |jfehlig@suse.com
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c1

James Fehlig <jfehlig@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(peter.posts@gmx.net)

--- Comment #1 from James Fehlig <jfehlig@suse.com> ---
(In reply to Peter McDonough from comment #0)
> lux-tw:~ # systemctl status libvirtd
> ● libvirtd.service - Virtualization daemon
>      Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
So far so good, libvirtd is enabled and started after running "Install hypervisor and tools".
>      Active: inactive (dead) since Thu 2020-10-15 16:55:11 CEST; 4min 7s ago
The service is currently inactive due to the '--timeout 120' option passed to libvirtd. As seen below, the service started at 16:53:11 and exited cleanly at 16:55:11, two minutes later, after a period of no activity.
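On SUSE packages the idle timeout is normally supplied through the
LIBVIRTD_ARGS variable referenced in the ExecStart line above. A sketch of
where to check, assuming the usual sysconfig layout (the exact value may
differ per release):

  # The unit runs: ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS
  grep LIBVIRTD_ARGS /etc/sysconfig/libvirtd
  # Expected to show something along the lines of:
  # LIBVIRTD_ARGS="--timeout 120"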
> TriggeredBy: ● libvirtd.socket
>              ● libvirtd-ro.socket
But restarting the service should be triggered by a connection to libvirtd's unix sockets.
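The socket-activation behavior can be observed directly; a hypothetical root
session:

  systemctl is-active libvirtd   # reports "inactive" once the idle timeout expired
  virsh list --all               # connecting to the unix socket starts the daemon again
  systemctl is-active libvirtd   # now reports "active"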
>              ● libvirtd-admin.socket
>        Docs: man:libvirtd(8)
>              https://libvirt.org
>     Process: 1258 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
>    Main PID: 1258 (code=exited, status=0/SUCCESS)
> Okt 15 16:53:11 lux-tw systemd[1]: Starting Virtualization daemon...
> Okt 15 16:53:11 lux-tw libvirtd[1258]: libvirt version: 6.8.0
> Okt 15 16:53:11 lux-tw libvirtd[1258]: hostname: lux-tw
> Okt 15 16:53:11 lux-tw libvirtd[1258]: Failed to initialize libnetcontrol. Management of interface devices is d>
> Okt 15 16:53:11 lux-tw systemd[1]: Started Virtualization daemon.
> Okt 15 16:55:11 lux-tw systemd[1]: libvirtd.service: Succeeded.
Does libvirt's virsh command line tool work? E.g. 'virsh capabilities' and 'virsh list'?
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c2

--- Comment #2 from Peter McDonough <peter.posts@gmx.net> ---
Here are both outputs.

virsh list
 Id   Name   Status
---------------------

virsh capabilities
-------------------
<capabilities>

  <host>
    <uuid>60b880a2-295d-456c-874b-b2a82672e35b</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>EPYC-IBPB</model>
      <vendor>AMD</vendor>
      <microcode version='134222136'/>
      <counter name='tsc' frequency='3643156000' scaling='no'/>
      <topology sockets='1' dies='1' cores='8' threads='2'/>
      <feature name='ht'/>
      <feature name='osxsave'/>
      <feature name='xsaves'/>
      <feature name='cmp_legacy'/>
      <feature name='extapic'/>
      <feature name='skinit'/>
      <feature name='wdt'/>
      <feature name='tce'/>
      <feature name='topoext'/>
      <feature name='perfctr_core'/>
      <feature name='perfctr_nb'/>
      <feature name='invtsc'/>
      <feature name='clzero'/>
      <feature name='xsaveerptr'/>
      <feature name='npt'/>
      <feature name='lbrv'/>
      <feature name='svm-lock'/>
      <feature name='nrip-save'/>
      <feature name='tsc-scale'/>
      <feature name='vmcb-clean'/>
      <feature name='flushbyasid'/>
      <feature name='decodeassists'/>
      <feature name='pause-filter'/>
      <feature name='pfthreshold'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <iommu support='yes'/>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>16311428</memory>
          <pages unit='KiB' size='4'>4077857</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <pages unit='KiB' size='1048576'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='16'>
            <cpu id='0' socket_id='0' die_id='0' core_id='0' siblings='0,8'/>
            <cpu id='1' socket_id='0' die_id='0' core_id='1' siblings='1,9'/>
            <cpu id='2' socket_id='0' die_id='0' core_id='2' siblings='2,10'/>
            <cpu id='3' socket_id='0' die_id='0' core_id='3' siblings='3,11'/>
            <cpu id='4' socket_id='0' die_id='0' core_id='4' siblings='4,12'/>
            <cpu id='5' socket_id='0' die_id='0' core_id='5' siblings='5,13'/>
            <cpu id='6' socket_id='0' die_id='0' core_id='6' siblings='6,14'/>
            <cpu id='7' socket_id='0' die_id='0' core_id='7' siblings='7,15'/>
            <cpu id='8' socket_id='0' die_id='0' core_id='0' siblings='0,8'/>
            <cpu id='9' socket_id='0' die_id='0' core_id='1' siblings='1,9'/>
            <cpu id='10' socket_id='0' die_id='0' core_id='2' siblings='2,10'/>
            <cpu id='11' socket_id='0' die_id='0' core_id='3' siblings='3,11'/>
            <cpu id='12' socket_id='0' die_id='0' core_id='4' siblings='4,12'/>
            <cpu id='13' socket_id='0' die_id='0' core_id='5' siblings='5,13'/>
            <cpu id='14' socket_id='0' die_id='0' core_id='6' siblings='6,14'/>
            <cpu id='15' socket_id='0' die_id='0' core_id='7' siblings='7,15'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <cache>
      <bank id='0' level='3' type='both' size='8' unit='MiB' cpus='0-3,8-11'/>
      <bank id='1' level='3' type='both' size='8' unit='MiB' cpus='4-7,12-15'/>
    </cache>
    <secmodel>
      <model>none</model>
      <doi>0</doi>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-i386</emulator>
      <machine maxCpus='255'>pc-i440fx-5.1</machine>
      <machine canonical='pc-i440fx-5.1' maxCpus='255'>pc</machine>
      <machine maxCpus='255'>pc-i440fx-2.12</machine>
      <machine maxCpus='255'>pc-i440fx-2.0</machine>
      <machine maxCpus='1'>xenpv</machine>
      <machine maxCpus='288'>pc-q35-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-1.5</machine>
      <machine maxCpus='255'>pc-q35-2.7</machine>
      <machine maxCpus='255'>pc-i440fx-2.2</machine>
      <machine maxCpus='255'>pc-1.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.7</machine>
      <machine maxCpus='128'>xenfv-3.1</machine>
      <machine canonical='xenfv-3.1' maxCpus='128'>xenfv</machine>
      <machine maxCpus='255'>pc-q35-2.4</machine>
      <machine maxCpus='288'>pc-q35-2.10</machine>
      <machine maxCpus='288'>pc-q35-5.1</machine>
      <machine canonical='pc-q35-5.1' maxCpus='288'>q35</machine>
      <machine maxCpus='255'>pc-i440fx-1.7</machine>
      <machine maxCpus='288'>pc-q35-2.9</machine>
      <machine maxCpus='255'>pc-i440fx-2.11</machine>
      <machine maxCpus='288'>pc-q35-3.1</machine>
      <machine maxCpus='288'>pc-q35-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.4</machine>
      <machine maxCpus='255'>pc-1.3</machine>
      <machine maxCpus='255'>pc-i440fx-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.9</machine>
      <machine maxCpus='1'>isapc</machine>
      <machine maxCpus='255'>pc-i440fx-1.4</machine>
      <machine maxCpus='255'>pc-q35-2.6</machine>
      <machine maxCpus='255'>pc-i440fx-3.1</machine>
      <machine maxCpus='288'>pc-q35-2.12</machine>
      <machine maxCpus='255'>pc-i440fx-2.1</machine>
      <machine maxCpus='255'>pc-1.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.6</machine>
      <machine maxCpus='288'>pc-q35-4.0.1</machine>
      <machine maxCpus='255'>pc-i440fx-1.6</machine>
      <machine maxCpus='288'>pc-q35-5.0</machine>
      <machine maxCpus='288'>pc-q35-2.8</machine>
      <machine maxCpus='255'>pc-i440fx-2.10</machine>
      <machine maxCpus='288'>pc-q35-3.0</machine>
      <machine maxCpus='288'>pc-q35-4.0</machine>
      <machine maxCpus='128'>xenfv-4.2</machine>
      <machine maxCpus='288'>microvm</machine>
      <machine maxCpus='255'>pc-i440fx-2.3</machine>
      <machine maxCpus='255'>pc-1.2</machine>
      <machine maxCpus='255'>pc-i440fx-4.0</machine>
      <machine maxCpus='255'>pc-i440fx-5.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.8</machine>
      <machine maxCpus='255'>pc-q35-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-3.0</machine>
      <machine maxCpus='288'>pc-q35-2.11</machine>
      <domain type='qemu'/>
      <domain type='kvm'/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine maxCpus='255'>pc-i440fx-5.1</machine>
      <machine canonical='pc-i440fx-5.1' maxCpus='255'>pc</machine>
      <machine maxCpus='255'>pc-i440fx-2.12</machine>
      <machine maxCpus='255'>pc-i440fx-2.0</machine>
      <machine maxCpus='1'>xenpv</machine>
      <machine maxCpus='288'>pc-q35-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-4.2</machine>
      <machine maxCpus='255'>pc-i440fx-1.5</machine>
      <machine maxCpus='255'>pc-q35-2.7</machine>
      <machine maxCpus='255'>pc-i440fx-2.2</machine>
      <machine maxCpus='255'>pc-1.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.7</machine>
      <machine maxCpus='128'>xenfv-3.1</machine>
      <machine canonical='xenfv-3.1' maxCpus='128'>xenfv</machine>
      <machine maxCpus='255'>pc-q35-2.4</machine>
      <machine maxCpus='288'>pc-q35-2.10</machine>
      <machine maxCpus='288'>pc-q35-5.1</machine>
      <machine canonical='pc-q35-5.1' maxCpus='288'>q35</machine>
      <machine maxCpus='255'>pc-i440fx-1.7</machine>
      <machine maxCpus='288'>pc-q35-2.9</machine>
      <machine maxCpus='255'>pc-i440fx-2.11</machine>
      <machine maxCpus='288'>pc-q35-3.1</machine>
      <machine maxCpus='288'>pc-q35-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.4</machine>
      <machine maxCpus='255'>pc-1.3</machine>
      <machine maxCpus='255'>pc-i440fx-4.1</machine>
      <machine maxCpus='255'>pc-i440fx-2.9</machine>
      <machine maxCpus='1'>isapc</machine>
      <machine maxCpus='255'>pc-i440fx-1.4</machine>
      <machine maxCpus='255'>pc-q35-2.6</machine>
      <machine maxCpus='255'>pc-i440fx-3.1</machine>
      <machine maxCpus='288'>pc-q35-2.12</machine>
      <machine maxCpus='255'>pc-i440fx-2.1</machine>
      <machine maxCpus='255'>pc-1.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.6</machine>
      <machine maxCpus='288'>pc-q35-4.0.1</machine>
      <machine maxCpus='255'>pc-i440fx-1.6</machine>
      <machine maxCpus='288'>pc-q35-5.0</machine>
      <machine maxCpus='288'>pc-q35-2.8</machine>
      <machine maxCpus='255'>pc-i440fx-2.10</machine>
      <machine maxCpus='288'>pc-q35-3.0</machine>
      <machine maxCpus='288'>pc-q35-4.0</machine>
      <machine maxCpus='128'>xenfv-4.2</machine>
      <machine maxCpus='288'>microvm</machine>
      <machine maxCpus='255'>pc-i440fx-2.3</machine>
      <machine maxCpus='255'>pc-1.2</machine>
      <machine maxCpus='255'>pc-i440fx-4.0</machine>
      <machine maxCpus='255'>pc-i440fx-5.0</machine>
      <machine maxCpus='255'>pc-i440fx-2.8</machine>
      <machine maxCpus='255'>pc-q35-2.5</machine>
      <machine maxCpus='255'>pc-i440fx-3.0</machine>
      <machine maxCpus='288'>pc-q35-2.11</machine>
      <domain type='qemu'/>
      <domain type='kvm'/>
    </arch>
    <features>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c3

--- Comment #3 from James Fehlig <jfehlig@suse.com> ---
So libvirtd is working fine and the capabilities look good. How are you
connecting to libvirtd with virt-manager? Are you starting virt-manager on the
same host running libvirtd, connecting to qemu:///system?

Please attach /root/.cache/virt-manager/virt-manager.log from the machine
running virt-manager. Hopefully it contains some hints.
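For reference, virt-manager accepts an explicit connection URI, which also
makes the debug output show which connection is being attempted; both options
are standard virt-manager flags:

  # Start virt-manager with an explicit URI and debug output on stderr:
  virt-manager --connect qemu:///system --debug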
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c4

--- Comment #4 from Peter McDonough <peter.posts@gmx.net> ---
Last things first: on Tumbleweed, /root/.cache/virt-manager/virt-manager.log
as root:

lux-tw:~ # ls -la ~/.cache/
insgesamt 1356
drwx------  3 root root     4096  9. Okt 22:31 .
drwx------  9 root root     4096 20. Okt 11:36 ..
-rw-r--r--  1 root root 10547304 19. Okt 21:49 icon-cache.kcache
drwxr-xr-x 43 root root     4096 12. Okt 12:08 mesa_shader_cache

There is no virt-manager/virt-manager.log. (!)

I did nothing complex with virt-manager: just plain Tumbleweed with Plasma and
the standard KVM + tools from YaST. Starting Virtual Machine Manager gives
"The libvirtd service does not appear to be installed. ..." I did the same in
Leap 15.2 with no problem.

The difference I can see between the two, and where I suspect the root of the
problem lies, is in 'rpm -qa libvirt\*'. YaST pulls in 26 packages on Leap
15.2 but only 22 on Tumbleweed, check below:

lux-tw:~ # rpm -qa libvirt\*
libvirt-glib-1_0-0-3.0.0-1.3.x86_64
libvirt-bash-completion-6.8.0-1.1.noarch
libvirt-libs-6.8.0-1.1.x86_64
libvirt-client-6.8.0-1.1.x86_64
libvirt-daemon-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-core-6.8.0-1.1.x86_64
libvirt-daemon-driver-secret-6.8.0-1.1.x86_64
libvirt-daemon-driver-qemu-6.8.0-1.1.x86_64
libvirt-daemon-driver-nwfilter-6.8.0-1.1.x86_64
libvirt-daemon-driver-nodedev-6.8.0-1.1.x86_64
libvirt-daemon-driver-network-6.8.0-1.1.x86_64
libvirt-daemon-driver-interface-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-scsi-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-rbd-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-mpath-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-logical-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-iscsi-direct-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-iscsi-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-disk-6.8.0-1.1.x86_64
libvirt-daemon-config-network-6.8.0-1.1.x86_64
libvirt-daemon-driver-storage-6.8.0-1.1.x86_64
libvirt-daemon-qemu-6.8.0-1.1.x86_64
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c5

--- Comment #5 from James Fehlig <jfehlig@suse.com> ---
(In reply to Peter McDonough from comment #4)
> There is no virt-manager/virt-manager.log. (!)
Odd. Try starting it manually with debug. E.g. 'virt-manager --debug'.
> YaST pulls in 26 packages on Leap 15.2 but only 22 on Tumbleweed, check below:
I recently disabled some drivers that have limited functionality and minimal upstream development. I also dropped the hard 'Requires' on the gluster storage backend. These changes likely account for the package count difference.
> lux-tw:~ # rpm -qa libvirt\*
> libvirt-glib-1_0-0-3.0.0-1.3.x86_64
> libvirt-bash-completion-6.8.0-1.1.noarch
> libvirt-libs-6.8.0-1.1.x86_64
> libvirt-client-6.8.0-1.1.x86_64
> libvirt-daemon-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-core-6.8.0-1.1.x86_64
> libvirt-daemon-driver-secret-6.8.0-1.1.x86_64
> libvirt-daemon-driver-qemu-6.8.0-1.1.x86_64
> libvirt-daemon-driver-nwfilter-6.8.0-1.1.x86_64
> libvirt-daemon-driver-nodedev-6.8.0-1.1.x86_64
> libvirt-daemon-driver-network-6.8.0-1.1.x86_64
> libvirt-daemon-driver-interface-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-scsi-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-rbd-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-mpath-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-logical-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-iscsi-direct-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-iscsi-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-disk-6.8.0-1.1.x86_64
> libvirt-daemon-config-network-6.8.0-1.1.x86_64
> libvirt-daemon-driver-storage-6.8.0-1.1.x86_64
> libvirt-daemon-qemu-6.8.0-1.1.x86_64
We already verified this set of packages gives you a working libvirt configuration, otherwise virsh would not work.
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c6

--- Comment #6 from Peter McDonough <peter.posts@gmx.net> ---
peter@lux-tw:~> virt-manager --debug
[Di, 20 Okt 2020 19:38:41 virt-manager 3452] DEBUG (cli:204) Launched with command line: /usr/bin/virt-manager --debug
[Di, 20 Okt 2020 19:38:41 virt-manager 3452] DEBUG (virtmanager:166) virt-manager version: 3.1.0
[Di, 20 Okt 2020 19:38:41 virt-manager 3452] DEBUG (virtmanager:167) virtManager import: /usr/share/virt-manager/virtManager
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (virtmanager:204) PyGObject version: 3.36.1
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (virtmanager:208) GTK version: 3.24.22
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (systray:77) Using AppIndicator3 for systray
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (systray:463) Showing systray: False
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (inspection:206) python guestfs is not installed
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (engine:111) No stored URIs found.
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (engine:461) processing cli command uri= show_window=manager domain=
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (engine:464) No cli action requested, launching default window
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (manager:185) Showing manager
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (engine:316) window counter incremented to 1
[Di, 20 Okt 2020 19:38:42 virt-manager 3452] DEBUG (engine:211) Initial gtkapplication activated
[Di, 20 Okt 2020 19:38:43 virt-manager 3452] DEBUG (engine:135) Probed default URI=qemu:///system

I closed virt-manager.

[Di, 20 Okt 2020 19:40:09 virt-manager 3452] DEBUG (manager:196) Closing manager
[Di, 20 Okt 2020 19:40:09 virt-manager 3452] DEBUG (engine:323) window counter decremented to 0
[Di, 20 Okt 2020 19:40:09 virt-manager 3452] DEBUG (engine:343) No windows found, requesting app exit
/usr/share/virt-manager/virtManager/baseclass.py:190: Warning: Source ID 30 was not found when attempting to remove it
  GLib.source_remove(handle)
[Di, 20 Okt 2020 19:40:09 virt-manager 3452] DEBUG (engine:369) Exiting app normally.
peter@lux-tw:~>
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c13

--- Comment #13 from Peter McDonough <peter.posts@gmx.net> ---
(In reply to James Fehlig from comment #8)
> (In reply to John Doe from comment #7)
>> sudo systemctl status libvirtd: ...
>> virt-manager --debug:
> Ah, a small hint that virt-manager is run as a normal user. Is this the
> case? Same question for you, Peter.
> I initially saw the problem when starting virt-manager as a normal user. I
> tried adding a new 'qemu:///system' connection using the "Add Connection"
> wizard, and it worked after providing the root password. I then added a
> 'qemu:///session' connection and it also worked. And I no longer see the
> problem when restarting virt-manager. ...
Sorry, I'm late. Now I notice it, too. In Leap 15.2 the root password is
requested when starting virt-manager; not so in Tumbleweed.

So adding a new 'qemu:///system' connection using the "Add Connection" wizard
and just clicking Connect (Verbinden) does it. Everything works as in Leap.

Thanks.
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c14

--- Comment #14 from Peter McDonough <peter.posts@gmx.net> ---
(In reply to John Doe from comment #9)
> I added myself to the libvirt group to see if it would change anything but
> the issue remains the same.
Solution: add the user to the libvirt group.

Isn't it always the obvious that is hardest to see? I had not added my user to
the libvirt group, so the user needed root to start libvirt.

Test: I restored Tumbleweed from a backup (pre-libvirt state), installed
KVM + tools, added the libvirt group to the user and, just to make sure,
rebooted Tumbleweed. I then started virt-manager as the user and installed and
ran a virtual machine. No problems at all, apart from the missing KVM default
network, which needed:

sudo virsh net-start default
sudo virsh net-autostart --network default

Peter
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c15

James Fehlig <jfehlig@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(peter.posts@gmx.net)|

--- Comment #15 from James Fehlig <jfehlig@suse.com> ---
(In reply to Peter McDonough from comment #14)
> (In reply to John Doe from comment #9)
>> I added myself to the libvirt group to see if it would change anything but
>> the issue remains the same.
> Solution: add the user to the libvirt group.
There is no need to add your user to the libvirt group unless you set
auth_unix_{ro,rw} to 'none' in /etc/libvirt/libvirtd.conf. On SUSE distros
libvirtd uses polkit auth by default. Permissions on the read-only and
read-write sockets are set to 0666, allowing anyone to connect, but
connections are subject to polkit auth. The default polkit privileges allow
any user to connect to the read-only socket, but only root to the read-write
socket. virt-manager connects to the read-write socket, so you should be
prompted for the root password when running it as a normal user.

If auth on libvirtd's read-only or read-write sockets is set to 'none', then
standard unix socket permissions and group membership checks apply.

BTW, I'm clearing your needinfo since AFAIK we no longer need info from
you :-).
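To illustrate the knobs James refers to, here is a sketch of the relevant
/etc/libvirt/libvirtd.conf settings; the values shown reflect the polkit
default he describes, not a recommendation:

  # /etc/libvirt/libvirtd.conf (excerpt)
  # With polkit auth, socket permissions alone do not grant access:
  auth_unix_ro = "polkit"
  auth_unix_rw = "polkit"

  # Only if auth is set to 'none' do plain unix permissions and group
  # membership control access:
  #auth_unix_ro = "none"
  #auth_unix_rw = "none"
  #unix_sock_group = "libvirt"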
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c16

Charles Arnold <carnold@suse.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|---                         |FIXED

--- Comment #16 from Charles Arnold <carnold@suse.com> ---
The virt-manager fix for this is now in Factory. It should show up soon in
Tumbleweed (maybe already there?).
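To check whether the fixed package has reached a given snapshot, the installed
version and its changelog can be inspected; the fixed version number is not
stated in this thread, so this is only a sketch:

  rpm -q virt-manager                      # installed version
  rpm -q --changelog virt-manager | head   # recent packaging changes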
https://bugzilla.suse.com/show_bug.cgi?id=1177763#c17

--- Comment #17 from John Doe <nmr_privat@tutanota.com> ---
(In reply to Charles Arnold from comment #16)
> The virt-manager fix for this is now in Factory. It should show up soon in
> Tumbleweed (maybe already there?).
Seems to work fine on tw now, for me at least :)