Hi,

I have to update our xen compute nodes (OpenStack Mitaka on Leap 42.1, Ceph as storage backend), so I tried that with the first one. libvirt was updated to version 3.0.0 and qemu to version 2.7.0 (from repo [1]), and as usual we had to build the qemu-block-rbd package ourselves. Unfortunately, the compute node doesn't want to start instances anymore. Although nova reports "Instance started successfully", qemu reports:

---cut here---
compute3:~ # tail /var/log/xen/qemu-dm-minisuse.log
xen be: qdisk-51712: error: Unknown protocol 'rbd'
xen be: qdisk-51712: initialise() failed
---cut here---

I already know that message; it was the reason for our previous conversation on this mailing list. Now I'm wondering why we're at the same point again and what I can do to get that compute node back up. The other xen compute node has libvirt version 2.5.0-617.1 installed, together with self-compiled qemu 2.7.0. Should I try to downgrade libvirt back to 2.5? Or has rbd support been disabled in version 3.0?

Thanks for any hints!

Regards,
Eugen

[1] http://download.opensuse.org/repositories/Virtualization/openSUSE_Leap_42.1/

Quoting Jim Fehlig <jfehlig@suse.com>:
Eugen Block wrote:
Hi all,
I'm running Openstack Mitaka in a Leap environment, 1 controller, 2 compute nodes (for testing purposes one with xen, the other with kvm)
Are you using libvirt 1.2.18 and xen 4.5 from the Leap updates, or something newer? I added support for network-based block devices (including rbd) to the libvirt libxl driver in the libvirt 1.3.2 release cycle, so you'll need libvirt >= 1.3.2 for this to work with xen.
Options are:
* use Tumbleweed
* wait for Leap 42.2
* update your Leap xen compute nodes to the packages in the Virtualization repo
http://download.opensuse.org/repositories/Virtualization/openSUSE_Leap_42.1/
The last option requires updating all the virt-related packages. We have no automated tests for such a configuration, so your mileage may vary.
Regards, Jim
and the storage backend is a ceph cluster consisting of three nodes; all the relevant services (glance, cinder-volume, nova) use ceph as their storage backend. I have uploaded two different images to glance, a small cirros image and a leap image.
Now when I launch an instance from image and choose to create a new volume for it, there is no problem. But if I try to boot from image without a volume, I get an error in nova-compute.log on my xen compute node:
---cut here---
libvirtError: internal error: libxenlight failed to create new domain 'instance-00000229'
---cut here---
and libxl reports:
---cut here---
2016-05-19 11:38:44 CEST libxl: error: libxl_device.c:300:libxl__device_disk_set_backend: Disk vdev=xvda failed to stat: rbd:images/39c61537-52e5-487c-9ec4-457b3612f549_disk:<CEPH-CREDENTIALS: No such file or directory
2016-05-19 11:38:44 CEST libxl: error: libxl_create.c:930:initiate_domain_create: Unable to set disk defaults for disk 0
---cut here---
I've been debugging this and found out that the resulting xml config is missing the driver name, which has to be 'qemu' in this case. I find the libxl error saying there is no such file somewhat misleading, but it's the consequence of nova being unaware of the right backend driver.
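For comparison, a disk definition that libxl accepts looks roughly like this (the secret uuid and monitor host below are placeholders, and the exact attribute set depends on your setup, so treat this as a sketch rather than a verbatim config):

---cut here---
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='openstack'>
    <secret type='ceph' uuid='PLACEHOLDER-UUID'/>
  </auth>
  <source protocol='rbd' name='images/39c61537-52e5-487c-9ec4-457b3612f549_disk'>
    <host name='ceph-mon1' port='6789'/>
  </source>
  <target dev='xvda' bus='xen'/>
</disk>
---cut here---

Without the name='qemu' attribute in the driver tag, libxl cannot pick a backend for the rbd protocol.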
I tried different ways to test this: I took the xml config of a failing instance (from the debug output) and tried to "virsh define" that VM; it failed until I added "name='qemu'" to the driver tag. Then, as a workaround, I added "qemu" directly in the python code, which works fine now.
---cut here---
compute1:/usr/lib/python2.7/site-packages/nova/virt/libvirt # diff -u config.py.dist config.py
--- config.py.dist      2016-05-07 20:35:01.000000000 +0200
+++ config.py   2016-05-19 13:34:54.564961720 +0200
@@ -745,6 +745,8 @@
         dev.set("type", self.source_type)
         dev.set("device", self.source_device)
+        if (self.target_bus == 'xen'):
+            self.driver_name = 'qemu'
         if (self.driver_name is not None or
             self.driver_format is not None or
             self.driver_cache is not None or
---cut here---
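Stripped of nova's surroundings, the workaround boils down to the following (a standalone illustration only, not the actual nova class; the real change sits in the disk config code in nova/virt/libvirt/config.py):

```python
# Minimal sketch of the workaround above: if nova leaves driver_name
# unset for a xen-bus disk, default it to 'qemu' so that libxl can
# resolve network protocols such as rbd.

def default_driver_name(target_bus, driver_name=None):
    """Return the driver name to emit in the <driver> element."""
    if target_bus == 'xen' and driver_name is None:
        return 'qemu'
    return driver_name
```

A cleaner variant than my in-place patch might only fill in the default when no driver name was configured at all, as the sketch does, instead of overwriting it unconditionally.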
If you launch the instance from a volume instead, there is a function call, libvirt_utils.pick_disk_driver_name, which also provides "qemu" to the xml string, and the instance starts successfully.
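As far as I can tell, the decision made on the volume path looks roughly like this (simplified; the real pick_disk_driver_name helper in nova/virt/libvirt/utils.py also considers the hypervisor version and blktap availability, so treat this as an assumption, not the actual nova code):

```python
# Rough sketch of the driver-name choice that makes the volume path
# work: xen block devices use the 'phy' backend, everything else
# (including network disks such as rbd) goes through qemu.

def pick_driver_name_sketch(virt_type, is_block_dev=False):
    """Simplified illustration of nova's disk driver-name decision."""
    if virt_type == 'xen':
        return 'phy' if is_block_dev else 'qemu'
    if virt_type in ('kvm', 'qemu'):
        return 'qemu'
    return None
```

That would explain why boot-from-volume works while boot-from-image does not: only the volume path ever consults this helper.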
The difference to kvm is that there is no driver name in the generated xml config, but virsh dumpxml <instance> provides:
---cut here---
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='openstack'>
---cut here---
So I have to assume that libvirt fills in the driver name for the kvm config, but libxl does not do the same for the xen config.
@Jim Fehlig: As I found your name many times in the changelogs of libvirt, I hoped you could give some advice or comment on this.
Regards, Eugen
--
Eugen Block                          voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG   fax   : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                      e-mail : eblock@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983

--
To unsubscribe, e-mail: opensuse-cloud+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-cloud+owner@opensuse.org