I would like to add some more detailed information to my last reply.

---cut here---
compute1:~ # rpm -qi python-nova-13.0.1~a0~dev46-1.1.noarch
Name        : python-nova
Version     : 13.0.1~a0~dev46
Release     : 1.1
Architecture: noarch
Install Date: Di 10 Mai 2016 12:38:57 CEST
Group       : Development/Languages/Python
Size        : 16549000
License     : Apache-2.0
Signature   : RSA/SHA256, Mo 09 Mai 2016 13:39:20 CEST, Key ID 893a90dad85f9316
Source RPM  : openstack-nova-13.0.1~a0~dev46-1.1.src.rpm
Build Date  : Mo 09 Mai 2016 13:38:09 CEST
Build Host  : cloud113
Relocations : (not relocatable)
Vendor      : obs://build.opensuse.org/Cloud:OpenStack
URL         : https://launchpad.net/nova
Summary     : OpenStack Compute (Nova) - Python module
Description :
This package contains the core Python module of OpenStack Nova.
Distribution: Cloud:OpenStack:Mitaka / openSUSE_Leap_42.1
#################################################################
compute1:~ # rpm -qi xen-libs
Name        : xen-libs
Version     : 4.7.0_03
Release     : 440.1
Architecture: x86_64
Install Date: Di 10 Mai 2016 13:59:52 CEST
Group       : System/Kernel
Size        : 1560640
License     : GPL-2.0
Signature   : RSA/SHA256, Fr 06 Mai 2016 16:33:12 CEST, Key ID a193fbb572174fc2
Source RPM  : xen-4.7.0_03-440.1.src.rpm
Build Date  : Fr 06 Mai 2016 16:31:47 CEST
Build Host  : build74
Relocations : (not relocatable)
Vendor      : obs://build.opensuse.org/Virtualization
#################################################################
compute1:~ # rpm -qi qemu-block-rbd
Name        : qemu-block-rbd
Version     : 2.5.93
Release     : 327.6
Architecture: x86_64
Install Date: Di 10 Mai 2016 14:53:26 CEST
Group       : System/Emulators/PC
Size        : 84024
License     : BSD-3-Clause and GPL-2.0 and GPL-2.0+ and LGPL-2.1+ and MIT
Signature   : (none)
Source RPM  : qemu-2.5.93-327.6.src.rpm
Build Date  : Di 10 Mai 2016 14:42:57 CEST
Build Host  : compute1.cloud.hh.nde.ag
---cut here---

As you can see, we're running a self-compiled version of qemu. I'm not sure whether this still holds true for the version above, but at least with earlier versions we had to modify the spec file to enable RBD support.

Considering your reply, Xen in conjunction with RBD should work, but my tests show that it doesn't. For completeness' sake, I ran an additional test case and now see the following behavior:

- KVM, boot from volume: "driver_name" provided by Nova
- KVM, boot from image: "driver_name" provided by libvirt
- Xen, boot from volume: "driver_name" provided by Nova
- Xen, boot from image: error, neither Nova nor libvirt provides "driver_name"

So to achieve some kind of consistency, it seems it would be necessary to change libvirt to provide the driver name when an instance is launched from an image without creating a new volume.

What is the way to go for me now? Should I file a bug report?

Best regards,
Eugen

Quoting Eugen Block <eblock@nde.ag>:
Hi, thanks for your quick response!
Are you using libvirt 1.2.18 and xen 4.5 from the Leap updates, or something newer?
I'm using libvirt version 1.3.4 from obs://build.opensuse.org/Virtualization.
If I use the option "boot from image (creates a new volume)", it works just fine and the driver_name 'qemu' is provided, but it's missing if I don't use a volume.
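To illustrate, the only difference in the generated disk config is the name attribute of the driver element (reconstructed from my debug output; the other attributes may differ):

---cut here---
<!-- boot from image (creates a new volume): works -->
<driver name='qemu' type='raw' cache='none'/>

<!-- boot from image without a volume: fails on xen -->
<driver type='raw' cache='none'/>
---cut here---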
Regards, Eugen
Quoting Jim Fehlig <jfehlig@suse.com>:
Eugen Block wrote:
Hi all,
I'm running OpenStack Mitaka in a Leap environment, 1 controller, 2 compute nodes (for testing purposes one with xen, the other with kvm)
Are you using libvirt 1.2.18 and xen 4.5 from the Leap updates, or something newer? I added support for network-based block devices (including rbd) to the libvirt libxl driver in the libvirt 1.3.2 release cycle, so you'll need libvirt >= 1.3.2 for this to work with xen.
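For example, to check which libvirt is currently installed on the xen compute node:

---cut here---
compute1:~ # rpm -q libvirt
compute1:~ # virsh version
---cut here---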
Options are:
* use Tumbleweed
* wait for Leap 42.2
* update your Leap xen compute nodes to the packages in the Virtualization repo (see the commands sketched below)
http://download.opensuse.org/repositories/Virtualization/openSUSE_Leap_42.1/
The last option requires updating all the virt-related packages. We have no automated tests for such a configuration, so your mileage may vary.
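A sketch of that switch with zypper (the repo alias is arbitrary, and a vendor change may need to be confirmed):

---cut here---
compute1:~ # zypper ar http://download.opensuse.org/repositories/Virtualization/openSUSE_Leap_42.1/ Virtualization
compute1:~ # zypper dup --from Virtualization
---cut here---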
Regards, Jim
and the storage backend is a ceph cluster consisting of three nodes; all the relevant services (glance, cinder-volume, nova) use ceph as their storage backend. I have uploaded two different images to glance: a small cirros image and a leap image.
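For context, the rbd-related part of nova.conf on the compute nodes looks roughly like this (pool and user reconstructed from the logs below, secret UUID elided):

---cut here---
[libvirt]
images_type = rbd
images_rbd_pool = images
rbd_user = openstack
rbd_secret_uuid = <UUID>
---cut here---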
Now when I launch an instance from an image and choose to create a new volume for it, there's no problem. But if I try to boot from an image without a volume, I get an error in nova-compute.log on my xen compute node:
---cut here---
libvirtError: internal error: libxenlight failed to create new domain 'instance-00000229'
---cut here---
and libxl reports:
---cut here---
2016-05-19 11:38:44 CEST libxl: error: libxl_device.c:300:libxl__device_disk_set_backend: Disk vdev=xvda failed to stat: rbd:images/39c61537-52e5-487c-9ec4-457b3612f549_disk:<CEPH-CREDENTIALS: No such file or directory
2016-05-19 11:38:44 CEST libxl: error: libxl_create.c:930:initiate_domain_create: Unable to set disk defaults for disk 0
---cut here---
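For reference, the two launch variants correspond roughly to these calls (image and flavor names are just examples):

---cut here---
# boot from image, no volume -- this is the case that fails on the xen node:
controller:~ # nova boot --image cirros --flavor m1.tiny test1

# boot from image, creating a new volume -- this works:
controller:~ # nova boot --flavor m1.tiny \
    --block-device source=image,id=<IMAGE_ID>,dest=volume,size=1,bootindex=0 test2
---cut here---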
I've been debugging this and found out that the resulting xml config is missing the driver_name, which has to be 'qemu' in this case. I find the libxl error saying there is no such file kind of misleading, but it's a consequence of Nova being unaware of the correct backend driver.
I tried different ways to test this: I took the xml config of a failing instance (from the debug output) and wanted to "virsh define" that VM, which failed until I added "name='qemu'" to the driver tag. Then, as a workaround, I added 'qemu' directly into the python code, which works fine now:
---cut here---
compute1:/usr/lib/python2.7/site-packages/nova/virt/libvirt # diff -u config.py.dist config.py
--- config.py.dist	2016-05-07 20:35:01.000000000 +0200
+++ config.py	2016-05-19 13:34:54.564961720 +0200
@@ -745,6 +745,8 @@
         dev.set("type", self.source_type)
         dev.set("device", self.source_device)
+        if (self.target_bus == 'xen'):
+            self.driver_name = 'qemu'
         if (self.driver_name is not None or
             self.driver_format is not None or
             self.driver_cache is not None or
---cut here---
If you use a volume to launch the instance from, there is a function call, libvirt_utils.pick_disk_driver_name, which also provides 'qemu' for the xml string, and the instance gets started successfully.
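So presumably the image-boot path could reuse that helper instead of my hard-coded hack above. A hypothetical sketch, not actual Nova code (the call site and the hypervisor_version plumbing are guesses on my part):

---cut here---
# hypothetical sketch: complete the disk config for an image-backed
# instance the same way the volume path already does
from nova.virt.libvirt import utils as libvirt_utils

def ensure_disk_driver_name(conf, hypervisor_version):
    """Fill in conf.driver_name if Nova didn't set one (xen, boot from
    image). conf is a LibvirtConfigGuestDisk; hypervisor_version would
    have to be obtained from the libvirt connection."""
    if conf.driver_name is None:
        # same helper the volume path uses; in this setup it yields 'qemu'
        conf.driver_name = libvirt_utils.pick_disk_driver_name(
            hypervisor_version, is_block_dev=False)
    return conf
---cut here---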
The difference with kvm is that the driver name is missing from the generated xml config there as well, but virsh dumpxml <instance> shows:
---cut here---
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='openstack'>
---cut here---
So I have to assume that libvirt fills in the driver name for the kvm config, but the libxl driver does not do the same for the xen config.
@Jim Fehlig: Since I found your name many times in the libvirt changelogs, I hoped you could give some advice or a comment on this.
Regards, Eugen
--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : eblock@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983