Hey guys,
it's been a while since my last response; I wasn't on vacation all
this time ;-) We are fighting with our cloud environment and have an
open Service Request for an issue where openvswitch causes problems.
I didn't count how many times I re-installed our cloud nodes from
scratch in the last two weeks, but something strange happened last
week. I don't know why, but one single installation worked and I had
the chance to see what the network configuration is supposed to look
like. Of course I tried to reproduce it, but I haven't managed to
since last week.
I just wanted to share what I saw, in case one of you is facing the
same question. The difference from our settings when I posted this
question lies in the neutron barclamp. When we set up our first
environments we used "linuxbridge", but as you read in my question,
the instances had no IP and a lot of modification was required to get
it all working as desired. Then we tried openvswitch with VLAN, but we
are facing major problems while trying to deploy the nova barclamp. As
I already mentioned, we got it working only once, and it's not
reproducible. Anyway, with openvswitch the instances get their IP
injected and you can see it directly in the login prompt. If you
configure your security groups and rules correctly, the instances can
ping your external network etc. So if - and only if - you get your
cloud working with openvswitch, the cloud is able to handle all the
traffic correctly, just as Rossella described in her first answer.
I just wish there were a hint, a recommendation or a description of
which settings should be used when deploying neutron. Anyway, now I
understand the description here:
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html
The configuration on the control (network) and compute nodes looked
exactly as described.
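
For anyone comparing their own nodes against that guide, these are the
kinds of inspection commands it walks through (a sketch only; bridge
and namespace names such as br-int, br-ex and qdhcp-<network-id> are
the usual neutron conventions, but depend on your deployment):

```shell
# On the network/control node: list the OVS bridges and their ports
# (br-int / br-ex with qvo*/tap* ports are typical, not guaranteed)
ovs-vsctl show

# Kernel view of the same interfaces, plus any Linux bridges
ip addr show
brctl show

# Neutron runs DHCP and routing inside network namespaces;
# list them and inspect the DHCP namespace of a tenant network:
ip netns list
# ip netns exec qdhcp-<network-id> ip addr show
```

If the working installation and a broken one differ, it should show up
in the output of these commands.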
Now I just need to get the cloud working again, which seems to be
quite difficult; our SR has been open for weeks now.
Thanks again for your answers!
Regards,
Eugen
Quoting Adam Spiers:
No problem! Whenever suits you guys.
Please note I'll also be away on vacation from next Thursday; however if you send to this list then others will see, and of course if you submit an official support request to SUSE then you can even get 24x7 support if you want it :)
Eugen Block
wrote: Hi Adam,
I can't answer all your questions; I'll have to ask a colleague for help. Unfortunately, this may take a while since I'm on vacation for the next two weeks. I have forwarded this issue to my colleague, so maybe he'll get back to you with a more detailed answer; if not, I'll get back to you after my vacation.
Regards, Eugen
Quoting Adam Spiers:
Eugen Block
wrote: Thanks, Adam and Rossella.
But the VM's eth0 still needs an IP, right? Otherwise the VM's kernel has no way of communicating over IP. So it has to be configured via DHCP. It sounds to me like your VM's eth0 isn't configured to use DHCP.
That was exactly my thought: if eth0 has no IP, how am I supposed to communicate with the VM?
You can't :-)
I tried an image which has DHCP enabled, still no success without changing it manually.
Please give details. You are saying that it definitely made a DHCP request but got no response? Did you try packet sniffing to see whether neutron's dnsmasq received the request and offered a response?
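
To make that concrete, a rough way to trace the DHCP request path
would be something like the following (a sketch; the tap interface
name has to be taken from the VM's neutron port, and qdhcp-<network-id>
is the usual namespace naming):

```shell
# On the compute node: watch the VM's tap interface for DHCP traffic
# (replace tapXXXXXXXX with the interface belonging to the VM's port)
tcpdump -n -i tapXXXXXXXX port 67 or port 68

# On the network node: check that dnsmasq is running for the network
ps aux | grep dnsmasq

# ...and sniff inside the DHCP namespace to see whether the request
# arrives there and whether an offer goes back out
ip netns exec qdhcp-<network-id> tcpdump -n -i any port 67 or port 68
```

If the request shows up on the tap interface but never inside the
namespace, the problem is in the path between compute and network
node; if it arrives but gets no answer, look at dnsmasq.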
I created that SLES11-SP3 image with virt-manager from scratch, it had an IP assigned in the admin network during installation.
A static IP or DHCP? Static won't work.
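
On SLES the difference is just the boot protocol in the interface
configuration. For the image to pick up neutron's DHCP offer,
/etc/sysconfig/network/ifcfg-eth0 inside the image should look roughly
like this, and must not carry a static address:

```shell
# /etc/sysconfig/network/ifcfg-eth0 inside the guest image
BOOTPROTO='dhcp'
STARTMODE='auto'
# no IPADDR/NETMASK lines -- a static address baked into the image
# will not match the address neutron allocates for the port
```

That would explain the behaviour below: the image boots fine during
installation (where its static address is valid) but gets nothing on
eth0 once it runs behind neutron.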
What's interesting is: when I upload that image to glance and start a VM, it has no IP on eth0. I really don't get it at this point.
Again we can't help you without more details. Do you have a support contract? If so we could even arrange a remote support session.
In your previous email you said you added a vlan in the compute node and connected it to the floating network bridge. You don't need to do these steps. The compute node (unless you are using DVR) has no access to the external network. The network node has access to it and this is configured automatically by SUSE Cloud. You don't need to manually set up VLANs or linux bridges. VMs should be accessible if you assign them a floating IP and if you allow ssh in the security groups.
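
As a concrete sketch of those last two steps (opening the security
group and attaching a floating IP), the CLI commands would look
roughly like this; "ext-net" and "default" are assumed names for the
external network and the security group, and the neutron/nova clients
are the ones shipped with this cloud release:

```shell
# allow ping and ssh into instances using the default security group
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --direction ingress \
    --port-range-min 22 --port-range-max 22 default

# allocate a floating IP from the external network and attach it to the VM
neutron floatingip-create ext-net
nova floating-ip-associate <vm-name> <floating-ip>
```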
The network configuration is still the same, so maybe we'll have to change it and give it a try. But it is still not clear to me what our network settings should look like. You say I don't have to configure anything on the compute node, so what do I have to do on the control node?
Nothing, Crowbar should do it all for you.
Before the current installation we had a setup without VLANs on the compute node, and it didn't work either. Now we are stuck and don't know what's right and what's wrong in our environment. Any further help would be really appreciated!
It's very hard to help without proper debugging information. Ideally we would help you via remote access, but at the very least we'll need supportconfig tarballs from the admin server and neutron nodes, and debug output from the VM network configuration (supportconfig tarballs if the guest VMs are SLES-based). Use of a support contract will be by far the best way to solve this. Help on these forums is best-effort only, as I'm sure you can understand.
--
To unsubscribe, e-mail: opensuse-cloud+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-cloud+owner@opensuse.org
--
Eugen Block                              voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG       fax   : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                          e-mail : eblock@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983