Mailinglist Archive: opensuse (2459 mails)

Re: [opensuse] Re: Xen on production environments
  • From: Ian Marlier <ian.marlier@xxxxxxxxxxxxxxxxxxx>
  • Date: Thu, 13 Mar 2008 12:40:22 -0400
  • Message-id: <C3FED136.89661%ian.marlier@xxxxxxxxxxxxxxxxxxx>



On 3/13/08 11:23 AM, "Jose" <jtc@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Jack Malone wrote:

On 3/13/2008 3:51 AM, mourik jan c heupink <heupink@xxxxxxxxxxxxx> wrote:

No Xen adopters? Is that bad?

We are running Xen here. Performance is good, though I guess that very
much depends on what you want to do in your virtual machines.

We have a dual quad-core machine with 8 GB of RAM, and one virtual
machine is used to run statistical analysis (Stata multicore).

This works _very_ fast.

But I'm sure many other people here have much more interesting stories
to tell?

I'm looking at putting in either Xen or VMware enterprise here at work at
the moment myself. I will be running 3 OES 2 servers and at least 3 Windows
2003 VM servers. I have not played with the Xen that comes in SUSE Enterprise
10 yet, but I have played with the Xen enterprise version and love it. With the
enterprise version it is the main OS for the server (a modified version of
Linux), and you control and install all VMs from a Windows workstation. Very
nice interface. A good friend of mine who is a Novell consultant showed me the
Xen stuff in SUSE Enterprise 10 this past weekend, and it was pretty nice
looking. I'm hoping to go with Xen here and save the company lots of money,
but it's not up to me to make the whole decision.

Just my thoughts on Xen, for what it's worth.

jack

We played with Xen, but found it difficult to operate. One of the major
problems we had: when we create a new machine, it lets us point to the
installation source (a physical DVD/CD, not virtual) the first time, but
if the install failed due to compatibility problems, then when we tried
to restart the virtual machine again, it would try to boot from disk and
the CD/DVD declaration would be gone. I don't know if this is the normal
behavior for Xen, but we ended up using VMware for Linux; it is more
flexible and lets us play with the virtual hardware far more easily than
the Xen tools do.

Just my take, may be different for some other people.

Jose

I've got a whole mess o' virtual machines (well north of 100), running on
(currently) 23 dual- and quad-core servers. openSUSE 10.1 is the OS for
most of them, with openSUSE 10.3 in the middle of rollout.

Performance has been pretty great, in my estimation. We've upped our useful
capacity by quite a bit, and managed to spread VMs around in such a way that
utilization of resources is pretty even.

For most of our applications, I don't worry about HA, since almost
everything is redundant anyway. If hardware dies, I just rebuild the
virtual machines somewhere else that has some spare cycles (using an
autoyast setup -- takes about 20 minutes from the time of failure to having
a replacement VM up and running), and we're off and going.
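For a rough idea of what that rebuild flow can look like, here is a sketch of
a paravirtualized domU config that boots the SUSE installer with an AutoYaST
profile. All names, paths, and URLs below are hypothetical placeholders for
illustration, not Ian's actual setup:

```
# /etc/xen/vm/web02 -- hypothetical replacement guest; paths/URLs are placeholders
name    = "web02"
memory  = 1024
# Boot the distribution's Xen install kernel/initrd for the first run
kernel  = "/srv/install/boot/x86_64/vmlinuz-xen"
ramdisk = "/srv/install/boot/x86_64/initrd-xen"
# Point the installer at the repo and the AutoYaST profile
extra   = "install=http://installserver/opensuse autoyast=http://installserver/profiles/web02.xml"
disk    = [ "phy:/dev/vg0/web02,xvda,w" ]
vif     = [ "bridge=br0" ]
```

Starting the guest with `xm create -c web02` then runs the unattended
install; once it finishes, the config is switched back to booting from the
installed disk.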

For those machines that do need HA, I use drbd version 8 to replicate the
block devices on each machine, effectively a network RAID 1, and heartbeat
to monitor and failover. It's only been triggered once, but it worked
flawlessly in that case. Because of our setup there is downtime when the
failover happens, as the machine has to be started up fresh (as opposed to
migrated), but that downtime proved to be around 30 seconds for all
services.
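A minimal sketch of what such a pairing can look like, assuming DRBD 8 with a
heartbeat v1-style configuration; the hostnames, devices, and addresses here
are placeholders, not the actual setup:

```
# /etc/drbd.conf -- hypothetical resource backing one VM's block device
resource r0 {
  protocol C;                    # synchronous replication: the "network RAID 1"
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/vmdisk;   # local backing store
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/vmdisk;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

heartbeat then watches the peers and, on failure, promotes the survivor's
/dev/drbd0 to primary and starts the guest fresh, e.g. via an
/etc/ha.d/haresources line along the lines of `node1 drbddisk::r0 xendomains`.
That matches the roughly 30 seconds of downtime described above, since the VM
boots rather than live-migrates.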

As of opensuse 10.3, the vm-install and virt-manager packages are included
as part of the OS, and provide for a very simple installation and management
mechanism.

I can't really speak to network storage. I briefly played with iSCSI as a
way to provide shared storage, but in the end decided that the overhead just
wasn't worth it -- if you assume that any machine could possibly die, having
a single storage server gives you a big ol' single-point-of-failure, and
that's bad. DRBD gets around that.

TCO (in terms of management time, etc) is similar to a bunch of individual
machines, though there is a bit of an upfront investment to learn how it all
works. And, as mentioned, I've been able to improve our utilization quite a
bit, which means less power needed for fewer servers, less heat (so less
cooling), a simpler physical network infrastructure, and less cost for
space...

The only big issue that I've run into has to do with the clock; there's a
bug in xen 3.1 that can cause kernel panics on a CPU that does frequency
scaling, because the hypervisor's internal clock goes nuts. The workaround
is to add this to /etc/init.d/boot.local:

echo "jiffies" >> /sys/devices/system/clocksource/clocksource0/current_clocksource

Related to that, I've seen some issues with Windows Server running on Xen,
related to the clock (though I have very little experience with this setup
overall -- only 2 Windows machines running on Xen at all, and only 1 of them Win
Server). Basically, I've come to the conclusion that with xen 3.1, one
ought not to put the Domain Controller role on a virtual machine. (This may
well be different now that there are Windows drivers designed to work with
qemu hardware, and now that xen 3.2 is out.)

HTH,

- Ian

--
To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
For additional commands, e-mail: opensuse+help@xxxxxxxxxxxx
