>I am relatively new to Linux and SuSE. I've been running SuSE 9.1 on an
>AMD64 as a guest OS under VMware. We've been testing Linux capabilities
>but keeping Windows as the main OS for Windows network admin, etc. Well,
>I'm ready to switch to SuSE 9.1 64-bit as the host OS and put Windows on
>as a guest.
>
>The problem is RAID. My motherboard, an ASUS K8V, has on-board RAID, which
>I have been told is not "TRUE" RAID and therefore not supported. So now I'm
>considering software RAID or buying an inexpensive 2-channel SATA RAID
>controller. My question is:
> - Is hardware RAID worth the extra money in
>performance/reliability/etc., or should I go with software RAID?
Most of those inexpensive RAID controllers (like the 2-channel types)
are nothing more than 2-channel ATA or SATA cards with either hardware-
or software-based RAID0 (simple striping) or RAID1 (mirroring)
capabilities. The hardware really doesn't do much in most cases. I think
most of them can be configured to present the disks straight to the OS
and let you treat them as regular disks. I am told that configuring
RAID0 or RAID1 from Linux can sometimes get better performance than
letting the hardware do the same thing; after all, the OS does not have
to do much to perform those functions.
But my experience is that if one of the drives fails, it's much harder
to recover a RAID1 set from Linux than if you had let the hardware take
care of the RAIDing. (If you lose a drive in a RAID0 set, you've lost
all the data anyway.)
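If you do go the software route, the Linux md driver handles RAID0/RAID1
directly. A minimal sketch with mdadm (assuming two SATA disks that show
up as /dev/sda and /dev/sdb; the device names are placeholders for your
setup):

  # create a two-disk mirror and watch the initial sync
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  cat /proc/mdstat

  # replacing a failed member later looks roughly like this
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1
  # (swap in the new disk, partition it the same way, then:)
  mdadm /dev/md0 --add /dev/sdb1

The md device (/dev/md0) is then treated like any other disk: mkfs it,
mount it, and so on.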
The value that a high-performance RAID controller gives you is the
ability to combine many drives (5 or more) to provide greater throughput,
higher capacity, and redundancy against single-drive failures.
Such a controller will have a number of features:
- Multiple separate disk channels to allow each drive to communicate
uninterrupted with the controller. All the new RAID systems using
PATA or SATA drives will provide a separate channel for each drive, so
each drive becomes a bus master.
- A hardware engine to generate the redundancy code (an XOR parity,
often loosely called a CRC) in real time. Any RAID system that has to
use your CPU to generate the code will be much slower, which sort of
defeats the performance aspect of RAID. I am told that most x86
processors today may outperform most hardware parity generators. That
may be true, but I still would not want to dedicate my host CPU to
processing RAID data. (A toy illustration of the parity arithmetic
follows this list.)
- Some amount of cache memory to buffer host requests. This may be the
most important factor in getting good response from a RAID controller.
Today's RAID systems can make use of two levels of cache: a large chunk
on the controller, and whatever the drives come with.
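To make the parity point concrete, here's a toy illustration (plain bash
arithmetic; the byte values are made up) of the XOR work a software RAID
does for every stripe, and how a lost drive's data gets rebuilt from the
survivors:

  d1=0xA5; d2=0x3C; d3=0x0F            # one byte from each of three data drives
  parity=$(( d1 ^ d2 ^ d3 ))           # what goes to the parity drive
  printf 'parity:     0x%02X\n' "$parity"
  # if drive 2 dies, recover its byte from the parity and the survivors:
  printf 'rebuilt d2: 0x%02X\n' $(( parity ^ d1 ^ d3 ))

Multiply that by every stripe on every write and you can see why
offloading it matters.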
Write requests are usually set up with write-back caching, so host
response is nearly instantaneous. Without the write-back cache, write
requests have to wait for the actual disk operations to complete.
In that case, the more drives there are in the RAID set, the greater
the share of the total time that is pure latency: a write from the host
is divided across the drives in the set, so the more drives, the smaller
the chunk of data that goes to each drive, but each request to each
drive still incurs the same amount of latency. A potential problem with
write-back cache is data inconsistency in the event of a power failure,
so RAID systems that guarantee reliability will have some kind of
battery backup for the cache on the controller. (But that doesn't do
much for the cache in the disks.)
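Some made-up but plausible numbers show the effect (8 ms average seek
plus rotational latency, a 64 KB host write, roughly 50 MB/s per-drive
transfer; all assumed for illustration):

  awk 'BEGIN {
    lat  = 8.0      # per-drive latency, ms (assumed)
    size = 64.0     # host write, KB (assumed)
    rate = 50.0     # per-drive transfer rate, KB/ms (~50 MB/s, assumed)
    for (n = 1; n <= 8; n *= 2) {
      total = lat + (size / n) / rate
      printf "%d drives: %.2f ms, %.0f%% of it latency\n", n, total, 100 * lat / total
    }
  }'

Going from 1 drive to 8 only shaves the total from about 9.3 ms to
8.2 ms, because the fixed latency dominates; that is exactly the gap
the write-back cache papers over.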
Read requests will have to depend on the look-ahead strategy of the
RAID controller's cache. The first of a series of sequential read
requests will be slow, as the controller has to wait for seeks and
reads from all the drives to complete before handing the data back to
the host. Subsequent sequential requests should be much faster. Some
RAID controllers can optimize this further with the disks' own cache.
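The controller's look-ahead policy is baked into its firmware, but the
same idea can be tuned on the Linux side per block device; a small
sketch, assuming the array appears as /dev/md0:

  blockdev --getra /dev/md0          # current read-ahead, in 512-byte sectors
  blockdev --setra 1024 /dev/md0     # raise the look-ahead to 512 KB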
(More than you wanted to know, I'm sure.)
eyc