Re: [opensuse] Software IDE RAID versus hardware IDE RAID
  • From: Sam Clemens <clemens.sam1@xxxxxxxxx>
  • Date: Wed, 02 Apr 2008 18:16:52 -0400
  • Message-id: <47F405D4.5000807@xxxxxxxxx>
Jason Bailey, Sun Advocate Webmaster wrote:
On Tue, Apr 1, 2008 at 6:48 PM, Jason Bailey, Sun Advocate Webmaster
<webmaster@xxxxxxxxx> wrote:

I've always felt software raid was too much of a tax on the system, not to
mention less reliable (regardless of OS).


Wrong on both counts.

Not so fast... just hold on there.


So wrong that merely stating that indicates you have never even used
software raid.

Not so. I have used software RAID on many systems, just not on a SUSE Linux machine. I've simply never used YaST to set it up. I have mixed feelings about software raid, regardless of platform.

You just went out and spent the money, and never looked back.


Wrong again. Boy, you're jumping to some big conclusions here. My tech partner and I actually did quite a bit of research before coming to the same conclusion. That aside, past experience speaks volumes. You stick with what you know works - what you have had good experience with.

I manage several servers, all of them loafing along at under 2%
utilization under heavy data access traffic from a multitude of
workstations. That's 2% total system utilization for both Samba
and RAID combined. Software raid is fast and lightweight.

Linux raid is faster and more lightweight than that of Windows, at least in my experience. But it still burns CPU cycles that could instead be offloaded to a RAID card with onboard processing.

Minimal. *ALL* disk controller cards these days are bus-master
cards. The O/S gives the controller cards 4 pieces of information:
1. The physical memory address of the I/O buffer

2. The size of the I/O operation to perform

3. Operation type (READ/WRITE). In well-designed disk
controllers, this is the high bit in the same data word
as the size data, like so:

bit:   31 | 30 ............................. 0
      R/W | S S S ... (transfer size) ... S S

4. The logical location of the blocks to read or write.

Once the controller card has that information, it performs
the rest of the I/O operation on its own, and signals the
CPU when the operation is finished.
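
Just as a sketch, in C, that command block boils down to something
like this (the struct and field names are my own illustration, not
any particular controller's actual register map):

    #include <stdint.h>

    /* Illustrative only -- one command handed to a bus-master card. */
    struct disk_cmd {
        uint64_t buf_phys;  /* 1. physical address of the I/O buffer  */
        uint32_t rw_size;   /* 2+3. bit 31 = R/W, bits 30-0 = size    */
        uint64_t lba;       /* 4. logical location of the blocks      */
    };

    #define CMD_WRITE (1u << 31)  /* high bit set = write, clear = read */

    static uint32_t pack_rw_size(int is_write, uint32_t nbytes)
    {
        return (is_write ? CMD_WRITE : 0) | (nbytes & 0x7FFFFFFFu);
    }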

For software RAID, in the event of a write, the CPU
sends the 4 pieces of control information (in 3 writes)
for each mirror.
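
In other words (again just a sketch, reusing struct disk_cmd from
above; issue_cmd() is a hypothetical stand-in for the handful of
register writes a real driver would do per controller):

    extern void issue_cmd(int controller, const struct disk_cmd *cmd);

    /* Mirrored write: hand the SAME command block to each member
       disk's controller -- a few register writes per mirror, then
       the cards carry out the I/O on their own. */
    static void mirrored_write(const struct disk_cmd *cmd,
                               const int controllers[], int n)
    {
        for (int i = 0; i < n; i++)
            issue_cmd(controllers[i], cmd);
    }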


The only time software RAID is really a load on the CPU
is in the event of write operations in RAID 5 or RAID 6,
due to having to XOR all of the parallel blocks....and
in this day and age, I can't see even that being a
significant load compared to either a database engine
or any non-trivial application + filesystem overhead.
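
For reference, that XOR is all the parity math there is -- a sketch
in plain C, nothing implementation-specific about it:

    #include <stddef.h>
    #include <stdint.h>

    /* RAID 5 parity block: XOR the corresponding bytes of every
       data block in the stripe. This is the per-write work a
       software RAID 5 layer has to spend CPU on. */
    static void compute_parity(uint8_t *parity, uint8_t *const data[],
                               size_t ndata_disks, size_t blocksize)
    {
        for (size_t i = 0; i < blocksize; i++) {
            uint8_t p = 0;
            for (size_t d = 0; d < ndata_disks; d++)
                p ^= data[d][i];
            parity[i] = p;
        }
    }

(And for a small write, the layer can shortcut the full stripe with
new_parity = old_parity ^ old_data ^ new_data, which is cheaper still.)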


And 2% is much lower than what I have experienced. If you're going to make that argument, then I guess hardware-accelerated video cards are worthless too... let's just stick with software-rendered OpenGL from now on....

That's a completely inapt comparison.

With a raid card your system never gets any faster than that card.
Beef up the server, and it's still as slow as the card. Software
raid improves with each server upgrade.

That's a stretch if I've ever seen one. Whether you're talking SATA or EIDE, you have the physical speed of the channel. A SATA1 RAID card runs at SATA1 speeds, along with the connected SATA1 channel. Same goes for SATA2 or EIDE. Upgrading operating systems won't change that. You've gotta change hardware to make that happen.

I think he was talking about the motherboard.
With hardware RAID, you have to upgrade your RAID card, too.

And if you ever have one disk crash or a controller fail with
hardware raid, you have to go find another card to match, because
the remaining raid drives are recorded using a proprietary scheme
that is not transportable.

Could the card fail? Yeah. But an individual drive can still be mounted as an individual drive and the data can be salvaged, if needed. Been there, done that.

Proprietary schemes? My card saves the array data in the firmware of the card, not on the hard drives in the RAID array. So I don't know what RAID cards you've been using. In any event, because of the onboard processing, the raids I use aren't dependent upon kernel modules and drivers that can go awry. A physical card is less prone to software-related issues that can creep up.

That much is true.
Of course, if the RAID card fails...

With software raid you can mix SCSI, IDE, and SATA drives, and
actually gain performance by doing so. With hardware raid you are
locked into a specific type of disk, and usually a specific SIZE
of disk.


Performance gains? That's a real stretch. When the card can take processing cycles away from your processor, the hardware raid will speed things up, not the other way around.

Other than RAID 5 and RAID 6, the difference is trivial.


Now, remember... not all hardware raid cards are created equal. Some cards don't have onboard processors, which means they have to have drivers (i.e. kernel modules) to power them (the processing is done by the PC's CPU). In those cases, some of your arguments carry more weight. But I'm talking about RAID cards that have onboard processors. These are not the same beasts.


But other than XOR operations for RAID 5/6, the amount
of processing that these RAID cards do saves the main
CPU a few hundred CPU cycles per additional disk to write,
which isn't even one microsecond of difference.
Considering that disk head seek times are still on
the order of milliseconds, it's not something which
would keep me awake at night if I was just doing some
combination of mirroring and/or striping.
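
To put rough numbers on that (both figures below are round
assumptions, not measurements):

    #include <stdio.h>

    /* Back-of-envelope: extra CPU cycles per mirrored write
       vs. a single disk seek. */
    int main(void)
    {
        double cpu_hz       = 2e9;   /* assume a 2 GHz CPU          */
        double extra_cycles = 400;   /* per additional disk written */
        double seek_s       = 5e-3;  /* assume ~5 ms average seek   */

        double overhead_s = extra_cycles / cpu_hz;       /* 0.2 us */
        printf("software overhead: %.2f us per extra disk\n",
               overhead_s * 1e6);
        printf("one seek is %.0f times larger\n",
               seek_s / overhead_s);                     /* 25000x */
        return 0;
    }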

As for less reliable, I've had disks fail over 8 years of using
software raid, but regardless of raid type (1 or 5) I've never
lost any data.


Linux software raid, in my opinion, is more reliable than that of Windows (yeah, a real shocker). But I still don't think software raid can outperform or outlast hardware raid with onboard processing. I think a majority of system admins out there would agree.

But when it does fail.....hardware RAID can fail in a much
more spectacular manner.

Personally, I'll put up with some minor annoyances if
it avoids spectacular failures.

The best system admins are the ones who nobody knows,
because users never even become aware of problems.

And when drives did fail, recovery of the array was as simple as
installing the replacement disk (or inserting a new hot-spare).
Raid rebuild happened totally in the background without any
downtime beyond what was necessary for the actual disk swap. In
one server, with hot-removable drives, a drive swap on the fly
was a total non-event. The users never knew it happened.


I have had my RAID break once. I entered the RAID card's BIOS and it was a trivial fix. I plugged the new drive in and told the BIOS to replace the bad drive in the array and voila... it was done. RAID cards don't necessarily make additions or repairs to your array more complicated. In fact, one could argue that with some raid cards, it's EASIER to fix or set up than with software raid.

Provided the RAID card doesn't fail AND you have a spare on site,
AND you have all the partitioning data stored someplace else.

There's a reason that HP and Sun use software RAID even in
their high-end machines.

IBM seems to prefer hardware RAID, but then again, when
it comes to HARDWARE, IBM is the industry leader (and in
my 27 years of experience, their software tends to suck),
and the hardware was originally developed for their
mainframe lines....some of which don't even have an
on/off switch.

Whether to use hardware or software raid is a personal decision, and there are pros and cons to each. The biggest benefit of software raid, in my opinion, is the cost (no extra hardware to buy). But I think if reliability and performance are your priorities, hardware raid is the way to go.

If your RAID cards are of IBM quality, yes.
All others... I regard with suspicion.


But then again, obviously not everyone sees things the same way...

--
To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
For additional commands, e-mail: opensuse+help@xxxxxxxxxxxx
