On 2021/01/27 03:43, Carlos E. R. wrote:
> On 27/01/2021 04.11, David C. Rankin wrote:
>> On 1/26/21 6:37 AM, Carlos E. R. wrote:
>>> On 26/01/2021 01.27, David C. Rankin wrote:
>>>> If this is windows on motherboard raid, it's probably dmraid
>>>> (otherwise known as Fake RAID or BIOS RAID) -- it's not really
>>>> Fake, it's just the moniker dmraid ended up with from the
>>>> hardware RAID snobs...
>>> It is really fake, because it doesn't run in hardware: it runs in
>>> software, on the computer CPU, with read support in the BIOS so
>>> that it can boot. Once booted, it gets write code from the driver,
>>> running on the mainboard CPU, not on the RAID card.
>>>
>>> A true hardware RAID doesn't use the mainboard CPU, and is
>>> transparent to the operating system.
>> I'm sure Neil Brown and the rest on the linux-raid list would be
>> surprised to learn it is fake...
>>
>> It's software... (and the overhead ceased being measurable when the
>> 486 came out) Fake RAID is far superior to hardware.
>
> Of course it is :-)
>
>> Just have a battery die on your hardware card and drop from
>> write-back to write-through... and then find out your battery was
>> discontinued 3 years ago.
Nothing like people overgeneralizing.
1) Fake RAID -- Since the OEMs don't call it "Fake RAID", it's
   hard to say exactly what you are talking about, but Dell ships
   a BIOS/firmware-operated RAID, though it doesn't offer all the
   modes of their HW solutions. RAID0 and RAID1 are fairly trivial
   to do, though I don't know whether the combined RAID10 (1+0) is
   supported. Their RAID is handled by pre-OS BIOS/firmware, so it
   works with Linux, Windows or whatever. It just looks like an
   oversized HD to the OS (see the sketch below).
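
   As a minimal illustration of "looks like one oversized HD": when
   the RAID is assembled below the OS, Linux just exposes whatever
   the firmware presents under /sys/block. A quick Python sketch
   (nothing here is Dell-specific; device names will vary):

      #!/usr/bin/env python3
      # Sketch: list the block devices the kernel sees. A RAID set
      # assembled by firmware shows up as one large drive here,
      # not as its member disks.
      from pathlib import Path

      for dev in sorted(Path("/sys/block").iterdir()):
          sectors = int((dev / "size").read_text())  # 512-byte sectors
          print(f"{dev.name}: {sectors * 512 / 1e9:.1f} GB")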
2) Whether or not something is better depends on your usage and
   the type of RAID you are using. As far as reliability goes, I
   have the _dated_ experience of Linux kernel crashes from back
   before the kernel was fully SMP, when 2 cores weren't as fast
   as a single same-clock CPU for many peak-speed tasks, though
   they could usually process more work given the multitasking
   nature of most loads. In that timeframe, I had my software
   RAID5 (Linux MD) disks become corrupt and unrecoverable more
   than once (twice) in their 1st year of use, before I switched
   to HW RAID. In the following 2 decades I had HW RAID fail
   once, due to a "re-manufactured" LSI card that had the
   heat-sink super-glued on (as I later found out) rather than
   held by 4 screws with stiff springs + thermal paste, as it
   comes new.
   The problem there was that I didn't know what a new card was
   supposed to look like. I'd seen enough motherboards and cards
   with random chips epoxied on in opaque epoxy -- to keep anyone
   from reading details off a chip or removing it in a recoverable
   fashion -- that I couldn't tell what was supposed to be
   spring-mounted vs. what was epoxied to deter tampering.
   I also found another difference that mattered hugely for speed
   between the SW and HW RAID setups I used. The SW RAIDs were
   very tolerant of speed differences between member disks, but
   that also meant striped access didn't deliver the expected
   performance: a SW RAID5 with 4 data disks read and wrote at
   about the speed of 2-3 single disks.
   Putting the same disks behind a HW RAID card showed why: about
   9-10 out of 12 were flagged as "bad" by the card. The reason:
   they were Deskstars, sold for the home market, rather than the
   Ultrastars sold for enterprise. The Deskstars varied from the
   stated 7200 RPM by as much as 15%, and roughly 9 of the 12
   disks failed on that speed variance. Ultrastars at the time
   cost about 33% more for the same size, but were within about
   1-2% of each other in speed. (A back-of-envelope sketch of the
   effect follows.)
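
   Why that variance hurts a stripe: every full-stripe access
   finishes only when the slowest member does, so the slowest disk
   sets the pace for all of them. A tiny sketch; the MB/s figures
   below are made up for illustration, not measurements:

      # Hypothetical per-disk streaming rates (MB/s) w/ ~15% spread,
      # like mismatched Deskstars. A stripe runs at the slowest
      # member's pace, times the number of members.
      members = [60, 59, 51, 60]
      stripe = len(members) * min(members)  # what the array delivers
      ideal = sum(members)                  # if every disk kept up
      print(f"stripe: {stripe} MB/s vs ideal: {ideal} MB/s")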
   Second big area of difference -- HW cards can do their own
   parity computation for RAID5/50/6/60. Beefier cards have dual
   CPUs on the RAID card and perform noticeably better on
   RAID6/RAID60 configs and slightly better on RAID5/RAID50.
   Battery-backed RAM run in write-back mode can smooth out
   write-bursts and sustain higher I/O-ops for greater parallel
   usage than RAM used in write-through mode. RAM on the card (or
   somewhere) is still needed to calculate parity stripes in
   RAID5/6 modes; likely at least 1 stripe's width of card memory
   is used so a full stripe can be written to each disk in
   parallel. (A sketch of the parity math follows.)
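
   For concreteness, the RAID5 parity a card offloads is just a
   byte-wise XOR across the data chunks of one stripe. A minimal
   Python sketch with a toy 4-chunk stripe (chunk size and names
   are my own illustration, not any card's firmware):

      # XOR parity over one stripe: p = c0 ^ c1 ^ ... ^ cN
      def parity(chunks):
          p = bytearray(len(chunks[0]))
          for c in chunks:
              for i, b in enumerate(c):
                  p[i] ^= b
          return bytes(p)

      stripe = [bytes([i] * 8) for i in (1, 2, 3, 4)]  # toy chunks
      p = parity(stripe)
      # any one lost chunk is recoverable: XOR parity w/ the rest
      assert parity([p, stripe[0], stripe[1], stripe[3]]) == stripe[2]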
   I'm sorta guessing, but RAID0, RAID1 and RAID10 (a stripe of
   mirrors) can hold data in a write buffer for the least time,
   since no calculations need be done; basically, a HW RAID card
   can take the parallel writes off the OS CPU, so it appears to
   write multiple data disks in the time the OS would normally be
   able to write 1. (A toy sketch of that addressing follows.)
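
   To show why no math beyond simple addressing is involved, here
   is a toy sketch mapping a logical chunk number onto a 4-disk
   RAID10 (stripe of mirrors). The layout and names are my own
   illustration, not any particular implementation:

      # Toy RAID10 addressing: stripe across mirror pairs, then
      # write both halves of the chosen pair. Pure arithmetic --
      # no parity to compute.
      def raid10_targets(chunk, pairs=2):
          pair = chunk % pairs      # which mirror pair gets it
          offset = chunk // pairs   # chunk offset within the pair
          return [(2 * pair, offset), (2 * pair + 1, offset)]

      print(raid10_targets(5))      # chunk 5 -> disks 2 & 3, offset 2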
Anyway, it hasn't been my experience that SW RAID is better than
HW RAID, but that may be due, in part, to using a common RAID
card in my setups (LSI->Avago->Broadcom). FWIW, though, 1 PCIe
SSD may well outperform many RAID setups in single-user tasks,
while large RAIDs using 2.5" disks may better serve the needs of
DB users and web hosting.