On July 9, 2014 11:54:59 PM EDT, Dirk Gently <dirk.gently00@gmail.com> wrote:
Carlos E. R. wrote:
On 2014-07-09 22:44, Greg Freemyer wrote:
On Wed, Jul 9, 2014 at 2:42 PM, Carlos E. R. wrote:
To understand this we have to dig into some mathematical theory, so forget about reads and writes for a second and just focus on the math.
:-} !
== Beware: math below ==
Ok...
My maths are rusty, but I understand.
==== Alright, time to talk about disks ====
...
The end result is 2 simultaneous reads (P and D2) followed by 2 simultaneous writes (Pn and D2n).
I see...
The cool part about that is that it works regardless of how many disks are in the raid 5 array. A single data stride update always requires exactly 2 reads and 2 writes.
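To make that concrete, here is a toy version of the read-modify-write in Python. A single int stands in for a whole stride, and the function name is invented for this sketch; the XOR algebra is the real point.

    # RAID 5 read-modify-write of one data stride, in miniature.
    # Parity P is the XOR of all data strides in the stripe.

    def rmw_update(parity, old_data, new_data):
        # XOR is associative/commutative and x ^ x == 0, so
        # Pn = P ^ D2 ^ D2n folds the old stride out of the parity
        # and folds the new stride in -- no other disk is touched.
        return parity ^ old_data ^ new_data

    stripe = [0x11, 0x22, 0x33, 0x44]      # D1..D4, any width works
    P = 0
    for d in stripe:
        P ^= d                             # initial parity

    D2n = 0x99
    Pn = rmw_update(P, stripe[1], D2n)     # 2 reads (P, D2), 2 writes (Pn, D2n)
    stripe[1] = D2n

    # Sanity check: recomputing parity from scratch gives the same answer.
    check = 0
    for d in stripe:
        check ^= d
    assert check == Pn

Add a fifth or a fifteenth data disk to the stripe and the update path is unchanged: still 2 reads and 2 writes.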
I thought that in Linux raid 5 there is no fixed or dedicated parity disk; the parity stride may be on one disk for one stripe and on another disk for the next. That should distribute the load, instead of forcing every operation to read and write the same parity disk.
That rotation is standard RAID 5 (a fixed parity disk would be RAID 4). RAID 6 goes a step further and keeps two parity blocks per stripe.
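A sketch of the rotation, in the same toy Python style. The mapping below (parity starting on the last disk and walking backwards) matches the "left" layouts in spirit, but take it as an illustration rather than a spec of md's exact on-disk format.

    # Rotating parity across member disks, RAID 5-style.

    def parity_disk(stripe, n_disks):
        # Parity walks backwards through the disks, one per stripe.
        return (n_disks - 1 - stripe) % n_disks

    n = 4
    for stripe in range(8):
        print("stripe", stripe, "-> parity on disk", parity_disk(stripe, n))

Every disk takes its turn holding parity, so the parity write load is spread over the whole array instead of hammering one disk.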
Raid 6 has 2 parity drives and can survive the loss of 2 strides out of a single stripe. That is important when working with large drives (1 TB and up), because with raid 5 it is relatively common to hit a read error on one of the surviving drives during a rebuild, and some arrays abort the rebuild as soon as they hit a single read error. Raid 6 will continue on.

Thus I think of raid 5 as surviving a single drive failure, but not being tolerant of localized sector read errors. I think of raid 6 as being able to survive a single drive failure while simultaneously handling localized sector read errors on the surviving disks.

Greg
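To make that failure mode concrete, here is a toy reconstruction in Python. It is XOR parity only, i.e. raid 5; None standing in for an unreadable sector and the function name are inventions for the sketch, and real raid 6 adds a second syndrome that this deliberately does not implement.

    # Why a raid 5 rebuild dies on a second read error, in miniature.
    # A stripe is parity plus data strides; rebuilding a lost stride
    # XORs every readable member together.

    def reconstruct(strides):
        missing = [i for i, s in enumerate(strides) if s is None]
        if len(missing) > 1:
            # This is where a raid 5 rebuild aborts; raid 6 would
            # fall back on its second parity and carry on.
            raise IOError("more than one unreadable stride; "
                          "XOR parity cannot recover")
        rebuilt = 0
        for s in strides:
            if s is not None:
                rebuilt ^= s
        return rebuilt

    stripe = [0x11 ^ 0x22 ^ 0x33, 0x11, None, 0x33]   # P, D1, D2 (lost), D3
    assert reconstruct(stripe) == 0x22                # one loss: recovered

    stripe[3] = None                                  # latent sector error
    try:
        reconstruct(stripe)
    except IOError as e:
        print("rebuild aborted:", e)

One failed drive is fine; one failed drive plus a single bad sector elsewhere in the same stripe is not, which is exactly the rebuild scenario described above.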