On 2014-07-09 22:44, Greg Freemyer wrote:
On Wed, Jul 9, 2014 at 2:42 PM, Carlos E. R. wrote:
To understand this we have to dig into some mathematical theory, so forget about reads and writes for a second and just focus on the math.
:-} !
== Beware: math below ==
Ok... My maths are rusty, but I understand.
==== Alright, time to talk about disks.
...
The end result is 2 simultaneous reads (P and D2) followed by 2 simultaneous writes (Pn and D2n).
I see...
The cool part about that is that it works regardless of how many disks are in the raid 5 array. A single data stride update always requires exactly 2 reads and 2 writes.
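To make that concrete, here is a small Python sketch of that read-modify-write parity update as I understand it; the function and variable names (raid5_update, old_parity, old_d2, new_d2) are just invented for the illustration:

  # RAID 5 read-modify-write for a single data block update (D2 -> D2n).
  # Read the old data and the old parity, XOR both into the new data,
  # and the result is the new parity -- no other disk has to be read.
  def raid5_update(old_parity: bytes, old_d2: bytes, new_d2: bytes) -> bytes:
      """Return the new parity block Pn for the update D2 -> D2n."""
      return bytes(p ^ d ^ dn for p, d, dn in zip(old_parity, old_d2, new_d2))

  # 2 reads:  P (old_parity) and D2 (old_d2)
  # 2 writes: Pn (the return value) and D2n (new_d2)
  new_parity = raid5_update(old_parity=b"\x0f", old_d2=b"\x01", new_d2=b"\x03")
  assert new_parity == bytes([0x0f ^ 0x01 ^ 0x03])

The trick is just that Pn = P xor D2 xor D2n expands to (D1 xor D2 xor D3 xor ...) xor D2 xor D2n; the old D2 cancels itself out, leaving the parity of the updated stripe, which is why none of the other data disks need to be touched.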
I thought that in Linux raid 5 there is no fixed or dedicated parity disk, but that the parity for one stride may be on one disk and for the next stride on another. That should distribute the load and not force every operation to read from and write to the same P disk.
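If I understand the md layouts right, that is indeed the case: the parity block rotates from stripe to stripe (the default raid5 layout is left-symmetric), so "the P disk" above just means whichever disk holds the parity for that particular stripe. A rough sketch of the rotation, meant only to show the idea and not md's exact block ordering:

  # Which disk holds parity for a given stripe, assuming a layout where
  # parity simply walks one disk to the "left" per stripe (roughly what
  # md's default left-symmetric raid5 layout does; the exact data-block
  # ordering differs per layout).
  def parity_disk(stripe: int, n_disks: int) -> int:
      return (n_disks - 1) - (stripe % n_disks)

  for stripe in range(6):
      print("stripe", stripe, "-> parity on disk", parity_disk(stripe, 4))
  # stripe 0 -> disk 3, stripe 1 -> disk 2, stripe 2 -> disk 1,
  # stripe 3 -> disk 0, stripe 4 -> disk 3, ...

So no single spindle turns into a dedicated parity hot spot, but the 2-reads/2-writes count above still holds: any one data block update touches exactly two disks, the one with the data block and the one holding that stripe's parity.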
===== I don't actually know how raid 6 works, so I can't do the same walk-through, but my understanding is that a single data stride update with raid 6 involves 3 reads (P1, P2, D2) and 3 writes (P1n, P2n, D2n).
The rest of the data strides don't have to be read to do the calculations.
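For completeness, here is a sketch of how the raid 6 update is usually described (P is plain XOR as in raid 5, Q is a Reed-Solomon style checksum over GF(2^8), which is what the kernel's raid6 code uses); the helper names are invented, and whether md always does this per-block read-modify-write rather than reconstructing the whole stripe is something I have not checked:

  # RAID 6 keeps two checksum blocks per stripe:
  #   P = D0 ^ D1 ^ ... ^ D(n-1)                      (plain XOR)
  #   Q = g^0*D0 ^ g^1*D1 ^ ... ^ g^(n-1)*D(n-1)      (multiply in GF(2^8))
  # Updating data block i therefore needs the old D, P and Q (3 reads)
  # and produces the new D, P and Q (3 writes); the other data blocks
  # cancel out exactly as in the raid 5 case.
  def gf_mul(a: int, b: int) -> int:
      """Multiply two bytes in GF(2^8), reducing by the polynomial 0x11d."""
      p = 0
      for _ in range(8):
          if b & 1:
              p ^= a
          carry = a & 0x80
          a = (a << 1) & 0xFF
          if carry:
              a ^= 0x1D
          b >>= 1
      return p

  def gf_pow2(i: int) -> int:
      """g^i for the generator g = 2."""
      r = 1
      for _ in range(i):
          r = gf_mul(r, 2)
      return r

  def raid6_update(old_p, old_q, old_d, new_d, disk_index):
      # Reads: old_p, old_q, old_d.  Writes: new_p, new_q, plus new_d itself.
      coeff = gf_pow2(disk_index)
      new_p = bytes(p ^ d ^ dn for p, d, dn in zip(old_p, old_d, new_d))
      new_q = bytes(q ^ gf_mul(coeff, d ^ dn)
                    for q, d, dn in zip(old_q, old_d, new_d))
      return new_p, new_q

(P and Q here correspond to the P1 and P2 above.)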
=====
That was a fun exercise. I hope at least a couple of people learned something.
I did. As long as there is no quiz coming ;-)

Thanks :-)

--
Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 "Bottle" at Telcontar)