On July 9, 2014 5:29:17 PM EDT, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 2014-07-09 22:44, Greg Freemyer wrote:
On Wed, Jul 9, 2014 at 2:42 PM, Carlos E. R. wrote:
To understand this we have to dig into some mathematical theory, so forget about reads and writes for a second and just focus on the math.
:-} !
== Beware: math below ==
Ok...
My maths are rusty, but I understand.
==== Alright, time to talk about disks.
...
The end result is 2 simultaneous reads (P and D2) followed by 2 simultaneous writes (Pn and D2n).
I see...
The cool part about that is that it works regardless of how many disks are in the raid 5 array. A single data stride update always requires exactly 2 reads and 2 writes.
I thought that in Linux raid 5 there is no fixed or dedicated parity disk, but that the parity for one stride may be on one disk and for the next on another. That should distribute the load, and not force every op to read and write the same P disk.
What I was trying to say is: if you have a 10-disk raid 5, then one specific stripe might look like:

D1 D2 D3 D4 P D5 D6 D7 D8 D9

A write to D2 would only require a read of D2 and P, then a write of D2n and Pn. That D2 and P are not on fixed disks is an unrelated truth.
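The 2-read/2-write update above can be sketched in a few lines of Python. This is an illustrative toy (two-byte strides, names like rmw_update invented here), not the md driver's code; the point is only the XOR identity Pn = P xor D2 xor D2n:

```python
# Minimal sketch of a raid 5 read-modify-write with XOR parity.
# Illustrative only -- not how the Linux md driver is actually written.

def rmw_update(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Compute the new parity Pn from only D2, P and D2n:
    Pn = P xor D2 xor D2n. The other data strides never need reading,
    because their contribution to P is unchanged."""
    return bytes(p ^ d ^ dn for p, d, dn in zip(old_parity, old_data, new_data))

# 4 data strides plus parity for one stripe (2 bytes per stride)
strides = [b'\x01\x01', b'\x02\x02', b'\x03\x03', b'\x04\x04']
parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*strides))

# Update D2 using only 2 reads (D2, P) and 2 writes (D2n, Pn)
new_d2 = b'\x0f\x0f'
new_parity = rmw_update(strides[1], parity, new_d2)
strides[1] = new_d2

# Check: Pn equals a full XOR recomputation over all (updated) strides
assert new_parity == bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*strides))
```

The assertion at the end is the whole argument: the shortcut parity matches what a full recomputation over every stride would give, which is why the other strides never have to be read.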
===== I don't actually know how raid 6 works, so I can't do the same walk-through, but my understanding is that a single data stride update with raid 6 involves 3 reads (P1, P2, D2) and 3 writes (P1n, P2n, D2n).
The rest of the data strides don't have to be read to do the
calculations.
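For what it's worth, that understanding matches the usual P+Q scheme (which is what Linux md raid 6 implements): P is plain XOR, and the second parity Q is a Reed-Solomon syndrome over GF(2^8), which can likewise be updated from just the old and new D2. A byte-wise sketch, assuming the textbook generator 2 and polynomial 0x11d; the names and one-byte strides are illustrative, not the driver's code:

```python
# Sketch of a raid 6 (P+Q) single-stride update: 3 reads, 3 writes.
from functools import reduce

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) with the raid 6 polynomial 0x11d."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

# One byte per data stride; stride i has Q-coefficient g**i with g = 2.
data = [0x01, 0x02, 0x03, 0x04]
coef = [0x01, 0x02, 0x04, 0x08]                            # 2**i in GF(2^8)
p = reduce(lambda x, y: x ^ y, data)                       # P = XOR of strides
q = reduce(lambda x, y: x ^ y,
           (gf_mul(c, d) for c, d in zip(coef, data)))     # Q = RS syndrome

# Update stride 1 (D2), reading only D2, P and Q -- 3 reads, 3 writes.
old_d2, new_d2 = data[1], 0x0f
pn = p ^ old_d2 ^ new_d2
qn = q ^ gf_mul(coef[1], old_d2 ^ new_d2)
data[1] = new_d2

# Both parities match a full recomputation over all strides.
assert pn == reduce(lambda x, y: x ^ y, data)
assert qn == reduce(lambda x, y: x ^ y,
                    (gf_mul(c, d) for c, d in zip(coef, data)))
```

The Q shortcut works for the same reason the P one does: each stride contributes an independent term (here g^i times D_i) to the syndrome, so only the changed stride's term has to be swapped out.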
=====
That was a fun exercise. I hope at least a couple of people learned
something.
I did.
As long as there is not a quiz coming ;-)
Thanks :-)
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org