On Dec 6, 2007 10:26 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 5, 2007 5:12 PM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 11:49 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia wrote:
>> Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
>>
>> Chris

> Fair enough. I appreciate the input, because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6, as well as problems with decreased performance as a RAID5 structure gets more members added to it.

I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.

"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs of up to 12 drives, and see performance added w/ each new member.
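The "less-than-linear but still increasing" claim above can be sketched with Amdahl's law. This is an illustrative model only: the per-disk rate and serial fraction below are assumed figures, not measurements from the arrays being discussed.

```python
# Illustrative Amdahl's-law sketch: aggregate throughput of an N-disk stripe
# when a fixed fraction of each I/O is serial work (parity math, bus setup).
# Both constants are assumptions chosen for illustration.

def raid_speedup(n_disks: int, serial_fraction: float) -> float:
    """Amdahl's law: speedup over one disk, given a serial fraction s."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_disks)

single_disk_mb_s = 70.0   # assumed per-disk streaming rate
serial_fraction = 0.05    # assumed serial overhead per I/O

for n in (1, 2, 4, 8, 12):
    throughput = single_disk_mb_s * raid_speedup(n, serial_fraction)
    print(f"{n:2d} disks: ~{throughput:6.1f} MB/s")
```

Each added disk still helps, but the gain per disk shrinks; the ceiling is the single-disk rate times 1/s, which matches "performance added w/ each new member" without linear scaling.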
> You're hallucinating. That defies basic information theory. Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such a machine would violate the first and second laws of thermodynamics).

Amdahl's law defies "information theory"? How so?
If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more busses w/ more drives... more scalability... of course, not linear. Add caching effects, and you get superlinear scalability (but that doesn't count).

> Your analysis is flawed because it assumes zero time for disk-head seeks.
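The 70MB/s-disk-on-a-320MB/s-bus figures above work out as follows; this is back-of-envelope arithmetic using those same numbers, not a benchmark.

```python
# Back-of-envelope bus saturation: how many 70 MB/s disks fit on a 320 MB/s
# bus before the bus, not the disks, becomes the bottleneck. The figures
# are the ones used in the discussion, not measured values.

bus_mb_s = 320.0
disk_mb_s = 70.0

max_disks_before_saturation = int(bus_mb_s // disk_mb_s)
print(max_disks_before_saturation)  # 4 disks stripe cleanly on one bus

# Aggregate streaming rate is capped by whichever is smaller:
def stripe_rate(n_disks: int) -> float:
    return min(n_disks * disk_mb_s, bus_mb_s)

print(stripe_rate(4))  # 280.0 -> still disk-limited
print(stripe_rate(6))  # 320.0 -> bus-limited; add another bus to keep scaling
```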
How does it assume zero time?
If you've got multiple disks all seeking simultaneously, it is a parallel, not serial, operation.
> That doesn't matter. Your assumption that the data blocks from several disks can be XOR'ed together, and written

Do you mean "to" where you say "from"? It's like you're mixing reads and writes in the same sentence.

> to one of those disks, and the parity partition on yet

Note that RAID5 uses rotating parity... there is no single "parity partition".

> another disk is FASTER than not doing so is just patently ridiculous.
I'm guessing you've fallen victim to some RAID card with a chicklet for a processor. An MD device in Linux is much faster (given modern processors) at calculating the parity, and the calculation is insignificant compared to the time it takes to write the data. It still represents a serial portion of the operation, but my "Amdahl's Law" disclaimer was clearly posted before your first objection. Have you come to terms with your thermodynamic issue?
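The two points above (parity is a cheap XOR, and it rotates rather than living on one "parity partition") can be sketched as follows. This is a simplified illustration of RAID5 mechanics, not the actual Linux md implementation, and the rotation rule shown is just one plausible layout.

```python
# Simplified RAID5 sketch: parity is the byte-wise XOR of the data chunks in
# a stripe, and its location rotates across the disks, so no single disk
# holds all parity. Illustration only -- not the Linux md code.

from functools import reduce

def parity(chunks: list) -> bytes:
    """XOR equal-length data chunks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, cols) for cols in zip(*chunks))

def parity_disk(stripe: int, n_disks: int) -> int:
    """Rotate the parity chunk across disks, one stripe at a time."""
    return (n_disks - 1 - stripe) % n_disks

# A stripe of three data chunks on a 4-disk array:
data = [b"\x01\x02", b"\x10\x20", b"\x04\x08"]
p = parity(data)

# Losing any one chunk, the XOR of the survivors (including parity)
# reconstructs it -- that is the whole trick:
rebuilt = parity([data[1], data[2], p])
assert rebuilt == data[0]

print([parity_disk(s, 4) for s in range(4)])  # parity lands on disks 3, 2, 1, 0
```

The XOR itself is a handful of instructions per byte, which is why a modern CPU does it far faster than the write completes.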
> That doesn't even count the matter of increasing the bandwidth usage by a factor of N for N disks in the RAID 5 configuration.
You must have some very slow busses too. If you're using PCI-X or PCI-E busses, with multiple (or even single) SCSI U320 or SCA busses... it takes a lot of N to saturate the bandwidth.
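"It takes a lot of N" can be put in rough numbers. The bus figures below are the peak rates from the PCI-X and Ultra320 specs; sustained real-world rates are lower, and the per-disk rate is the assumed 70MB/s from earlier in the thread.

```python
# Rough saturation math for the busses mentioned above. Peak spec figures;
# real sustained throughput is lower.

import math

pci_x_mb_s = 1064.0  # 64-bit / 133 MHz PCI-X peak
u320_mb_s = 320.0    # Ultra320 SCSI peak, per channel
disk_mb_s = 70.0     # assumed per-disk streaming rate

disks_to_fill_channel = math.ceil(u320_mb_s / disk_mb_s)    # 5 disks per channel
channels_to_fill_pci_x = math.ceil(pci_x_mb_s / u320_mb_s)  # 4 channels per bus

print(disks_to_fill_channel, channels_to_fill_pci_x)
# Roughly 15 such disks spread over 4 U320 channels before a single
# PCI-X bus becomes the bottleneck.
```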
You must be assuming that striping across a RAID is somehow a serial operation.
> No, I'm assuming that you're using RAID 5, which is what you said.
Which is striped, in parallel.

Chris
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org