Re: [opensuse] Re: [OT] vmware and fake scsi devs
Chris Worley wrote:
On Dec 4, 2007 11:49 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs up to 12 drives and see a performance gain with each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Amdahl's law defies "Information theory"? How so?
If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more busses w/ more drives... more scalability... of course, not linear. Add caching effects, and get superlinear scalability (but that doesn't count).
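[Editor's note: a rough back-of-envelope sketch of the scaling argument above. The 70 MB/s per-disk and 320 MB/s bus figures come from the post; the per-drive overhead fraction is an invented illustrative parameter, not a measured value.]

# Hypothetical model of striped-read throughput on a shared bus.
def striped_throughput(n_disks, per_disk=70.0, bus=320.0, overhead=0.02):
    """Aggregate MB/s for n_disks striped on one bus."""
    ideal = n_disks * per_disk                         # perfect parallel scaling
    scaled = ideal / (1.0 + overhead * (n_disks - 1))  # small per-drive CPU cost (Amdahl-style serial fraction)
    return min(scaled, bus)                            # the shared bus is a hard ceiling

for n in (1, 2, 4, 8, 12):
    print(f"{n:2d} disks: ~{striped_throughput(n):5.1f} MB/s")

Under these assumptions the aggregate keeps rising until the bus saturates around the fourth or fifth drive, which is the "less-than-linear" behaviour being argued about.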
Your analysis is flawed because it assumes zero time for disk-head seeks.
I do believe you're the one who's full of s***.
I'm not the one making crazy statements that rely on disk-heads using instant teleportation from track to track.
On Dec 5, 2007 5:12 PM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 11:49 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs up to 12 drives and see a performance gain with each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Amdahl's law defies "Information theory"? How so?
If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more busses w/ more drives... more scalability... of course, not linear. Add caching effects, and get superlinear scalability (but that doesn't count).
Your analysis is flawed because it assumes zero time for disk-head seeks.
How does it assume zero time? If you've got multiple disks all seeking simultaneously, it is a parallel, not serial, operation. You must be assuming that striping across a RAID is somehow a serial operation. Chris
Chris Worley wrote:
On Dec 5, 2007 5:12 PM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 11:49 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs up to 12 drives and see a performance gain with each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Amdahl's law defies "Information theory"? How so?
If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more busses w/ more drives... more scalability... of course, not linear. Add caching effects, and get superlinear scalability (but that doesn't count).
Your analysis is flawed because it assumes zero time for disk-head seeks.
How does it assume zero time?
If you've got multiple disks all seeking simultaneously, it is a parallel, not serial, operation.
That doesn't matter. Your assumption that the data blocks from several disks can be XOR'ed together, and written to one of those disks, and the parity partition on yet another disk is FASTER than not doing so is just patently ridiculous. That doesn't even count the matter of increasing the bandwidth usage by a factor of N for N disks in the RAID 5 configuration.
You must be assuming that striping across a RAID is somehow a serial operation.
No, I'm assuming that you're using RAID 5, which is what you said.
On Dec 6, 2007 10:26 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 5, 2007 5:12 PM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 11:49 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs up to 12 drives and see a performance gain with each new member.
You're hallucinating. That defies basic information theory.
Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Amdahl's law defies "Information theory"? How so?
If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more busses w/ more drives... more scalability... of course, not linear. Add caching effects, and get superlinear scalability (but that doesn't count).
Your analysis is flawed because it assumes zero time for disk-head seeks.
How does it assume zero time?
If you've got multiple disks all seeking simultaneously, it is a parallel, not serial, operation.
That doesn't matter. Your assumption that the data blocks from several disks can be XOR'ed together, and written
Do you mean "to" where you say "from"? It's like you're mixing reads and writes in the same sentence.
to one of those disks, and the parity partition on yet
Note that RAID5 is rotating parity... there is no one "parity partition".
another disk is FASTER than not doing so is just patently ridiculous.
I'm guessing you've fallen victim to some RAID card with a chicklet for a processor. An MD device in Linux is much faster (given modern processors) at calculating the checksum, and the calculation is insignificant compared to the time it takes to write the data. It still represents a serial portion of the operation, but my "Amdahl's Law" disclaimer was clearly posted in your first objection. Have you come to terms with your thermodynamic issue?
That doesn't even count the matter of increasing the bandwidth usage by a factor of N for N disks in the RAID 5 configuration.
You must have some very slow busses too. If you're using PCI-X or PCI-E busses, with multiple (or even single) SCSI U320 or SCA busses... it takes a lot of N to saturate the bandwidth.
You must be assuming that striping across a RAID is somehow a serial operation.
No, I'm assuming that you're using RAID 5, which is what you said.
Which is striped, in parallel.
Chris
Chris Worley wrote:
On Dec 6, 2007 10:26 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 5, 2007 5:12 PM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 11:49 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 4, 2007 10:22 AM, Jc Polanycia wrote:
Off topic, as I seldom partition anything (unpartitioned drives perform best), but you're setting yourself up for disaster using LVM (any corruption to the LVM layer is not recoverable... you'll lose everything... been there, done that), and the performance is poor, and MD RAID5/6 devices can be grown (add more disks).
Chris
Fair enough. I appreciate the input because I haven't run across any real-world stories about LVM corruption. I have personally encountered corruption problems with RAID5/6 as well as problems with decreased performance as a RAID5 structure gets more members added to it.
I saw some RAID6 issues last year, so I use RAID5... but recent tests have shown MD RAID6 as solid.
"Decreased performance as more members get added to it"? Bull!!! I'm guessing you have another bottleneck that has led you to this conclusion.
While the performance increase doesn't scale linearly as disks are added (some CPU overhead is added with each additional drive), the more disks, the better the performance. I'm sure there is some Amdahl's law limit to the increased performance scalability, but I run RAIDs up to 12 drives and see a performance gain with each new member.
You're hallucinating. That defies basic information theory. Your assertion is akin to suggesting that you power your computers with a perpetual motion machine (despite the fact that such would violate the 1st, 2nd, and 3rd laws of thermodynamics).
Amdahl's law defies "Information theory"? How so?
If you've got one disk that can perform at 70MB/s on a 320MB/s bus, then on that bus you should be able to stripe at least four drives with less-than-linear scalability... add more busses w/ more drives... more scalability... of course, not linear. Add caching effects, and get superlinear scalability (but that doesn't count).
Your analysis is flawed because it assumes zero time for disk-head seeks.
How does it assume zero time?
If you've got multiple disks all seeking simultaneously, it is a parallel, not serial, operation.
That doesn't matter. Your assumption that the data blocks from several disks can be XOR'ed together, and written
Do you mean "to" where you say "from"? It's like you're mixing reads and writes in the same sentence.
No, I'm not. To do a write to block #X on Drive A, Block #X on the parity drive must be updated, which means that Block #X on drives B, C, ... must also be read, and all of them XOR'ed together before you can write the parity block on the parity device. (A sketch of this write path follows at the end of this message.) Now, since you seem to be COMPLETELY unaware of the methods by which RAID 5 works, and how that method affects system performance, please STOP POSTING YOUR BULLSHIT CLAIMS that RAID 5 is a high-performance configuration. It's a low-performance alternative to mirroring, because it is substantially cheaper.
to one of those disks, and the parity partition on yet
Note that RAID5 is rotating parity... there is no one "parity partition".
For every block, one of the devices holds the parity block for the corresponding blocks on the other disks in a RAID5 configuration.
another disk is FASTER than not doing so is just patently ridiculous.
I'm guessing you've fallen victim to some RAID card with a chicklet for a processor.
Or alternatively, you're just making shit up, and posting it as fact.
An MD device in Linux is much faster (given modern processors) at calculating the checksum, and the calculation is insignificant compared to the time it takes to write the data. It still represents a serial portion of the operation, but my "Amdahl's Law" disclaimer was clearly posted in your first objection.
But you STILL have to write MORE data (the parity blocks) than without RAID 5, which adds to I/O overhead. Any argument to the contrary is PURE FANTASY.
Have you come to terms with your thermodynamic issue?
I'm still not the one claiming that writing to N+1 disks has less overhead than writing to 1 disk.
That doesn't even count the matter of increasing the bandwidth usage by a factor of N for N disks in the RAID 5 configuration.
You must have some very slow busses too. If you're using PCI-X or PCI-E busses, with multiple (or even single) SCSI U320 or SCA busses... it takes a lot of N to saturate the bandwidth.
Disk-head motion is not free. In fact, it is the most expensive part (delay-wise) of using a disk drive.
You must be assuming that striping across a RAID is somehow a serial operation.
No, I'm assuming that you're using RAID 5, which is what you said.
Which is striped, in parallel.
Not according to the definition of RAID 5. RAID 5 is rotating, distributed parity.
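[Editor's note: a minimal sketch of the small-write path Aaron describes above (read the peer data blocks, XOR everything together, then write the new data block and the new parity block). Block contents are modelled as byte strings and the helper names are illustrative, not from any real RAID implementation.]

def xor_blocks(*blocks):
    """Bytewise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def reconstruct_write(stripe, idx, new_data):
    """Rewrite one data block the expensive way: stripe = [d0, ..., dN-1, parity]."""
    peers = [d for i, d in enumerate(stripe[:-1]) if i != idx]  # read every other data block
    stripe[idx] = new_data                                      # one data write
    stripe[-1] = xor_blocks(new_data, *peers)                   # one parity write
    return stripe

# Example: a 3-data-disk stripe plus parity, then rewrite block 0.
stripe = [b"\x01", b"\x02", b"\x04", xor_blocks(b"\x01", b"\x02", b"\x04")]
reconstruct_write(stripe, 0, b"\x08")

Greg's reply below points out that the parity can instead be patched from just the old data and old parity, which needs far fewer reads on a wide array.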
But you STILL have to write MORE data (the parity blocks) than without RAID 5, which adds to I/O overhead.
Any argument to the contrary is PURE FANTASY.
Write-back cache is the only performance-enhancing feature that helps RAID write performance in comparison to single-drive writes. In pure I/O terms, a RAID configuration would always be slower than a single-drive write. Luckily we have some very intelligent people making RAID controller hardware.
Cheers
Todd
On Dec 6, 2007 2:58 PM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 6, 2007 10:26 AM, Aaron Kulkis wrote:
Chris Worley wrote:
On Dec 5, 2007 5:12 PM, Aaron Kulkis wrote:
<snip>
To do a write to block #X on Drive A, Block #X on the parity drive must be updated, which means that Block #X on drives B, C, ... must also be read, and all of them XOR'ed together before you can write the parity block on the parity device.
I think you're saying that to update a sector on a 10-disk RAID5 you would have to read from most of the drives to recreate the parity. Fortunately, XOR is a reversible process, i.e. A ^ B ^ B = A (where A is the parity without the data on sector B), so just 2 reads and 2 writes are needed. (See my earlier post.)
Greg
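[Editor's note: a minimal sketch of the shortcut Greg describes. Because XOR is its own inverse (A ^ B ^ B == A), the parity can be patched from just the old data block and the old parity block, regardless of how many disks are in the array. The helper names are illustrative only.]

def bxor(a, b):
    """Bytewise XOR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def read_modify_write(old_data, old_parity, new_data):
    """Patch the parity for one rewritten block: two reads in, two writes out."""
    new_parity = bxor(bxor(old_parity, old_data), new_data)  # remove old data, add new data
    return new_data, new_parity

# Sanity check against a full recomputation on a 3-data-disk stripe:
d0, d1, d2 = b"\x01", b"\x02", b"\x04"
parity = bxor(bxor(d0, d1), d2)
_, patched = read_modify_write(d1, parity, b"\x0f")
assert patched == bxor(bxor(d0, b"\x0f"), d2)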
participants (4):
- Aaron Kulkis
- Chris Worley
- Greg Freemyer
- todd@sohovfx.com