[opensuse] Four disks RAID 0 performance
Hi,

I've made several tests of RAID 0 performance for an upcoming server upgrade and got strange results, so I'm asking some experienced users for opinions and suggestions.

I used an IBM x3400 with 4 SATA 250 GB disks, software RAID, and hdparm -tT for the test, on OpenSUSE 11.0 x64.

1) One disk shows about 60 MB/s, and that's OK.
2) Two disks in RAID 0 show about 120 MB/s, and that's OK too.
3) Three disks in RAID 0 show about 180 MB/s, and that's OK too, but
4) four disks in RAID 0 show 180 MB/s again, which I can't explain.
5) I also tried a nested RAID 0 of two 2-disk RAID 0 arrays, which likewise shows no more than 180 MB/s.

I checked the mdadm syntax several times (so I can be pretty sure there is no problem there) and tried several combinations, but I never get more than 180 MB/s, which is far from the approximate upper limit of 240 MB/s that I expected and wanted. Why don't I see any improvement from adding a fourth disk to the RAID 0? Is there some other software or hardware limit that could apply?

I also tried the built-in hardware RAID 0 on all 4 disks; hdparm -tT shows something between 90 and 130 MB/s, which is even stranger.

I've seen several benchmarks on the Web that clearly show 240 MB/s in configurations similar to mine.

Is hdparm good enough for testing? Is there some proven, better benchmark that somebody can suggest?

I have only 4 disks now, so I can't test RAID 50.

Thanks for any opinions and suggestions.

-- Ivan Guštin
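(For reference, a minimal sketch of the kind of setup and test described above, assuming md software RAID; the device names /dev/sdb through /dev/sde and the 64K chunk size are placeholders, not details from the post:

    # Create a 4-disk RAID 0 array (chunk size here is an assumption)
    mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # -T measures cached reads (memory bandwidth, not the disks);
    # -t measures buffered sequential reads from the device itself
    hdparm -tT /dev/md0
)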
On Tuesday 21 October 2008 14:26:12 Ivan Gustin wrote:
I checked the mdadm syntax several times (so I can be pretty sure there is no problem there) and tried several combinations, but I never get more than 180 MB/s, which is far from the approximate upper limit of 240 MB/s that I expected and wanted. Why don't I see any improvement from adding a fourth disk to the RAID 0? Is there some other software or hardware limit that could apply?
It almost sounds like you have the disks configured as SATA 1, which has an upper speed limit of around 180 MB/s. SATA 2 doubles that. So check what kind of SATA you have configured. SATA 2 controllers can be in SATA 1 compatibility mode, so just because it's a SATA 2 controller, you can't assume it's configured correctly. Check the BIOS settings.

Anders
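(One way to check this from a running system, assuming a libata kernel and with /dev/sda standing in for one of the disks:

    # The kernel logs the negotiated link speed at boot;
    # "SATA link up 1.5 Gbps" would indicate SATA 1 mode
    dmesg | grep -i 'SATA link up'

    # List the signaling speeds the drive itself supports
    hdparm -I /dev/sda | grep -i 'signaling speed'
)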
On Tuesday 21 October 2008 14:26:12 Ivan Gustin wrote:
I checked the mdadm syntax several times (so I can be pretty sure there is no problem there) and tried several combinations, but I never get more than 180 MB/s, which is far from the approximate upper limit of 240 MB/s that I expected and wanted. Why don't I see any improvement from adding a fourth disk to the RAID 0? Is there some other software or hardware limit that could apply?
It almost sounds like you have the disks configured as SATA 1, which has an upper speed limit of around 180 MB/s. SATA 2 doubles that.
So check what kind of SATA you have configured. SATA 2 controllers can be in SATA 1 compatibility mode, so just because it's a SATA 2 controller, you can't assume it's configured correctly. Check the BIOS settings.
And check the jumper on the disk (if there is one). Seagate disks have a small jumper that, as the drive comes from the factory, sets it to SATA 1 mode by default.
Anders
-- L. de Braal BraHa Systems NL - Terneuzen T +31 115 649333 F +31 115 649444
On Tue, Oct 21, 2008 at 2:26 PM, Ivan Gustin wrote:
Hi,
I've made several tests of RAID 0 performance for an upcoming server upgrade and got strange results, so I'm asking some experienced users for opinions and suggestions.
I used an IBM x3400 with 4 SATA 250 GB disks, software RAID, and hdparm -tT for the test, on OpenSUSE 11.0 x64.
1) One disk shows about 60 MB/s, and that's OK. 2) Two disks in RAID 0 show about 120 MB/s, and that's OK too. 3) Three disks in RAID 0 show about 180 MB/s, and that's OK too, but 4) four disks in RAID 0 show 180 MB/s again, which I can't explain. 5) I also tried a nested RAID 0 of two 2-disk RAID 0 arrays, which likewise shows no more than 180 MB/s.
I checked the mdadm syntax several times (so I can be pretty sure there is no problem there) and tried several combinations, but I never get more than 180 MB/s, which is far from the approximate upper limit of 240 MB/s that I expected and wanted. Why don't I see any improvement from adding a fourth disk to the RAID 0? Is there some other software or hardware limit that could apply?
I also tried the built-in hardware RAID 0 on all 4 disks; hdparm -tT shows something between 90 and 130 MB/s, which is even stranger.
I've seen several benchmarks on the Web that clearly show 240 MB/s in configurations similar to mine.
Is hdparm good enough for testing? Is there some proven, better benchmark that somebody can suggest?
I have only 4 disks now, so I can't test RAID 50.
Thanks for any opinions and suggestions.
-- Ivan Guštin
Hi,

Well, you have probably hit the maximum throughput your fakeRAID chip can provide. People run into the limits of their RAID hardware often enough: HotHardware had a test in which they hit the 600 MB/s limit of their RAID card...

The only solutions would be:

1. Install a real hardware RAID card. They often have a lot more to give, but these things are expensive (below $100 is probably useless for what you want; there are cards that lie about being hardware RAID). Ask the real wizards here for compatible brands.

2. Try the same hardware, but with a Linux soft-RAID setup. If the south bridge of your motherboard can handle the data and only the RAID part is the restriction, this would be a good solution, but you can't boot from it.

3. Buy a cheap SATA-300 card to handle 2 of the drives and put them in a Linux soft-RAID. This is useful if the south bridge can't handle the data from all of the disks directly (it still goes through the south bridge, but in a different way).

Hope it helps,
Neil

-- There are three kinds of people: Those who can count, and those who cannot count
-----------------------------------------------------------------------
** Hi! I'm a signature virus! Copy me into your signature, please! **
-----------------------------------------------------------------------
On Tue, Oct 21, 2008 at 8:26 AM, Ivan Gustin wrote:
Hi,
I've made several tests of RAID 0 performance for an upcoming server upgrade and got strange results, so I'm asking some experienced users for opinions and suggestions.
I used an IBM x3400 with 4 SATA 250 GB disks, software RAID, and hdparm -tT for the test, on OpenSUSE 11.0 x64.
1) One disk shows about 60 MB/s, and that's OK. 2) Two disks in RAID 0 show about 120 MB/s, and that's OK too. 3) Three disks in RAID 0 show about 180 MB/s, and that's OK too, but 4) four disks in RAID 0 show 180 MB/s again, which I can't explain. 5) I also tried a nested RAID 0 of two 2-disk RAID 0 arrays, which likewise shows no more than 180 MB/s.
I checked the mdadm syntax several times (so I can be pretty sure there is no problem there) and tried several combinations, but I never get more than 180 MB/s, which is far from the approximate upper limit of 240 MB/s that I expected and wanted. Why don't I see any improvement from adding a fourth disk to the RAID 0? Is there some other software or hardware limit that could apply?
I also tried the built-in hardware RAID 0 on all 4 disks; hdparm -tT shows something between 90 and 130 MB/s, which is even stranger.
I've seen several benchmarks on the Web that clearly show 240 MB/s in configurations similar to mine.
Is hdparm good enough for testing? Is there some proven, better benchmark that somebody can suggest?
I have only 4 disks now, so I can't test RAID 50.
Thanks for any opinions and suggestions.
-- Ivan Guštin
In the 200 MB/s range it could easily be a bottleneck in the SATA controller, the I/O bus (PCI / PCI Express / PCI-X), or even the northbridge, etc. Of course it could also be the RAID software/firmware not handling 4 drives efficiently. Or, as you said, hdparm may not have the ability to report speeds above 180 MB/s.

Also, I doubt you have an application that can either produce or consume data at 180 MB/s or faster, so this is all just a benchmark war, I assume.

Greg

-- Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
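(If hdparm itself is suspect, one rough cross-check is a large direct-I/O sequential read; /dev/md0 below is a placeholder for the array device, and dd prints the average throughput when it finishes:

    # Read 4 GiB straight off the array, bypassing the page cache
    dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
)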
Also, I doubt you have an application that can either produce or consume data at 180 MB/s or faster, so this is all just a benchmark war, I assume.
Amazing leap. I didn't hear him say anything about his usage other than using the word "server" in a sentence. I would say the opposite: disk I/O is the main bottleneck for most services these days, so, lacking specific details, if you had to assume anything it's safer to assume that his application IS disk-I/O-bound than not.

Every one of my servers, being multi-user, real-time, database-driven application servers performing unholy numbers of small random-access transactions to many files at once, can always use all the disk bandwidth that can possibly be provided. 24 10krpm SAS spindles on a brand-new Adaptec 5000-series hardware RAID, configured for RAID 10 (RAID 5 is too slow for unavoidable reasons that no hardware can get around), is still not too much bandwidth. It's great. It's significantly faster than the 8 7200rpm SATA2 spindles on Adaptec 3000-series cards that most of my other servers have, which themselves are already "good enough" up to a certain number of users on average. But there really is no point at which more is pointless.

Even with that 24-drive screamer, many common actions take time, and the more disk bandwidth and the more spindles to parallelize operations, the faster those actions go (reports that touch lots of files; rsync and other backups; df/du/find and other utilities that need to scan through everything and in doing so end up blowing away the various caches in the OS and in the hardware). Larger cache on the drives, larger cache on the card, faster channels to the drives: it all helps, even though the platters can really only deliver about 60-70 MB/s and even though the PCIe interface can't actually deliver the full 24x3 Gbit theoretical bandwidth to the CPU or RAM. Lots of ops get coalesced within the RAID card, and the RAID card can use plenty of bandwidth itself, just between the disks and the card, maintaining the RAID.

He didn't say all that about his application, but he didn't say otherwise either. He may not even know whether the increased speed will help his application very much. He may be in the process of finding out by testing, which is not only perfectly valid but the rightest thing in the world. It's just about never right to presume to tell anyone else that they don't need what they are asking for, unless they provide a truly ridiculous amount of detail, background, and context about their situation, such that you could actually evaluate the whole system and decree things like "N seconds per transaction is more than good enough, because this and this other factor limit things anyway, and so more won't make a difference..."

-- Brian K. White brian@aljex.com http://www.myspace.com/KEYofR
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!
On Tue, Oct 21, 2008 at 6:27 PM, Brian K. White wrote:
Also, I doubt you have an application that can either produce or consume data at 180 MB/s or faster, so this is all just a benchmark war, I assume.
Amazing leap. I didn't hear him say anything about his usage other than using the word "server" in a sentence.
I would say the opposite: disk I/O is the main bottleneck for most services these days, so, lacking specific details, if you had to assume anything it's safer to assume that his application IS disk-I/O-bound than not.
Every one of my servers, being multi-user, real-time, database-driven application servers performing unholy numbers of small random-access transactions to many files at once, can always use all the disk bandwidth that can possibly be provided.
With your servers, what does iostat show your MB/s rate to be? I issue the command as "iostat -d 10". The blocks-per-second columns are in sectors: 180 MB/s * 2,000 sectors/MB ==> 360,000 sectors/s. If iostat shows more than 360,000 sectors/second with a real-world app, I will be very surprised. Much more likely you are limited by some bottleneck other than raw data throughput at that rate.

Greg

-- Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
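(A sketch of that measurement and the conversion, with md0 as a placeholder for the array device; iostat's Blk_read/s and Blk_wrtn/s columns count 512-byte sectors:

    # Report per-device throughput every 10 seconds
    iostat -d 10

    # Or convert the Blk_read/s column (field 3) to MB/s on the fly
    iostat -d 10 | awk '/md0/ { printf "%.1f MB/s read\n", $3 * 512 / 1000000 }'
)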
participants (6)
-
Anders Johansson
-
Brian K. White
-
Greg Freemyer
-
Ivan Gustin
-
Leen de Braal
-
Neil