SuSE 9.1: No performance increase using software RAID-0?
Hello all,
I've got a SuSE 9.1 system here, with 4 disks, 2 RAID-0 (2 disks each), and a number of non-RAID partitions on it. Neither hdparm nor manual file copying seems to indicate a significant difference in speed between the RAID and the non-RAID partitions. The two drives of each RAID-0 partition are not on the same IDE connector. Any thoughts on what might cause this, and how I might fix it? I thought there'd at least be _some_ advantage to using RAID-0.
Regards,
Pieter Hulshoff
On Thu, 6 Jan 2005 11:13:58 +0100, Pieter Hulshoff wrote:
Hello all,
I've got a SuSE 9.1 system here, with 4 disks, 2 RAID-0 (2 disks each), and a number of non-RAID partitions on it. Neither hdparm nor manual file copying seems to indicate a significant difference in speed between the RAID and the non-RAID partitions. The two drives of each RAID-0 partition are not on the same IDE connector. Any thoughts on what might cause this, and how I might fix it? I thought there'd at least be _some_ advantage to using RAID-0.
Regards,
Pieter Hulshoff
Hi,
RAID-0 is just a stripe, i.e. concatenating partitions to form a larger logical unit. I cannot see how you can get a performance increase in that situation. RAID-1 (mirror) is going to speed things up when reading, because it is going to read from 2 places at the same time, while writing will be the same (or slower), as it has to write the same thing twice to 2 different places.
Cheers
Sunny
--
Get Firefox http://www.spreadfirefox.com/?q=affiliates&id=10745&t=85
On Thursday 06 January 2005 16:32, Sunny wrote:
On Thu, 6 Jan 2005 11:13:58 +0100, Pieter Hulshoff wrote:
Hello all,
I've got a SuSE 9.1 system here, with 4 disks, 2 RAID-0 (2 disks each), and a number of non-RAID partitions on it. Neither hdparm nor manual file copying seems to indicate a significant difference in speed between the RAID and the non-RAID partitions. The two drives of each RAID-0 partition are not on the same IDE connector. Any thoughts on what might cause this, and how I might fix it? I thought there'd at least be _some_ advantage to using RAID-0.
Regards,
Pieter Hulshoff
Hi, RAID-0 is just a stripe, i.e. concatenating partitions to form a larger logical unit. I cannot see how you can get a performance increase in that situation.
I think you may have misunderstood RAID-0. Have a look at e.g. http://www.intel.com/support/chipsets/iaa_raid/sb/CS-009337.htm Regards, Pieter Hulshoff
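(For context on the correction above: RAID-0 striping interleaves fixed-size chunks across the member disks, so a long sequential read can keep both drives busy at once. A minimal sketch of the mapping; the chunk size and block numbers below are arbitrary illustration values, not md driver defaults:)

```shell
# Illustrative RAID-0 block mapping: logical blocks are grouped into
# chunks, and consecutive chunks alternate between the member disks.
# A chunk size of 4 blocks and 2 disks are example values only.
chunk_size=4
n_disks=2
for lb in 0 3 4 8; do
    chunk=$(( lb / chunk_size ))
    offset=$(( lb % chunk_size ))
    disk=$(( chunk % n_disks ))
    block=$(( (chunk / n_disks) * chunk_size + offset ))
    echo "logical block $lb -> disk $disk, block $block"
done
```

Consecutive chunks land on alternating disks, which is why a streaming read should in principle approach the combined throughput of both drives.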
On Thu, 6 Jan 2005 11:13:58 +0100, Pieter Hulshoff wrote:
Hello all,
I've got a SuSE 9.1 system here, with 4 disks, 2 RAID-0 (2 disks each), and a number of non-RAID partitions on it. Neither hdparm nor manual file copying seems to indicate a significant difference in speed between the RAID and the non-RAID partitions. The two drives of each RAID-0 partition are not on the same IDE connector. Any thoughts on what might cause this, and how I might fix it? I thought there'd at least be _some_ advantage to using RAID-0.
Regards,
Pieter Hulshoff
I agree with your thought and have no suggestion re: RAID-0.
For pure disk performance, one tuning trick I found with 9.2 is to use "noapic" on the kernel boot line. It increased my disk speed 20-30%. If you have more than one CPU, you may not want to do this; APIC is used in SMP setups, and I don't know what happens if you disable it.
Also, how are you measuring performance? I use "iostat -d 5", but you have to install that via YaST.
For a read test, I do: md5sum /dev/hdc1. Without noapic, I get 70,000-80,000 blocks/sec; with noapic, I get 100,000 or a little more.
For a copy test, I do: "dd if=/dev/hdc of=/dev/hde". It tends to be 50-60,000 without noapic, and 70-80,000 with noapic.
The above numbers are from memory, so I may have them wrong, but the concept is right.
FYI: I am using the Adaptec PCI dual-channel ATA/133 controller with each disk on a different controller. My CPU is a P4 3.1 GHz.
Greg
--
Greg Freemyer
On Thursday 06 January 2005 16:58, Greg Freemyer wrote:
On Thu, 6 Jan 2005 11:13:58 +0100, Pieter Hulshoff wrote:
For pure disk performance, one tuning trick I found with 9.2 is to use "noapic" on the kernel boot line. It increased my disk speed 20-30%.
Done that already. My soundcard's giving me headaches if I don't use noapic.
Also, how are you measuring performance?
copying: time cp <file1> <file2> hdparm: hdparm -tT /dev/hd<drive> Regards, Pieter Hulshoff
On Thu, 6 Jan 2005 17:10:17 +0100, Pieter Hulshoff wrote:
On Thursday 06 January 2005 16:58, Greg Freemyer wrote:
On Thu, 6 Jan 2005 11:13:58 +0100, Pieter Hulshoff wrote:
For pure disk performance, one tuning trick I found with 9.2 is to use "noapic" on the kernel boot line. It increased my disk speed 20-30%.
Done that already. My soundcard's giving me headaches if I don't use noapic.
Also, how are you measuring performance?
copying: time cp <file1> <file2>
Then the filesystem comes into play. I mostly move 640 MB files around. I find the xfs filesystem is about 30% faster than ext2. (Yes, I was shocked the first time I compared them.) That is very filesize dependent, so you need to pick the filesystem best suited for your needs.
Also, are you copying to/from the same physical disks? If so, I don't know how much RAID-0 is buying you.
I think you said you had 4 disks; if you need speed (and I know I do), you might be better off setting up 2 RAID-0 arrays of 2 disks each, then copying between them. Even if you have to double copy, it will likely be faster.
hdparm: hdparm -tT /dev/hd<drive>
You can do that on a software RAID? Somehow, I would not trust the result. And if you are saying your RAID-0 file copy is only going as fast as hdparm is reporting, then you are comparing apples and oranges. The copy has filesystem overhead that hdparm does not have.
Another couple of tests you could use:
For raw read performance (only do this while the array is not mounted!!!):
  umount <raid-0 device>
  dd if=<raid-0 device> of=/dev/null
(running "iostat -d 5" in another shell session).
For filesystem read/write:
  dd if=<large-file-on-raid0-filesystem> of=/dev/null
  dd if=/dev/zero of=<file-on-raid0-filesystem>
(running "iostat -d 5" in another shell session).
Regards,
Pieter Hulshoff
I hope someone gives you the answer to your RAID-0 question. The only thing I can think of is if you are mounting with the sync flag. That is pretty non-standard, but it would prevent RAID-0 from buffering and thus, no performance gain.
FYI: I am going to set up a hardware RAID-0 based on 3ware and do some performance testing with it. I have not done that, but it might be informative as part of the troubleshooting.
Greg
--
Greg Freemyer
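(A shrunk-down, self-contained version of the dd write test described above might look like this. The temporary file is a stand-in; on a real run you would point it at a file on the RAID-0 filesystem and use a size well beyond RAM so the page cache doesn't hide the disk:)

```shell
# Hypothetical dd write test against a temporary file; conv=fsync makes
# dd flush data to disk before reporting, so the timing on the summary
# line reflects real writes rather than the page cache (for small sizes).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 conv=fsync 2>&1 | tail -n 1
rm -f "$f"
```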
On Thursday 06 January 2005 18:38, Greg Freemyer wrote:
On Thu, 6 Jan 2005 17:10:17 +0100, Pieter Hulshoff wrote:
Then the filesystem comes into play. I mostly move 640 MB files around. I find the xfs filesystem is about 30% faster than ext2. (Yes, I was shocked the first time I compared them.)
I'm sure one file system is faster than the other, but I'm comparing the same file system on RAID-0 and non-RAID partitions.
Also, are you copying to/from the same physical disks? If so, I don't know how much RAID-0 is buying you.
I tried copying files within the same partition (RAID-0 to RAID-0 and non-RAID to non-RAID), files from two different partitions to one non-RAID partition (RAID-0 to non-RAID2 and non-RAID1 to non-RAID2), and from two different partitions to one RAID-0 partition (RAID-0 #1 to RAID-0 #2 and non-RAID to RAID-0 #2). There were hardly any differences between the performance of the RAID partitions and the non-RAID partitions.
I think you said you had 4 disks; if you need speed (and I know I do), you might be better off setting up 2 RAID-0 arrays of 2 disks each, then copying between them. Even if you have to double copy, it will likely be faster.
That's exactly what I've got in my system:
  md0 = /dev/hdb1 + /dev/hdd1
  md1 = /dev/hda4 + /dev/hdc4
Non-RAID partitions are on /dev/hda1, /dev/hda3, /dev/hdc1, and /dev/hdc3; /dev/hda2 and /dev/hdc2 are swap.
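(For reference, with mdadm such stripe sets could be (re)created roughly as below. The chunk size is an assumption, and since creating an array destroys its contents, the commands are only echoed here rather than executed:)

```shell
# Echo the mdadm invocations that would build the two stripe sets
# described above (illustrative only -- do not run against live data).
for spec in \
    "/dev/md0 /dev/hdb1 /dev/hdd1" \
    "/dev/md1 /dev/hda4 /dev/hdc4"
do
    set -- $spec
    echo "mdadm --create $1 --level=0 --raid-devices=2 --chunk=64 $2 $3"
done
```

After a real create, "cat /proc/mdstat" shows whether both members are active.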
hdparm: hdparm -tT /dev/hd<drive> You can do that on a software raid? Somehow, I would not trust the result.
It works just fine. Whether or not the results can be trusted is another matter, but the results I got from hdparm vs hdparm were no different from what I got from file copying vs file copying.
And if you are saying your Raid-0 file copy is only going as fast as hdparm is reporting, then you are comparing apples and oranges.
I did not compare hdparm to copy. I compared copy to copy, and hdparm to hdparm.
I hope someone gives you the answer to your RAID-0 question. The only thing I can think of is if you are mounting with the sync flag. That is pretty non-standard, but it would prevent RAID-0 from buffering and thus, no performance gain.
I've checked that; I'm not using the sync flag.
FYI: I am going to setup a hardware raid-0 based on 3ware and do some performance testing with it. I have not done that, but it might be informative as part of the troubleshooting.
I'd really appreciate that. I'll try some of the tests you described in the days to come. My system works; it's just a bit frustrating not to know why RAID-0 performs the same as non-RAID. Makes one wonder why you'd want the added risk of data loss. Regards, Pieter Hulshoff
That's exactly what I've got in my system:
  md0 = /dev/hdb1 + /dev/hdd1
  md1 = /dev/hda4 + /dev/hdc4
Non-RAID partitions are on /dev/hda1, /dev/hda3, /dev/hdc1, and /dev/hdc3; /dev/hda2 and /dev/hdc2 are swap.
For some reason, I had assumed you were using a PCI-based ATA controller. From the above, it looks like you are using a motherboard-based controller.
I have done some timing tests on a couple of those, and in particular on the Intel motherboards I use, the 2 different IDE channels definitely impact each other, i.e. they are not independent channels, but share some common piece. So you may not be buying anything by doing RAID-0.
If you have $50 to try a PCI-based controller, you should see the speed improvement you expect. I have tried SIIG and Adaptec PCI cards; the Adaptec is slightly faster. The 3ware controller should be even faster, but I have not really found that to be true, and they are not cheap.
Greg
--
Greg Freemyer
On Thursday 06 January 2005 20:54, Greg Freemyer wrote:
That's exactly what I've got in my system:
  md0 = /dev/hdb1 + /dev/hdd1
  md1 = /dev/hda4 + /dev/hdc4
Non-RAID partitions are on /dev/hda1, /dev/hda3, /dev/hdc1, and /dev/hdc3; /dev/hda2 and /dev/hdc2 are swap.
For some reason, I had assumed you were using a PCI-based ATA controller. From the above, it looks like you are using a motherboard-based controller.
That is correct: I'm using the motherboard's controllers in combination with Linux software RAID. Considering that I can use 8 IDE devices with my motherboard, perhaps I should consider creating the RAID arrays like:
  /dev/hda + /dev/hde
  /dev/hdb + /dev/hdf
That way the two drives will be on separate controllers as well. I just noticed that copying data from /dev/hda to /dev/hde, for instance, is 3-5 times as fast as copying data from /dev/hda to /dev/hdc.
Regards,
Pieter Hulshoff
On Thu, 6 Jan 2005 21:16:43 +0100, Pieter Hulshoff wrote:
On Thursday 06 January 2005 20:54, Greg Freemyer wrote:
That's exactly what I've got in my system:
  md0 = /dev/hdb1 + /dev/hdd1
  md1 = /dev/hda4 + /dev/hdc4
Non-RAID partitions are on /dev/hda1, /dev/hda3, /dev/hdc1, and /dev/hdc3; /dev/hda2 and /dev/hdc2 are swap.
For some reason, I had assumed you were using a PCI-based ATA controller. From the above, it looks like you are using a motherboard-based controller.
That is correct: I'm using the motherboard's controllers in combination with Linux software RAID. Considering that I can use 8 IDE devices with my motherboard, perhaps I should consider creating the RAID arrays like:
  /dev/hda + /dev/hde
  /dev/hdb + /dev/hdf
That way the two drives will be on separate controllers as well. I just noticed that copying data from /dev/hda to /dev/hde, for instance, is 3-5 times as fast as copying data from /dev/hda to /dev/hdc.
Regards,
Pieter Hulshoff
Nice motherboard that can control 8 disks. Unless you really need to, I would avoid using slaves for any of your hard drives; put CD/DVD drives, etc. on the slaves. And yes, I think if you make your RAID-0 sets on hda/hde and hdc/hdg you will see a big speed improvement, but using hda will still interfere with hdc, and hde with hdg. The PCI-based controllers have more independent IDE controllers, so using hda impacts hdc less than on the motherboard chipsets I have tested.
Greg
--
Greg Freemyer
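(The master/slave point follows from the standard Linux 2.x IDE device naming: each channel carries a master and a slave that share one cable and cannot transfer at the same time. A quick sketch of which device sits where:)

```shell
# Map each hdX device to its IDE channel and role under the standard
# Linux 2.x naming: two devices per channel, master first, then slave
# (hda/hdb = channel 0, hdc/hdd = channel 1, hde/hdf = channel 2, ...).
for d in a b c d e f g h; do
    n=$(printf '%d' "'$d")            # ASCII code of the letter
    ch=$(( (n - 97) / 2 ))            # channel index
    role=$([ $(( (n - 97) % 2 )) -eq 0 ] && echo master || echo slave)
    echo "/dev/hd$d: channel $ch, $role"
done
```

This is why a stripe over hda/hde (or hdc/hdg) keeps each member on its own cable, while pairing in hdb or hdd would put an array member on a cable it shares with another busy disk.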
participants (3)
- Greg Freemyer
- Pieter Hulshoff
- Sunny