Fwd: [opensuse] Hardware Compatibility : Adaptec 1430SA / OpenSuSe V11.0
---------- Forwarded message ----------
From: PaPa NoeL
PaPa NoeL wrote:
So I have finally made the md RAID work and I have tested it: performance looks slightly better than with the RAID card during file transfers (+/- 10 Mo/s better). However, I have an issue: when I reboot, mdadm doesn't reassemble everything:
/dev/md0 --> the RAID 1 of sdb and sdc
/dev/md1 --> the RAID 1 of sdd and sde
/dev/md3 --> the RAID 0 of md0 and md1, and this one doesn't come up at reboot.
I have to rebuild it with mdadm --create /dev/md3 --chunk=64 --level=0 --raid-devices=2 /dev/md0 /dev/md1 and remount the file system.
You don't have to recreate it, you can just assemble it. Nonetheless, it should certainly come up automagically. What do you have configured in /etc/mdadm.conf?
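For reference (not from the thread itself), an /etc/mdadm.conf for the layout described above would typically hold one ARRAY line per array; the UUIDs below are placeholders, and the real values come from mdadm --examine --scan:

```
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<uuid-of-md0>
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=<uuid-of-md1>
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=<uuid-of-md3>
```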
/Per
-- /Per Jessen, Zürich
-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
I had no /etc/mdadm.conf, so I wrote one myself from the example in /usr/share/doc/packages/mdadm. No success. I tried the assemble option: no success. It says something weird about the file system on md0 and md1, such as it being formatted in ext2. I never did that?!? I only formatted md3 in ext3. What happens during boot is that md0 and md1 are OK, but when mdadm wants to set up md3, it says that md0/md1 are not ready. I guess there is a tweak or something? ElPaPaNoeL
PaPa NoeL wrote:
I had no /etc/mdadm.conf, so I wrote one myself from the example in /usr/share/doc/packages/mdadm. No success. I tried the assemble option: no success.
Uh, you tried 'mdadm --assemble' and it didn't work?? Which error messages did you get?
It says something weird about the file system on md0 and md1, such as it being formatted in ext2.
Send us the output from cat /proc/mdstat please. When you've got all three arrays running.
What happens during boot is that md0 and md1 are OK, but when mdadm wants to set up md3, it says that md0/md1 are not ready.
Aha - that is interesting. Let's also see the output from cat /proc/mdstat before you assemble/recreate md3. To put something reasonable in /etc/mdadm.conf, you can do: mdadm --examine --scan >/etc/mdadm.conf
/Per
Per Jessen wrote:
PaPa NoeL wrote:
I had no /etc/mdadm.conf, so I wrote one myself from the example in /usr/share/doc/packages/mdadm. No success. I tried the assemble option: no success.
Uh, you tried 'mdadm --assemble' and it didn't work?? Which error messages did you get?
It says something weird about the file system on md0 and md1, such as it being formatted in ext2.
I've just checked the mdadm man page, and it seems the procedure for creating a RAID10 is not what I thought - it looks like you can do it with a single "mdadm --create -n4 -l10" - you probably need to try that. Sorry about confusing the issue, I was certain a RAID10 was created in two rounds.
/Per
2008/12/7 Per Jessen
Here are the logs:
DMESG:
Adding 2104472k swap on /dev/sda1. Priority:-1 extents:1 across:2104472k
md: md0 stopped.
device-mapper: uevent: version 1.0.3
device-mapper: ioctl: 4.13.0-ioctl (2007-10-18) initialised: dm-devel@redhat.com
md: bind<sdc>
md: bind<sdb>
md: raid1 personality registered for level 1
raid1: raid set md0 active with 2 out of 2 mirrors
md: md1 stopped.
md: bind<sde>
md: bind<sdd>
raid1: raid set md1 active with 2 out of 2 mirrors
md: md3 stopped.
bootsplash: status on console 0 changed to on
ohci_hcd: 2006 August 04 USB 1.1 'Open' Host Controller (OHCI) Driver
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
drivers/hid/usbhid/hid-core.c: v2.6:USB HID core driver
IA-32 Microcode Update Driver: v1.14a
PaPa NoeL wrote:
The mdadm --examine command: no output, but it changes the conf file.
Yes, I should have said that.
(none):/ # mdadm --examine --scan >/etc/mdadm.conf
(none):/ # more /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f8046dc0:05c37635:7fcbf47e:4ac21441
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c36ae49b:699da7bc:a4f43e27:6aca9070
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=132f9525:5eea8007:9884fa3f:17c9560b
I have rebooted and it works fine now. I think it's the mdadm --examine --scan >/etc/mdadm.conf that sorted it out!
Great - I'm glad you got it to work!
Will the 'mdadm --create -n4 -l10' make any difference from this configuration?
Not really - you'll have the same result, but it will probably look a little different in /proc/mdstat.
/Per
2008/12/8 Per Jessen
mdadm --create -n4 -l10 doesn't work because it says I don't have enough devices??? I've got four, and gave it four... I didn't have much time to test, but going from RAID 0+1 to RAID 10 I have lost a lot of performance... File transfer in RAID 0+1 = 70 MO/s, file transfer in RAID 10 = 116 MO/s???? There must be something wrong? If I had a real RAID card with the same disks, would the performance rock? (I'm only using 5% of the CPU with soft RAID) Cheers, El PaPaNoeL
PaPa NoeL wrote:
Didn't have much time to test, but going from RAID 0+1 to RAID 10 I have lost a lot of performance... File transfer in RAID 0+1 = 70 MO/s, file transfer in RAID 10 = 116 MO/s????
How did you measure this? What was the source and what was the target?
There must be something wrong?
If I had a real RAID card with the same disks, would the performance rock? (I'm only using 5% of the CPU with soft RAID)
I have a 3ware controller with raid5 on 5 disks. Read performance is up to 300 MB/s, write performance only about 80-90 MB/s - typical raid5 values. Most of the time these values are of little use; usually it is much more important to see how the storage performs for many simultaneous read/write sessions.
-- Sandy
List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
2008/12/8 Sandy Drobic
Sorry, I made a mistake:
File transfer in RAID 0+1 = 70 MO/s, file transfer in RAID 10 = 16 MO/s - and NOT 116.
It's measured using a samba share transfer, as that is more relevant for me since it's the main purpose of this NAS (I know it also depends on the 2nd computer's HDD). I should do another test with hdparm or dd from /dev/random.
PaPa NoeL wrote:
Sorry, I made a mistake:
File transfer in RAID 0+1 = 70 MO/s, file transfer in RAID 10 = 16 MO/s - and NOT 116.
What the heck is "MO/s"?
It's measured using a samba share transfer, as that is more relevant for me since it's the main purpose of this NAS (I know it also depends on the 2nd computer's HDD).
I should do another test with hdparm or dd from /dev/random.
Use bonnie, but take care to set a size bigger than your RAM size.
-- Sandy
2008/12/8 Sandy Drobic
Sorry, MO is MB.
Sandy Drobic wrote:
File transfert in RAID 0+1 = 70 MO/s File transfert in RAID 10 = 16 MO/s and NOT 116
What the heck is "MO/s"?
Mega-octets per second. 'Octet' is quite often used instead of 'byte', especially in France and in certain applications/systems. (I seem to remember Unisys being one of those companies that used 'octet' instead of 'byte'.) I think IETF documents do the same.
/Per
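Since an octet and a byte are the same size, converting the figures quoted in the thread is direct; a quick shell sanity check (the 70 MO/s figure is the one reported above):

```shell
# 1 octet = 1 byte, so 70 MO/s is 70 MB/s.
# On the wire that is 70 * 8 = 560 Mbit/s - more than 100 Mbit
# ethernet can carry, so gigabit is needed to see such samba rates.
echo "$((70 * 8)) Mbit/s"   # prints: 560 Mbit/s
```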
On Mon, Dec 8, 2008 at 4:46 AM, PaPa NoeL
I should do another test with hdparm or dd from /dev/random.
/dev/random is not a high-speed source of data. If the "entropy" of your system is low, /dev/random will pause until it has more random information to build a random data stream from. I.e. it uses random system activity such as key presses and mouse movements to randomize its data, but if it is low on random data to drive its algorithms, it simply pauses until it has enough. Use a real test tool and have it simulate your real-world loads.
Greg
-- Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
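Greg's point can be sidestepped entirely by using /dev/zero as the data source. A minimal sketch of a sequential write test with dd, where the target path and size are arbitrary assumptions (not from the thread):

```shell
# Hypothetical write test: 64 MB of zeros to a scratch file.
# /dev/zero never blocks, unlike /dev/random, which stalls when the
# entropy pool is empty. conv=fdatasync forces a flush to disk before
# dd reports its rate, so the page cache doesn't inflate the figure.
dd if=/dev/zero of=/tmp/raidtest.bin bs=1M count=64 conv=fdatasync
```

On a real run you would point of= at the RAID 10 mount and use a size larger than RAM, for the same reason Sandy gave for bonnie.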
participants (4)
- Greg Freemyer
- PaPa NoeL
- Per Jessen
- Sandy Drobic