[opensuse] Help with Raid on 10.3
Greetings all,

I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions. The problem is the system won't boot on the first reboot after install. I originally set all partitions to be extended (not primary), which didn't boot. So I started over from scratch and set /boot as a primary partition with all others (swap, /, /home, etc.) as extended, but it still won't boot. At this point I had enabled raid in the bios for both sata controllers, but had not set up the raid volume in the motherboard raid controller.

Switching gears, I then tried setting up the dm raid in the bios, and yast saw it as an nvidia_something-or-other. I believe that would have installed and booted, but I'm unsure whether I should use the dm raid or if the md raid is preferable. My mobo is an ECS C51G-M754.

This is my first stab at raid, and I would appreciate any advice anyone could give.

Many thanks,
Jim F
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org
On 12/02/2007 03:32 PM, Jim Flanagan wrote:
Greetings all,
I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions. The problem is the system won't boot on the first reboot after install. I originally set all partitions to be extended (not primary) which didn't boot. So I started over from scratch and set /boot as primary partition with all others, swap, /, /home, etc as extended, but it still won't boot. At this point I had enabled raid in the bios for both sata controllers, but not set up the raid volume in the motherboard raid controller.
Switching gears I then tried setting up the dm raid in the bios and yast saw that as an nvidia_something-or-other and I believe that would have installed and booted, but I'm unsure whether or not I should use the dm raid or if the md raid is preferable.
Mine did work, but I have had a fair bit of experience with md raid. First, you mentioned you had a separate boot partition. Is that also part of your raid1? Since grub does not understand md raid yet, it needs to boot from the MBR of one of the drives. Obviously the generic MBR (which boots the active partition) will not work with md raid, but that is the default. During install, you need to change where grub is installed, putting it in the MBR. On mine, /boot is on my raid1 root. This is noted in grub's menu.lst. Grub finds the /boot/grub directory initially via one of the drives, e.g. sda5. It then loads stage 1.5, and the raid1 modules have to be part of the initrd for it to find and use the raid1. When I upgraded our office server, I did change the defaults for grub, but it had no problem finding the raid1 root or boot, and installed quite smoothly, so it does work. Just remember: the default is a generic boot loader in the MBR, and grub is NOT installed in the MBR. If /boot is on raid, that partition cannot be made active, so it cannot boot that way. Grub needs to be installed in the MBR of the drive your BIOS is set to boot from. HTH.
--
Joe Morris Registered Linux user 231871 running openSUSE 10.3 x86_64
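A rough sketch of the procedure Joe describes, using the grub (legacy) shell. The device and partition numbers here (sda/sdb, and (hd0,4) for a /boot/grub on sda5) are assumptions to adapt to your own layout; the second stanza remaps the other mirror member as hd0 so it gets identical MBR boot code and either disk can boot alone:

```shell
# Sketch: install grub (legacy) stage1 into the MBR of both mirror
# members. (hd0,4) = the partition holding /boot/grub, e.g. /dev/sda5.
grub --batch <<'EOF'
device (hd0) /dev/sda
root (hd0,4)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,4)
setup (hd0)
EOF
```

On 10.3 the YaST bootloader module can make the same change; the point is only that grub stage1 must land in the MBR rather than relying on the generic boot code plus an active partition.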
Joe Morris (NTM) wrote:
On 12/02/2007 03:32 PM, Jim Flanagan wrote:
Greetings all,
I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions. The problem is the system won't boot on the first reboot after install. I originally set all partitions to be extended (not primary) which didn't boot. So I started over from scratch and set /boot as primary partition with all others, swap, /, /home, etc as extended, but it still won't boot. At this point I had enabled raid in the bios for both sata controllers, but not set up the raid volume in the motherboard raid controller.
Switching gears I then tried setting up the dm raid in the bios and yast saw that as an nvidia_something-or-other and I believe that would have installed and booted, but I'm unsure whether or not I should use the dm raid or if the md raid is preferable.
Mine did work, but I have had a fair bit of experience with md raid. First, you mentioned you had a separate boot partition. Is that also a part of your raid1? Since grub does not understand md raid yet, it needs to boot from the MBR of one of the drives. Obviously, the generic MBR (which boots the active partition) will not work with md raid, but that is the default. During install, you need to change where grub is installed, putting it in the MBR. On mine, my /boot is on my raid1 root. This is noted in grub's menu.lst. Grub finds the /boot/grub directory initially via one of the drives, i.e. sda5. It then loads stage 1.5, and the raid1 modules have to be a part of the initrd for it to find and use the raid1. When I upgraded our office server, I did change the defaults for grub, but it had no problems finding the raid1 root or boot, and installed quite smoothly, so it does work. Just remember, the default is a generic boot loader in the MBR, and grub is NOT installed in the MBR. Since if boot is on raid, that partition cannot be made active, so it cannot boot that way. GRUB needs to be installed in the MBR of the drive your BIOS is set to boot from. HTH.
I haven't gotten this working just yet, but I think I'm making progress. You gave me something to work with. Thanks. Will revert.

Jim F
Jim Flanagan wrote:
Joe Morris (NTM) wrote:
On 12/02/2007 03:32 PM, Jim Flanagan wrote:
Greetings all,
I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions. The problem is the system won't boot on the first reboot after install. I originally set all partitions to be extended (not primary) which didn't boot. So I started over from scratch and set /boot as primary partition with all others, swap, /, /home, etc as extended, but it still won't boot. At this point I had enabled raid in the bios for both sata controllers, but not set up the raid volume in the motherboard raid controller.
Switching gears I then tried setting up the dm raid in the bios and yast saw that as an nvidia_something-or-other and I believe that would have installed and booted, but I'm unsure whether or not I should use the dm raid or if the md raid is preferable.
Mine did work, but I have had a fair bit of experience with md raid. First, you mentioned you had a separate boot partition. Is that also a part of your raid1? Since grub does not understand md raid yet, it needs to boot from the MBR of one of the drives. Obviously, the generic MBR (which boots the active partition) will not work with md raid, but that is the default. During install, you need to change where grub is installed, putting it in the MBR. On mine, my /boot is on my raid1 root. This is noted in grub's menu.lst. Grub finds the /boot/grub directory initially via one of the drives, i.e. sda5. It then loads stage 1.5, and the raid1 modules have to be a part of the initrd for it to find and use the raid1. When I upgraded our office server, I did change the defaults for grub, but it had no problems finding the raid1 root or boot, and installed quite smoothly, so it does work. Just remember, the default is a generic boot loader in the MBR, and grub is NOT installed in the MBR. Since if boot is on raid, that partition cannot be made active, so it cannot boot that way. GRUB needs to be installed in the MBR of the drive your BIOS is set to boot from. HTH.
I haven't gotten this working just yet, but I think I'm making progress. You gave me something to work with. Thanks. Will revert.
Jim F
OK, I've got the software raid booting now. The main problem was that I misread the SUSE wiki: http://en.opensuse.org/How_to_install_SUSE_Linux_on_software_RAID. I had mistakenly partially enabled raid in my bios, thinking that was needed for software raid. It is not. I had not set up the raid in the bios, but merely enabled it. In any case, that was the problem. I disabled it and set up the raid as per the wiki, and it works. Per the wiki I made:

- /boot (primary)
- extended: swap, /, /home, /share

One note here: it definitely did not like /boot being on an extended partition, and would not boot from that. I probably could have gotten that working too, but by making /boot a primary partition, grub boots fine the first time, with no tweaking. I did make all the other partitions (swap, /, /home and the rest) extended, and all works fine.

Many thanks,
Jim F
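For reference, the wiki's YaST steps amount to roughly the following mdadm commands. The device names and partition numbers are assumptions for illustration, and on a fresh install YaST runs the equivalent for you:

```shell
# Pair up partitions of type "Linux raid autodetect" (fd) on both
# disks, then mirror each pair. /boot stays a plain primary partition.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5   # swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6   # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda7 /dev/sdb7   # /home
mkswap /dev/md0
mkfs.ext3 /dev/md1
mkfs.ext3 /dev/md2
cat /proc/mdstat   # each md device should show [UU] once the sync finishes
```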
The Sunday 2007-12-02 at 10:21 -0600, Jim Flanagan wrote: ...
One note here, it definitely did not like /boot being on an extended partition, and would not boot from that. I probably could have gotten that working too, but by making /boot a primary partition, grub boots fine the first time, with no tweaking.
To boot from an extended partition (isn't the correct name "logical partition"? :-? ), grub must reside in the MBR, or perhaps on another primary partition. The scheme preferred on 10.3 is to install grub in the /boot partition and mark it bootable. The MBR will have generic code, not grub. But logical partitions cannot be marked bootable, I believe.
--
Cheers, Carlos E. R.
Jim Flanagan wrote:
I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions.

Even many cheap motherboards have RAID chips integrated. Most cheap RAID chips are not fast, so RAID5 is not worth it with them. But for RAID1, wouldn't you be better served with a different mainboard or a hardware RAID controller?
Kind regards
Philippe
The Sunday 2007-12-02 at 13:47 +0100, Philippe Landau wrote:
Many even cheap Motherboards have RAID chips integrated. Most cheap RAID chips are not fast so RAID5 is not worth it with them.
I was told that most of those are "fake" raid: you get no real gain compared with plain software raid, which has the advantage of not needing a particular set of hardware; i.e., it is portable.
--
Cheers, Carlos E. R.
Philippe Landau wrote:
Jim Flanagan wrote:
I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions.
Many even cheap Motherboards have RAID chips integrated. Most cheap RAID chips are not fast so RAID5 is not worth it with them. But for RAID1 wouldn't you be better served with a different Main board or a hardware RAID controller ?
Kind regards Philippe
Well, this is a home server that has not much load. I'm replacing my 2-year-old home server, a P-III running SUSE 10.0 that has run very well for almost 2 years. Since 10.0 is becoming extinct I need to do something. I could probably use this same box for another 2 years, but it has only 1 hard drive, and I decided to build a new one with opensuse 10.3, this time with raid. But true hardware raid is too expensive and not needed in this environment.

I've been reading a lot about the differences between software raid in linux and the motherboard raid options. Most linux distros recommend linux software raid over the mobo raid, claiming more flexibility to use different types of drives, no lock-in to mobo drivers, etc. I suspect this is due to problems with mobo raid in the past, and a general tendency to favor open source. They do claim software raid uses more system resources.

I'm not sure which way to go, but am leaning toward the software raid option (md raid).

This is probably a loaded question, but any thoughts on the matter?

Jim F
On Sunday 02 December 2007, Jim Flanagan wrote:
Philippe Landau wrote:
Jim Flanagan wrote:
I'm trying to do a new install of opensuse 10.3 on a new server box. I'd like to have raid1 (mirror) set up with 2 drives. I don't know the full differences between md raid and dm raid, but I followed the instructions on installing md raid on the opensuse wiki, using yast to set up the raid partitions.
Many even cheap Motherboards have RAID chips integrated. Most cheap RAID chips are not fast so RAID5 is not worth it with them. But for RAID1 wouldn't you be better served with a different Main board or a hardware RAID controller ?
Kind regards Philippe
Well, this is a home server that has not much load. I'm replacing my 2 year old home server, P-III and Suse 10.0 that has run very well for almost 2 years. Since 10.0 is becoming extinct I need to do something. I could probably use this same box for another 2 years, but it has only 1 hard drive and I decided to build a new one with opensuse 10.3, this time with raid. But a true hardware raid is too expensive and not needed in this environment.
I've been reading a lot about the differences between software raid in linux and the motherboard raid options. Most linux distros recommend linux software raid over the mobo raid, claiming more flexibility to use different types of drives, no lock in to mobo drivers, etc. I suspect this is due to problems with mobo raid in the past, and a general tendency to favor open source. They do claim software raid uses more system resources.
I'm not sure which way to go, but am leaning toward the software raid option (md raid).
This is probably a loaded question, but any thoughts on the matter?
Jim F
Hi Jim,

Hardware RAID need not be overly expensive. You can purchase an Adaptec SATA RAID 1210SA for about $59.00 USD from Newegg.com:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816103020

This card would be quite suitable for what you are doing and is fairly easy to set up. I have used it on a couple of low-end servers and have never had a problem.

Jesse
Jesse L. Purdom wrote:
On Sunday 02 December 2007, Jim Flanagan wrote:
Philippe Landau wrote:
Jim Flanagan wrote:
I'm trying to do a new install of opensuse 10.3 on a new server box.
Many even cheap Motherboards have RAID chips integrated. Most cheap RAID chips are not fast so RAID5 is not worth it with them. But for RAID1 wouldn't you be better served with a different Main board or a hardware RAID controller ?
Kind regards Philippe
Well, this is a home server that has not much load.
I've been reading a lot about the differences between software raid in linux and the motherboard raid options. Most linux distros recommend linux software raid over the mobo raid, claiming more flexibility to use different types of drives, no lock in to mobo drivers, etc. I suspect this is due to problems with mobo raid in the past, and a general tendency to favor open source. They do claim software raid uses more system resources.
I'm not sure which way to go, but am leaning toward the software raid option (md raid).
This is probably a loaded question, but any thoughts on the matter?
Jim F
Hi Jim,
Hardware RAID need not be overly expensive. You can purchase an Adaptec SATA RAID 1210SA for about $59.00USD from Newegg.com:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103020
This card would be quite suitable for what you are doing and is fairly easy to setup. I have used it on a couple low-end servers and have never had a problem.
Jesse
Interesting, I didn't know they were that inexpensive. I'm still a bit unclear as to what is true raid and what is "fake" raid. Is this a true raid card, or is it software raid on a card? As I said, I really don't understand the difference.

Tks,
Jim F
On Sunday 02 December 2007, Jim Flanagan wrote: <snip>
I'm not sure which way to go, but am leaning toward the software raid option (md raid).
This is probably a loaded question, but any thoughts on the matter?
Jim F
Hi Jim,
Hardware RAID need not be overly expensive. You can purchase an Adaptec SATA RAID 1210SA for about $59.00USD from Newegg.com:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103020
This card would be quite suitable for what you are doing and is fairly easy to setup. I have used it on a couple low-end servers and have never had a problem.
Jesse
Interesting, I didn't know they were that inexpensive. I'm still a bit unclear as to what is true raid and what is "fake" raid. Is this a true raid card, or is it a software raid on a card? As I said I really don't understand the difference.
Tks,
Jim F
The primary difference between "software" and "hardware" RAID is that software RAID was developed to run under the booted operating system. Hardware RAID is self-contained and provides greater performance and reliability. The RAID chips on most motherboards are basically disk controllers that provide a software implementation of a RAID controller at boot.

Wikipedia has a pretty good entry on RAID: http://en.wikipedia.org/wiki/RAID#Implementations

Have a good one!
Jesse
Jesse L. Purdom writes:
Hardware RAID need not be overly expensive. You can purchase an Adaptec SATA RAID 1210SA for about $59.00USD from Newegg.com:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103020
The Adaptec 12xx series cards are not "real" hardware RAID. They are "fake raid" and need a special driver which does all the work in software. Real hardware RAID cards *are* expensive -- Adaptec has a few, and 3Ware also has some good ones. http://thebs413.blogspot.com/2005/09/fake-raid-fraid-sucks-even-more-at.html

-Ti
The Sunday 2007-12-02 at 09:06 -0600, Jim Flanagan wrote:
I've been reading a lot about the differences between software raid in linux and the motherboard raid options. Most linux distros recommend linux software raid over the mobo raid, claiming more flexibility to use different types of drives, no lock in to mobo drivers, etc. I suspect this is due to problems with mobo raid in the past, and a general tendency to favor open source.
No, it's not because of problems in the past. Each mobo can use its own method for mirroring, and that method doesn't have to be compatible with other mobos, not even from the same manufacturer. If your mobo breaks down, it's very possible you will not be able to reuse your hard disks without a full reformat, losing all data. That is the past, present, and future .-(
They do claim software raid uses more system resources.
Compared with a real hardware raid, yes. Compared with a fake raid (one that requires drivers), not really.
This is probably a loaded question, but any thoughts on the matter?
There is another issue: the mobo raid should work with windows, if you dual-boot. The software one will not.
--
Cheers, Carlos E. R.
Carlos E. R. wrote:
The Sunday 2007-12-02 at 09:06 -0600, Jim Flanagan wrote:
I've been reading a lot about the differences between software raid in linux and the motherboard raid options. Most linux distros recommend linux software raid over the mobo raid, claiming more flexibility to use different types of drives, no lock in to mobo drivers, etc. I suspect this is due to problems with mobo raid in the past, and a general tendency to favor open source.
No, it's not because of problems in the past.
Each mobo can use its own method for mirroring, and that method doesn't have to be compatible with other mobos, not even from the same manufacturer. If your mobo breaks down, it's very possible you will not be able to reuse your hard disks without a full reformat, losing all data.
That is the past, present, and future .-(
They do claim software raid uses more system resources.
Compared with a real hardware raid, yes. Compared with a fake raid (one that requires drivers), not really.
This is probably a loaded question, but any thoughts on the matter?
There is another issue: the mobo raid should work with windows, if you double boot. The software one will not.
This is really the point on this type of matter. There are benefits and trade-offs to both options. In my case, as I plan to use this box as my server, I really don't expect to put windows or any other OS on it, except for future versions of opensuse! However, you never know. I guess the whole point of raid is to head off certain disk failures, ones that you really can't predict.

Your point about mobo failure is a good one. I had not considered that. I did consider (assuming mobo raid) what would happen if one HD failed and I could not get an exact replacement. Even more so for a mobo. Replacing each part with an exact duplicate becomes harder as time moves on. My HDs are the newest Seagates, 7200.11's, so those will probably be available for some time to come. It's a different case for my mobo: it is an inexpensive socket 754 board, so I'm not sure how long they will be on the market. (They are so inexpensive now that I could conceivably buy an extra one to have on hand, but there is no assurance it actually works sitting in the box. And with the pace of new CPUs I probably won't want the same board in 2 or more years anyway.)

I did go with the software raid, and all seems to be working well for now.

Many thanks to all for the great advice.

Jim F
Jim Flanagan wrote:
Carlos E. R. wrote:
The Sunday 2007-12-02 at 09:06 -0600, Jim Flanagan wrote:
I've been reading a lot about the differences between software raid in linux and the motherboard raid options. Most linux distros recommend linux software raid over the mobo raid, claiming more flexibility to use different types of drives, no lock in to mobo drivers, etc. I suspect this is due to problems with mobo raid in the past, and a general tendency to favor open source.
No, it's not because of problems in the past.
Each mobo can use its own method for mirroring, and that method doesn't have to be compatible with other mobos, not even from the same manufacturer. If your mobo breaks down, it's very possible you will not be able to reuse your hard disks without a full reformat, losing all data.
That is the past, present, and future .-(
They do claim software raid uses more system resources.
Compared with a real hardware raid, yes. Compared with a fake raid (one that requires drivers), not really.
This is probably a loaded question, but any thoughts on the matter?
There is another issue: the mobo raid should work with windows, if you double boot. The software one will not.
This is really the point on this type of matter. There are benefits and trade-offs to both options. In my case, as I plan to use this box as my server, I really don't expect to put windows or any other OS on it, except for future versions of opensuse! However, you never know. I guess the whole point of raid is to head off certain disk failures, ones that you really can't predict.
Your point about mobo failure is a good one. I had not considered that. I did consider (assuming mobo raid) what would happen if one HD failed and I could not get an exact replacement. Even more so for a mobo. Replacing each part with an exact duplicate becomes harder as time moves on. My HDs are the newest Seagates, 7200.11's, so those will probably be available for some time to come. It's a different case for my mobo: it is an inexpensive socket 754 board, so I'm not sure how long they will be on the market. (They are so inexpensive now that I could conceivably buy an extra one to have on hand, but there is no assurance it actually works sitting in the box. And with the pace of new CPUs I probably won't want the same board in 2 or more years anyway.)
I did go with the software raid and all seems to be working well for now.
I believe the software RAID in SUSE allows you to mix drive sizes, so if the replacement drive is bigger, it's not a problem.
--
Use OpenOffice.org http://www.openoffice.org
The Sunday 2007-12-02 at 11:03 -0600, Jim Flanagan wrote:
Your point about mobo failure is a good one. I had not considered that. I did consider (assuming mobo raid) what would happen if one HD failed and I could not get an exact replacement. Even more so for a mobo. Replacing each part with an exact duplicate becomes harder as time moves on. My HDs are the newest Seagates, 7200.11's, so those will probably be available for some time to come.
With software raid you can use different disk models, even of different size and speed. Obviously, the added partition should be of the same size or a bit larger; the rest would have to go on another independent partition.
--
Cheers, Carlos E. R.
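A sketch of what that replacement looks like with mdadm, assuming /dev/md0 is the mirror and the new (possibly larger) disk has been given a raid partition at least as big as the old one; the device names are examples only:

```shell
# Drop the dead member (skip --fail if the kernel already failed it).
mdadm /dev/md0 --fail /dev/sda5
mdadm /dev/md0 --remove /dev/sda5
# Partition the replacement disk (type fd, >= old size), then add it;
# the mirror rebuilds onto the new member automatically.
mdadm /dev/md0 --add /dev/sdc5
cat /proc/mdstat   # shows the rebuild ("recovery") progress
```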
participants (7)
- Carlos E. R.
- James Knott
- Jesse L. Purdom
- Jim Flanagan
- Joe Morris (NTM)
- Philippe Landau
- ti@amb.org