[SLE] Software RAID with SuSE 6.4
On Wed, 17 May 2000, John Seifarth wrote:

Hmm. This probably won't be much help, but maybe it will make sense to you.

I had done 8 or 10 software RAID setups with SuSE 6.2, and they are still working perfectly. Right after SuSE 6.3 came out, I had to do another one and ran into very similar, unexplainable problems. If I remember correctly, SuSE 6.3 migrated from mdutils to raidtools. I ended up de-installing both raidtools and mdutils, then re-installing mdutils, and everything worked perfectly. I have not set up software RAID since. Supposedly raidtools is the new tool of choice, but I have no experience with it.

Looking at the steps you've taken, it looks to be the same way I had set it up with mdutils. It would be nice if the HOWTOs were updated for the new tools, to provide a definitive way to set up software RAID.

Anyway, hopefully you'll be able to make something click with raidtools. If you end up needing help badly, I could probably slap some hardware together and try to make it work. Let me know if you're unable to resolve this.

-- Rocky McGaugh rmcgaugh@atipa.com
Dear List members,
I'm an experienced Linux user, though not so experienced with SuSE. I've tried to keep the system I'm working on very close to the SuSE standard, and I am using SuSE tools whenever possible.
SuSE punted when I sent them this question, because, according to SuSE support, it "really does not fall into installation support as defined in appendix H". They then directed me to the lists.
The goal
--------
I want to set up a network file server. I want to use software RAID for mirroring user data areas. I am currently not attempting to mirror root and system areas.
Hardware
--------
On my primary IDE bus, I have two hard disks, which will not be used for mirroring, and an IDE CDROM drive on the secondary IDE bus. For my mirroring attempt, I have two IBM 13.1 GB drives, each on an individual IDE bus, connected to a Promise Ultra66 PCI IDE card. The card and hard disk drives are recognized by the system, and I was able to partition the disks, set up swap partitions, and format other (non-RAID) partitions.
The Software
------------
I installed a clean setup from the SuSE 6.4 CDROMs. It appears to function correctly (although the Gimp dies regularly). The MD drivers load without error on startup. /proc/mdstat exists and appears normal.
RAID setup
----------
Here's a copy of my /etc/raidtab file:
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hde7
    raid-disk               0
    device                  /dev/hdg7
    raid-disk               1
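As a quick sanity check on a raidtab like the one above, the number of `raid-disk` entries should match the declared `nr-raid-disks`. A minimal sketch (not from the original mail; it works on a here-document copy of the file so it can run anywhere):

```shell
# Sanity-check a raidtab: the count of raid-disk entries must equal the
# declared nr-raid-disks. The here-document reproduces the raidtab above.
cat > /tmp/raidtab.test <<'EOF'
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hde7
    raid-disk               0
    device                  /dev/hdg7
    raid-disk               1
EOF

declared=$(awk '/nr-raid-disks/ {print $2}' /tmp/raidtab.test)
actual=$(grep -c '^[[:space:]]*raid-disk' /tmp/raidtab.test)

if [ "$declared" = "$actual" ]; then
    echo "OK: $actual raid-disk entries match nr-raid-disks"
else
    echo "MISMATCH: nr-raid-disks=$declared but found $actual raid-disk entries"
fi
```

On a real system you would point the script at /etc/raidtab instead of the test copy.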
The Problem
-----------
When I do the next step, which is to execute the command

    mkraid /dev/md0

I get the following error:

newarchive:~ # mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hde7, 12983512kB, raid superblock at 12983424kB
disk 1: /dev/hdg7, 12983512kB, raid superblock at 12983424kB
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.
The Clues
---------
Here's what /proc/mdstat shows:
newarchive:~ # cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive
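For scripting, the inactive arrays can be picked out of /proc/mdstat mechanically. A sketch that parses a saved copy of the output above (reading a shell variable rather than /proc/mdstat itself, so it runs anywhere):

```shell
# List inactive md devices from /proc/mdstat-style output. The sample is
# the output quoted in the mail; on a live system you would read
# /proc/mdstat directly.
mdstat='Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive'

inactive=$(printf '%s\n' "$mdstat" | awk '/^md[0-9]+ : inactive/ {print $1}')
count=$(printf '%s\n' "$inactive" | wc -l)
echo "inactive arrays: $count"
printf '%s\n' "$inactive"
```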
Careful examination of various files in /var/log/ shows ABSOLUTELY NOTHING!
I uncommented this line in /etc/syslogd.conf:

# enable this, if you want to keep all messages
# in one file
*.* -/var/log/allmessages
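One thing worth noting here: syslogd only rereads its configuration on startup or when sent SIGHUP, so an uncommented line takes effect only after the daemon is reloaded. A sketch of the check and reload (the grep runs against a here-document copy of the edited stanza; the kill line needs a live syslogd and root, so it is shown commented out):

```shell
# The stanza as it should look after uncommenting the selector line.
# syslogd only rereads its config on startup or SIGHUP, so the change
# is not picked up until the daemon is reloaded.
cat > /tmp/syslog.conf.test <<'EOF'
# enable this, if you want to keep all messages
# in one file
*.* -/var/log/allmessages
EOF

# Verify the selector line is really uncommented (no leading '#'):
if grep -q '^\*\.\*' /tmp/syslog.conf.test; then
    echo "selector active"
else
    echo "selector still commented out"
fi

# On the live system, reload syslogd so it rereads the config:
# kill -HUP "$(pidof syslogd)"
```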
However, /var/log/allmessages showed absolutely nothing related to the RAID drivers.
Now what?
---------
Any suggestions about how to proceed? Thanks in advance for any help you may be able to give me.
John Seifarth
__________________________________________________________________
John Seifarth                           http://www.waw.be/waw/
Words & Wires SPRL                      john@waw.be
Computer Consulting & Language Services Voice: (+) 32-2-660-3943
1160 Brussels, Belgium                  Fax: (+) 32-2-675-3922
--
To unsubscribe send e-mail to suse-linux-e-unsubscribe@suse.com
For additional commands send e-mail to suse-linux-e-help@suse.com
Also check the FAQ at http://www.suse.com/Support/Doku/FAQ/
On 17 May 2000, at 16:33, Rocky McGaugh wrote:
hmm. this prolly wont be much help, but maybe it will make sense to you.
i had done 8 or 10 software raid setups with SuSE 6.2 and they are still working perfectly.
So, I'm looking at doing software RAID 1 on a production file server with SuSE. What's the performance like?

The box in question is a dual-processor Pentium II-300, currently with 128 MB RAM, but that could be upped a lot. I would be using a couple of 18 GB Ultrastar LVD drives and a fast SCSI controller. The box would have ~70 users attached during the day and would be used strictly for validation and for users' home directories. Is this enough power for doing this, and how fast would it be, compared to Netware or (heaven forbid) NT?

Cheers,
Dennis

"Custard pies are a sort of esperanto: a universal language." --Noel Godin
On Wed, 17 May 2000, Dennis Soper wrote:
So, I'm looking at doing software RAID 1 on a production file server with SuSE. What's the performance like?
The box in question is a dual-processor Pentium II-300, currently with 128 MB RAM, but that could be upped a lot. I would be using a couple of 18 GB Ultrastar LVD drives and a fast SCSI controller. The box would have ~70 users attached during the day and would be used strictly for validation and for users' home directories. Is this enough power for doing this, and how fast would it be, compared to Netware or (heaven forbid) NT?
Cheers, Dennis
I actually did pretty extensive benchmarks on the software RAID systems and was pleasantly surprised. With RAID 1, both drives on separate channels of the same controller, there was just a negligible reduction in write speed, and read speed was the same. I was actually quite interested in the fact that the software RAID didn't seem to utilise the parallel-read capability that RAID 1 should allow. I have not had time to do any research on this, so I don't know if it was some specific problem that kept me from taking advantage of the parallel reads, or if it is just not implemented yet.

The implementation you talk about is common. We have a similar box with fewer users (40-50), but many more services. Unfortunately, I think NFS has too much overhead. While the box you described can certainly handle a lot of load, it might not give very snappy response times as outfitted. I would definitely look at more RAM, and maybe a hardware RAID card. **cough**cough**icpvortex**cough**cough**.

Anyway, hope this helps. Good luck.

-- Rocky McGaugh rmcgaugh@atipa.com
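Rough sequential throughput of the kind described above can be approximated with dd. A sketch (the path, file size, and block size are arbitrary choices; dd through the page cache only gives a ballpark figure, and on a mirror you would point the test file at a filesystem mounted from /dev/md0):

```shell
# Rough sequential write/read exercise with dd. Wrap the dd commands in
# `time` to get actual figures; this version just verifies the mechanics.
TESTFILE=/tmp/ddtest.bin
SIZE_MB=16

# Sequential write: SIZE_MB blocks of 1 MiB from /dev/zero
dd if=/dev/zero of="$TESTFILE" bs=1048576 count="$SIZE_MB" 2>/dev/null
sync

# Sequential read of the same file back
dd if="$TESTFILE" of=/dev/null bs=1048576 2>/dev/null

# Confirm the expected amount of data was written, then clean up
bytes=$(wc -c < "$TESTFILE")
echo "wrote $bytes bytes"
rm -f "$TESTFILE"
```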
participants (3)
- dsoper@clipper.net
- john@waw.be
- rmcgaugh@atipa.com