Tero Pesonen wrote:
Hi all!
< really big snip >
Thanks for all comments!
Regards, Tero Pesonen
Tero, Sorry for the late post.

Software RAID is great. I have 6 openSuSE boxes spinning RAID1 right now and all are 'software' raid (2 pure software RAID -- 'md raid'; 4 fake RAID [BIOS RAID] -- 'dm raid') (5 using SATA, 1 using ATA). It is definitely the way to go. With 500G SATA II (300 MB/s) drives going for $50 nowadays, there is no reason not to set up a RAID for the added level of redundancy it provides. Just remember RAID does *not* replace backups.

There is no trick to setting up raid. It sounds like you are going to do a fresh install, so just put your drives in the computer, put the install DVD in the drive and start the install as normal. When YaST proposes a partitioning scheme, do the following:

(1) Choose expert settings.
(2) Delete all the partitions that YaST proposed.
(3) On each of the discs you want to mirror, create the partitions, pick the option "[ ] Do Not Format", and set the filesystem type to "Linux RAID". Do this on all mirrored partitions.
(4) Next choose the RAID button and Create. YaST will then show a list of all the partitions that you have created.
(5) Next choose Add, and pick a partition from each drive that you will mirror, one at a time. When you choose Add after selecting a partition, you then assign the filesystem type ('Ext3', etc.) and the mount point. You will also notice that the first pair of partitions selected is designated /md0. Go through these steps twice before moving on -- for example, once for /boot on sdc5 and once for /boot on sdd5. Now when you look at the screen full of partitions you will have /md0 up top and, continuing with the example, /boot to the right of sdc5 and to the right of sdd5.
(6) Click Finish and go back to step (4) for each additional raid set you want to create. You will see the subsequent sets designated /md1, /md2, etc.
(7) When you're done, just say OK or confirm like you normally would in the partitioner and move on to software selection.
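For what it's worth, if you ever want to build the same mirrors by hand on a running system instead of through YaST, the equivalent mdadm commands look roughly like this. This is only a sketch -- the device names sdc5/sdd5 and the ext3 filesystem just follow the /boot example above, it has to run as root, and mdadm --create is destructive, so triple-check the device names before running anything like it:

```shell
# Create a RAID1 mirror from the two "Linux RAID" partitions.
# DESTRUCTIVE: wipes any data on /dev/sdc5 and /dev/sdd5.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc5 /dev/sdd5

# Put a filesystem on the new array and mount it (ext3 as in the example):
mkfs.ext3 /dev/md0
mount /dev/md0 /boot

# Watch the initial sync and check the array details:
cat /proc/mdstat
mdadm --detail /dev/md0
```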
The same process applies to adding new drives and raid sets to an existing install.

When it is time for the first boot, everything should work fine. However, if it fails to boot and you get a grub error like GRUB ERROR 17, just remember *DO NOT PANIC*. It is usually something simple like a grub menu.lst entry, or, for some reason, you may need to do a grub-install /dev/(proper device). On the 6 installs I currently have, I have probably installed the raid setups 10 times. Out of those ten, I have had boot failures probably 3-4 times that took adjustments. Also, if you are using the BIOS raid, search through the BIOS settings and make sure the /boot or / (if you have no /boot) arrays are *bootable*. The setting can be hard to find sometimes, but if you have problems, double-check this.

Do not worry about the 24/7 running of drives. Drives commonly have about 700,000 hours MTBF. That's 79.9 years. My experience has been that drives either fail in the first week, or they last a long time. I had one old IBM Deskstar 40G drive that ran 24/7 for 7 years (it still runs, but I don't use it). During those 7 years I know I didn't boot the machine any more than 15 times. (Setup, kernel updates and physically moving the box from one office to the next were the only times it ever got rebooted.)

Good luck, if you get stuck -- write back.

--
David C. Rankin, J.D.,P.E. | openSoftware und SystemEntwicklung
Rankin Law Firm, PLLC      | Countdown for openSuSE 11.1
www.rankinlawfirm.com      | http://counter.opensuse.org/11.1/small
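One more habit worth picking up after the first boot (and after any drive swap): check the array health in /proc/mdstat. A healthy mirror shows [UU]; a degraded one shows [U_] or [_U]. A small sketch of how to spot degraded mirrors -- the sample mdstat content and the md0/md1 names below are made up for illustration so this runs anywhere; on a real box just point it at /proc/mdstat instead:

```shell
#!/bin/sh
# Spot degraded RAID1 arrays in mdstat output.
# Sample data stands in for /proc/mdstat so this can run anywhere;
# on a real system use MDSTAT=/proc/mdstat.
MDSTAT=/tmp/mdstat.sample
cat > "$MDSTAT" <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdd5[1] sdc5[0]
      530048 blocks [2/2] [UU]
md1 : active raid1 sdd6[1] sdc6[0]
      10482304 blocks [2/1] [U_]
EOF

# Print the md line above each degraded "[U_]" / "[_U]" status line.
# Here md1 is the degraded array, so only its line is printed.
grep -B1 -E '\[(U_|_U)\]' "$MDSTAT" | grep '^md'
```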