Listmates,

I can't get GRUB to boot a fresh 10.3 install from my Promise SATA RAID controller with two new 320G drives in RAID 1. I thought I knew GRUB, but RAID is completely new to me. I really, really need HELP! I have included as much information as I can below, including the screen messages displayed during boot. If I can send anything else, please just let me know.

Hardware:

  MSI K7N2 Delta-ILSR motherboard (RAID 0 or 1 is supported on ATA133 + SATA H/D or 2 SATA H/D)
  AthlonXP 2800+
  1G RAM
  1 Maxtor 80G ATA drive (sda)
  2 Seagate 320G drives in RAID 1 (sdb, sdc)

Partitions:

  sda: sda1 56G NTFS, sda5 1G linux swap, sda6 NTFS
  sdb: sdb1 79M ext3 /boot, sdb2 20G ext3 /, sdb3 270G /home
  sdc: sdc1 79M ext3 /boot, sdc2 20G ext3 /, sdc3 270G /home

sdb and sdc were blank, unformatted disks prior to the install. The YaST install partitioner saw sdb and sdc and created sdb1-3 and sdc1-3 when YaST selected the following partition scheme during install:

  79M   ext3  /dev/mapper/bfefdifda_part1
  20G   ext3  /dev/mapper/bfefdifda_part2
  270G  ext3  /dev/mapper/bfefdifda_part3

YaST correctly saw the RAID during install, and the install worked perfectly through partitioning and software installation. The first required reboot is where everything fell apart. Upon reboot the following error is received:

  Booting from local disk...
  GRUB Loading stage 1.5.
  GRUB loading, please wait...
  Error 17

The system then freezes. Booting from the install DVD crashes out of the graphical install (due to the hosed repair install on the DVD), but at least gets back to the basic screen that lets you boot an existing installation. Here you can NOT select /dev/mapper/<part1, 2 or 3>. YaST sees sdb1, sdb2, sdb3, sdc1, sdc2 and sdc3. The system will boot if you select either sdb2 or sdc2, but the boot fails after it tries to mount /boot and /home. You are then left with the "press enter to login" that gets you to the repair mode prompt. The screen messages shown are:

  mount mtab /dev/mapper/bfefdifda_part2 already mounted on /
  activating device mapper
  waiting for /dev/mapper/bfefdifda_part1: No such file or directory
  fsck.ext3: No such file or directory while trying to open /dev/mapper/bfefdifda_part1
  /dev/mapper/bfefdifda_part1:
  The superblock could not be read or does not describe a correct ext2
  filesystem. If the device is valid and it really contains an ext2
  filesystem (and not swap or ufs or something else), then the superblock
  is corrupt, and you might try running e2fsck with an alternate superblock:
      e2fsck -b 8193 <device>
  error on stat() /dev/mapper/bfefdifda_part1: no such file or directory
  error on stat() /dev/mapper/bfefdifda_part3: no such file or directory
  fsck.ext3: No such file or directory while trying to open /dev/mapper/bfefdifda_part3
  /dev/mapper/bfefdifda_part3:
  The superblock could not be read or does not describe a correct ext2
  filesystem. If the device is valid and it really contains an ext2
  filesystem (and not swap or ufs or something else), then the superblock
  is corrupt, and you might try running e2fsck with an alternate superblock:
      e2fsck -b 8193 <device>
  error on stat() /dev/mapper/bfefdifda_part3: no such file or directory
  fsck.ext3 /dev/mapper/bfefdifda_part3 failed (status 0x8). Run manually!
  /dev/disk/by-id/scsi-SATA_ST3320620AS_6QIS96D-part1 has gone 49710 days without being checked, check forced.
  (..the check stuff here)
  /dev/disk/by-id/scsi-SATA_ST3320620AS_6QIS96D-part3 has gone 49710 days without being checked, check forced.
  (..the check stuff here)
  blogd: No message logging because /var is not accessible
  fsck failed for at least 1 filesystem (not /)
  Press enter for login:

I then go into repair mode and run yast2. The partitioner lists the following:

  Partition    Type              Mount
  /dev/sda     IC35L080AVVA07-0
  /dev/sda1    HPFS/NTFS         /windows/C
  /dev/sda2    Extended
  /dev/sda5    Linux Swap        swap
  /dev/sda6    HPFS/NTFS         /windows/D
  /dev/sdb     ST3320620AS
  /dev/sdb1    Linux Native
  /dev/sdb2    Linux Native
  /dev/sdb3    Linux Native
  /dev/sdc     ST3320620AS
  /dev/sdc1    Linux Native
  /dev/sdc2    Linux Native      /
  /dev/sdc3    Linux Native

I can set the mount point for /boot (sdb1 or sdc1) and /home (sdb3 or sdc3), but there is no way here to tell the system that it is in RAID. You obviously can't set both sdb1 and sdc1 as /boot. You can select "Linux RAID" as the filesystem type, but I have no clue what that would do, and it gives a very stern warning about setting it here.

Running the YaST bootloader module yields a "blank" bootloader scheme. I can go to "Other" and select "Propose a new scheme", but that results in it trying to boot from an image on sdc2; there is no way to get it to boot from a /boot partition because none are mounted at the time.

I am in a holy mess here and I need help! I'll continue to google for a solution (my best guesses so far are in the P.S. below), but if anyone knows how to fix this mess, please post any help you can. Thanks!

-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
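
P.S. From the googling I have done so far, my best guess is that the install put /boot, / and /home on the Promise fake-RAID (dmraid) set, but the boot/rescue environment never activates that set, so the /dev/mapper/bfefdifda_part* nodes simply don't exist when mount and fsck go looking for them. If that guess is right, then from a rescue shell something like the following should at least show whether the set can be brought up by hand. This is only a sketch of what I plan to try -- the set name "bfefdifda" is copied from the installer's partition proposal above, and I have not been able to test any of it yet:

  # list the RAID sets the Promise BIOS has defined
  dmraid -s

  # activate all sets; this should create /dev/mapper/bfefdifda and,
  # with the SUSE udev rules, hopefully the _part1/_part2/_part3 nodes too
  dmraid -ay

  # see what actually showed up
  ls -l /dev/mapper/

  # if the whole-set node appears but the partition nodes do not,
  # kpartx should be able to map them by hand
  kpartx -a /dev/mapper/bfefdifda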
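
P.P.S. The GRUB manual says Error 17 is "Cannot mount selected partition", so my (possibly wrong) reading is that GRUB was set up against the raw disks rather than through the RAID mapping. If the set does activate, my plan -- pieced together from the dmraid/GRUB howtos I've found, so please correct me if this is off base -- is to mount the mapper partitions, chroot into the installed system, and reinstall GRUB against the RAID set instead of against sdb or sdc directly, assuming the BIOS boots from the array so that the array is (hd0) as far as GRUB is concerned. Roughly:

  # from the rescue shell, mount the installed system
  mount /dev/mapper/bfefdifda_part2 /mnt
  mount /dev/mapper/bfefdifda_part1 /mnt/boot
  mount --bind /dev  /mnt/dev
  mount --bind /proc /mnt/proc
  mount --bind /sys  /mnt/sys
  chroot /mnt

  # inside the chroot, check that device.map points (hd0) at the RAID set
  # (e.g. "(hd0) /dev/mapper/bfefdifda", not /dev/sdb or /dev/sdc), and
  # that menu.lst and fstab use the /dev/mapper names
  cat /boot/grub/device.map
  cat /boot/grub/menu.lst
  cat /etc/fstab

  # then reinstall GRUB from the grub shell
  grub --device-map=/boot/grub/device.map

and at the grub> prompt (since /boot is the first partition of the set):

  root (hd0,0)
  setup (hd0)
  quit

Does that sound like the right direction, or am I about to make things worse?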