Maurice Volaski wrote:
Got an IBM Netfinity 4500R box which started out with just a ServeRAID 3LS adapter. I configured it for RAID 1 on two of the three internal hard disks and left the third disk untouched, then installed SuSE 7.0 Linux. At that point it was a working system: the RAID 1 pair is /dev/sda, and Linux lives on both /boot and /.
Now I have installed a PCI Ultra 160 SCSI card to connect an external hardware RAID. I ran into lots of issues with IRQ conflicts, but eventually solved them by putting the ServeRAID card into PCI slot 1 (the 32-bit bus) and the PCI SCSI card into PCI slot 5 (part of the 64-bit bus). (Originally the ServeRAID was in slot 5, but the BIOS has been told the cards were moved, and it correctly finds the /boot partition and executes LILO.)
With the external hardware RAID unattached, Linux sees the internal RAID 1 disk as /dev/sda and everything works fine.
But with the external hardware RAID attached to the PCI SCSI card, Linux insists on calling the external hardware RAID /dev/sda and the internal RAID 1 disk /dev/sdb.
The computer's internal SCSI was disabled (it isn't attached to any drive anyway), and so was the BIOS scan on the PCI SCSI card to which the external hardware RAID is attached.
Bootup now fails with the error:

    VFS: Cannot open root device 08:03
    Kernel panic: VFS: unable to mount root fs on 08:03
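For reference, the kernel reports the root device as hex major:minor. Decoded under the standard Linux SCSI disk numbering (16 minors per disk), 08:03 works out as follows:

    08 hex = major 8   -> SCSI disk driver (sd)
    03 hex = minor 3   -> 3 = 0*16 + 3 -> disk 0, partition 3 -> /dev/sda3

In other words, the kernel is still hard-wired to mount root from /dev/sda3, but with the external array attached, /dev/sda is now the external RAID and that partition no longer holds the SuSE root filesystem.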
Why does SuSE insist that the external RAID disk is /dev/sda? (How does it even manage to find it before it finds the internal disks?)
Interestingly, booting from the SuSE CD-ROM correctly sees the internal RAID 1 disk as /dev/sda and the external RAID as /dev/sdc (the untouched internal disk is /dev/sdb).
I tried editing /etc/lilo.conf to make the boot disk /dev/sdb, which is what the internal RAID becomes when the external RAID is attached, but it doesn't make any difference.
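One likely reason the edit has no visible effect: LILO resolves the boot= and root= settings to fixed device numbers at the moment /sbin/lilo is run, so changes to lilo.conf do nothing until the map is rebuilt. A minimal sketch of the relevant stanza, assuming root lives on the third partition of the internal array (the partition numbers and kernel path here are assumptions, not taken from the post):

    # /etc/lilo.conf (sketch -- device and partition numbers are assumptions)
    boot = /dev/sdb          # disk holding the boot sector: the internal
                             # RAID 1 pair when the external array is attached
    image = /boot/vmlinuz    # kernel path is an assumption
        label = linux
        root = /dev/sdb3     # must match where / actually lands at boot
        read-only

Remember to rerun /sbin/lilo after editing so the new root device number is written into the map. As a quick test, you can also override the stored value at the LILO prompt with something like "linux root=/dev/sdb3".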
Any ideas?
You need to change the fstab entries also.

--
Mark Hounschell
dmarkh@cfl.rr.com
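To illustrate the point: the device paths in /etc/fstab are literal strings, so they still say /dev/sdaN after the kernel renumbers the disks. A minimal sketch of the corrected entries, assuming /boot on partition 1 and / on partition 3 of the internal array (partition numbers and filesystem type are assumptions):

    # /etc/fstab (sketch -- partition numbers and fs type are assumptions)
    /dev/sdb3    /        ext2    defaults    1 1
    /dev/sdb1    /boot    ext2    defaults    1 2

Any swap partition on the internal disks would need the same /dev/sda -> /dev/sdb adjustment.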