SuSE insists /dev/sda is the boot device and that my external data disk is /dev/sda
Got an IBM Netfinity 4500R box which started out with just a ServeRAID 3LS adapter. I configured it for RAID 1 on two of three internal hard disks and left the third disk untouched, then installed SuSE 7.0 Linux. At that point it was a working system where the RAID 1 pair is /dev/sda and where Linux lives on both /boot and /.

Now I have installed a PCI Ultra 160 SCSI card to connect an external hardware RAID. I ran into lots of issues with IRQ conflicts, but eventually solved them by putting the ServeRAID card into PCI slot 1 (the 32-bit bus) and the PCI SCSI card into PCI slot 5 (part of the 64-bit bus). (Originally, the ServeRAID was in slot 5, but the BIOS has been told they were moved, and it correctly finds the /boot partition and executes LILO.)

With the external hardware RAID unattached, Linux sees the internal RAID 1 disk as /dev/sda and everything goes OK. But with the external hardware RAID attached to the PCI SCSI card, it insists on calling the external hardware RAID /dev/sda and the internal RAID 1 disk /dev/sdb. The internal SCSI of the computer is disabled (it's not attached to any drive anyway), and so is the BIOS scan on the PCI SCSI card to which the external hardware RAID is attached.

The bootup now results in the error:

VFS: Cannot open root device 08:03
Kernel panic: VFS: unable to mount root fs on 08:03

Why does SuSE insist that the external RAID disk is /dev/sda? (How does it even manage to find it before it finds the internal disks?) Interestingly, booting from the SuSE CD-ROM correctly sees the internal RAID 1 disk as /dev/sda and the external RAID as /dev/sdc (the untouched internal disk is /dev/sdb).

I tried editing /etc/lilo.conf to make the boot disk /dev/sdb, which is what it is when the external RAID is attached, but it doesn't make a difference.

Any ideas?

--
Maurice Volaski, mvolaski@aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
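As an aside, the numbers in that panic message can be decoded by hand: they are the root device's major and minor numbers in hex, and block major 8 is the SCSI disk driver. A minimal sketch (plain shell arithmetic, nothing system-specific):

```shell
# "Cannot open root device 08:03" gives major:minor in hex.
# 0x08 = 8 is the SCSI-disk block major; minor 0x03 = 3 is the
# third partition on the first SCSI disk, i.e. /dev/sda3.
printf '%d:%d -> /dev/sda3\n' 0x08 0x03
```

So the kernel is looking for the root filesystem on /dev/sda3, which no longer points at the internal RAID 1 pair once the external array claims sda.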
On Thu, 11 Jan 2001, Maurice Volaski wrote:

Hi,
Got an IBM Netfinity 4500R box which started out with just a ServeRAID 3LS adapter. Configured it for RAID 1 on two of three internal hard disks and left the third disk untouched. Installed SuSE 7.0 Linux. At that point, it was a working system where the RAID 1 pair is /dev/sda and where Linux lives on both /boot and /. (zip) But with the external hardware RAID attached to the PCI SCSI card, it insists on calling the external hardware RAID /dev/sda and the internal RAID 1 disk /dev/sdb. (zip) Why does SuSE insist that the external RAID disk is /dev/sda? (How does it even manage to find it before it finds the internal disks?)
Interestingly, booting from the SuSE CD-ROM correctly sees the internal RAID 1 disk as /dev/sda and the external RAID as /dev/sdc (the untouched internal disk is /dev/sdb).
Any ideas?
Yes... at boot-up the kernel scans for SCSI controllers, and after that it scans for disks, starting at scsi0 (the first controller found), then scsi1, and so on up to scsiN (the Nth controller). For some reason, your second controller is being found before your original controller.

If the two controllers use the same chipset, the driver may have a way of inverting the scan order (normally it follows the PCI bus numbering order).

If they use different chipsets, and their drivers are loaded as modules, you can see the order in which they are loaded during boot. If your new controller's driver is being loaded first, you could try changing the order of the module names on the INITRD_MODULES line in /etc/rc.config, then running mkinitrd and lilo, and rebooting.

Hope to have helped rather than creating more confusion :)

Regards
Adilson Ribeiro
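As a concrete sketch of that last suggestion: the module names below (ips for the ServeRAID, aic7xxx for an Adaptec-based U160 card) are assumptions for illustration only; substitute whatever drivers your boot messages actually show. The edit is demonstrated on a throwaway copy rather than the real /etc/rc.config:

```shell
# Throwaway copy standing in for /etc/rc.config (names are assumed)
conf=$(mktemp)
echo 'INITRD_MODULES="aic7xxx ips"' > "$conf"   # new card's driver listed first

# Put the boot controller's driver first, so its disks are scanned
# first and keep the /dev/sda, /dev/sdb names:
sed 's/"aic7xxx ips"/"ips aic7xxx"/' "$conf"

# On the real system, after editing /etc/rc.config the same way,
# rebuild the initrd and re-run the boot loader:
#   mk_initrd && lilo
rm -f "$conf"
```

The order of INITRD_MODULES matters because the initrd loads the drivers in that order, and disks are assigned sda, sdb, ... in the order their controllers register.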
Thanks for your responses.
If they use different chipsets, and their drivers are loaded as modules, you can see the order in which they are loaded during boot. If your new controller's driver is being loaded first, you could try changing the order of the module names on the INITRD_MODULES line in /etc/rc.config, then running mkinitrd and lilo, and rebooting.
This worked. (The hardest part was finding mkinitrd. It's spelled mk_initrd and is located on the first SuSE CD.)

--
Maurice Volaski, mvolaski@aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
Maurice Volaski wrote:
Got an IBM Netfinity 4500R box which started out with just a ServeRAID 3LS adapter. Configured it for RAID 1 on two of three internal hard disks and left the third disk untouched. Installed SuSE 7.0 Linux. At that point, it was a working system where the RAID 1 pair is /dev/sda and where Linux lives on both /boot and /.
Now I have installed a PCI Ultra 160 SCSI card to connect an external hardware RAID. Ran into lots of issues with IRQ conflicts, but eventually solved them by putting the ServeRAID card into PCI slot 1 (the 32-bit bus) and the PCI SCSI card into PCI slot 5 (part of the 64-bit bus). (Originally, the ServeRAID was in slot 5, but the BIOS has been told they were moved, and it correctly finds the /boot partition and executes LILO.)
With the external hardware RAID unattached, Linux sees the internal RAID one disk as /dev/sda and everything goes OK.
But with the external hardware RAID attached to the PCI SCSI card, it insists on calling the external hardware RAID /dev/sda and the internal RAID 1 disk /dev/sdb.
The internal SCSI of the computer is disabled (it's not attached to any drive anyway), and so is the BIOS scan on the PCI SCSI card to which the external hardware RAID is attached.
The bootup now results in the error:

VFS: Cannot open root device 08:03
Kernel panic: VFS: unable to mount root fs on 08:03
Why does SuSE insist that the external RAID disk is /dev/sda? (How does it even manage to find it before it finds the internal disks?)
Interestingly, booting from the SuSE CD-ROM correctly sees the internal RAID 1 disk as /dev/sda and the external RAID as /dev/sdc (the untouched internal disk is /dev/sdb).
I tried editing /etc/lilo.conf to make the boot disk /dev/sdb, which is what it is when the external RAID is attached, but it doesn't make a difference.
Any ideas?
You need to change the fstab entries also.

--
Mark Hounschell
dmarkh@cfl.rr.com
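To illustrate: if / and /boot are referred to by device name in /etc/fstab, those entries have to track the same renaming as lilo.conf. A hypothetical before/after sketch (partition numbers and filesystem type are examples, not taken from the original post):

```
# /etc/fstab, hypothetical entries -- adjust to the real layout.
# Before (internal RAID 1 pair was /dev/sda):
#   /dev/sda3   /       ext2   defaults   1 1
#   /dev/sda1   /boot   ext2   defaults   1 2
# After the external RAID claims sda, the internal pair becomes sdb:
#   /dev/sdb3   /       ext2   defaults   1 1
#   /dev/sdb1   /boot   ext2   defaults   1 2
```

(If the module order is fixed in the initrd instead, as suggested earlier in the thread, the internal disks keep their original names and fstab can stay as it is.)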
participants (3):
- Adilson Guilherme Vasconcelos Ribeiro
- Mark Hounschell
- Maurice Volaski