I'm still a newbie with a lot of this Linux stuff. I have yet to go through the process of compiling my own kernel, and this is the first time I've used mk_initrd or the rescue disk to repair a system, so please bear with me (long post).

I'm getting a kernel panic at startup. The last three lines of the startup log are as follows:

    Loading module cdrom...
    Using /lib/modules/2.4.19-4GB/kernel/drivers/cdrom/cdrom.o
    Kernel Panic: VFS: Unable to mount root fs on 03:03

I'm using SuSE 8.1 with the default ReiserFS file system and the GRUB boot loader. I have one hard drive set up as master on the first IDE channel (hda) and a CD-RW/DVD-RW drive set up as master on the second IDE channel (hdc). I am not using RAID or SCSI.

I can trace the problem back to when I mistakenly made an initial ramdisk with only a cdrom module included. I had some problems with a DVD drive that led me to want to try different drivers, so I replaced my CD-RW drive with the combo CD-RW/DVD-RW drive. YaST recognized the drive upon startup, but I had to link /dev/dvd to /dev/sr0 in order to play a DVD in MPlayer and Xine:

    ln -sf /dev/sr0 /dev/dvd

After fixing the link problem I found that DVD playback was not smooth; it seemed to be a DMA issue. When I opened the IDE DMA setup in YaST, the drive either did not appear or I wasn't given the option to turn DMA on (I can't remember which). However, when I used hdparm to enable DMA on /dev/hdc, I got smooth DVD playback:

    hdparm -d1 /dev/hdc

I have another system running 8.1 with a DVD-ROM drive that was in place when the OS was installed. I don't have any problems playing DVDs on that one, where the drive is mounted from /dev/hdc and uses the ide-cd driver. After reading previous threads regarding issues with CD-RW drives, DVD drives, and the driver modules available, I decided that using ide-cd instead of the ide-scsi driver would be best. I want the CD-RW/DVD-RW drive to play DVDs with the least amount of problems; CD-RW support is not a high priority.
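(Side note for anyone retracing my steps: a quick sketch of how I check the current DMA flag before changing anything. This assumes hdparm is installed and that /dev/hdc is the drive, as on my machine; `-d` with no value only reports the setting rather than changing it.)

```shell
#!/bin/sh
# Query (not set) the DMA flag on the optical drive.
dev=/dev/hdc   # the CD-RW/DVD-RW drive from my setup; adjust to yours
if [ -b "$dev" ]; then
    hdparm -d "$dev"              # prints e.g. "using_dma = 1 (on)"
else
    echo "$dev is not a block device on this machine"
fi
```

Note that a plain `hdparm -d1` does not survive a reboot by itself; it has to be re-run (or configured) at startup.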
I was doing OK up to this point, but I think the next step is what got me in trouble. Following the process described on one of SuSE's support pages (http://sdb.suse.de/en/sdb/html/81_ide-scsi.html), I typed ide-cd and cdrom into the text field in the YaST sysconfig editor. My mistake was that, instead of writing both modules on the same line, I typed each module and hit return, which put it on another line of the pull-down text field. I know, it's a stupid mistake; I should have just made the modification in a text editor where I could see the file. Assuming all the modules would be included, I made the initial ramdisk:

    su root
    mk_initrd

Upon restart I got the kernel panic. After looking at /etc/sysconfig/kernel, it appears that "cdrom" was the only module included. I don't have a boot disk, and nothing in the /boot directory is backed up (oops!).

I am able to use the rescue feature on the SuSE install disk.

Rescue login (no password): root

Check the partitions:

    Rescue: fdisk /dev/hda

which are:

    /dev/hda1 (boot)
    /dev/hda2 (swap)
    /dev/hda3 (root)

Mount the partitions:

    Rescue: mkdir /testboot
    Rescue: mkdir /testroot
    Rescue: mount /dev/hda1 /testboot
    Rescue: mount /dev/hda3 /testroot

Edit /testroot/etc/sysconfig/kernel from:

    INITRD_MODULES="cdrom"

to:

    INITRD_MODULES="reiserfs ide-scsi"

(what I assume it was before, with reiserfs added to at least mount root). I tried to make the initial ramdisk with several different mk_initrd options, but nothing I tried worked. I'm sure it has something to do with references and mount points, since I'm executing it from the rescue disk, but I don't know how to get around it.
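(For the record, the format sysconfig expects is all modules on ONE line, space-separated inside the quotes; that is exactly what the return-key mistake broke. A minimal sketch of the edit, done here on a throwaway copy rather than the real /etc/sysconfig/kernel; the module list "reiserfs ide-cd cdrom" is my guess at what is wanted, not gospel.)

```shell
#!/bin/sh
# Work on a scratch copy standing in for /etc/sysconfig/kernel.
cfg=$(mktemp)
printf 'INITRD_MODULES="cdrom"\n' > "$cfg"

# All modules on one line, space-separated inside the quotes.
sed -i 's/^INITRD_MODULES=.*/INITRD_MODULES="reiserfs ide-cd cdrom"/' "$cfg"

grep '^INITRD_MODULES=' "$cfg"
```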
    Rescue: mk_initrd -h

    Rescue: mk_initrd -b /testboot /testroot
    using "/dev/hda3" as root device (mounted on "/testroot" as "reiserfs")
    no kernel image "vmlinuz"
    no kernel image "vmlinuz.shipped"

    Rescue: mk_initrd -b /testboot -k "/testboot/vmlinuz /testboot/vmlinuz.shipped" -i "/testboot/initrd /testboot/initrd.shipped" /testroot
    using "/dev/hda3" as root device (mounted on "/testroot" as "reiserfs")
    no kernel image "vmlinuz"
    no kernel image "vmlinuz.shipped"

    Rescue: chroot /testroot /sbin/mk_initrd -b /testboot
    /sbin/mk_initrd: line 399: /dev/fd/62: No such file or directory
    no '/' mountpoint specified in //etc/fstab

    Rescue: chroot /testroot /sbin/mk_initrd -b /testboot /testroot
    /sbin/mk_initrd: line 399: /dev/fd/62: No such file or directory
    no '/' mountpoint specified in /testroot/etc/fstab
    /sbin/mk_initrd: line 1: /testroot/etc/fstab: No such file or directory
    /sbin/mk_initrd: line 1: /testroot/etc/fstab: No such file or directory

Will I have to use the -d option when running mk_initrd to change the root device? Will I have to temporarily modify /etc/fstab to account for the different mount points? Will I even be able to make an initial ramdisk from the rescue CD? Could I install SuSE on another hard drive in place of the one connected as hda, then copy the /boot files back to the original drive? Or must the ramdisk be created by the exact same system that uses it?

Out of desperation, I even tried copying the initrd and initrd.shipped files from the second working SuSE 8.1 machine (with the reiserfs module included) to see if it would at least mount the root file system. But I got another kernel panic with a similar message, this time mentioning something about a reiserfs and mmx error instead of the cdrom error. I was not surprised, since the second system has a totally different hardware configuration.

I'm sure this is WAY more than you need to know :) I wanted to get the DVD drive working, but at this point I'll be happy just to see the login screen again!
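(One thing I'm now suspecting about the chroot attempts: the "/dev/fd/62: No such file or directory" error looks like what bash process substitution produces when /proc is not mounted inside the chroot, since /dev/fd on Linux is a link into /proc/self/fd; and the fstab complaints suggest mk_initrd could not see the system's own /etc/fstab at its normal path. A hedged sketch of the chroot setup I would try next, with boot mounted inside the root tree instead of beside it. The device names are from my layout above; this is untested against 8.1, and it is guarded so it does nothing on a machine without these partitions.)

```shell
#!/bin/sh
# Sketch: assemble the installed system under one tree, then chroot so
# mk_initrd sees its own /etc/fstab, /boot and /proc at their usual paths.
root_dev=/dev/hda3   # root partition from my fdisk listing
boot_dev=/dev/hda1   # boot partition from my fdisk listing
mnt=/mnt

if [ -b "$root_dev" ]; then
    mount "$root_dev" "$mnt"
    mount "$boot_dev" "$mnt/boot"    # boot goes INSIDE the root tree, not beside it
    mount -t proc proc "$mnt/proc"   # without this, /dev/fd/NN errors appear
    chroot "$mnt" /sbin/mk_initrd    # paths now look like a normally booted system
else
    echo "$root_dev not present; adjust the device names before running this"
fi
```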
Could someone please point me in the right direction? Thanks. -Luke Tilsley