[opensuse] Broken raid with 12.2 upgrade
I have 3 simple arrays, each with 2 partitions - sda1/sdb1, sda2/sdb2, sda3/sdb3. The first pair is for /boot, the second root, the third /home.

After upgrading to 12.2 from DVD, at boot the stock 3.4.6 kernel throws this message, once for each partition:

  failed to execute /sbin/mdadm --incremental /dev/sda1 --offroot

The arrays are still mountable and accessible after the system comes up. However, when I attempt to boot into the arrays (I have more than one instance of suse on the machine; the production instance uses the arrays), the root cannot be mounted and the boot fails. In this case the kernel throws these messages, again for each of the 6 partitions:

  Invalid raid superblock magic on sda1
  sda1 does not have a valid v0.90 superblock, not importing

Anyone have a suggestion as to what the problem is? Previously I was using a 3.0.3 kernel with 11.4, and it still works fine with the arrays.

Thanks in advance.

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
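[Aside: the "does not have a valid v0.90 superblock" message suggests checking which metadata version the array members actually carry, e.g. with `mdadm --examine /dev/sda1`. A minimal illustrative sketch — the `metadata_version` helper is made up for this example, and the sample text below stands in for real `mdadm --examine` output:]

```shell
# Illustrative helper: pull the metadata version out of `mdadm --examine`
# output. On a real system you would pipe the actual command into it:
#   mdadm --examine /dev/sda1 | metadata_version
metadata_version() {
  grep -E '^ *Version :' | awk '{print $3}'
}

# Fake sample standing in for `mdadm --examine /dev/sda1` output.
sample='/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
     Raid Level : raid1'

printf '%s\n' "$sample" | metadata_version    # -> 0.90.00
```

[If this reports 1.x metadata while the boot code expects v0.90 (or vice versa), that mismatch would explain the "not importing" message.]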
On Sat, 13 Oct 2012 08:36:33 Dennis Gallien wrote:
I have 3 simple arrays, each with 2 partitions - sda1/sdb1, sda2/sdb2, sda3/sdb3. The first pair is for /boot, the second root, the third /home.
After upgrading to 12.2 from DVD, at boot the stock 3.4.6 kernel throws this message, once for each partition:
failed to execute /sbin/mdadm --incremental /dev/sda1 --offroot
The arrays are still mountable and accessible after the system comes up.
However, when I attempt to boot into the arrays (I have more than one instance of suse on the machine, the production instance uses the arrays), the root cannot be mounted and the boot fails. In this case the kernel throws this message, again for each of the 6 partitions:
Invalid raid superblock magic on sda1
sda1 does not have a valid v0.90 superblock, not importing
Anyone have a suggestion as to what the problem is? Previously I was using a 3.0.3 kernel with 11.4, and it still works fine with the arrays.
Thanks in advance.
Could it be that the relevant kernel modules are not included in the initrd image for your new kernel and are therefore not available at boot time?

Regards,
Rodney.

--
Rodney Baker VK5ZTV
rodney.baker@iinet.net.au
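[Aside: Rodney's suggestion can be checked without rebooting by listing the initrd contents (e.g. `lsinitrd /boot/initrd-$(uname -r)` on openSUSE) and looking for the raid driver. A small illustrative sketch — the `has_module` helper is made up for this example, and the fake listing below stands in for real `lsinitrd` output:]

```shell
# Illustrative helper: report whether a name appears in a file listing
# such as the one produced by `lsinitrd`.
has_module() {
  if grep -q "$1"; then echo present; else echo missing; fi
}

# Fake listing standing in for `lsinitrd /boot/initrd-$(uname -r)` output.
listing='lib/modules/3.4.6-2.10-desktop/kernel/drivers/md/raid1.ko
sbin/mdadm'

printf '%s\n' "$listing" | has_module raid1    # -> present
printf '%s\n' "$listing" | has_module raid5    # -> missing
```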
Hi! Am 13.10.2012 um 02:05 schrieb Rodney Baker:
On Sat, 13 Oct 2012 08:36:33 Dennis Gallien wrote:
I have 3 simple arrays, each with 2 partitions - sda1/sdb1, sda2/sdb2, sda3/sdb3. The first pair is for /boot, the second root, the third /home.
After upgrading to 12.2 from DVD, at boot the stock 3.4.6 kernel throws this message, once for each partition:
failed to execute /sbin/mdadm --incremental /dev/sda1 --offroot
The arrays are still mountable and accessible after the system comes up.
I am having the exact same problem with my root on LVM since I applied the latest kernel update yesterday. :(

For me the following solved the problem:

1. Boot into the rescue system (on the install DVD). Simply enter `root` when asked for a login.

2. Chroot into the system you want repaired. In my case:

     mkdir /foo
     vgchange -a y
     mount /dev/noraid/root /foo
     mount --bind /proc /foo/proc
     mount --bind /dev /foo/dev
     chroot /foo

3. Create a fixed initrd. I did add `lvm` to `INITRD_MODULES` in `/etc/sysconfig/kernel`. However, as `mkinitrd` claims not to find any such module, I assume the problem is simply that the initrd itself was broken, not that lvm was missing.

     mkinitrd

4. Exit your chroot and reboot.

     exit
     reboot

That fixed it for me. Hope you are as lucky.

Regards,
Matthias

--
Matthias Bach
marix@marix.org
http://marix.org

"The only way of discovering the limits of the possible is to venture a little way past them into the impossible." - Arthur C. Clarke
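[Aside: the `INITRD_MODULES` edit Matthias describes is a one-line change in `/etc/sysconfig/kernel`. The fragment below is a sketch with illustrative values — the modules already listed vary per machine, so append to the existing list rather than replacing it:]

```shell
# /etc/sysconfig/kernel (fragment, illustrative values)
# Keep whatever modules are already listed and append the one you need,
# e.g. lvm for an LVM root or raid1 for md mirrors, then rerun mkinitrd.
INITRD_MODULES="ata_piix lvm"
```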
On Sunday, October 14, 2012 12:00:25 PM Matthias Bach wrote:
Hi!
Am 13.10.2012 um 02:05 schrieb Rodney Baker:
On Sat, 13 Oct 2012 08:36:33 Dennis Gallien wrote:
I have 3 simple arrays, each with 2 partitions - sda1/sdb1, sda2/sdb2, sda3/sdb3. The first pair is for /boot, the second root, the third /home.
After upgrading to 12.2 from DVD, at boot the stock 3.4.6 kernel throws this message, once for each partition:
failed to execute /sbin/mdadm --incremental /dev/sda1 --offroot
The arrays are still mountable and accessible after the system comes up.
I am having the exact same problem with my root on LVM since I applied the latest kernel update yesterday. :(
For me the following solved the problem:
Boot into rescue system (on Install-DVD). Simply enter `root` when asked for a login.
Chroot into the system you want repaired. In my case:
mkdir /foo
vgchange -a y
mount /dev/noraid/root /foo
mount --bind /proc /foo/proc
mount --bind /dev /foo/dev
chroot /foo
Create a fixed initrd. I did add `lvm` to `INITRD_MODULES` in `/etc/sysconfig/kernel`. However, as `mkinitrd` claims not to find any such module, I assume the problem is simply that the initrd itself was broken, not that lvm was missing.
mkinitrd
Exit your chroot and reboot.
exit
reboot
That fixed it for me. Hope you are as lucky.
Regards, Matthias
Thanks. My problem was a bug in mdadm, fixed in an Oct 5 patch.
participants (3)
-
Dennis Gallien
-
Matthias Bach
-
Rodney Baker