https://bugzilla.novell.com/show_bug.cgi?id=456236

User oracle@zoellich.de added comment
https://bugzilla.novell.com/show_bug.cgi?id=456236#c4

Heim Meim <oracle@zoellich.de> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |oracle@zoellich.de

--- Comment #4 from Heim Meim <oracle@zoellich.de> 2009-02-27 09:29:17 MST ---
As my problem seems to be similar, I am appending to this bug.

After yesterday's openSUSE 11.1 security update my RAID1 partly fails.
The kernel is currently kernel-default-base-2.6.27.19-3.2.1.

I've got two disks:

/dev/sdb1           2          64      506047+  fd  Linux raid autodetect
/dev/sdb2          65       91201   732057952+  fd  Linux raid autodetect
/dev/sdc1           2          64      506047+  fd  Linux raid autodetect
/dev/sdc2          65       91201   732057952+  fd  Linux raid autodetect

"Raided" like:

/dev/md0  /dev/sdb1 /dev/sdc1  mounted on /boot
/dev/md1  /dev/sdb2 /dev/sdc2  used for LVM (partitions: / and /home)

Today a boot attempt failed with a filesystem check on a disk other than /.
Examination shows that /dev/md1 was assembled correctly and / and /home got
mounted, but /dev/md0 was renamed to /dev/md127, and therefore the mount -a
failed because fstab points to /dev/md0.

After commenting out

#/dev/md0  /boot  ext3  acl,user_xattr  1 2

in fstab, I got my system to boot. The renamed RAID1 device /dev/md127 does
contain the contents of /boot, so there is no data loss.

Some more information:

zobel:~ # cat /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid1 UUID=4c637e08:9518d98e:e2d44a80:811014dc
ARRAY /dev/md1 level=raid1 UUID=07b0d52e:5b8486c2:4d9e334b:ef78f2c8

zobel:~ # cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear]
md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
      506032 blocks super 1.0 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 32KB chunk

md1 : active raid1 sdb2[0] sdc2[1]
      732057816 blocks super 1.0 [2/2] [UU]
      bitmap: 10/350 pages [40KB], 1024KB chunk

zobel:~ # mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
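As a workaround other than commenting out the /boot line, two things could be
tried (a sketch only, not verified on 11.1; the UUID placeholder below is
hypothetical and must be replaced with the value blkid reports for /dev/md127).
Either mount /boot by filesystem UUID, so fstab no longer depends on the md
device name:

# /etc/fstab -- replace the placeholder with the output of: blkid /dev/md127
UUID=<fs-uuid-from-blkid>  /boot  ext3  acl,user_xattr  1 2

or stop the renamed array and reassemble it under its old name, using the
array UUID already listed in /etc/mdadm.conf:

zobel:~ # mdadm --stop /dev/md127
zobel:~ # mdadm --assemble /dev/md0 --uuid 4c637e08:9518d98e:e2d44a80:811014dc

The fstab variant is the more robust of the two, since it keeps working even
if a future mdadm/kernel update renames the device again.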
zobel:~ # mdadm --detail /dev/md1
/dev/md1:
        Version : 1.00
  Creation Time : Sun Dec 21 21:44:57 2008
     Raid Level : raid1
     Array Size : 732057816 (698.14 GiB 749.63 GB)
  Used Dev Size : 1464115632 (1396.29 GiB 1499.25 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Feb 27 17:26:40 2009
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 192.168.0.76:1
           UUID : 07b0d52e:5b8486c2:4d9e334b:ef78f2c8
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2

zobel:~ # mdadm --detail /dev/md127
/dev/md127:
        Version : 1.00
  Creation Time : Sun Dec 21 21:44:57 2008
     Raid Level : raid1
     Array Size : 506032 (494.25 MiB 518.18 MB)
  Used Dev Size : 506032 (494.25 MiB 518.18 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Feb 27 16:50:36 2009
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 192.168.0.76:0
           UUID : 4c637e08:9518d98e:e2d44a80:811014dc
         Events : 8

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

--
Configure bugmail: https://bugzilla.novell.com/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.