[Bug 932735] New: Booting Problems From A Degraded RAID 1 Partition
http://bugzilla.opensuse.org/show_bug.cgi?id=932735

Bug ID: 932735
Summary: Booting Problems From A Degraded RAID 1 Partition
Classification: openSUSE
Product: openSUSE Distribution
Version: 13.2
Hardware: x86-64
OS: openSUSE 13.2
Status: NEW
Severity: Major
Priority: P5 - None
Component: Bootloader
Assignee: jsrain@suse.com
Reporter: clkennedy38@gmail.com
QA Contact: jsrain@suse.com
Found By: ---
Blocker: ---
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:31.0) Gecko/20100101 Firefox/31.0
Build Identifier:

While performing some pre-production testing of an openSUSE 13.2 install, I discovered a problem when booting from a degraded mdadm RAID 1 partition. If the second disk, /dev/sdb (containing RAID partition /dev/sdb2), is physically inoperative or disconnected, the system will not boot and I end up in the dracut emergency shell. At that point, mdadm shows the partition /dev/md0 to be operational, albeit degraded, with nothing that should prevent booting. If /dev/sdb is connected, the system boots normally.

My disk configuration contains two disks, each with a 4G swap partition and an 85G RAID partition, combined to form a bootable mirrored RAID 1 array /dev/md0 containing /dev/sda2 and /dev/sdb2. The specifics are listed in the additional information section of this report.

Reproducible: Always

Steps to Reproduce:
1. Power down.
2. Disconnect /dev/sdb, which contains /dev/sdb2, part of the bootable /dev/md0.
3. Power up.

Actual Results:
After selecting the boot option from the GRUB menu, it lands me in the dracut emergency shell.

Expected Results:
Should boot normally with a degraded /dev/md0 array. This is what happens in all previous releases.

* Thinking that it still had something to do with the degraded array, I reconfigured it as a single-disk RAID 1 where /dev/md0 contained only /dev/sda2 (after failing out and removing /dev/sdb2, I ran: mdadm --grow /dev/md0 --raid-devices=1 --force). When the secondary disk was removed, it would not boot and exhibited the same behavior as above.
If the second disk was once again physically connected, it would boot without incident.

* One of my early thoughts was that perhaps it had something to do with the boot loader. 13.2 uses GRUB 2 by default and doesn't offer GRUB (I guess they're calling it GRUB Legacy now) as an install option. I'm more familiar with the configuration of the original, so I installed it without incident and, after removing /dev/sdb, it exhibited the same behavior as above.

* At the emergency shell prompt, it gives the option to view the boot log. Some interesting highlights:
  o kernel: md0: is active with 1 out of 2 mirrors
  o kernel: md0: detected capacity change from 0 to 91267923968
  o kernel: md0: unknown partition table
  o systemd[1]: Found device /dev/md0
  o dracut-initqueue[284]: Warning: Could not boot
  o dracut-initqueue[284]: Warning: /dev/disk/by-uuid/ea3 .. does not exist

* Interestingly, the /dev/disk/by-uuid entry listed above equates to /dev/sdb1 when /dev/sdb is attached. In this case it isn't. I thought that perhaps the resume= entry in the GRUB menu was the cause -- but no, I had removed it.
BELOW IS MY DISK CONFIGURATION

Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
/dev/sda1       2048   8390655   8388608   4G 82 Linux swap / Solaris
/dev/sda2 *  8390656 186648575 178257920  85G fd Linux raid autodetect

Disk /dev/sdb: 93.2 GiB, 100030242816 bytes, 195371568 sectors
/dev/sdb1       2048   8390655   8388608   4G 82 Linux swap / Solaris
/dev/sdb2 *  8390656 186648575 178257920  85G fd Linux raid autodetect

Disk /dev/md0: 85 GiB, 91267923968 bytes, 178257664 sectors

mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri May 15 11:32:28 2015
     Raid Level : raid1
     Array Size : 89128832 (85.00 GiB 91.27 GB)
  Used Dev Size : 89128832 (85.00 GiB 91.27 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Wed May 27 19:39:44 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : any:0
           UUID : f191ca0c:b31d6d89:41232679:5e77bec6
         Events : 2465

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       18        1      active sync   /dev/sdb2

/etc/fstab contains:
/dev/sda1  swap  swap  defaults        0 0
/dev/md0   /     ext3  acl,user_xattr  1 1

--
You are receiving this mail because: You are on the CC list for the bug.
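The figures in this listing are internally consistent, which supports the reporter's claim that the array itself is healthy. A quick sanity check (plain arithmetic on the numbers above, no additional assumptions):

```python
SECTOR = 512  # bytes per sector, as reported by fdisk

# /dev/md0 size in sectors, from the "Disk /dev/md0" line
md0_sectors = 178257664
md0_bytes = md0_sectors * SECTOR
print(md0_bytes)  # 91267923968, matching "detected capacity change from 0 to 91267923968"

# "Array Size" in mdadm --detail is reported in KiB
array_size_kib = 89128832
print(array_size_kib * 1024 == md0_bytes)  # True

# The member partition (/dev/sda2) is slightly larger than the array;
# the difference is space reserved by the v1.0 superblock and bitmap.
part_sectors = 178257920
print(part_sectors - md0_sectors)  # 256 sectors = 128 KiB
```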
Jiri Srain
--- Comment #2 from Clifford Kennedy
Thomas Renninger
--- Comment #4 from Neil Brown
Thomas Renninger
Thomas Renninger
--- Comment #7 from Neil Brown
(In reply to Thomas Renninger from comment #6)
> Wait. All your patches seem to be mainline:
Looks like they were finally merged on Tuesday :-)
> Please scream out loud if this is nothing for maintenance.
All looks good to me. Thanks.
--- Comment #9 from Clifford Kennedy
Clifford Kennedy
--- Comment #11 from Clifford Kennedy
--- Comment #14 from Clifford Kennedy
--- Comment #15 from Clifford Kennedy
Clifford Kennedy
--- Comment #20 from Clifford Kennedy