[Bug 879384] New: mkinitrd cannot handle multiple depended MD RAID configurations
https://bugzilla.novell.com/show_bug.cgi?id=879384
https://bugzilla.novell.com/show_bug.cgi?id=879384#c0

           Summary: mkinitrd cannot handle multiple depended MD RAID
                    configurations
    Classification: openSUSE
           Product: openSUSE 13.1
           Version: Final
          Platform: x86-64
        OS/Version: openSUSE 13.1
            Status: NEW
          Severity: Critical
          Priority: P5 - None
         Component: Kernel
        AssignedTo: kernel-maintainers@forge.provo.novell.com
        ReportedBy: DOlsson@WEB.de
         QAContact: qa-bugs@suse.de
          Found By: ---
           Blocker: ---

User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0

During the installation of the latest kernel update for openSUSE 13.1 on my
system, "mkinitrd" refused to build the "initrd" file, bailing out with an
error in the "/lib/mkinitrd/setup/72-block.sh" script. The problem turns out
to be a bug in that script: it cannot handle the fact that the root file
system is placed on an MD RAID device that consists of a physical disk and
another MD RAID device!

Reproducible: Always

Steps to Reproduce:
1. On my system, I have created 2 RAID1 MD devices:

     /dev/md/data    consisting of /dev/sdb and /dev/sdc
     /dev/md/system  consisting of /dev/sda and /dev/md/data1

   The /dev/md/data device has been configured with 2 partitions (data[12]);
   the /dev/md/system device has been configured with 4 partitions
   (system[1234]). Graphically, the MD RAID layout looks like this (all disk
   paths relative to "/dev"):

   sda --------------------------------(c)-+-> md/system -+-> md/system1 -> EFI
   sdb -(a)-v                              !              !-> md/system2 -> /boot
   sdc -(b)-*-> md/data -+-> md/data1 -(d)-^              !-> md/system3 -> /
                         !                                *-> md/system4 -> swap
                         !
                         !-> md/data2 --> LVM volume -+-> vol/var
                                                      !-> vol/tmp
                                                      *-> vol/home

   where (a) and (b) form a RAID1 device with 2 disks (sdb and sdc), and
   (c) and (d) form a RAID1 device with 2 disks (sda and md/data1).
2.
Use "mkinitrd" to create an "initrd" file on a system with the disk layout
given above: the "initrd" is not built, because the
"/lib/mkinitrd/setup/72-block.sh" script bails out with an error.

Actual Results:
Using "mkinitrd" to create an "initrd" file on a system where the root file
system is placed on an MD RAID device that itself consists of MD RAID
devices results in an error and no "initrd" file. Extract from the
"/var/log/zypp/history" log file:

# 2014-05-20 12:49:22 kernel-desktop-3.11.10-11.1.x86_64.rpm installed ok
# Additional rpm output:
#
# Kernel image:   /boot/vmlinuz-3.11.10-11-desktop
# Initrd image:   /boot/initrd-3.11.10-11-desktop
# KMS drivers:    radeon
# Root device:    /dev/md127p3 (mounted on / as ext4)
# Resume device:  /dev/disk/by-label/swap00 (/dev/md127p4)
# Device md126 not handled
# Script /lib/mkinitrd/setup/72-block.sh failed!
# There was an error generating the initrd (1)

Expected Results:
Creation of the "initrd" file in "/boot", containing the correct MD set-up
in "/etc/mdadm.conf", i.e. all the definitions needed to consistently
assemble the MD devices on which the root file system depends.

Although this patch is definitely not the right one (it fixes the problem in
the wrong place, as far as I can tell), it does at least work as a
work-around in my case:

--- setup-block.sh-2.8.1-2.1    2013-10-27 20:29:16.000000000 +0000
+++ setup-block.sh      2014-05-20 14:26:27.607541290 +0000
@@ -147,6 +147,9 @@
     loop*)
         echo "[BLOCK] WARNING: Loop device detected. Include the required drivers manually." >&2
         ;;
+    md*)
+        echo "[BLOCK] WARNING: md device detected. You may have to include the md driver manually." >&2
+        ;;
     mmc*)
         result=$(find_blkmodule "$blkdev")
         result="$result mmc_block"

A correct patch would fix the conversion of MD RAID devices to real devices
in the "setup-md.sh" script, but as that is a much harder nut to crack, the
above will do for now, until I have figured out how to create a correct
patch for this bug.
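The closing remark above points at what the real fix would have to do: resolve an MD device to its underlying physical devices recursively, because a member of one array can itself be (a partition of) another array. Below is a minimal sketch of such a resolution, assuming the sysfs convention that "/sys/block/<dev>/slaves/" lists the members of a stacked block device. The function name, the SYSBLOCK override, and the mock tree are illustrative and not part of mkinitrd; mapping a partition such as md126p1 back to its parent device is omitted for brevity.

```shell
#!/bin/sh
# Recursively resolve a stacked block device (e.g. an MD array built on top
# of another MD array) to the leaf devices it is ultimately made of, by
# walking the slaves/ directories that sysfs maintains for stacked devices.
# SYSBLOCK defaults to /sys/block but can point at a mock tree for testing.
SYSBLOCK="${SYSBLOCK:-/sys/block}"

resolve_leaves() {
    for slave in "$SYSBLOCK/$1/slaves"/*; do
        [ -e "$slave" ] || continue     # skip if the slaves dir is empty
        name=${slave##*/}
        if [ -d "$SYSBLOCK/$name/slaves" ]; then
            resolve_leaves "$name"      # member is itself a stacked device
        else
            echo "$name"                # a real leaf device
        fi
    done
}

# Demo against a mock tree mirroring the report (simplified, no partitions):
# md127 (md/system) <- sda + md126 (md/data) <- sdb + sdc
SYSBLOCK=$(mktemp -d)
mkdir -p "$SYSBLOCK/md127/slaves" "$SYSBLOCK/md126/slaves"
touch "$SYSBLOCK/md127/slaves/sda" "$SYSBLOCK/md127/slaves/md126" \
      "$SYSBLOCK/md126/slaves/sdb" "$SYSBLOCK/md126/slaves/sdc"
resolve_leaves md127 | sort    # prints sda, sdb, sdc (one per line)
```

On a real system one would start from the device holding the root file system (md127 in the log above) and collect both the leaf disks and every intermediate array, so that the generated mdadm.conf and module list cover all of them.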
--
Configure bugmail: https://bugzilla.novell.com/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.
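The "Expected Results" in the report above ask for an "/etc/mdadm.conf" inside the initrd that describes every array the root file system depends on. For the reported layout that means entries for both the inner and the outer array, roughly along these lines (a sketch only; the UUID values are placeholders, not taken from the report, and would come from `mdadm --detail --scan` on the real system):

```
ARRAY /dev/md/data    metadata=1.2  name=data    UUID=<uuid-of-md/data>
ARRAY /dev/md/system  metadata=1.2  name=system  UUID=<uuid-of-md/system>
```

With both entries present, `mdadm --assemble --scan` in the early boot environment can bring up md/data first and then md/system, which depends on md/data1.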
https://bugzilla.novell.com/show_bug.cgi?id=879384#c1
--- Comment #1 from Dennis Olsson

https://bugzilla.novell.com/show_bug.cgi?id=879384#c
Neil Brown

https://bugzilla.novell.com/show_bug.cgi?id=879384#c2
--- Comment #2 from Dennis Olsson

https://bugzilla.novell.com/show_bug.cgi?id=879384#c3
--- Comment #3 from Dennis Olsson

https://bugzilla.novell.com/show_bug.cgi?id=879384#c4
Neil Brown

https://bugzilla.novell.com/show_bug.cgi?id=879384#c5
--- Comment #5 from Neil Brown

https://bugzilla.novell.com/show_bug.cgi?id=879384#c6
Dennis Olsson

https://bugzilla.novell.com/show_bug.cgi?id=879384#c7
--- Comment #7 from Neil Brown

https://bugzilla.novell.com/show_bug.cgi?id=879384#c8
--- Comment #8 from Bernhard Wiedemann

https://bugzilla.novell.com/show_bug.cgi?id=879384#c9
Neil Brown

https://bugzilla.novell.com/show_bug.cgi?id=879384#c10
--- Comment #10 from Swamp Workflow Management