https://bugzilla.novell.com/show_bug.cgi?id=760859

https://bugzilla.novell.com/show_bug.cgi?id=760859#c0

           Summary: raid1 with pata and sata disk started as 2 arrays
    Classification: openSUSE
           Product: openSUSE 12.1
           Version: Final
          Platform: Other
        OS/Version: Other
            Status: NEW
          Severity: Normal
          Priority: P5 - None
         Component: Bootloader
        AssignedTo: jsrain@suse.com
        ReportedBy: volker3204@paradise.net.nz
         QAContact: jsrain@suse.com
          Found By: ---
           Blocker: ---

User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.34 (KHTML, like
Gecko) konqueror/4.7.2 Safari/534.34

There seems to be a race condition between the PATA disk, the SATA disk, and
a USB flash card reader over which device gets allocated its device name in
/dev first. If it ends up like

# lsscsi
[0:0:0:0]    disk    ATA      SAMSUNG HD103UJ  1AA0  /dev/sda
[4:0:0:0]    disk    Generic  IC1210 CF        1.9C  /dev/sdb
[4:0:0:1]    disk    Generic  IC1210 MS        1.9C  /dev/sdc
[4:0:0:2]    disk    Generic  IC1210 MMC/SD    1.9C  /dev/sdd
[4:0:0:3]    disk    Generic  IC1210 SM        1.9C  /dev/sde
[5:0:0:0]    disk    ATA      ST3120026A       3.06  /dev/sdf

after the raid1 was created with the two disks as /dev/sd[ab], then two
separate arrays are started, both degraded. It used to work fine before
systemd, i.e. up to 11.1 (didn't try 11.[234]), but 12.1 is broken.

# cat /etc/mdadm.conf
DEVICE containers partitions
ARRAY /dev/md/linux1:system UUID=2276a9a1:da6d0888:554fa14a:f4f32b37
ARRAY /dev/md/linux1:home UUID=ae0f83e0:3304dfea:feed7f14:ef394574

# mdadm -Ds
ARRAY /dev/md126 metadata=1.0 name=linux1:system UUID=2276a9a1:da6d0888:554fa14a:f4f32b37
ARRAY /dev/md/linux1:home metadata=1.2 name=linux1:home UUID=ae0f83e0:3304dfea:feed7f14:ef394574
ARRAY /dev/md/linux1:home metadata=1.2 name=linux1:home UUID=ae0f83e0:3304dfea:feed7f14:ef394574

Note that the /home array is listed twice: both degraded halves were started,
and both carry the same array UUID.

The initrd contains:

# cat etc/mdadm.conf
AUTO -all
ARRAY /dev/md126 metadata=1.0 name=linux1:system UUID=2276a9a1:da6d0888:554fa14a:f4f32b37

The problem occurs only with the second array for /home, not with the first
array for /.

Reproducible: Always

Steps to Reproduce:
1. Set up system as described.
2. Reboot several times, examining raid status each time.

Actual Results:
1 raid1 array started with 2 active disks for the rootfs.
2 raid1 arrays started with 1 active disk each for /home, both degraded.
Both arrays have the same array UUID.

Expected Results:
2 raid1 arrays started, both with 2 active disks.
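
One way to confirm the split-brain state from the running system (a sketch;
the md device numbers and member partitions below are placeholders, since the
report doesn't name them):

# cat /proc/mdstat                  (should list two fragments, e.g. md125/md127)
# mdadm --detail /dev/md125         (each fragment: 1 active member, 1 missing)
# mdadm --examine /dev/sda2         (the superblocks on both members should show
# mdadm --examine /dev/sdf2          the same UUID ae0f83e0:3304dfea:feed7f14:ef394574)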
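
A possible workaround sketch, assuming the trigger is that the initrd's
mdadm.conf names only the first array, leaving /home to racy incremental
assembly: keep UUID-based ARRAY lines for both arrays in /etc/mdadm.conf
(as shown above) and rebuild the initrd so the boot-time config matches.
Whether mkinitrd on 12.1 copies /etc/mdadm.conf into the initrd unmodified
is an assumption worth verifying.

# mdadm --detail --scan             (cross-check the ARRAY lines to carry over)
# vi /etc/mdadm.conf                (both arrays by UUID, no /dev/sdX names)
# mkinitrd                          (rebuild the initrd, reboot, recheck /proc/mdstat)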
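
If the system already came up with the two degraded halves of /home, a
recovery sketch (device and partition names are examples, check /proc/mdstat
first; if both halves were mounted and written to, they have diverged and one
member has to be resynced):

# umount /home                      (if one of the halves got mounted)
# mdadm --stop /dev/md125           (stop both single-disk fragments)
# mdadm --stop /dev/md127
# mdadm --assemble /dev/md/home --uuid=ae0f83e0:3304dfea:feed7f14:ef394574 \
      /dev/sda2 /dev/sdf2           (assemble by UUID, not by enumeration order)
# mdadm /dev/md/home --re-add /dev/sdf2
                                    (only if one member was dropped as stale)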