https://bugzilla.novell.com/show_bug.cgi?id=230733

------- Comment #2 from walter.haidinger@gmx.at 2007-01-03 14:30 MST -------

I first noticed the change when booting into the rescue system without
/etc/mdadm.conf: the md devices were assembled differently than under 10.1.
I did _not_ change my mdadm.conf during the upgrade from 10.1 to 10.2.

My RAID config is as follows:

md0  : active raid1 hde1[1] hda1[0]
md1  : active raid1 hdg1[1] hdc1[0]
md10 : active raid5 hdg5[3] hde5[2] hdc5[0] hda5[1]
md11 : active raid5 hdg6[3] hde6[2] hdc6[0] hda6[1]
md12 : active raid5 hdg7[3] hde7[2] hdc7[0] hda7[1]
md13 : active raid5 hdg8[3] hde8[2] hdc8[0] hda8[1]
md14 : active raid5 hdg9[3] hde9[2] hdc9[1] hda9[0]

/etc/mdadm.conf:

DEVICE /dev/hd[aceg]*
ARRAY /dev/md0  level=raid1 num-devices=2 devices=/dev/hd[ae]1   UUID=e6679ec5:2441c872:ed53428c:c96ac811
ARRAY /dev/md1  level=raid1 num-devices=2 devices=/dev/hd[cg]1   UUID=170960f6:4f175a32:7fc98f9c:70889186
ARRAY /dev/md10 level=raid5 num-devices=4 devices=/dev/hd[aceg]5 UUID=c00eb0ba:b16fc743:89896896:fd26ad33
ARRAY /dev/md11 level=raid5 num-devices=4 devices=/dev/hd[aceg]6 UUID=9fdbdb21:d670f738:38578622:6eb972a6
ARRAY /dev/md12 level=raid5 num-devices=4 devices=/dev/hd[aceg]7 UUID=a6be8d41:ac89a245:c5933584:a34c710e
ARRAY /dev/md13 level=raid5 num-devices=4 devices=/dev/hd[aceg]8 UUID=e3a4fab0:ef909a59:982fe73c:7e41cd84
ARRAY /dev/md14 level=raid5 num-devices=4 devices=/dev/hd[aceg]9 UUID=e6449904:51f246db:e896c64c:c692f28c

After upgrading to 10.2, md1 was assembled as md6, md10 as md3, md11 as md4
and md12 as md5; md13 and md14 did not change. This indicates that, despite
an mdadm.conf with UUID entries, the arrays were assembled according to an
earlier configuration, i.e. the one from before I extended the setup with
partitions 8 and 9 after installing bigger drives. Back then, when I added
md13 and md14, I changed all raid5 md devices to start from minor 10. The
settings in mdadm.conf were obviously ignored.
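For what it's worth, the minor that the devices fell back to can be seen in the superblock itself: for 0.90-format metadata, mdadm reports the minor recorded at creation time as "Preferred Minor". A quick check, using one of the raid5 members from my setup above (needs root; output will obviously vary):

```shell
# Dump the member's md superblock; the "Preferred Minor" field holds
# the minor from the initial array creation, which is what 10.2 seems
# to use instead of the minor given in /etc/mdadm.conf.
mdadm --examine /dev/hda5 | grep "Preferred Minor"
```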
Instead, the md minor number stored in the md superblock from the initial
array creation was probably used. Please note that the md devices were
always (i.e. reproducibly) assembled to the same "wrong" (stored?) minors.
To resolve the issue, I had to stop the arrays and update the super-minor
as described in my initial report. I'm sorry, but I don't have any logs
because I was working in single-user mode only to fix the problem. Btw,
LVM did manage to assemble all LVs (with some warnings, of course) even
though some PVs lived on the "wrong" md devices. :-)

Finally, the steps to reproduce the problem are probably:
* Create md devices with 10.1, 10.0, 9.3 or earlier (I can't recall when
  I moved to RAID, maybe 3-4 years ago).
* Change the md minors in /etc/mdadm.conf. This should work in < 10.2.
* Boot 10.2; the md devices should be assembled with the minors from the
  initial creation, not those from mdadm.conf.

Please tell me if you can reproduce the problem with the steps above. I've
had the problem on two completely different machines: one at home (the
setup above, after upgrading 10.1 to 10.2) and one at work with just raid1
mirrors (running 10.1; booting with the 10.2 rescue system gave different
md minors, which broke the system backup scripts).
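For reference, the workaround I described (stop the array, then update the stored minor) can be sketched with mdadm's --update=super-minor assemble option. Using md10 from the config above as an example, and assuming it was wrongly assembled as md3 (run as root, from single-user or the rescue system):

```shell
# Stop the wrongly assembled array first.
mdadm --stop /dev/md3

# Reassemble under the intended device name; --update=super-minor
# rewrites the superblock's stored ("preferred") minor to match it,
# so subsequent boots assemble the array as md10 again.
mdadm --assemble /dev/md10 --update=super-minor /dev/hd[aceg]5
```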