[Bug 798275] New: yast2: overwriting partition no longer functions
https://bugzilla.novell.com/show_bug.cgi?id=798275#c0

Summary: yast2: overwriting partition no longer functions
Classification: openSUSE
Product: openSUSE Factory
Version: 12.3 Milestone 1
Platform: All
OS/Version: Linux
Status: NEW
Severity: Major
Priority: P5 - None
Component: Installation
AssignedTo: bnc-team-screening@forge.provo.novell.com
ReportedBy: jengelh@inai.de
QAContact: jsrain@suse.com
Found By: Beta-Customer
Blocker: ---

Used: openSUSE-NET-x86_64-Build0328-Media.iso

During installation, I chose Create Custom Partitioning, removed the old sda1 partition (in the GUI), and created a new sda1 partition. I know that the contents of sda1 remain, so mkfs's error message makes sense per se, but yast should correctly resolve this one way or another, as it did in prior openSUSE releases.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c1

--- Comment #1 from Jan Engelhardt <jengelh@inai.de> 2013-01-13 02:35:39 CET ---
Created attachment 520036 (http://bugzilla.novell.com/attachment.cgi?id=520036): /var/log/YaST*
https://bugzilla.novell.com/show_bug.cgi?id=798275#c2

--- Comment #2 from Jan Engelhardt <jengelh@inai.de> 2013-01-13 02:35:59 CET ---
Created attachment 520037 (http://bugzilla.novell.com/attachment.cgi?id=520037): screenshot
https://bugzilla.novell.com/show_bug.cgi?id=798275#c3

Jan Engelhardt <jengelh@inai.de> changed:
  CC: added nfbrown@suse.com

--- Comment #3 from Jan Engelhardt <jengelh@inai.de> 2013-01-13 02:45:55 CET ---
For some very obscure reason, /dev/sda1 is part of a RAID...

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [multipath]
md127 : inactive sda1[0](S)
      4193268 blocks super 1.0

unused devices: <none>
# mdadm -E /dev/sda1
mdadm: No md superblock detected on /dev/sda1.
# mdadm -D /dev/md127
mdadm: md device /dev/md127 does not appear to be active.
# mdadm -S /dev/md127
mdadm: /dev/md127 stopped
# mkfs.ext4 -v /dev/sda1
(succeeds)

That makes me want to ask the mdadm maintainer:
- How do you use mdadm to display the status of an array that is allocated but not active? (I assume sda was locked from having its partition table reread.)
- How did md even think sda1 should be part of md127 when -E tells me there is no superblock?
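[A hedged aside on the first question: at this point the closest thing to a status view for an allocated-but-inactive array is /proc/mdstat itself or md's sysfs attributes; the attribute names below follow the kernel's md sysfs interface, and md127 is simply the device from this report.]

cat /sys/block/md127/md/array_state    # prints "inactive" for such an array
ls /sys/block/md127/md/                # per-array attributes; dev-* entries list the claimed members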
https://bugzilla.novell.com/show_bug.cgi?id=798275#c4

Jan Engelhardt <jengelh@inai.de> changed:
  Attachment #520036 marked obsolete

--- Comment #4 from Jan Engelhardt <jengelh@inai.de> 2013-02-16 13:00:50 CET ---
Created attachment 524986 (http://bugzilla.novell.com/attachment.cgi?id=524986): /var/log/YaST

Steps to reproduce (a scripted sketch of steps 1 and 2 follows after the list):
1. Using the rescue system or some preexisting system, create a RAID1 spanning /dev/sda1 and /dev/sdb1, preferably with a 1.0 superblock.
2. Stop the newly-created array and zero out /dev/sda.
3. Start the installation DVD. Once the shell becomes available on tty2, cat /proc/mdstat shows:
   Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [multipath]
   md127 : inactive sdb1[1](S)
         2095092 blocks super 1.0
   unused devices: <none>
4. YaST's partitioner shows that (a) sda has no partitions, (b) sdb has sdb1 of type 0xFD RAID, (c) md127 is not shown anywhere.
5. Delete sdb1.
6. Create partition sdb1, with ext4 for the "/" mountpoint.
7. Once the actual installation begins, mkfs fails.
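[A minimal sketch of steps 1 and 2 above, assuming /dev/sda1 and /dev/sdb1 already exist as small 0xFD partitions; the device names, the md0 name and the dd invocation are illustrative, not taken from the report.]

# step 1: create a RAID1 with a 1.0 (end-of-device) superblock
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
# step 2: stop the array again and zero the first disk, so that only sdb1
# is left carrying a RAID member signature
mdadm --stop /dev/md0
dd if=/dev/zero of=/dev/sda bs=1M    # zeroing the whole disk is slow but unambiguous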
https://bugzilla.novell.com/show_bug.cgi?id=798275

Michal Hrusecky <mhrusecky@suse.com> changed:
  AssignedTo: bnc-team-screening@forge.provo.novell.com -> yast2-maintainers@suse.de
https://bugzilla.novell.com/show_bug.cgi?id=798275

Jiří Suchomel <jsuchome@suse.com> changed:
  CC: added aschnell@suse.com
  AssignedTo: yast2-maintainers@suse.de -> fehr@suse.com
https://bugzilla.novell.com/show_bug.cgi?id=798275#c5

Thomas Fehr <fehr@suse.com> changed:
  Status: NEW -> NEEDINFO
  InfoProvider: nfbrown@suse.com added

--- Comment #5 from Thomas Fehr <fehr@suse.com> 2013-02-25 12:09:21 UTC ---
Neil, is it really intended that an md RAID that is in state inactive in /proc/mdstat nevertheless keeps its component partitions busy kernel-wise? Since stray RAID partitions are not so uncommon and we never ran into this problem before, I would assume something has changed here since older releases.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c6

Neil Brown <nfbrown@suse.com> changed:
  Status: NEEDINFO -> NEW
  InfoProvider: nfbrown@suse.com removed

--- Comment #6 from Neil Brown <nfbrown@suse.com> 2013-02-26 01:45:05 UTC ---
I'm not sure if it was intention or accident, but it has always been that way, at least as long as it has been possible to mark devices as busy (2.6.0, I think). So I think the fact that it hasn't been reported before is just "luck".

Presumably the array was left in this inactive state by mdadm, but that doesn't surprise me. It will normally only do that when you are assembling an array by hand, for example:

  mdadm -A /dev/md0 /dev/sda1

Normally the boot process does something like

  mdadm -As

which, I think, doesn't leave 'inactive' arrays. Do you know what would have started the array in this case?
https://bugzilla.novell.com/show_bug.cgi?id=798275#c7

--- Comment #7 from Thomas Fehr <fehr@suse.com> 2013-02-26 16:14:24 UTC ---
Since 12.3, RAIDs are set up by default when the installation environment (inst-sys) comes up. So 12.3 differs from earlier releases, where RAIDs were only activated after YaST called mdadm. I would suspect some udev magic setting these RAIDs up, and of course I would prefer it if YaST were still able to activate and deactivate the RAID devices as before (this would also bring back traditional names like /dev/md0, /dev/md1, /dev/md2 instead of /dev/md127, ...). Any idea whether this could be done?

In addition, I would assume that md0, md1 and so on are stable (md0 is always the same RAID after every reboot), while /dev/md127 can be a different RAID after every boot.

See the following /proc/mdstat content after two reboots with a disk removed in between. Depending on whether a RAID sees all its partitions, it is either in active state, e.g.:

f134:~ # cat /proc/mdstat
Personalities : [raid10]
md125 : active (auto-read-only) raid10 sdb1[0] sdd1[2] sde1[3]
      99904 blocks super 1.0 32K chunks 2 near-copies [4/3] [U_UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active (auto-read-only) raid10 sdb3[0] sdd3[2] sde3[3]
      12279744 blocks super 1.0 32K chunks 2 near-copies [4/3] [U_UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active (auto-read-only) raid10 sdb2[0] sdd2[2] sde2[3]
      199552 blocks super 1.0 32K chunks 2 near-copies [4/3] [U_UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

or, if I remove /dev/sdc from the machine and reboot, the RAIDs are also set up (meaning the RAID partitions are busy) but apparently in inactive state:

f134:~ # cat /proc/mdstat
Personalities :
md125 : inactive sde3[3](S) sdb3[0](S) sdd3[2](S)
      18419652 blocks super 1.0

md126 : inactive sde1[3](S) sdb1[0](S) sdd1[2](S)
      149892 blocks super 1.0

md127 : inactive sde2[3](S) sdb2[0](S) sdd2[2](S)
      299364 blocks super 1.0

unused devices: <none>
https://bugzilla.novell.com/show_bug.cgi?id=798275#c8

Jan Engelhardt <jengelh@inai.de> changed:
  CC: added fcrozat@suse.com

--- Comment #8 from Jan Engelhardt <jengelh@inai.de> 2013-02-26 18:13:46 CET ---
> I would suspect some udev magic

Damn right! In 12.3 (normal system, not the installer), I noticed that md components imported via iscsiadm were suddenly md-assembled as well. _That_ was annoying, especially if it is shared storage that one is not supposed to activate on more than one machine. Putting the blame on systemd/udev. Please pass it along :)
https://bugzilla.novell.com/show_bug.cgi?id=798275#c9

Robert Milasan <rmilasan@suse.com> changed:
  Status: NEW -> NEEDINFO
  InfoProvider: nfbrown@suse.com added

--- Comment #9 from Robert Milasan <rmilasan@suse.com> 2013-02-26 18:52:38 UTC ---
I'm not sure this is udev, to be honest; udev sets up what it gets from the kernel and works together with mdadm. Also, the assembly happens on a systemd system: testing this on 12.2 with sysvinit scripts, the devices are not assembled at all unless you set up the right script and initrd (this is only for information). Anyway, I am not sure what I can do from my side. Neil: any ideas?
https://bugzilla.novell.com/show_bug.cgi?id=798275#c10

--- Comment #10 from Robert Milasan <rmilasan@suse.com> 2013-02-26 19:22:31 UTC ---
The number of the RAID is based on the minor number of the RAID device, so if the kernel event says that the RAID has minor 127, then the device is md127. I think this usually happens when creating a RAID device this way:

# mdadm --create /dev/md/raid1 -e 1.2 --raid-devices=2 -l 1 /dev/sdb1 /dev/sdc1
# cat /sys/block/md127/uevent
MAJOR=9
MINOR=127
DEVNAME=md127
DEVTYPE=disk

but when creating the RAID like this:

# mdadm --create /dev/md0 -e 1.2 --raid-devices=2 -l 1 /dev/sdb1 /dev/sdc1
# cat /sys/block/md0/uevent
MAJOR=9
MINOR=0
DEVNAME=md0
DEVTYPE=disk

the actual minor will be 0, as we just named it. Or at least that is what I believe happens; from the udev point of view, I can't really do anything.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c11

--- Comment #11 from Jan Engelhardt <jengelh@inai.de> 2013-02-26 23:36:11 CET ---
In SLE 11, /lib/udev/rules.d/64-md-raid.rules has a note:

# handle potential components of arrays - IMSM arrays only.

but in openSUSE, linux_raid_member and other types are also considered. Additionally, only the SLE rules file gives a hint to what's required to turn it off:

echo -en "AUTO -all" >> /etc/mdadm.conf

I still have a bad feeling about autoassembly.
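[For reference, a minimal sketch of what that file could end up containing; the exact policy is an assumption here. "-all" disables incremental auto-assembly for every metadata type, and individual types can be re-enabled with "+" entries such as "+imsm".]

# /etc/mdadm.conf (sketch)
AUTO -all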
https://bugzilla.novell.com/show_bug.cgi?id=798275#c12

Neil Brown <nfbrown@suse.com> changed:
  Status: NEEDINFO -> NEW
  InfoProvider: nfbrown@suse.com removed

--- Comment #12 from Neil Brown <nfbrown@suse.com> 2013-02-26 23:24:17 UTC ---
Yes, autoassembly from udev running "mdadm -I", of course. Though I understand your "bad feeling", I think this is really what we want long-term, so we need to be able to deal with it.

Back to the original problem: mkfs fails. At that point, or possibly earlier, the installer should detect that the device is busy and do something about it. Detecting that it is busy is easy: open(O_EXCL). Finding out why is a little harder, but not by much. When md has grabbed the device, /sys/block/$DEVNAME/holders will contain an entry 'md*' - 'md127' in this case. If you just know the major/minor numbers, you can get this via /sys/dev/block/$major:$minor/holders/*.

So you could just find that and tell mdadm to '--stop' the array. But you probably want to report the array as owning the device in the first place. How does the installer currently enumerate md arrays? Presumably we can enhance whatever mechanism it uses to enumerate the inactive arrays too. I'd be happy to modify mdadm to report something more useful for inactive arrays if that is what the installer uses.
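[A minimal shell sketch of the detect-and-release sequence described above, assuming the target partition is /dev/sda1 and that the stray inactive array is its only holder; the holder name md127 and the 8:1 major:minor pair are illustrative.]

# anything holding the partition shows up here; an md holder prints e.g. "md127"
ls /sys/class/block/sda1/holders
# the same lookup when only the major:minor numbers are known
ls /sys/dev/block/8:1/holders
# release the partition by stopping the holding array
mdadm --stop /dev/md127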
https://bugzilla.novell.com/show_bug.cgi?id=798275#c13

Robert Milasan <rmilasan@suse.com> changed:
  Status: NEW -> NEEDINFO
  InfoProvider: nfbrown@suse.com added

--- Comment #13 from Robert Milasan <rmilasan@suse.com> 2013-02-27 08:16:51 UTC ---
Neil, what about changing the way the RAID is created when the user/app doesn't specify the block device? I mean, if the user/app creates a RAID device and doesn't specify /dev/md0 or /dev/md1, etc., then the block device name ends up being /dev/md127 and up. Should this start with 0 and count up?
https://bugzilla.novell.com/show_bug.cgi?id=798275#c14

Neil Brown <nfbrown@suse.com> changed:
  Status: NEEDINFO -> NEW
  InfoProvider: nfbrown@suse.com removed

--- Comment #14 from Neil Brown <nfbrown@suse.com> 2013-02-27 09:40:20 UTC ---
Hi Robert, I can't see how that is relevant to the current problem. It doesn't really matter which number is used. The issue (I thought) is that an array is keeping the device busy.

Numbers like 0, 1 and 2 are often explicitly assigned to arrays, so if we find some devices which don't have a number assigned, we shouldn't use a low number, as that might conflict with some later devices that we might find. So we choose a number that is unlikely to be the explicit name of an array: we start at 127 and count down.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c15

--- Comment #15 from Robert Milasan <rmilasan@suse.com> 2013-02-27 09:51:22 UTC ---
Sorry, it seems I missed the point, but yeah, there is nothing I can do about this, to be honest.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c16

--- Comment #16 from Thomas Fehr <fehr@suse.com> 2013-02-27 12:11:11 UTC ---
The problem with the busy RAID set aside (we certainly can fix that, now that we know about the different behavior): so far I could not really believe that the behavior is intentional and assumed it was a bug or an unwanted side effect. But so be it.

Just some more remarks about the reason I consider the new behavior suboptimal:
1) Most users will be confused if their RAID that was named /dev/md0 for ages is now named /dev/md127.
2) All updates that have the former names (/dev/md0, /dev/md1) in fstab will of course fail.
3) If you attach a disk containing RAID signatures (maybe temporarily) to a system (e.g. by iSCSI or USB), most people will be surprised to find automatically created (active or inactive) RAID devices in the system.
4) If you detach the disk again, you have RAID devices in /proc/partitions and /proc/mdstat that use block devices that no longer exist in /proc/partitions.

To illustrate points 3) and 4), I did some quick tests on my 12.3 RC2 test install:

# first, all is pretty standard
f134:~ # tail /proc/partitions
   8    35    6139898 sdc3
   8    48    6291456 sdd
   8    49      49976 sdd1
   8    50      99800 sdd2
   8    51    6139898 sdd3
   8    64    6291456 sde
   8    65      49976 sde1
   8    66      99800 sde2
   8    67    6139898 sde3
  11     0    4420608 sr0
f134:~ # cat /proc/mdstat
Personalities : [raid10] [raid1] [raid0]
unused devices: <none>

# now I attach a disk with RAIDs via iSCSI
f134:~ # iscsiadm .... --login
f134:~ # tail /proc/partitions
   8    67    6139898 sde3
  11     0    4420608 sr0
   8    80    1048576 sdf
   8    81     290816 sdf1
   8    82     755712 sdf2
   8    96    1048576 sdg
   8    97     290816 sdg1
   8    98     755712 sdg2
   9   127     290496 md127
   9   126    1510400 md126
f134:~ # cat /proc/mdstat
Personalities : [raid10] [raid1] [raid0]
md126 : active raid0 sdg2[1] sdf2[0]
      1510400 blocks super 1.2 512k chunks

md127 : active (auto-read-only) raid1 sdg1[1] sdf1[0]
      290496 blocks super 1.2 [2/2] [UU]

unused devices: <none>
# so now I unknowingly have two active RAID devices on my system

# now I detach the iSCSI disks again
f134:~ # iscsiadm .... --logout
f134:~ # tail /proc/partitions
   8    49      49976 sdd1
   8    50      99800 sdd2
   8    51    6139898 sdd3
   8    64    6291456 sde
   8    65      49976 sde1
   8    66      99800 sde2
   8    67    6139898 sde3
  11     0    4420608 sr0
   9   127     290496 md127
   9   126    1510400 md126
f134:~ # cat /proc/mdstat
Personalities : [raid10] [raid1] [raid0]
md126 : active raid0 sdg2[1] sdf2[0]
      1510400 blocks super 1.2 512k chunks

md127 : active (auto-read-only) raid1 sdg1[1] sdf1[0](F)
      290496 blocks super 1.2 [2/1] [_U]

unused devices: <none>
# so the disks are gone, but the RAID devices are still there
# and even claim to be active

# the disks sdf and sdg do not seem to be really gone, since if I attach the
# same iSCSI disk again it gets sdh and sdi as block devices
f134:~ # iscsiadm .... --login
f134:~ # tail /proc/partitions
   8    67    6139898 sde3
  11     0    4420608 sr0
   9   127     290496 md127
   9   126    1510400 md126
   8   112    1048576 sdh
   8   113     290816 sdh1
   8   114     755712 sdh2
   8   128    1048576 sdi
   8   129     290816 sdi1
   8   130     755712 sdi2
f134:~ # cat /proc/mdstat
Personalities : [raid10] [raid1] [raid0]
md126 : active raid0 sdg2[1] sdf2[0]
      1510400 blocks super 1.2 512k chunks

md127 : active (auto-read-only) raid1 sdg1[1] sdf1[0](F)
      290496 blocks super 1.2 [2/1] [_U]

unused devices: <none>
f134:~ #

So for me all these effects create additional confusion for not much gain.
https://bugzilla.novell.com/show_bug.cgi?id=798275

Robert Milasan <rmilasan@suse.com> changed:
  Status: NEW -> NEEDINFO
  InfoProvider: nfbrown@suse.com added
https://bugzilla.novell.com/show_bug.cgi?id=798275#c17

--- Comment #17 from Neil Brown <nfbrown@suse.com> 2013-02-27 22:58:56 UTC ---
(In reply to comment #16)
> Just some more remarks about the reason I consider the new behavior
> suboptimal:
> 1) most users will be confused if their raid that was named /dev/md0 for
> ages is now named /dev/md127

That should not happen. If it was created as "/dev/md0", then the name "0" will be stored in the metadata, and so "/dev/md0" will be used when the device is discovered and the array is assembled. However, if you plug the device into a different host, that host might have its own "/dev/md0" array, so mdadm will give this new one a different name. At install time the computer "looks" like a new, different machine because the hostname is different, so it will tend to use "/dev/md127" devices. If the install image has

  HOMEHOST <ignore>

in mdadm.conf, then mdadm will think all devices found belong to "this" host and will use the most appropriate name.
> 2) all updates that have former names (/dev/md0, /dev/md1) in fstab will of
> course fail

See 1) above.
> 3) if you attach a disk containing raid signatures (maybe temporarily) to a
> system (e.g. by iscsi, usb), most people will be surprised to find
> automatically created (active or inactive) raid devices in the system

No more than they might be surprised when they plug in a USB device to find that the filesystem on it has been mounted and a window pops up showing the contents of that filesystem.
> 4) if you detach the disk again, you have raid devices in /proc/partitions
> and /proc/mdstat that use block devices that no longer exist in
> /proc/partitions

That would be a bug. The udev rules file contains:

ACTION=="remove", ENV{ID_PATH}=="?*", RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
ACTION=="remove", ENV{ID_PATH}!="?*", RUN+="/sbin/mdadm -If $name"

so that when a device is removed, "mdadm -If $device" is called, which should remove it from any active array. I guess removing the last device will not actually shut down the array - it should, but doesn't yet.
> f134:~ # cat /proc/mdstat
> Personalities : [raid10] [raid1] [raid0]
> md126 : active raid0 sdg2[1] sdf2[0]
>       1510400 blocks super 1.2 512k chunks
>
> md127 : active (auto-read-only) raid1 sdg1[1] sdf1[0]
>       290496 blocks super 1.2 [2/2] [UU]
>
> unused devices: <none>
> # so now I unknowingly have two active RAID devices on my system

Exactly as expected. Note that the RAID1 array is "auto-read-only", so no resync/recovery will start until something actually writes to the device. So this is not really different from automatically mounting the filesystem.
> # now I detach the iSCSI disks again
> f134:~ # iscsiadm .... --logout
> f134:~ # tail /proc/partitions
>    8    49      49976 sdd1
>    8    50      99800 sdd2
>    8    51    6139898 sdd3
>    8    64    6291456 sde
>    8    65      49976 sde1
>    8    66      99800 sde2
>    8    67    6139898 sde3
>   11     0    4420608 sr0
>    9   127     290496 md127
>    9   126    1510400 md126
> f134:~ # cat /proc/mdstat
> Personalities : [raid10] [raid1] [raid0]
> md126 : active raid0 sdg2[1] sdf2[0]
>       1510400 blocks super 1.2 512k chunks
>
> md127 : active (auto-read-only) raid1 sdg1[1] sdf1[0](F)
>       290496 blocks super 1.2 [2/1] [_U]
>
> unused devices: <none>
> # so the disks are gone, but the RAID devices are still there
> # and even claim to be active

This is not intended, so it's a bug. Thanks for reporting it! 'sdf1' should have been removed from the array - I think I know why it wasn't. The "raid0" situation should simply involve stopping the array. That should be easy enough to manage. I wonder how I can signal udisks to unmount the device if it has been mounted ... I'll have to look into that.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c18

--- Comment #18 from Thomas Fehr <fehr@suse.com> 2013-02-28 11:15:13 UTC ---
(In reply to comment #17)
> (In reply to comment #16)
> > Just some more remarks about the reason I consider the new behavior
> > suboptimal:
> > 1) most users will be confused if their raid that was named /dev/md0 for
> > ages is now named /dev/md127
>
> That should not happen. If it was created as "/dev/md0", then the name "0"
> will be stored in the metadata, and so "/dev/md0" will be used when the
> device is discovered and the array is assembled. However, if you plug the
> device into a different host, that host might have its own "/dev/md0" array,
> so mdadm will give this new one a different name. At install time the
> computer "looks" like a new, different machine because the hostname is
> different, so it will tend to use "/dev/md127" devices. If the install image
> has
>   HOMEHOST <ignore>
> in mdadm.conf, then mdadm will think all devices found belong to "this" host
> and will use the most appropriate name.
Ok, I see. But this mechanism does not seem to be reliable: I have seen /dev/md127 in various cases during installation in spite of the RAID having been created as /dev/md0, probably because the network environment differs between installations. So maybe adding an mdadm.conf with "HOMEHOST <ignore>" to the installation system would be useful.
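[A minimal sketch of such an installation-system mdadm.conf; whether inst-sys reads the file from the standard /etc/mdadm.conf location at that point is an assumption.]

# /etc/mdadm.conf in the installation image (sketch)
# treat any array that is found as belonging to this host, so the name
# recorded in its metadata (md0, md1, ...) is preferred over md127 and below
HOMEHOST <ignore>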
> No more than they might be surprised when they plug in a USB device to find
> that the filesystem on it has been mounted and a window pops up showing the
> contents of that filesystem.

But I have never seen automatic mounting of filesystems happen in the installation environment or on server setups. Automatic mounting happens with frameworks like KDE or GNOME in a typical desktop environment, not at a very low level like udev, and there are hardly any server setups that run a desktop environment like KDE or GNOME. In addition, I would assume KDE and GNOME have a user interface that lets a normal user enable or disable automounting of filesystems, while deactivating RAID auto-activation requires editing udev rules, which is something the average Linux admin is not aware of.
> This is not intended, so it's a bug. Thanks for reporting it! 'sdf1' should
> have been removed from the array - I think I know why it wasn't. The "raid0"
> situation should simply involve stopping the array. That should be easy
> enough to manage.

Ok, thanks for taking care of that. I have now changed YaST's RAID handling so that it should work with RAID auto-activation, so the bug should be fixed. Just one question: does the same behavior with RAID auto-activation also exist in SLES11 SP3, i.e. do I need to backport my changes to the SLES11 SP3 code base?
https://bugzilla.novell.com/show_bug.cgi?id=798275#c19

--- Comment #19 from Bernhard Wiedemann <bwiedemann@suse.com> 2013-02-28 17:00:18 CET ---
This is an autogenerated message for OBS integration: this bug (798275) was mentioned in
https://build.opensuse.org/request/show/156846 Factory / libstorage
https://bugzilla.novell.com/show_bug.cgi?id=798275#c20

--- Comment #20 from Bernhard Wiedemann <bwiedemann@suse.com> 2013-02-28 18:00:08 CET ---
This is an autogenerated message for OBS integration: this bug (798275) was mentioned in
https://build.opensuse.org/request/show/156862 Factory / yast2-storage
https://bugzilla.novell.com/show_bug.cgi?id=798275#c21

--- Comment #21 from Jan Engelhardt <jengelh@inai.de> 2013-03-01 12:50:14 CET ---
Neil, can we have something in mdadm(8) to show partially-assembled arrays as well? I think it is inconsistent to have them in /proc/mdstat when `mdadm -D /dev/mdN` says the array is not there.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c23

Neil Brown <nfbrown@suse.com> changed:
  Status: NEEDINFO -> NEW
  InfoProvider: nfbrown@suse.com removed

--- Comment #23 from Neil Brown <nfbrown@suse.com> 2013-04-29 02:11:59 UTC ---
(In reply to comment #18)
> I have now changed YaST's RAID handling so that it should work with RAID
> auto-activation, so the bug should be fixed.
> Just one question: does the same behavior with RAID auto-activation also
> exist in SLES11 SP3, i.e. do I need to backport my changes to the SLES11 SP3
> code base?
I think SLES11 could have the same behaviour, so if the change isn't too intrusive it might be good to backport it.

The request in comment #21 has been taken upstream. The suggested functionality should appear in a future upstream release and will then filter into openSUSE.
https://bugzilla.novell.com/show_bug.cgi?id=798275#c24

Thomas Fehr <fehr@suse.com> changed:
  Status: NEW -> RESOLVED
  Resolution: FIXED

--- Comment #24 from Thomas Fehr <fehr@suse.com> 2013-04-29 09:20:32 UTC ---
The backport of this fix is already in the SLES11 SP3 code base, so we should be fine there as well as in 12.3 and Factory.