http://bugzilla.suse.com/show_bug.cgi?id=1060551
http://bugzilla.suse.com/show_bug.cgi?id=1060551#c2
Andrei Borzenkov changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |systemd-maintainers@suse.de
--- Comment #2 from Andrei Borzenkov ---
Adding the systemd guys, as this is not a YaST issue *except* for the very first
boot, where the initrd inherits the wrong symlink from the installation system.
I guess that part deserves a separate bug report (unless there is already one).
I was able to set up dmraid in a VM (it is capable of creating isw metadata) and
reproduced this issue. I am attaching the journalctl output and a udevadm db dump
from a failed boot in emergency mode. I think I now understand what happens.
As udev scans devices, it finds the same filesystem on each individual partition
and on the dmraid device:
P: /devices/pci0000:00/0000:00:03.0/virtio1/host5/target5:0:0/5:0:0:0/block/sda/sda3
...
S: disk/by-uuid/2a92d892-b6db-4e24-a758-f3d0a0bd7005
...
E: ID_FS_TYPE=btrfs
E: ID_FS_USAGE=filesystem
...
E: TAGS=:systemd:
P: /devices/pci0000:00/0000:00:03.0/virtio1/host5/target5:0:1/5:0:1:0/block/sdb/sdb3
...
S: disk/by-uuid/2a92d892-b6db-4e24-a758-f3d0a0bd7005
...
E: ID_FS_TYPE=btrfs
E: ID_FS_USAGE=filesystem
...
E: TAGS=:systemd:
P: /devices/virtual/block/dm-3
...
S: disk/by-uuid/2a92d892-b6db-4e24-a758-f3d0a0bd7005
...
E: ID_FS_TYPE=btrfs
E: ID_FS_USAGE=filesystem
...
E: TAGS=:systemd:
So we have *THREE* different devices, each defining the same UUID link, and none
of them setting SYSTEMD_READY=0. Thus *any* of those devices satisfies systemd's
requirement for /dev/disk/by-uuid/2a92d892-b6db-4e24-a758-f3d0a0bd7005; as soon
as *any* of them is detected by udev, systemd starts the mount units waiting for
that UUID. Which explains
Oct 03 20:23:48 linux-3l98 mount[958]: mount: /dev/sda3 is already mounted or /var/lib/pgsql busy
Whether it is sda3 or sdb3 is random (a race condition), and swap and /boot (on
the first partition) failed just as well for the same reason.
Of course, by the time we get a chance to log into the system and examine the
current state, all devices have already been processed, so the last one (the
dmraid device) has already overwritten the links, which now appear correct.
Bummer.
Fixing this properly requires skipping the probing of partitions on dmraid disk
members:
N: sda
E: ID_FS_TYPE=isw_raid_member
E: ID_FS_USAGE=raid
We already import the parent's ID_* properties in 60-persistent-storage.rules.
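A rule change could look something like the sketch below. This is only my guess at
the shape, not a tested rule: the `*_raid_member` glob and the label name are
assumptions, and the rule would have to run after the parent import but *before*
the partition's own blkid probe, or ID_FS_TYPE gets overwritten with btrfs again.

```
# Hypothetical sketch for 60-persistent-storage.rules (untested):
# pull in the parent disk's ID_FS_TYPE; if the disk is a RAID member,
# mark the partition as not ready for systemd and skip further probing.
SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_FS_TYPE"
SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_FS_TYPE}=="*_raid_member", ENV{SYSTEMD_READY}="0", GOTO="persistent_storage_end"
```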
Is there any situation where we *want* to process an individual partition even
though it is known to be part of a dmraid set?
Oh, and BTW, I expect mdraid to have the same issue with external metadata /
partitioned RAID as well ...
--
You are receiving this mail because:
You are on the CC list for the bug.