[Bug 982329] New: lvmetad doesn't activate volumes
http://bugzilla.opensuse.org/show_bug.cgi?id=982329

        Bug ID: 982329
       Summary: lvmetad doesn't activate volumes
Classification: openSUSE
       Product: openSUSE Tumbleweed
       Version: Current
      Hardware: x86-64
            OS: Other
        Status: NEW
      Severity: Major
      Priority: P5 - None
     Component: Basesystem
      Assignee: bnc-team-screening@forge.provo.novell.com
      Reporter: manfred.h@gmx.net
    QA Contact: qa-bugs@suse.de
      Found By: ---
       Blocker: ---

This has some history. For some time, "use_lvmetad = 1" in /etc/lvm/lvm.conf used to work; with some update this got reset to "0" by default. While the system continued to work, it became apparent that running "dracut" (formerly "mkinitrd") now takes approximately 1 minute per installed kernel, and running "lvs" takes almost forever...

When I compared my Tumbleweed installation with my other Leap installations, I found that on Leap "lvmetad" is running, resulting in snappy responses from "lvs" and also pretty fast "dracut" runs.

OK, let's activate "lvmetad" in Tumbleweed. First I enabled and started "lvm2-lvmetad.socket", followed by stopping a potentially running "lvm2-lvmetad.service" (there wasn't any). I then set "use_lvmetad = 1" in /etc/lvm/lvm.conf, started and enabled "lvm2-lvmetad.service", and rebuilt the initrd. The following reboot then hung at "Waiting for /dev/mapper/rdisk00-swapdev1 to appear" and at a similar message for the corresponding LV for /var.

My setup uses /dev/md0 (MD RAID10) as a physical volume, providing space for volumes such as /var, /swapdev1 and /home. It is my understanding that at some point "/usr/lib/udev/rules.d/69-dm-lvm-metad.rules" should run "pvscan --cache" to provide the necessary udev events feeding into dm/udev to actually activate volumes, but apparently this doesn't happen. When I got stuck in the single-user/rescue phase after the boot, running "pvscan --cache" manually, followed by "udevadm trigger", made the missing VG and LVs available; but since /var is on an LV, the system is not actually usable at that point.

I then downgraded the "lvm2" package from "2.02.141-64.2" (current in TW) to "2.02.120-67.1" (Leap's most current version), repeated all the lvmetad-related configuration from above, rebuilt the initrd, rebooted and, voila, my system boots again, and running "dracut" is no longer boring (approx. 10 secs. per kernel now).

It looks to me like the current lvm2 package is not lvmetad capable.
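For reference, the sequence described above corresponds to roughly the following commands. This is a reconstruction from the prose, assuming the stock openSUSE unit names and that an uncommented "use_lvmetad = 0" line is present verbatim in /etc/lvm/lvm.conf (edit the file by hand otherwise):

    # enable socket activation for lvmetad and stop any already-running daemon
    systemctl enable --now lvm2-lvmetad.socket
    systemctl stop lvm2-lvmetad.service

    # switch LVM over to lvmetad and rebuild the initrds for all kernels
    sed -i 's/use_lvmetad = 0/use_lvmetad = 1/' /etc/lvm/lvm.conf
    systemctl enable --now lvm2-lvmetad.service
    dracut -f --regenerate-all

    # manual recovery from the rescue shell when the LVs fail to appear:
    # repopulate lvmetad, then replay udev events so the VG/LVs get activated
    pvscan --cache
    udevadm trigger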
--- Comment #10 from Manfred Hollstein
> Would you please tell me whether the following service is running or not?
>
>     systemctl status initrd-udevadm-cleanup-db.service

Yes, it runs during boot:

    # systemctl status initrd-udevadm-cleanup-db.service
    ● initrd-udevadm-cleanup-db.service - Cleanup udevd DB
       Loaded: loaded (/usr/lib/systemd/system/initrd-udevadm-cleanup-db.service; static; vendor preset: disabled)
       Active: inactive (dead)

    Jun 02 08:52:20 saturn systemd[1]: Starting Cleanup udevd DB...
    Jun 02 08:52:20 saturn systemd[1]: Started Cleanup udevd DB.
> Thank you for reporting, and sorry for the problems brought to you!

No problem, thanks for your hints regarding devices/filter vs. devices/global_filter - see next comment ;)
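When chasing activation problems like the one in this bug, the same kind of check can be extended to the lvmetad units and to the udev side; a hedged sketch using standard systemd/udev tools (unit names as used earlier in this report):

    # state of the lvmetad socket and daemon
    systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service

    # lvmetad messages from the current boot
    journalctl -b -u lvm2-lvmetad.service

    # replay block-device "add" events so 69-dm-lvm-metad.rules gets
    # another chance to run "pvscan --cache"
    udevadm trigger --action=add --subsystem-match=block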
--- Comment #11 from Manfred Hollstein
(In reply to Manfred Hollstein from comment #6)
> FWIW2: I just tried to reject /dev/loop devices from lvm's filter by adding
>     "r|/dev/loop.*|",
> to devices/filter in /etc/lvm/lvm.conf, set max_loop to 64, rebuilt
> everything required, rebooted and failed again. Even with the /dev/loop
> devices explicitly excluded in lvm.conf, they still appear to be scanned
> during boot, and 64 non-existent loop devices appear to be too many for
> the dm/udev logic in the latest lvm2 package to handle.
> I'm now back to max_loop set to 32 :(
(In reply to Bruno Friedmann)
> In the case of use_lvmetad = 0, the effective filter is "devices/filter";
> in the case of use_lvmetad = 1, the effective filter is
> "devices/global_filter". Please see the comment about this in lvm.conf:
>
>     # Since "filter" is often overridden from command line, it is not suitable
>     # for system-wide device filtering (udev rules, lvmetad). To hide devices
>     # from LVM-specific udev processing and/or from lvmetad, you need to set
>     # global_filter. The syntax is the same as for normal "filter"
>     # above. Devices that fail the global_filter are not even opened by LVM.
>
>     # global_filter = []
>
> So in your case you need to add this to your lvm.conf:
>
>     global_filter = ["r|/dev/loop.*|", "a/.*/"]
Yes, you're right, I should have read that before. To have a similar setup in case lvmetad gets disabled again somehow, I also added "r|/dev/loop.*|" to "devices/filter".

But, while we're at comments... The comment for "global/raid10_segtype_default" says this about the possible setting "mirror":

    # "mirror" - LVM will layer the 'mirror' and 'stripe' segment types. It
    #            will do this by creating a mirror on top of striped sub-LVs;
    #            effectively creating a RAID 0+1 array. This is suboptimal
    #            in terms of providing redundancy and performance. Changing to
    #            this setting is not advised.

To me this sounds like the default should not be "mirror", yet that is what it is actually set to. Shouldn't it be "raid10"? FWIW, I have now changed that setting to read "raid10".
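Taken together, the changes discussed in this comment amount to roughly the following lvm.conf fragment; a minimal sketch assuming the stock section layout, with both filters rejecting loop devices and accepting everything else as suggested above:

    devices {
        # effective when use_lvmetad = 0; often overridden from the
        # command line, so not suitable for system-wide filtering
        filter = ["r|/dev/loop.*|", "a/.*/"]

        # effective for lvmetad and the LVM udev rules when use_lvmetad = 1;
        # devices rejected here are never even opened by LVM
        global_filter = ["r|/dev/loop.*|", "a/.*/"]
    }

    global {
        # use a true raid10 segment type instead of a mirror layered
        # on top of striped sub-LVs (RAID 0+1)
        raid10_segtype_default = "raid10"
    }

The effective values should be verifiable with "lvm dumpconfig devices/global_filter" and "lvm dumpconfig global/raid10_segtype_default". With the raid10 default in place, an lvcreate invocation that asks for both stripes and mirrors without an explicit --type (e.g. "lvcreate -i 2 -m 1 -L 1G -n testlv vg") should come out as a raid10 LV rather than as the stacked 0+1 variant.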