[Bug 926405] New: unable to lvcreate _new_ LVs on existing VG with existing LVs and plenty of room. error: "lseek # failed: Invalid argument"
http://bugzilla.opensuse.org/show_bug.cgi?id=926405

            Bug ID: 926405
           Summary: unable to lvcreate _new_ LVs on existing VG with
                    existing LVs and plenty of room. error: "lseek #
                    failed: Invalid argument"
    Classification: openSUSE
           Product: openSUSE Distribution
           Version: 13.2
          Hardware: x86-64
                OS: openSUSE 13.2
            Status: NEW
          Severity: Major
          Priority: P5 - None
         Component: Basesystem
          Assignee: bnc-team-screening@forge.provo.novell.com
          Reporter: lyndat3@your-mail.com
        QA Contact: qa-bugs@suse.de
          Found By: ---
           Blocker: ---

I'm unable to create a new LV on an existing VG with LVs. I'm on

  uname -rm
    3.19.3-1.gf10e7fc-default x86_64

  lvm version
    LVM version:     2.02.98(2) (2012-10-15)
    Library version: 1.03.01 (2011-10-15)
    Driver version:  4.29.0

  rpm -qa | grep mapper
    device-mapper-1.02.78-20.2.2.x86_64

I have a VG with lots of available space:

  vgs
    VG   #PV #LV #SN Attr   VSize   VFree
    VG0    1   7   0 wz--n- 930.19g 855.19g

It already has LVs on it:

  lvs
    LV      VG  Attr      LSize  Pool Origin Data% Move Log Cpy%Sync Convert
    LV_ROOT VG0 -wi-ao--- 20.00g
    LV_HOME VG0 -wi-ao--- 40.00g
    LV_SWAP VG0 -wi-ao---  2.00g
    ...

Time's passed. Now, when I attempt to create a new LV on the VG, it fails with

  lvcreate -L 30G -n LV_TEST /dev/VG0
    /dev/md1: lseek 18446744071795900416 failed: Invalid argument

and in syslog I see

  Apr 07 21:34:21 xen01 kernel: md1: unknown partition table

The VG's on a RAID1 array:

  pvs
    PV       VG  Fmt  Attr PSize   PFree
    /dev/md1 VG0 lvm2 a--  930.19g 855.19g

  cat /proc/mdstat
    ...
    md1 : active raid1 sdg4[0] sdh4[1]
          975404544 blocks super 1.0 [2/2] [UU]
          bitmap: 0/8 pages [0KB], 65536KB chunk
    ...

consisting of two Linux-RAID partitions on a gpt disk:

  sgdisk -p /dev/sdg
    Disk /dev/sdg: 1953525168 sectors, 931.5 GiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): ...
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 1953525134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2015 sectors (1007.5 KiB)

    Number  Start (sector)    End (sector)  Size        Code  Name
       1            2048            4095   1024.0 KiB  EF02  BIOS Boot Partition
       2            4096          618495   300.0 MiB   EF00  EFI System Partition
       3          618496         2715646   1024.0 MiB  FD00  RAID for /boot
       4         2715648      1953525134   930.2 GiB   FD00  RAID for LVMs

  sgdisk -p /dev/sdh
    Disk /dev/sdh: 1953525168 sectors, 931.5 GiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): ...
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 1953525134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2015 sectors (1007.5 KiB)

    Number  Start (sector)    End (sector)  Size        Code  Name
       1            2048            4095   1024.0 KiB  EF02  BIOS Boot Partition
       2            4096          618495   300.0 MiB   EF00  EFI System Partition
       3          618496         2715646   1024.0 MiB  FD00  RAID for /boot
       4         2715648      1953525134   930.2 GiB   FD00  RAID for LVMs

The RAID's healthy, the system boots, and I can see/use all the existing LVs on the VG/PV. I just can't create new LVs anymore.

--
You are receiving this mail because: You are on the CC list for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
lynda t
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
lynda t
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
Liuhua Wang
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #2 from lynda t
I cannot reproduce locally.
Is the VG you are trying to lvcreate on partitioned exactly like mine? I.e. (a) on a RAID array assembled from the non-first partitions of gpt-partitioned 'Linux RAID' (fd00) disks, and (b) on the same VG as the "/" LV?
Please attach the output of:
systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation-early.service lvm2-activation.service
systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service lvm2-activation-early.service lvm2-activation.service

  lvm2-lvmetad.socket - LVM2 metadata daemon socket
     Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled)
     Active: active (running) since Wed 2015-04-08 14:16:57 PDT; 6h ago
       Docs: man:lvmetad(8)
     Listen: /run/lvm/lvmetad.socket (Stream)

  lvm2-lvmetad.service - LVM2 metadata daemon
     Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
     Active: active (running) since Wed 2015-04-08 14:17:00 PDT; 6h ago
       Docs: man:lvmetad(8)
   Main PID: 1076 (lvmetad)
     CGroup: /system.slice/lvm2-lvmetad.service
             └─1076 /sbin/lvmetad

  lvm2-activation-early.service
     Loaded: not-found (Reason: No such file or directory)
     Active: inactive (dead)

  lvm2-activation.service
     Loaded: not-found (Reason: No such file or directory)
     Active: inactive (dead)

ps ax | grep lvm
  1076 ?  Ss  0:00 /sbin/lvmetad

grep use_lvmetad /etc/lvm/lvm.conf
  use_lvmetad = 1
Also the output of the lsblk command and the content of the /etc/fstab file.

- lsblk
lsblk
  NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
  sdg                 8:96   0 931.5G  0 disk
  ├─sdg1              8:97   0     1M  0 part
  ├─sdg2              8:98   0   300M  0 part  /boot/efi
  ├─sdg3              8:99   0  1024M  0 part
  │ └─md0             9:0    0  1024M  0 raid1 /boot
  └─sdg4              8:100  0 930.2G  0 part
    └─md1             9:1    0 930.2G  0 raid1
      ├─VG0-LV_SWAP 254:0    0     2G  0 lvm   [SWAP]
      ├─VG0-LV_ROOT 254:1    0    20G  0 lvm   /
      ├─VG0-LV_HOME 254:6    0    10G  0 lvm   /home
  sdh                 8:112  0 931.5G  0 disk
  ├─sdh1              8:113  0     1M  0 part
  ├─sdh2              8:114  0   300M  0 part
  ├─sdh3              8:115  0  1024M  0 part
  │ └─md0             9:0    0  1024M  0 raid1 /boot
  └─sdh4              8:116  0 930.2G  0 part
    └─md1             9:1    0 930.2G  0 raid1
      ├─VG0-LV_SWAP 254:0    0     2G  0 lvm   [SWAP]
      ├─VG0-LV_ROOT 254:1    0    20G  0 lvm   /
      ├─VG0-LV_HOME 254:6    0    10G  0 lvm   /home
- lvcreate -vvvv
You can see that here https://www.redhat.com/archives/linux-lvm/2015-April/msg00010.html
- udevadm info --export-db
Do you really need all of that? Or just for relevant devices, e.g.

  for d in sdg sdg4 md1; do
    udevadm info /dev/$d
  done

What specific devices' udevadm info are you looking for?

LT
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #3 from Lidong Zhong
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #4 from lynda t
Apparently the problem is here:
  #format_text/format-text.c:630   Writing VG0 metadata to /dev/md1 at 20480 len 3166
  #device/dev-io.c:90              /dev/md1: lseek 18446744071795900416 failed: Invalid argument
The offset suddenly changes from 20480 to a very large number.
At https://www.redhat.com/archives/linux-lvm/2015-April/msg00009.html a Red Hat developer comments:

  18446744071795900416 - this is an interesting number; actually it is 2**64 - 1825 * 2**20. Looks like some integer overflow.
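The arithmetic is easy to check: the failing offset is exactly the 64-bit two's-complement encoding of a negative value, -1825 MiB, which is why lseek rejects it with "Invalid argument" (a negative file offset). A quick sketch verifying the quoted analysis:

```python
# Verify the Red Hat developer's observation about the bogus lseek offset.
bad_offset = 18446744071795900416

# Matches 2**64 - 1825 * 2**20, as stated in the linux-lvm post.
assert bad_offset == 2**64 - 1825 * 2**20

# Reinterpreted as a signed 64-bit integer, the offset is negative:
signed = bad_offset - 2**64 if bad_offset >= 2**63 else bad_offset
print(signed)                  # -1913651200
print(signed // 2**20, "MiB")  # -1825 MiB
```

So lvm2 is passing what was computed as a negative offset, reinterpreted as an unsigned 64-bit number, straight to lseek.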
Did you do an lvm2 upgrade before? If yes, was there any error during the upgrade?
Sorry, I don't understand. What upgrade are you referring to? A zypper update from the release ISO version of lvm2 to the current updated version?
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #5 from Lidong Zhong
At
https://www.redhat.com/archives/linux-lvm/2015-April/msg00009.html
a Red Hat developer comments:
18446744071795900416 - this is an interesting number, actually it is 2**64 - 1825 * 2**20. Looks like some integer overflow.
Yes
Did you do an lvm2 upgrade before? If yes, was there any error during the upgrade?
Sorry, I don't understand.
What upgrade are you referring to? A zypper update from the release ISO version of lvm2 to the current updated version?
What I am worried about here is that the device-mapper library is too old for lvm2. What's the value of pvmetadatacopies in your lvm.conf? Or did you use this parameter when creating the PV?
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #6 from lynda t
What I am worried about here is that the device-mapper library is too old for lvm2.
Here, I currently have

  rpm -qa | egrep -i "mapper|lvm2"
    device-mapper-1.02.78-20.2.2.x86_64
    lvm2-2.02.98-43.21.1.x86_64
What's the value of pvmetadatacopies in your lvm.conf? Or did you use this parameter when creating the PV?
grep pvmetadatacopies /etc/lvm/lvm.conf
  # pvmetadatacopies = 2
  # on-disk metadata (pvmetadatacopies = 0). Or this can be in

but, when I created this PV/VG, I used

  pvcreate -ff --metadatacopies 2 /dev/md1
    Physical volume "/dev/md1" successfully created

  vgcreate -v -s 32M VG0 /dev/md1
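Since the PV was created with --metadatacopies 2, lvm2 keeps a second metadata area near the end of the device. One plausible mechanism for the bad offset, offered here purely as an illustrative assumption (the thread never confirms it, and the 1825 MiB distance is taken from the quoted analysis, not from lvm2 source), is an unsigned "device end minus distance" subtraction that underflows when the device size it uses is wrong, e.g. zero or stale as a misbehaving lvmetad might report:

```python
U64 = 2**64  # modulus for C-style unsigned 64-bit wraparound

def mda2_offset(dev_size_bytes: int, distance_from_end: int) -> int:
    """Offset of a metadata area placed distance_from_end bytes before
    the end of the device, computed with u64 arithmetic (hypothetical
    model of the calculation, not actual lvm2 code)."""
    return (dev_size_bytes - distance_from_end) % U64

# Hypothetical distance matching the figure from the bug: 1825 * 2**20 bytes.
DISTANCE = 1825 * 2**20

# With a sane device size (~930 GiB) the offset is an ordinary positive number...
ok = mda2_offset(930 * 2**30, DISTANCE)
assert 0 < ok < 930 * 2**30

# ...but with a zero/stale size the subtraction wraps around and reproduces
# the exact offset from the error message.
bad = mda2_offset(0, DISTANCE)
assert bad == 18446744071795900416
```

This would also be consistent with the later finding that disabling lvmetad (use_lvmetad=0) works around the problem.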
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #7 from Lidong Zhong
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #8 from lynda t
Thank you. I think we have the reason now. There is indeed a data overflow. We will make an update later.
Could you use the default pvmetadatacopies value as a workaround? After the upgrade, you can change your vg back.
I don't think so. Can that be done for an *existing* PV that already has a VG with LVs on it? IIUC it's not possible to change the PV metadata/data-alignment once it's been created. I'd have to destroy then recreate the PV/VG/LVs. Correct?

LT
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #9 from Lidong Zhong
(In reply to Lidong Zhong from comment #7)
Thank you. I think we have the reason now. There is indeed a data overflow. We will make an update later.
Could you use the default pvmetadatacopies value as a workaround? After the upgrade, you can change your vg back.
I don't think so.
Can that be done for an *existing* PV that already has a VG with LVs on it?
IIUC it's not possible to change the PV metadata/data-alignment once it's been created. I'd have to destroy then recreate the PV/VG/LVs.
Correct?
I am afraid you are right. Running 'vgchange --metadatacopies 2 volgroupname' does not seem able to make the single PV store 2 copies of the metadata. Thanks.
LT
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #10 from Liuhua Wang
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #12 from lynda t
Upstream commit 34d207d9b37edc2499dfff2c4809fecf72926416 seems to resolve this issue.
As a workaround, please modify /etc/lvm/lvm.conf: use_lvmetad=1 => use_lvmetad=0
grep ^use_lvmetad /etc/lvm/lvm.conf
  use_lvmetad = 0

systemctl list-unit-files | grep lvm
  lvm2-lvmetad.service   disabled
  lvm2-monitor.service   disabled
  lvm_local.service      disabled
  lvm2-lvmetad.socket    enabled
  ...

reboot

lvcreate -L 10G -n TEST /dev/VG0

lvs | grep TEST
  TEST VG0 -wi-a---- 10.00g

Seems to work.
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
Liuhua Wang
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #14 from lynda t
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #15 from Liuhua Wang
I've changed none of the defaults, settings or services.
Is the WARNING a problem that needs to be fixed? What's the reason for the complaint in the first place?
For an HVM DomU what are SUPPOSED to be the lvm*.service states, and the value of use_lvmetad?
The warning is harmless. It appears because lvm2-lvmetad.service is running but use_lvmetad=0. Either of:

  - set use_lvmetad=1
  - systemctl stop lvm2-lvmetad.service

will eliminate the warning. It should have been fixed in the other bug: https://bugzilla.opensuse.org/show_bug.cgi?id=914415

How did you update the lvm2 package? rpm -U ?
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #16 from lynda t
The warning is harmless. It appears because lvm2-lvmetad.service is running but use_lvmetad=0. Either setting use_lvmetad=1 or running 'systemctl stop lvm2-lvmetad.service' will eliminate the warning.
Ok, but what's recommended for an out-of-the-box install? It's not typical to be installing LVs in DomUs anyway. Even a warning shouldn't be the first thing you see in a clean install.
It should have been fixed in the other bug: https://bugzilla.opensuse.org/show_bug.cgi?id=914415
How did you update the lvm2 package? rpm -U ?
This one is a clean DomU install. Nothing's been done other than the initial install, then a 'zypper up' with the default, as-installed repos.

LT
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #17 from Liuhua Wang
(In reply to Liuhua Wang from comment #15)
The warning is harmless. It appears because lvm2-lvmetad.service is running but use_lvmetad=0. Either setting use_lvmetad=1 or running 'systemctl stop lvm2-lvmetad.service' will eliminate the warning.
Ok, but what's recommended for an out-of-the-box install?
It's not typical to be installing LVs in DomUs anyway.
Even a warning shouldn't be the first thing you see in a clean install.
It should have been fixed in the other bug: https://bugzilla.opensuse.org/show_bug.cgi?id=914415
How did you update the lvm2 package? rpm -U ?
This one is a clean DomU install. Nothing's been done other than the initial install, then a 'zypper up' with default, as installed repos.
So the warning appeared in the initial install instead of the update, right? I don't know what your repo/lvm2 version at initial install was. If the initial install used an lvm2 that includes the fix for bsc#914415, it should show no such warning. Can you confirm from the changelog?
http://bugzilla.opensuse.org/show_bug.cgi?id=926405
--- Comment #18 from lynda t
So the warning appeared in the initial install instead of the update, right? I don't know what your repo/lvm2 version at initial install was. If the initial install used an lvm2 that includes the fix for bsc#914415, it should show no such warning. Can you confirm from the changelog?
This is testing the most recent DomU, which HAS been upgraded. With

  rpm -qa | grep lvm
    lvm2-2.02.98-43.21.1.x86_64

  grep "use_lvmetad =" /etc/lvm/lvm.conf
    use_lvmetad = 0

  systemctl list-unit-files | grep lvm
    lvm2-lvmetad.service   enabled
    lvm2-monitor.service   disabled
    lvm2-lvmetad.socket    enabled

which is newer than the one referenced in

  # openSUSE-RU-2015:0263-1: An update that has two recommended fixes can now be installed.
    Category: recommended (low)
    Bug References: 894202,914415
    CVE References:
    Sources used: openSUSE 13.2 (src): lvm2-2.02.98-43.17.1

I still see

  grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    Found theme: /boot/grub2/themes/openSUSE/theme.txt
    Found linux image: /boot/vmlinuz-3.16.7-7-default
    Found initrd image: /boot/initrd-3.16.7-7-default
    WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
    No volume groups found
    done

Checking the changelog:

  rpm -q --changelog lvm2
    * Thu Mar 19 2015 lwang@suse.com
    - RAID calculation for sufficient allocatable space (bsc#923021)
      add: acdc731e-RAID-Fix-_sufficient_pes_free-calculation.patch

    * Thu Feb 05 2015 lwang@suse.com
    - LVM2 does not support unpartitioned DASD device which has special format
      in the first 2 tracks and will siliently discards LVM2 lable information
      written to it when pvcreate. (bsc#894202)
      Add: dab3ebce-devices-Do-not-support-unpartitioned-DASD.patch
    - Delete lvm2-lvmetad.service from %service_add_pre/post and
      %service_del_preun/postun. Delete lvm2-lvmetad.socket from
      %service_del_preun/postun to avoid lvmetad.service being started by
      'systemctl retry-start' when updating package. (bsc#914415)
    ...