Comment # 2 on bug 926405 from
(In reply to Liuhua Wang from comment #1)
> I cannot reproduce locally.

Is the VG you are running lvcreate against located on a disk/array partitioned
exactly like mine? I.e.

(a) on a RAID array assembled from the not-first partitions of gpt-partitioned,
'Linux RAID' (fd00) disks?
(b) on the same VG as the "/" LV?
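For the record, here is roughly how that layout can be inspected on your end (a sketch only; the device and VG names sdg, md1, VG0 are from my setup below, so substitute your own):

```shell
# Sketch only -- device/VG names (sdg, md1, VG0) are from my layout, not yours.
sgdisk -p /dev/sdg        # print the GPT table; sdg4 should show type FD00 ('Linux RAID')
mdadm --detail /dev/md1   # the RAID1 array assembled from the fourth partitions
lvs VG0                   # the LVs (including "/") carved out of that VG
```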

> Please attach the output of:
> 
> systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service
> lvm2-activation-early.service lvm2-activation.service.

systemctl status lvm2-lvmetad.socket lvm2-lvmetad.service
lvm2-activation-early.service lvm2-activation.service

    lvm2-lvmetad.socket - LVM2 metadata daemon socket
       Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; enabled)
       Active: active (running) since Wed 2015-04-08 14:16:57 PDT; 6h ago
         Docs: man:lvmetad(8)
       Listen: /run/lvm/lvmetad.socket (Stream)


    lvm2-lvmetad.service - LVM2 metadata daemon
       Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
       Active: active (running) since Wed 2015-04-08 14:17:00 PDT; 6h ago
         Docs: man:lvmetad(8)
     Main PID: 1076 (lvmetad)
       CGroup: /system.slice/lvm2-lvmetad.service
                └─1076 /sbin/lvmetad


    lvm2-activation-early.service
       Loaded: not-found (Reason: No such file or directory)
       Active: inactive (dead)


    lvm2-activation.service
       Loaded: not-found (Reason: No such file or directory)
       Active: inactive (dead)


    ps ax | grep lvm
         1076 ?        Ss     0:00 /sbin/lvmetad

    grep use_lvmetad /etc/lvm/lvm.conf
        use_lvmetad = 1

> Also the output of lsblk command and the content of /etc/fstab file.
>    - lsblk

lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
    sdg                            8:96   0 931.5G  0 disk   
    ├─sdg1                         8:97   0     1M  0 part
    ├─sdg2                         8:98   0   300M  0 part   /boot/efi
    ├─sdg3                         8:99   0  1024M  0 part
    │ └─md0                        9:0    0  1024M  0 raid1  /boot
    └─sdg4                         8:100  0 930.2G  0 part
      └─md1                        9:1    0 930.2G  0 raid1
        ├─VG0-LV_SWAP            254:0    0     2G  0 lvm    [SWAP]
        ├─VG0-LV_ROOT            254:1    0    20G  0 lvm    /
        └─VG0-LV_HOME            254:6    0    10G  0 lvm    /home
    sdh                            8:112  0 931.5G  0 disk
    ├─sdh1                         8:113  0     1M  0 part
    ├─sdh2                         8:114  0   300M  0 part
    ├─sdh3                         8:115  0  1024M  0 part
    │ └─md0                        9:0    0  1024M  0 raid1  /boot
    └─sdh4                         8:116  0 930.2G  0 part
      └─md1                        9:1    0 930.2G  0 raid1
        ├─VG0-LV_SWAP            254:0    0     2G  0 lvm    [SWAP]
        ├─VG0-LV_ROOT            254:1    0    20G  0 lvm    /
        └─VG0-LV_HOME            254:6    0    10G  0 lvm    /home

>    - lvcreate -vvvv

You can see that here:

  https://www.redhat.com/archives/linux-lvm/2015-April/msg00010.html

>    - udevadm info --export-db

Do you really need all of that? Or just for the relevant devices, e.g.

for d in sdg sdg4 md1
do
 udevadm info /dev/$d
done

What specific devices' udevadm info are you looking for?

LT

