Bug ID 1060654
Summary [lvmlockd] "lvm_global" lockspace always fails to close
Classification openSUSE
Product openSUSE Tumbleweed
Version Current
Hardware Other
OS Other
Status NEW
Severity Normal
Priority P5 - None
Component High Availability
Assignee ha-bugs@suse.de
Reporter zren@suse.com
QA Contact qa-bugs@suse.de
Found By ---
Blocker ---

1. problem

"lvm_global" lockspace always stays on system even though all shared VGs have
been deactivated and lock stopped.

2. reproduce

 - assume you already have an existing shared VG - vgtest1
 - reboot
 - run # lvmlockd -p /run/lvmlockd.pid -A 1 -g dlm
 - run # dlm_tool ls

===
# dlm_tool ls
dlm lockspaces
name          lvm_global
id            0x12aabd2d
flags         0x00000000 
change        member 1 joined 0 remove 1 failed 0 seq 3,3
members       172204564
===

 - # dlm_tool leave lvm_global
Leaving lockspace "lvm_global"
## hangs forever
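The steps above can be collected into one script. This is only a sketch: it assumes a dlm cluster node with the shared VG vgtest1 already set up, and it does nothing beyond printing a note when the dlm tooling is not installed.

```shell
#!/bin/sh
# Sketch of the reproduction steps from section 2; assumes a cluster
# node with dlm configured and an existing shared VG (vgtest1 here).
have_tools=yes
for tool in lvmlockd dlm_tool; do
    command -v "$tool" >/dev/null 2>&1 || have_tools=no
done

if [ "$have_tools" = yes ]; then
    # Start lvmlockd with dlm as its lock manager, as in the report.
    lvmlockd -p /run/lvmlockd.pid -A 1 -g dlm

    # lvm_global shows up even though no shared VG was activated.
    dlm_tool ls

    # This is the step that hangs forever.
    dlm_tool leave lvm_global
else
    echo "dlm tooling not found; run this on a cluster node"
fi
```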

3. info

dlm_controld log for # dlm_tool leave lvm_global (the earlier lockspace join
is included for context):
===
1722 kernel: add@ lvm_global
1722 uevent: online@/kernel/dlm/lvm_global
1722 kernel: online@ lvm_global
1722 lvm_global cpg_join dlm:ls:lvm_global ...
1722 dlm:ls:lvm_global conf 2 1 0 memb 172204564 172204568 join 172204568 left
1722 lvm_global add_change cg 1 joined nodeid 172204568
1722 lvm_global add_change cg 1 we joined
1722 lvm_global add_change cg 1 counts member 2 joined 1 remove 0 failed 0
1722 lvm_global check_ringid cluster 60 cpg 0:0
1722 dlm:ls:lvm_global ring 172204564:60 2 memb 172204564 172204568
1722 lvm_global check_fencing disabled
1722 lvm_global send_start 172204568:1 counts 0 2 1 0 0
1722 lvm_global wait_messages cg 1 need 2 of 2
1722 lvm_global receive_start 172204568:1 len 80
1722 lvm_global match_change 172204568:1 matches cg 1
1722 lvm_global wait_messages cg 1 need 1 of 2
1722 lvm_global receive_start 172204564:2 len 80
1722 lvm_global match_change 172204564:2 matches cg 1
1722 lvm_global wait_messages cg 1 got all 2
1722 lvm_global start_kernel cg 1 member_count 2
1722 write "313179437" to "/sys/kernel/dlm/lvm_global/id"
1722 set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/lvm_global/nodes/172204564"
1722 set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/lvm_global/nodes/172204568"
1722 write "1" to "/sys/kernel/dlm/lvm_global/control"
1722 write "0" to "/sys/kernel/dlm/lvm_global/event_done"
1722 lvm_global prepare_plocks
1722 lvm_global set_plock_data_node from 0 to 172204564
1722 lvm_global save_plocks start
1722 lvm_global receive_plocks_done 172204564:2 flags 2 plocks_data 0 need 1 save 1
1722 lvm_global match_change 172204564:2 matches cg 1
1722 lvm_global process_saved_plocks begin
1722 lvm_global process_saved_plocks 0 done
1722 lvm_global receive_plocks_done 172204564:2 plocks_data_count 0
1722 uevent: add@/devices/virtual/misc/dlm_lvm_global
3997 uevent: remove@/devices/virtual/misc/dlm_lvm_global
3997 uevent: offline@/kernel/dlm/lvm_global
3997 kernel: offline@ lvm_global
3997 dlm:ls:lvm_global conf 1 0 1 memb 172204564 join left 172204568
3997 lvm_global confchg for our leave
3997 lvm_global stop_kernel cg 0
3997 write "0" to "/sys/kernel/dlm/lvm_global/control"
3997 dir_member 172204568
3997 dir_member 172204564
3997 set_members rmdir "/sys/kernel/config/dlm/cluster/spaces/lvm_global/nodes/172204568"
3997 set_members rmdir "/sys/kernel/config/dlm/cluster/spaces/lvm_global/nodes/172204564"
3997 set_members lockspace rmdir "/sys/kernel/config/dlm/cluster/spaces/lvm_global"
3997 write "0" to "/sys/kernel/dlm/lvm_global/event_done"
3997 lvm_global purged 0 plocks for 172204568
===
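The log shows dlm_controld tearing down the configfs entries for the lockspace before the leave hangs. A quick, read-only way to see whether the kernel side ever completed the leave is to check the sysfs and configfs directories named in the log. A minimal sketch (paths taken from the log above; on a machine without the dlm module loaded it prints nothing):

```shell
#!/bin/sh
# Check for leftover lvm_global state in the paths from the log above.
# Purely read-only; prints nothing when the dlm module is not loaded.
for p in /sys/kernel/dlm/lvm_global \
         /sys/kernel/config/dlm/cluster/spaces/lvm_global; do
    if [ -d "$p" ]; then
        echo "still present: $p"
    fi
done
```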

