http://bugzilla.novell.com/show_bug.cgi?id=525454
User pgnet.dev@gmail.com added comment
http://bugzilla.novell.com/show_bug.cgi?id=525454#c1
pgnet Dev changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
           Priority|P5 - None                   |P3 - Medium
                 CC|                            |pgnet.dev@gmail.com
--- Comment #1 from pgnet Dev 2009-08-06 11:02:12 MDT ---
seeing the same behavior.
in a Xen DomU,
uname -ri
2.6.27.29-6-xen x86_64
i've created a VG & LV,
vgdisplay
  --- Volume group ---
  VG Name               VG_TEST
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  22
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               64.00 MB
  Total PE              29808
  Alloc PE / Size       29808 / 1.82 TB
  Free PE / Size        0 / 0
  VG UUID               eVc4RN-aIU1-FIH0-Xjd3-riUW-6VzP-N5TtPj
lvdisplay
  --- Logical volume ---
  LV Name                /dev/VG_TEST/LV_TEST
  VG Name                VG_TEST
  LV UUID                T5btdX-Zn0k-anMi-SAjD-yRl0-tsAv-NdPWlu
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                418.06 GB
  Current LE             6689
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
which mounts properly,
mount /dev/VG_TEST/LV_TEST /home/stor/TEST
df -H | grep TEST
/dev/mapper/VG_TEST-LV_TEST
                       442G  208M  420G   1% /home/stor/TEST
fwiw, the VG/LV are backed on a 4 x 1TB raid10 array (/dev/md0) ...
mdadm --detail /dev/md0
/dev/md0:
        Version : 1.02
  Creation Time : Wed Aug  5 17:38:38 2009
     Raid Level : raid10
     Array Size : 1953519616 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953519616 (1863.02 GiB 2000.40 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Aug  6 09:42:07 2009
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 256K

           Name : nas:0  (local to host nas)
           UUID : bbfa5763:1372bf9b:037392ad:e3a985b9
         Events : 14

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
immediately after reboot, the VG & LV exist,
vgs
  VG      #PV #LV #SN Attr   VSize VFree
  VG_TEST   1   3   0 wz--n- 1.82T    0
lvs
  LV      VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LV_TEST VG_TEST -wi--- 418.06G
but, the LV is not mountable,
mount /dev/VG_TEST/LV_TEST /home/stor/TEST
mount: special device /dev/VG_TEST/LV_TEST does not exist
checking,
ls /dev/VG_TEST
/bin/ls: cannot access /dev/VG_TEST: No such file or directory
not good :-/
reading @,
http://www.linuxquestions.org/questions/linux-server-73/logical-volumes-not-...
a solution is,
vgchange -a y
1 logical volume(s) in volume group "VG_TEST" now active
ls /dev/VG_TEST
LV_TEST
mount /dev/VG_TEST/LV_TEST /home/stor/TEST
but, per the post, adding the activation to /etc/inittab, so it survives reboot,
...
+++ md:35: once:/sbin/vgchange -a y
# end of /etc/inittab
fails to do so; after reboot, the same failure returns,
reboot
ls /dev/VG_TEST
/bin/ls: cannot access /dev/VG_TEST: No such file or directory
vgchange -a y
1 logical volume(s) in volume group "VG_TEST" now active
ls /dev/VG_TEST
LV_TEST
mount /dev/VG_TEST/LV_TEST /home/stor/TEST
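fwiw, a possible alternative (untested here, and the script path is an assumption -- on openSUSE, /etc/init.d/boot.local runs after the standard boot scripts, by which point /dev/md0 should already be assembled) would be to activate the VG there instead of from inittab:

```sh
# hypothetical addition to /etc/init.d/boot.local
# re-scan for PVs and (re)create the device nodes under /dev,
# then activate just this volume group
/sbin/vgscan --mknodes
/sbin/vgchange -a y VG_TEST
```

if that works, /dev/VG_TEST/LV_TEST should exist again at boot and be mountable from /etc/fstab.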