[opensuse] Problems growing a logical volume with LVM tools.
Hi Folks,

I've been using SuSE and openSUSE for a little while now, but this is my first foray into LVM.

First off - the setup is a 4U rack-mounted server running openSUSE 10.3 64-bit, with all updates installed. The 16 hard drives are split up into 2 8-disk RAID-5 volumes using a 3ware hardware SATA controller card. Each volume was made up of disks that were previously 400 GB but are now upgraded to 1 TB, hence the total volume size grew from 2.4 TB to 6.3 TB. The initial setup on this machine did not include LVM, and I would like to implement it to handle future size increases. But I am stuck at the final step of extending the logical volume. Here's what I did so far:

- On the 6.3 TB RAID-5 volume, I created a partition (with parted, type GPT, with the lvm flag on). This partition was 2.4 TB (or thereabouts) in size, to simulate the growth from 2.4 to 6.3 TB (we were previously able to grow the underlying RAID-5 volume successfully).
- I made this partition (sdc1) a physical volume.
- I then made a volume group (array2) from this single partition.
- After that, I made a logical volume (rd2) from the array2 volume group.
- I used parted and mkfs on this logical partition (rd2) to create a partition with the "loop" label and format it.
- I put some data on there and ran a checksum.
- Then I unmounted the rd2 logical partition from the computer's file system.
- I ran pvresize on it and maxed out the number of extents available on the physical volume successfully.
- I repeated the same for vgresize successfully.
- And finally (or so I thought) I ran lvextend to resize the logical volume. But here's where I ran into issues. Here's the command I've been trying and the output:

# lvresize -d -v -l 1668910 /dev/array2/rd2
    Finding volume group array2
    Archiving volume group "array2" metadata (seqno 4).
    Extending logical volume rd2 to 6.37 TB
    Creating volume group backup "/etc/lvm/backup/array2" (seqno 5).
    Found volume group "array2"
    Found volume group "array2"
    Loading array2-rd2 table
  device-mapper: reload ioctl failed: Invalid argument
  Failed to suspend rd2

The size of the logical volume does not change. I even tried this using the GUI in YaST2, but the error message that popped up was essentially the same thing. What did I miss, or what am I not doing? I got the same results with "lvextend" ...

Thanks all.

cheers
vinai

-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
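[For reference, the procedure described above corresponds roughly to the following commands. This is a sketch only: the device name (sdc), sizes, and the ext3 filesystem type are assumptions reconstructed from the description, not a verbatim session log.]

```shell
# Rough sketch of the steps described above (not a verbatim session).
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary 0 2400GB   # ~2.4 TB, simulating the pre-growth size
parted /dev/sdc set 1 lvm on
pvcreate /dev/sdc1                        # make the partition a physical volume
vgcreate array2 /dev/sdc1                 # volume group from the single PV
lvcreate -n rd2 -l 100%FREE array2        # logical volume rd2
mkfs -t ext3 /dev/array2/rd2              # filesystem type not stated in the post
# ... after growing the underlying RAID-5 volume:
umount /dev/array2/rd2
pvresize /dev/sdc1
lvextend -l 1668910 /dev/array2/rd2       # the step that fails
```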
Vinai Roopchansingh wrote:
- And finally (or so I thought) I ran lvextend to resize the logical volume. But here's where I ran into issues. Here's the command I've been trying and the output:
# lvresize -d -v -l 1668910 /dev/array2/rd2
    Finding volume group array2
    Archiving volume group "array2" metadata (seqno 4).
    Extending logical volume rd2 to 6.37 TB
    Creating volume group backup "/etc/lvm/backup/array2" (seqno 5).
    Found volume group "array2"
    Found volume group "array2"
    Loading array2-rd2 table
  device-mapper: reload ioctl failed: Invalid argument
  Failed to suspend rd2
I suspect your question is probably better asked on an LVM mailing list.

/Per Jessen, Zürich
2008/9/19 Vinai Roopchansingh
Hi Folks,
I've been using SuSE and openSUSE for a little while now, but this is my first foray into LVM.
First off - the setup is a 4U rack-mounted server running openSUSE 10.3 64-bit, with all updates installed. The 16 hard drives are split up into 2 8-disk RAID-5 volumes using a 3ware hardware SATA controller card. Each volume was made up of disks that were previously 400 GB but are now upgraded to 1 TB, hence the total volume size grew from 2.4 TB to 6.3 TB. The initial setup on this machine did not include LVM, and I would like to implement it to handle future size increases. But I am stuck at the final step of extending the logical volume. Here's what I did so far:
- On the 6.3 TB RAID-5 volume, I created a partition (with parted, type GPT, with the lvm flag on). This partition was 2.4 TB (or thereabouts) in size, to simulate the growth from 2.4 to 6.3 TB (we were previously able to grow the underlying RAID-5 volume successfully).
- I made this partition (sdc1) a physical volume.
- I then made a volume group (array2) from this single partition
- After that, I made a logical volume (rd2) from the array2 volume group
- I used parted and mkfs on this logical partition (rd2) to create a partition with the "loop" label and format it.
- I put some data on there and ran a checksum.
- Then I unmounted the rd2 logical partition from the computer's file system.
- I ran pvresize on it and maxed out the number of extents available on the physical volume successfully.
- I repeated the same for vgresize successfully.
- And finally (or so I thought) I ran lvextend to resize the logical volume. But here's where I ran into issues. Here's the command I've been trying and the output:
# lvresize -d -v -l 1668910 /dev/array2/rd2
    Finding volume group array2
    Archiving volume group "array2" metadata (seqno 4).
    Extending logical volume rd2 to 6.37 TB
    Creating volume group backup "/etc/lvm/backup/array2" (seqno 5).
    Found volume group "array2"
    Found volume group "array2"
    Loading array2-rd2 table
  device-mapper: reload ioctl failed: Invalid argument
  Failed to suspend rd2
The size of the logical volume does not change. I even tried this using the GUI in YaST2, but the error message that popped up was essentially the same thing. What did I miss, or what am I not doing? I got the same results with "lvextend" ...
Thanks all.
cheers vinai
Can you post this info?:
- parted -l /dev/sdc
- pvdisplay
- vgdisplay
- lvdisplay

Just checking, did you reboot after your last kernel update?

Regards,
Ciro
On Sat, 20 Sep 2008, Ciro Iriarte wrote:
2008/9/19 Vinai Roopchansingh:
First off - the setup is a 4U rack-mounted server running openSUSE 10.3 64-bit, with all updates installed. The 16 hard drives are split up into 2 8-disk RAID-5 volumes using a 3ware hardware SATA controller card. Each volume was made up of disks that were previously 400 GB but are now upgraded to 1 TB, hence the total volume size grew from 2.4 TB to 6.3 TB. The initial setup on this machine did not include LVM, and I would like to implement it to handle future size increases. But I am stuck at the final step of extending the logical volume. Here's what I did so far:
- On the 6.3 TB RAID-5 volume, I created a partition (with parted, type GPT, with the lvm flag on). This partition was 2.4 TB (or thereabouts) in size, to simulate the growth from 2.4 to 6.3 TB (we were previously able to grow the underlying RAID-5 volume successfully).
- I made this partition (sdc1) a physical volume.
- I then made a volume group (array2) from this single partition
- After that, I made a logical volume (rd2) from the array2 volume group
- I used parted and mkfs on this logical partition (rd2) to create a partition with the "loop" label and format it.
- I put some data on there and ran a checksum.
- Then I unmounted the rd2 logical partition from the computer's file system.
- I ran pvresize on it and maxed out the number of extents available on the physical volume successfully.
- I repeated the same for vgresize successfully.
- And finally (or so I thought) I ran lvextend to resize the logical volume. But here's where I ran into issues. Here's the command I've been trying and the output:
# lvresize -d -v -l 1668910 /dev/array2/rd2
    Finding volume group array2
    Archiving volume group "array2" metadata (seqno 4).
    Extending logical volume rd2 to 6.37 TB
    Creating volume group backup "/etc/lvm/backup/array2" (seqno 5).
    Found volume group "array2"
    Found volume group "array2"
    Loading array2-rd2 table
  device-mapper: reload ioctl failed: Invalid argument
  Failed to suspend rd2
The size of the logical volume does not change. I even tried this using the GUI in YaST2, but the error message that popped up was essentially the same thing. What did I miss, or what am I not doing? I got the same results with "lvextend" ...
Thanks all.
cheers vinai
Can you post this info?:
- parted -l /dev/sdc
Model: ATA ST3808110AS (scsi)
Disk /dev/sda: 80.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  1078MB  1077MB  primary               type=82
 2      1078MB  80.0GB  78.9GB  primary  ext3         boot, type=83

Model: AMCC 9500S-8 DISK (scsi)
Disk /dev/sdb: 7000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  7000GB  7000GB                        lvm

Model: AMCC 9500S-8 DISK (scsi)
Disk /dev/sdc: 7000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  2800GB  2800GB               primary  lvm
- pvdisplay
--- Physical volume ---
PV Name               /dev/sdc1
VG Name               array2
PV Size               6.37 TB / not usable 3.57 MB
Allocatable           yes
PE Size (KByte)       4096
Total PE              1668910
Free PE               1001339
Allocated PE          667571
PV UUID               rbsmDW-8zxk-TCNJ-AF8N-v7wr-UP0B-DfeKZ8

--- Physical volume ---
PV Name               /dev/sdb1
VG Name               array1
PV Size               6.37 TB / not usable 3.97 MB
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              1668911
Free PE               0
Allocated PE          1668911
PV UUID               XHPMTj-ZGHD-3uLi-NBKg-Tir8-aeVn-BTkrhu
- vgdisplay
--- Volume group ---
VG Name               array2
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  4
VG Access             read/write
VG Status             resizable
MAX LV                1668910
Cur LV                1
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               6.37 TB
PE Size               4.00 MB
Total PE              1668910
Alloc PE / Size       667571 / 2.55 TB
Free PE / Size        1001339 / 3.82 TB
VG UUID               m2ATs2-75dX-Uy90-mzo2-j2lM-wbhH-i8L6kb

--- Volume group ---
VG Name               array1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               6.37 TB
PE Size               4.00 MB
Total PE              1668911
Alloc PE / Size       1668911 / 6.37 TB
Free PE / Size        0 / 0
VG UUID               smTgPg-bgU1-voE4-SYWO-1cWX-KPYw-VOT8Z2
- lvdisplay
--- Logical volume ---
LV Name                /dev/array2/rd2
VG Name                array2
LV UUID                yf0Yhj-VbhB-13dg-fapz-g644-Dojh-rKkYIR
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                2.55 TB
Current LE             667571
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:0

--- Logical volume ---
LV Name                /dev/array1/rd1
VG Name                array1
LV UUID                hKF4R6-xXlI-TORe-IN3p-M2JS-vSqn-KHEA5b
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                6.37 TB
Current LE             1668911
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:1
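[A quick cross-check of the numbers above, as a sketch: the extent figures are copied from the pvdisplay output, and with a 4 MiB extent size the PV claims roughly 6.37 TiB, while parted shows sdc1 as only 2800 GB.]

```shell
# Sketch: cross-check the PV size claimed by LVM against the partition size.
# Numbers are taken from the pvdisplay and parted output above.
pe_size_mib=4          # "PE Size (KByte) 4096" = 4 MiB per extent
total_pe=1668910       # "Total PE" reported by pvdisplay for /dev/sdc1
pv_gib=$(( total_pe * pe_size_mib / 1024 ))
echo "PV claims ${pv_gib} GiB"   # ~6519 GiB, i.e. ~6.37 TiB
# parted, however, shows sdc1 as 2800 GB (~2608 GiB), so the PV metadata
# describes far more space than the partition actually provides.
```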
Just checking, did you reboot after your last kernel update?
Absolutely ...

Thanks all. Let me know if there's any other information I can provide ...

cheers
vinai
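[When a device-mapper table reload fails with "Invalid argument", the kernel log usually carries the specific reason. A sketch of diagnostics that could be gathered here, run as root on the affected machine:]

```shell
# Sketch: diagnostics that usually show why a device-mapper reload failed.
dmesg | tail -n 20                 # kernel message accompanying the failed ioctl
dmsetup table array2-rd2           # current mapping (start, length in 512B sectors)
dmsetup info array2-rd2            # device state and open count
blockdev --getsz /dev/sdc1         # real size of the underlying partition, in sectors
```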
2008/9/21 vinai
On Sat, 20 Sep 2008, Ciro Iriarte wrote:
snip...
Can you post this info?:
- parted -l /dev/sdc
Model: ATA ST3808110AS (scsi)
Disk /dev/sda: 80.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      32.3kB  1078MB  1077MB  primary               type=82
 2      1078MB  80.0GB  78.9GB  primary  ext3         boot, type=83

Model: AMCC 9500S-8 DISK (scsi)
Disk /dev/sdb: 7000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  7000GB  7000GB                        lvm

Model: AMCC 9500S-8 DISK (scsi)
Disk /dev/sdc: 7000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  2800GB  2800GB               primary  lvm
- pvdisplay
--- Physical volume ---
PV Name               /dev/sdc1
VG Name               array2
PV Size               6.37 TB / not usable 3.57 MB
Allocatable           yes
PE Size (KByte)       4096
Total PE              1668910
Free PE               1001339
Allocated PE          667571
PV UUID               rbsmDW-8zxk-TCNJ-AF8N-v7wr-UP0B-DfeKZ8

--- Physical volume ---
PV Name               /dev/sdb1
VG Name               array1
PV Size               6.37 TB / not usable 3.97 MB
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              1668911
Free PE               0
Allocated PE          1668911
PV UUID               XHPMTj-ZGHD-3uLi-NBKg-Tir8-aeVn-BTkrhu
- vgdisplay
--- Volume group ---
VG Name               array2
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  4
VG Access             read/write
VG Status             resizable
MAX LV                1668910
Cur LV                1
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               6.37 TB
PE Size               4.00 MB
Total PE              1668910
Alloc PE / Size       667571 / 2.55 TB
Free PE / Size        1001339 / 3.82 TB
VG UUID               m2ATs2-75dX-Uy90-mzo2-j2lM-wbhH-i8L6kb

--- Volume group ---
VG Name               array1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               6.37 TB
PE Size               4.00 MB
Total PE              1668911
Alloc PE / Size       1668911 / 6.37 TB
Free PE / Size        0 / 0
VG UUID               smTgPg-bgU1-voE4-SYWO-1cWX-KPYw-VOT8Z2
- lvdisplay
--- Logical volume ---
LV Name                /dev/array2/rd2
VG Name                array2
LV UUID                yf0Yhj-VbhB-13dg-fapz-g644-Dojh-rKkYIR
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                2.55 TB
Current LE             667571
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:0

--- Logical volume ---
LV Name                /dev/array1/rd1
VG Name                array1
LV UUID                hKF4R6-xXlI-TORe-IN3p-M2JS-vSqn-KHEA5b
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                6.37 TB
Current LE             1668911
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:1
Just checking, did you reboot after your last kernel update?
Absolutely ...
Thanks all. Let me know if there's any other information I can provide ...
cheers vinai
Parted only reports 2800GB for /dev/sdc1 ... How did the PV manage to grow? That's probably the issue. You should change the partition table, resizing sdc1 (then running pvresize), or create sdc2 as a new PV with the rest of the space.

As an alternative, you can use the whole sdc disk without partitions; that way you can avoid the partition resizing step in case you keep growing the underlying RAID. Not sure how this will affect the LVM auto-discovery process (no partition type to look for).

Regards,
Ciro
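[The two options suggested above might look something like the following sketch. The exact parted syntax varies by version (the old `resize` command was later replaced by `resizepart`), the boundaries are copied from the parted output above, and resizing partitions on a live system should be done with current backups.]

```shell
# Option 1: grow sdc1 to fill the enlarged RAID volume, then update LVM.
parted /dev/sdc resize 1 17.4kB 7000GB   # old parted syntax; newer versions use resizepart
pvresize /dev/sdc1                       # PV now matches the real partition size
lvextend -l +100%FREE /dev/array2/rd2

# Option 2: add the remaining space as a second PV instead of resizing.
parted /dev/sdc mkpart primary 2800GB 7000GB
parted /dev/sdc set 2 lvm on
pvcreate /dev/sdc2
vgextend array2 /dev/sdc2
```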
participants (4)
- Ciro Iriarte
- Per Jessen
- vinai
- Vinai Roopchansingh