[opensuse] Recover Degraded LVM Stripe Set?
How do I recover an LVM stripe set where one drive went bad?

I created 2 volume groups as follows:

vgcreate drbdpool_under_0 /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
vgcreate drbdpool_under_1 /dev/sda6 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6

Then I created 2 logical volumes:

lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool /dev/vg0_under_drbdpool
lvcreate -i6 -I16 -L1.31T -n lv1_under_drbdpool /dev/vg1_under_drbdpool

One of the drives went bad and had to be replaced, but I don't know how to recover the volume group and logical volumes. pvscan shows the following...

ha11a:~ # pvscan
  WARNING: Device for PV L6YooS-MX8j-FtgX-MNRn-Y4Ff-hbms-IO5dvA not found or rejected by a filter.
  WARNING: Device for PV iYuI0B-61nP-r68y-KrFF-3zH1-2cmG-dJB4tW not found or rejected by a filter.
  WARNING: Device for PV iYuI0B-61nP-r68y-KrFF-3zH1-2cmG-dJB4tW not found or rejected by a filter.
  WARNING: Device for PV L6YooS-MX8j-FtgX-MNRn-Y4Ff-hbms-IO5dvA not found or rejected by a filter.
  PV /dev/sda6        VG vg1_under_drbdpool   lvm2 [224.00 GiB / 440.00 MiB free]
  PV /dev/sdb6        VG vg1_under_drbdpool   lvm2 [224.00 GiB / 440.00 MiB free]
  PV /dev/sdc6        VG vg1_under_drbdpool   lvm2 [224.00 GiB / 440.00 MiB free]
  PV /dev/sdd6        VG vg1_under_drbdpool   lvm2 [224.00 GiB / 440.00 MiB free]
  PV /dev/sde6        VG vg1_under_drbdpool   lvm2 [224.00 GiB / 440.00 MiB free]
  PV unknown device   VG vg1_under_drbdpool   lvm2 [224.00 GiB / 440.00 MiB free]
  PV /dev/sda5        VG vg0_under_drbdpool   lvm2 [223.99 GiB / 428.00 MiB free]
  PV /dev/sdb5        VG vg0_under_drbdpool   lvm2 [223.99 GiB / 428.00 MiB free]
  PV /dev/sdc5        VG vg0_under_drbdpool   lvm2 [223.99 GiB / 428.00 MiB free]
  PV /dev/sdd5        VG vg0_under_drbdpool   lvm2 [223.99 GiB / 428.00 MiB free]
  PV /dev/sde5        VG vg0_under_drbdpool   lvm2 [223.99 GiB / 428.00 MiB free]
  PV unknown device   VG vg0_under_drbdpool   lvm2 [223.99 GiB / 428.00 MiB free]
  Total: 12 [2.62 TiB] / in use: 12 [2.62 TiB] / in no VG: 0 [0 ]

--Eric
Eric Robinson wrote:
How do I recover an LVM stripe set where one drive went bad?
I created 2 volume groups as follows:
vgcreate drbdpool_under_0 /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
vgcreate drbdpool_under_1 /dev/sda6 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6
Then I created 2 logical volumes:
lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool /dev/vg0_under_drbdpool
lvcreate -i6 -I16 -L1.31T -n lv1_under_drbdpool /dev/vg1_under_drbdpool
One of the drives went bad and had to be replaced, but I don't know how to recover the volume group and logical volumes.
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.

--
Per Jessen, Zürich (3.0°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
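A minimal sketch of the cleanup Per describes, assuming the replacement disk has been repartitioned the same way as the old one (device and VG names are taken from the thread; the striped data itself is unrecoverable, so this only mends the metadata):

```shell
# Hedged sketch only: verify device names against your own layout first.

# Drop the missing PV from the VG; --force also removes the LVs that used it
# (their striped data cannot be reconstructed without the dead drive):
vgreduce --removemissing --force vg0_under_drbdpool

# Initialize the replacement partition and add it back into the VG:
pvcreate /dev/sdf5
vgextend vg0_under_drbdpool /dev/sdf5

# Recreate the LV (empty; restore its contents from backup afterwards):
lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool vg0_under_drbdpool
```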
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?

--Eric
20.11.2017 20:19, Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
What makes you believe you created *R*AID (where *R* stands for Redundant)?
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
What makes you believe you created *R*AID (where *R* stands for Redundant)?
Probably just a misunderstanding on my part. The commands I used were...

# vgcreate vg0_under_drbdpool /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool /dev/vg0_under_drbdpool

I interpreted that to mean I was creating a redundant array across 6 drives. Obviously, I was wrong. What should I have done differently?

--Eric
20.11.2017 20:37, Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
What makes you believe you created *R*AID (where *R* stands for Redundant)?
Probably just a misunderstanding on my part. The commands I used were...
# vgcreate vg0_under_drbdpool /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool /dev/vg0_under_drbdpool
I interpreted that to mean I was creating a redundant array across 6 drives. Obviously, I was wrong. What should I have done differently?
Set --type to one with redundancy (raid1, raid10, raid5*, raid6*; mirror is more or less deprecated but works too, and you likely don't want raid4 even though it is redundant).
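As a sketch of what Andrei suggests (VG and LV names reused from the thread; the sizes are placeholders, since a redundant type leaves less usable capacity than plain striping):

```shell
# raid5 across 6 PVs: 5 data stripes plus rotating parity; survives one drive failure.
# Note: with LVM raid5, -i counts data stripes only, so 6 devices means -i5.
lvcreate --type raid5 -i5 -I16 -L1T -n lv0_under_drbdpool vg0_under_drbdpool

# raid6 across 6 PVs: 4 data stripes plus two parity stripes; survives two failures.
lvcreate --type raid6 -i4 -I16 -L1T -n lv1_under_drbdpool vg1_under_drbdpool
```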
On 11/20/2017 09:45 AM, Andrei Borzenkov wrote:
20.11.2017 20:37, Eric Robinson пишет:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
What makes you believe you created *R*AID (where *R* stands for Redundant)?
Probably just a misunderstanding on my part. The commands I used were...
# vgcreate vg0_under_drbdpool /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool /dev/vg0_under_drbdpool
I interpreted that to mean I was creating a redundant array across 6 drives. Obviously, I was wrong. What should I have done differently?
Set --type to one with redundancy (raid1, raid10, raid5*, raid6*; mirror is more or less deprecated but works too, and you likely don't want raid4 even though it is redundant).
Interesting! I didn't know that you could create RAID arrays with lvcreate. Why would one do this instead of using mdadm?

Ah, I found this link describing things:

https://unix.stackexchange.com/questions/150644/raiding-with-lvm-vs-mdraid-p...

It sounds like mdadm (or hardware RAID) would remain my choices.

Greg: how is your Drobo doing?

Regards,
Lew
On Mon, Nov 20, 2017 at 1:03 PM, Lew Wolfgang <wolfgang@sweet-haven.com> wrote:
On 11/20/2017 09:45 AM, Andrei Borzenkov wrote:
20.11.2017 20:37, Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
What makes you believe you created *R*AID (where *R* stands for Redundant)?
Probably just a misunderstanding on my part. The commands I used were...
# vgcreate vg0_under_drbdpool /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5
# lvcreate -i6 -I16 -L1.31T -n lv0_under_drbdpool /dev/vg0_under_drbdpool
I interpreted that to mean I was creating a redundant array across 6 drives. Obviously, I was wrong. What should I have done differently?
Set --type to one with redundancy (raid1, raid10, raid5*, raid6*; mirror is more or less deprecated but works too, and you likely don't want raid4 even though it is redundant).
Interesting! I didn't know that you could create RAID arrays with lvcreate. Why would one do this instead of using mdadm?
Ah, I found this link describing things:
https://unix.stackexchange.com/questions/150644/raiding-with-lvm-vs-mdraid-p...
It sounds like mdadm (or hardware RAID) would remain my choices.
Greg: how is your Drobo doing?
Background: Drobo is an off-the-shelf RAID solution that is designed for ease of use. I have it connected via iSCSI to a Windows server. I never got it working with openSUSE. (I did try, but not real hard.) I share out a CIFS share from the Windows server and use Samba to mount it from openSUSE.

Answer: I have it set up to handle a dual-disk failure. Every several months it runs out of space, so I throw in a 10TB drive. It automatically sees the drive and levels everything out. No interaction beyond the physical insertion process.

My only complaint is it can only create LVs (logical volumes) up to 16TB. I currently have 3 of those. They are thin provisioned, and when I delete large amounts of data files, the now-unused space is returned to the unallocated pool.

Greg
On 11/20/2017 09:19 AM, Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
Hi Eric,

I'm certainly not an expert here, but LVM is not RAID. This is why I've never used LVM myself; there is no recourse if one of the disks fails. I would imagine you could build logical volumes on top of multiple RAID arrays, but I'm not sure why you'd want to do that in the first place.

I hope you made backups.

Regards,
Lew
On 20/11/17 17:39, Lew Wolfgang wrote:
On 11/20/2017 09:19 AM, Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
Hi Eric,
I'm certainly not an expert here, but LVM is not RAID. This is why I've never used LVM myself; there is no recourse if one of the disks fails. I would imagine you could build logical volumes on top of multiple RAID arrays, but I'm not sure why you'd want to do that in the first place. I hope you made backups.
Why would you want LVM over RAID? I'm planning to do exactly that myself. LVM does snapshots, LVM can move volumes around. LVM can resize volumes.

The RAID provides the redundancy, and the ability to add more hard drives. LVM then provides the ability to manage that space.

Cheers,
Wol
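A sketch of the layering Wol describes, with mdraid supplying the redundancy and LVM the management on top (device, VG, and LV names here are assumptions, not from the thread):

```shell
# One md RAID5 array over the six partitions:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[a-f]5

# LVM stacked on top of the array:
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 500G -n data vg0

# The management features mentioned above:
lvcreate -s -L 20G -n data_snap vg0/data   # snapshot of vg0/data
lvextend -L +100G vg0/data                 # resize later (then grow the filesystem)
```

If a member disk fails, mdadm handles the rebuild; the LVM layer never notices.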
Wol's lists wrote:
On 20/11/17 17:39, Lew Wolfgang wrote:
On 11/20/2017 09:19 AM, Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
Hi Eric,
I'm certainly not an expert here, but LVM is not RAID. This is why I've never used LVM myself; there is no recourse if one of the disks fails. I would imagine you could build logical volumes on top of multiple RAID arrays, but I'm not sure why you'd want to do that in the first place. I hope you made backups.
Why would you want LVM over RAID?
To avoid ending up in the same situation as the OP.
I'm planning to do exactly that myself. LVM does snapshots, LVM can move volumes around. LVM can resize volumes.
The raid provides the redundancy, and the ability to add more hard drives. LVM then provides the ability to manage that space.
Exactly. We've been running LVM on top of mdraid for years.

--
Per Jessen, Zürich (4.5°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 20/11/17 18:01, Eric Robinson wrote:
I'm certainly not an expert here, but LVM is not RAID.
My understanding is that RAID arrays can be created using LVM commands instead of mdadm, and LVM-based arrays have some advantages. I think that's why the lvcreate command has a --type flag.
Don't forget the Unix rule "do one thing and do it well".

While I don't want to start another filesystem war, btrfs is good at *SOME* things. However, it does try to be a swiss army knife, and one thing in the toolkit that it does not do well at all is raid.

Before you use lvm-raid rather than raid, make sure that it does a good job!

Cheers,
Wol
-----Original Message-----
From: Wols Lists [mailto:antlists@youngman.org.uk]
Sent: Monday, November 20, 2017 10:51 AM
To: opensuse@opensuse.org
Subject: Re: [opensuse] Recover Degraded LVM Stripe Set?
On 20/11/17 18:01, Eric Robinson wrote:
I'm certainly not an expert here, but LVM is not RAID.
My understanding is that RAID arrays can be created using LVM commands instead of mdadm, and LVM-based arrays have some advantages. I think that's why the lvcreate command has a --type flag.
Don't forget the Unix rule "do one thing and do it well".
While I don't want to start another filesystem war, btrfs is good at *SOME* things. However, it does try to be a swiss army knife, and one thing in the toolkit that it does not do well at all is raid.
Before you use lvm-raid rather than raid, make sure that it does a good job!
Cheers, Wol
After reviewing the article on mdraid vs. LVM RAID, I'm inclined to agree that mdraid is still the better choice.

--Eric
Eric Robinson wrote:
I don't see how you can recover the logical volumes - there is a bit missing, where will you get that from? The volume group can probably be mended by just removing the failed drive/partitions - probably after you have removed the logical volumes.
What's the point of creating an LVM-based RAID array if you can't recover when a drive goes bad?
What you described is essentially a RAID0 array - striped. That gives you speed, but no redundancy.

--
Per Jessen, Zürich (4.4°C)
http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
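One way to confirm Per's diagnosis is to inspect the segment type of the LV; this command is a sketch added here, not from the thread:

```shell
# A plain striped (RAID0-like) LV reports segtype "striped";
# a redundant one would report e.g. "raid5" or "raid1".
lvs -o lv_name,segtype,stripes,stripe_size vg0_under_drbdpool
```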
participants (7)
- Andrei Borzenkov
- Eric Robinson
- Greg Freemyer
- Lew Wolfgang
- Per Jessen
- Wol's lists
- Wols Lists