Anton Aylward wrote:
On 01/05/2015 03:55 PM, Joe Zappa wrote:
When an LVM works (Linux LVM or any other LVM), it works great.
When an LVM fails, it makes the mess 3x worse to clean up and get your system back up and running.
Long before we had a functioning LVM with Linux I was using the "same thing" under AIX. That was way back in the mid 1990s, though I believe that the Vx system, the "Veritas manager", was IBM's from the beginning of the 1990s, with a shoo-in to OSF. HP later licensed it.
No discussion of LVM history should take place without mentioning Heinz Mauelshagen, who did some of the earliest LVM on Linux work.
There was some variance with Linux's LVM1, but the command set we see in LVM2 is the same as I was using all the way up to AIX 5.2, when I switched over to Red Hat, Mandrake and eventually SUSE.
I found LVM2 to be stable, though I was working with the original GRUB, with /boot and root on real partitions. When GRUB2 came along I moved root to LVM quite successfully, with no problems at all.
Then disaster struck! LVM was "optimized" and "lvmetad" was introduced. Go back through the archives and you can see the problems we had when the initrd had a mismatch over that, when the daemon was expected but wasn't there. So we turned lvmetad off, but something keeps turning it back on, and then the system won't reboot and Rescue Mode is required.

https://forums.opensuse.org/showthread.php/495141-boot-problem-after-lvm2-up...
https://bugzilla.redhat.com/show_bug.cgi?id=989607
https://bugzilla.redhat.com/show_bug.cgi?id=813766

Peter Rajnoha's suggestion makes sense. Sadly, this was resolved with an "errata" notice!
And this has NOTHING to do with systemd, don't start on about that!
I use LVM *only* in circumstances in which an LVM is needed.
Otherwise, all it does is create more points of failure.
The "only when" is a YMMV. By the same logic, using BtrFS (or, for that matter, any FS) creates more points of failure. Yet the logic of BtrFS is that it should subsume all disk space in order to balance/optimize.
YMMV.
As things stand I see no advantage to using lvmetad in my use case; I don't have an LVM that spans multiple spindles. Enabling it adds a risk that your boot may be screwed. That has certainly been the case for me, so I have it disabled.
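For reference, on LVM2 versions that shipped lvmetad, disabling it came down to one setting in the global section of lvm.conf (a sketch; the initrd must be rebuilt to match, or you hit exactly the boot mismatch described above):

```
# /etc/lvm/lvm.conf
global {
    # 0 = scan devices directly; do not use the lvmetad cache daemon
    use_lvmetad = 0
}
```

The daemon's socket also needs to be kept from reactivating it, and the initrd regenerated, so that the initrd and the running system agree on whether the daemon exists.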
Here is a typical use-case where LVM is a good idea.

Need for mirroring:
  Site -- stock broker firm, national corporate office
  Host use -- order entry system (buy, sell, etc.)
  Host configuration -- two 16-CPU Sequent hosts running DYNIX in a
  cluster, as shown below:

  +-------+                +---------+                +--------+
  |       | ============== |         | ============== |        |
  | Host1 | ============== |  DISK   | ============== | Host 2 |
  |       | ============== | CABINET | ============== |        |
  +-------+  SCSI cables   +---------+  SCSI cables   +--------+
  Host with                database                   Host with
  OS on local              disks                      OS on local
  disks                    (mirrored)                 disks
  (mirrored)                                          (mirrored)

LVM justification:
1. Database is mirrored for redundancy (fault tolerance).
2. A THIRD mirror is kept for backup purposes.

To minimize the amount of time the database was offline, at market closing it would be shut down for a short period, during which the third mirror would be broken off. The database tables were stored in the vendor's optimized format, and backups could be made by referencing the partition name (/dev/sc[controller]d[disk]p[partition], or something like that).

Backups were done as follows:
1. Database shut down.
2. Third mirrors of the partitions disassociated from the 1st & 2nd mirrors.
3. Database restarted.
4. OS & local disks backed up [Level 0 on Saturday].
5. Third mirror used as the data source for the database backup.
6. Backup tape images compared to the third-mirror images to verify a flawless backup.
7. Once the backup was verified, the third mirror reassociated with the 1st & 2nd mirrors.

Why the LVM? Because steps 5 & 6 would take several hours. By using the third mirror we had up to 24 hours to complete a backup, because the database could still be used during business hours.

Another use-case that justifies LVM: General Motors Tech Center, High Performance Computing Group. Supercomputers running high-end analysis & engineering apps, such as fine-grained finite-element analysis of collision scenarios.
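The same break-off-a-mirror-for-backup trick can be sketched with today's LVM2 raid1 support (hypothetical names vg/db; assumes a 3-way raid1 LV, and the usual caveat that you quiesce the application first):

```
# Step 1: shut down the database.

# Step 2: split one leg off the 3-way raid1 LV, tracking changes so it
# can be merged back later; the split image appears as vg/db_rimage_2.
lvconvert --splitmirrors 1 --trackchanges vg/db

# Step 3: restart the database. Steps 5 & 6: back up from the frozen
# image at leisure, e.g. mount /dev/vg/db_rimage_2 read-only.

# Step 7: merge the image back; only blocks changed since the split
# are resynchronized.
lvconvert --merge vg/db_rimage_2
```

With --trackchanges the re-merge is incremental, which is what makes the leisurely 24-hour backup window cheap.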
These computers are expensive to even OWN -- just sitting on the floor, they depreciate at a rate of many dollars per day. The hosts contain several dozen CPUs, and most jobs are disk-I/O bound. Solution -- disk striping, with 3 disks per stripe. No mirroring -- loss of data is not a catastrophic loss, and the money is better spent on striping (speed) than on mirroring (security). RAID 5 and RAID 6 are good for online medium-term archiving (i.e. one step above tape archives): data loss is expensive, so you want to protect against it, but the slow write speed (due to recomputing the checksums) is acceptable, so mirroring is not needed.
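A striped LV like the GM setup can be sketched in LVM2 (hypothetical names; assumes a volume group "vg" with at least three physical volumes):

```
# Create a 300 GiB LV striped across 3 disks with a 64 KiB stripe size.
# -i = number of stripes (one per PV), -I = stripe size in KiB.
lvcreate -L 300G -i 3 -I 64 -n scratch vg
mkfs.xfs /dev/vg/scratch
```

Pure striping here is the LVM analogue of RAID 0: all the spindles serve one I/O stream, and none of the capacity is spent on redundancy.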
http://www.redbooks.ibm.com/abstracts/redp0107.html -- STATUS QUO is Latin for "the mess we're in."
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org