David Haller said the following on 04/18/2011 02:32 PM:
On Tue, 12 Apr 2011, Anton Aylward wrote:
David Haller said the following on 04/12/2011 11:35 AM:
I HAVE differently sized disks. How to have a sane RAID thingy across those without going bonkers? I'd be real happy if you'd come up with a good idea!
That's a very good point, David, and it is the great shortcoming of RAID as RAID, as opposed to the "Raid" subsystems that are available from some commercial (read: closed source) vendors.
I'm not so sure about the latter ;)
Neither am I :-) I've seen their explanations but still have this feeling that the salesdroids have a rubber chicken they are waving while your attention is misdirected.
Being in a similar situation, I use LVM. I'm experimenting with Btrfs.
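In LVM terms the differently-sized-disk case looks roughly like this; the device names and the volume group name 'vg0' are just placeholders:

  # mark both partitions (on the differently sized disks) as LVM physical volumes
  pvcreate /dev/sdb1 /dev/sdc1
  # pool them into one volume group -- LVM doesn't care that they differ in size
  vgcreate vg0 /dev/sdb1 /dev/sdc1
  # carve out a logical volume and put a filesystem on it
  lvcreate -L 200G -n data vg0
  mkfs.ext4 /dev/vg0/data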
I like what zfs claims to do.
So did I. But look at the licensing, and look where Mason is working. http://thread.gmane.org/gmane.comp.file-systems.btrfs/2880 See also Alex Elsayed's comment in that thread: Btrfs got to where it is so quickly because it is building on mature kernel features. For instance, it is using the same device-mapping code for RAID and block abstraction that Linux's software RAID and LVM are based on, which is why it was merged into the mainline kernel so quickly.
RAID users keep one drive spare. In effect I do too, but it's spread across all the other drives so I can make the best use of striping. I can also "mirror" file systems :-)
LVM over RAID is just a "no-no" in my mind.
For what I'm doing it's not so much a "no-no" as a "Why?" I can do many RAID things with LVM - mirror, stripe - and more.
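For instance, sketched with made-up names and sizes (a volume group 'vg0' with at least two physical volumes is assumed):

  # a 2-way mirror, much like RAID1; by default the mirror log wants a third PV,
  # --mirrorlog core keeps it in memory instead
  lvcreate -m1 --mirrorlog core -L 50G -n mirrored vg0
  # striped across two PVs, much like RAID0, with a 64K stripe size
  lvcreate -i2 -I64 -L 100G -n striped vg0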
What I need a backup of, I mirror via rsync locally to another drive and/or to a drive in the other box.
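Something along these lines, with made-up paths and hostname:

  # local mirror onto another drive
  rsync -aHx --delete /home/ /mnt/backupdisk/home/
  # and the same to the other box, over ssh
  rsync -aHx --delete -e ssh /home/ otherbox:/backup/home/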
I do that sometimes, too, but snapshot+DVD is my baseline. The snapshot is almost instantaneous, so I can use it on live file systems in a way that rsync/tar/cpio can't.
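The snapshot dance, sketched with placeholder names and sizes ('vg0/data' and the mount point are assumptions):

  # snapshot the live LV; the 5G only has to hold blocks that change while it exists
  lvcreate -s -L 5G -n data_snap /dev/vg0/data
  # mount the frozen view read-only and back it up at leisure
  mount -o ro /dev/vg0/data_snap /mnt/snap
  tar czf /tmp/data-snapshot.tar.gz -C /mnt/snap .
  # (burn that to DVD, or whatever)
  # then throw the snapshot away
  umount /mnt/snap
  lvremove -f /dev/vg0/data_snap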
I've had a couple of drives go bad.
Depending on the number of drives involved ... to be expected :(
One was on a server that had been running for over 2 years when I shut it down to go on an extended vacation. On my return it crashed, irrecoverably ... as in head-gouging. I'm told that's a known failure mode if you don't shut down or "park the head" on some drives.
*OUCH* I guess that's why SUSE has been running a 'hdparm -y' on drives at shutdown for a while now.
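For reference, that amounts to something like the following (the device name is only an example):

  # put the drive into standby immediately: spin down, heads parked
  hdparm -y /dev/sda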
The other was a slow decay. Increasing loss of sectors.
I "phase out" disks once they report any defective sectors. They'll get used for "scrap space" and whatever, but not anything even remotely worth a backup.
Since modern drives remap defective sectors to the reserve area, what is your threshold? If I 'phased out' a drive when the first error was reported, few drives would last 3 months! The bathtub curve means you get a few when it's young -- that's life.
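For the record, the remap counters can be read straight out of SMART, e.g. (device name is only an example):

  # how many sectors have been remapped so far, and how many are waiting to be
  smartctl -A /dev/sda | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector'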
I have a root xterm running, with a 'tail -f' on /v/l/m (and more).
So do I, on my log-server .... "Oh look, never mind, you missed it..." So I have "swatch".
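A minimal swatch sketch, assuming /v/l/m means /var/log/messages; the pattern and mail address are only examples:

  # ~/.swatchrc
  watchfor /I\/O error|ata.*exception/
      echo
      mail addresses=root,subject="disk trouble"

  # run it against the log
  swatch --config-file=$HOME/.swatchrc --tail-file=/var/log/messages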