On 2017-07-11 23:46, Greg Freemyer wrote:
On Tue, Jul 11, 2017 at 4:32 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2017-07-11 22:21, James Knott wrote:
On 07/11/2017 04:04 PM, Wols Lists wrote:
For those who don't know, a desktop drive is "within spec" if it returns one soft read error per 10GB read. In other words, read a 6TB drive
s/10GB/10TB/
end-to-end twice, and the manufacturer says "if you get a read error, that's normal". But it will cause an array to fail if you haven't set it up properly ...
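[Editorial note: the arithmetic behind that claim, assuming the quoted spec of one soft read error per 10 TB read (the common desktop-drive rating of one unrecoverable error per 10^14 bits works out to roughly that figure), can be checked with a one-liner:]

```shell
# expected soft read errors from reading a 6 TB drive end-to-end twice,
# at the quoted spec of one error per 10 TB read: (2 * 6 TB) / 10 TB
awk 'BEGIN { printf "%.1f\n", (2 * 6) / 10 }'
# prints 1.2 -- i.e. on average more than one error in two full passes
```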
What about for those who do know? ;-)
I don't, so please explain ;-)
Carlos,
Consider a Raid-5:
Then consider no background scrubber is running.
Per the specs for most desktop drives, there is a reasonable chance (roughly 20%) with 2+ TB drives that one or more of the drives will develop undetected bad sectors.
Now, assume one of the drives fails. You replace it with a new drive and kick off a rebuild.
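[Editorial note: that ~20% figure can be sanity-checked. Assuming the usual desktop spec of one unrecoverable read error per 10^14 bits, the probability of at least one such error while reading a 2 TB drive end-to-end is:]

```shell
# P(at least one URE) = 1 - (1 - 1e-14)^bits, approximated as 1 - exp(-bits * 1e-14)
# for a 2 TB drive: bits = 2e12 bytes * 8
awk 'BEGIN { bits = 2e12 * 8; printf "%.2f\n", 1 - exp(-bits * 1e-14) }'
# prints 0.15
```

[About 15% for 2 TB, rising to roughly 21% for 3 TB, so "reasonable chance (20%)" is in the right ballpark. Note that a rebuild must read every surviving member, so the risk scales with total array capacity, not a single disk.]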
As soon as you hit the bad sector, your rebuild fails and you are stuck working with backups!
Yes, that's reasonable.
== Solutions
Use Raid-6, but realize that at today's drive sizes it is still effectively only good for one failed drive at a time, since the second parity is what absorbs read errors hit during the rebuild.
Find drives with significantly higher specs for undetected bad sectors (I don't know if drives like that exist or not).
Use a scrubber religiously to make sure there are no undetected bad sectors.
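[Editorial note: on Linux md the kernel provides the scrubber itself. A sketch of driving it by hand, assuming an array named md0 (these are system-level commands; adjust the device name to your setup):]

```shell
# start a background scrub: read every sector of every member and verify parity
echo check > /sys/block/md0/md/sync_action
# watch progress
cat /proc/mdstat
# inspect how many inconsistencies the last check found
cat /sys/block/md0/md/mismatch_cnt
# abort a running scrub if needed
echo idle > /sys/block/md0/md/sync_action
```

[Running this periodically (e.g. monthly from cron) is what turns "undetected bad sectors" into detected, re-mapped ones before a rebuild ever depends on them.]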
== Here's an 8-year-old paper arguing even Raid-6 will run out of safety margin by 2019.
http://queue.acm.org/detail.cfm?id=1670144
I have no idea if the assumptions it is making are still valid.
I think I read that article, or a similar one, some years ago. I'll try to read it later, time permitting :-) My question was rather about how to configure the disks properly.
--
Cheers / Saludos,
Carlos E. R.
(from 42.2 x86_64 "Malachite" at Telcontar)