On 11/07/17 22:46, Greg Freemyer wrote:
On Tue, Jul 11, 2017 at 4:32 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2017-07-11 22:21, James Knott wrote:
On 07/11/2017 04:04 PM, Wols Lists wrote:
For those who don't know, a desktop drive is "within spec" if it returns one soft read error per 10GB read. In other words, read a 6TB drive
s/10GB/10TB/
Whoops :-)
end-to-end twice, and the manufacturer says "if you get a read error, that's normal". But it will cause an array to fail if you haven't set it up properly ...
What about for those who do know? ;-)
I don't, so please explain ;-)
Carlos,
Consider a RAID-5 array:
Then consider that no background scrubber is running (how to run one is sketched below).
Per the specs for most desktop drives, there is a reasonable chance with 2+ TB drives that one or more of them has developed undetected bad sectors: at the one-error-per-10TB rate above, reading a 2 TB drive end-to-end gives roughly a 20% chance of hitting one.
Now, assume one of the drives fails. You replace it with a new drive and kick off a rebuild.
As soon as the rebuild hits a bad sector on one of the surviving drives, it fails and you are stuck working from backups!
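The scrub Greg refers to is what catches such sectors while the redundancy still exists to fix them. As a minimal sketch (assuming an md array named /dev/md0 - substitute your own device), kicking one off by hand via the md sysfs interface looks like:

  # Ask md to read every sector of the array and verify the parity;
  # a sector that fails to read is reconstructed from the other
  # members and rewritten, while the array is still fully redundant.
  echo check > /sys/block/md0/md/sync_action

  # Progress shows up in the usual place.
  cat /proc/mdstat

  # Once finished, the count of parity inconsistencies found:
  cat /sys/block/md0/md/mismatch_cnt

Many distributions ship a periodic job that does this (Debian's mdadm package, for instance, runs a monthly checkarray), so often it only needs enabling.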
<snip>
I have no idea if the assumptions it is making are still valid.
Sorry, but the facts are wrong, too :-( You're describing a hard error - where the sector is corrupt, and that's that. I'm describing a soft error, where "something" goes wrong and the drive times out. The next attempt to read it will work fine. But, as you describe, the rebuild will bomb.

At which point the raid novice panics and all hell breaks loose. The fix is actually dead simple - just reassemble, with force if necessary. The rebuild will restart, and off you go ... :-)

The problem, of course, is that the larger your array gets, the more errors you can expect, and the greater the likelihood of your rebuild failing, maybe multiple times.

Cheers,
Wol
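For anyone wanting the concrete commands, the forced reassembly Wol describes looks roughly like this - a sketch only, with /dev/md0 and /dev/sd[bcd]1 standing in for whatever your array and member partitions actually are:

  # Stop the half-assembled array left behind by the failed rebuild.
  mdadm --stop /dev/md0

  # Reassemble; --force tells mdadm to accept members whose event
  # counters no longer agree, which is the state a failed rebuild
  # leaves behind.
  mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # If the replacement drive got kicked out, re-add it and the
  # rebuild starts again.
  mdadm --manage /dev/md0 --add /dev/sde1

As for "set it up properly" at the top of the thread: the usual advice is to match the drive's error-recovery timeout to the kernel's, so a slow soft error gets reported to md (which fixes it) rather than the whole drive timing out and being kicked from the array. On drives that support SCT ERC, something like:

  # Cap error recovery at 7 seconds (units are tenths of a second)
  # so the drive reports the error instead of retrying for minutes.
  smartctl -l scterc,70,70 /dev/sdb

  # For drives without ERC support, raise the kernel timeout instead.
  echo 180 > /sys/block/sdb/device/timeout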