On Wed, Dec 14, 2016 at 3:52 AM, Per Jessen wrote:
Andrei Borzenkov wrote:
On Wed, Dec 14, 2016 at 11:08 AM, Per Jessen wrote:
Andrei Borzenkov wrote:
On Wed, Dec 14, 2016 at 10:08 AM, Per Jessen wrote:
I ordered a second 10TB drive last week. I also pulled out one of the 1TB drives to leave an empty drive bay for the 10TB. It took about 12 hours to rebuild then.
Interesting that it rebuilds when a disk has been removed.
Huh? That's what I expect from any decent RAID implementation. Unless this disk was not in use, but that's not clear from the above.
When I remove a disk from a raid5 or raid6 array, it just goes into degraded mode. Rebuilding doesn't happen until a new drive is added.
Which is why hot spare disks exist -
Of course, but Greg didn't mention any.
The user interface is minimal. No option to specify a hot spare.
- to have something to rebuild on as soon as failure is detected. I honestly expect that anyone who cares about data availability has one ... :)
Requirements do differ, but I would tend to agree. Running degraded is an open invitation for disaster, the longer the worse.
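For reference, Linux md behaves exactly as described: with no spare, a failed member just leaves the array degraded, but if a hot spare is present the kernel begins rebuilding onto it the moment a member fails. A minimal sketch using loop devices (device names, sizes, and paths below are illustrative assumptions, and it needs root):

```shell
# Create four small backing files and attach them as loop devices
# (losetup -f --show picks the first free loop device and prints it)
for i in 0 1 2 3; do
  truncate -s 100M /tmp/disk$i.img
  sudo losetup -f --show /tmp/disk$i.img
done

# RAID5 over three members, plus one hot spare
# (substitute the loop device names printed above)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/loop0 /dev/loop1 /dev/loop2 \
    --spare-devices=1 /dev/loop3

# Simulate a failure; md immediately starts recovery onto the spare
sudo mdadm /dev/md0 --fail /dev/loop0
cat /proc/mdstat   # shows the rebuild in progress, no new disk needed
```

Without the `--spare-devices` argument, the same `--fail` simply drops the array to degraded mode until a replacement is added with `mdadm --add`.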
If there is spare disk space, the Drobo clearly starts a rebuild immediately. Remember it is thin provisioning, so it leverages unallocated space on the volumes as free space.
There are also custom implementations that reserve space on disks and rebuild data from the failed disk onto this reserved space. Or simply onto free space, as long as there is enough. This company appears to offer such a custom implementation:
Greg will be able to tell us.
I think it's the latter: "Or simply onto free space, as long as there is enough". Greg
-- Per Jessen, Zürich (0.8°C) http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org