On 06/24/2015 12:33 PM, Moby wrote:
On 06/19/2015 02:03 PM, Moby wrote:
On 06/19/2015 02:01 PM, Anton Aylward wrote:
I note that on the btrfs list there have been recent (this month) patches to address RAID5/6 issues. One post says:
<quote src="Subject: Re: Does raid56 work on latest kernel? Date: Thu, 11 Jun 2015 04:14:25 +0000 (UTC) Message-ID: <pan$4058c$7888c445$459683cf$d6eac669@cox.net> "> ...
Technically, all the raid56 support should be there now -- it's code- complete. Practically, however, I've been recommending people continue to stay off of it for anything but pure data-loss-ok testing, for at *LEAST* another couple kernels, to shake out some of the inevitable bugs and let the code settle at least a /bit/.
And point-of-fact, we /did/ have some bad raid56 mode reports shortly after 4.0, from people who had /not/ let it shake out a bit and were trying to use it in normal-case situations. Whether they're actually fixed now (with the latest stable or with late 4.1-rcs for 4.1 release) I'm not sure, tho the number of bad reports has died down quite a bit, but I don't know whether that is people actually following the recommendation, or because it's fixed now. Either way, however, I'd not be using it myself, and couldn't recommend it for others except as I said purely for data-loss-doesn't-matter testing, until at LEAST 4.2, as there's still likely to be a few more critical bugs to shake out.
And even at 4.2, I'd still recommend raid56 mode only for those willing to be bleeding edge testers, if that's what it takes to be leading edge testers, because point-of-fact, that code still won't be as stable as the raid0/1/10 modes, which are basically as stable as btrfs itself is by this point. It's going to take some time, as well as the reports of those leading/bleeding edge testers, to shake out further bugs and stabilize that still very new code.
For more btrfs-mainstream users[1], I'd recommend waiting about a year, five kernel cycles, for btrfs raid56 mode to stabilize.
</quote>
Thanks Anton, and sorry if I did not mention it earlier: the RAID5 setup is using md raid, not btrfs ...
Same machine (kernel and btrfs-progs upgraded per Anton's suggestion). Doing a btrfs balance with -dusage=95 now shows a negative value for the percentage left!
Every 15.0s: sh -c date;btrfs balance status -v /          Wed Jun 24 12:29:12 2015

Wed Jun 24 12:29:12 CDT 2015
Balance on '/' is running
306 out of about 145 chunks balanced (312 considered), -111% left
Dumping filters: flags 0x1, state 0x1, force is off
  DATA (flags 0x2): balancing, usage=95
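The negative figure looks like plain arithmetic rather than corruption: if the progress display computes "percent left" from an estimated total chunk count, and more chunks end up being balanced than were estimated (306 balanced vs. "about 145" expected), the result goes below zero. A minimal sketch of that calculation (an assumption about what btrfs-progs does internally, not its actual code):

```python
# Hypothetical reconstruction of the "percent left" display.
# Assumption: the tool computes 100 - 100 * balanced / expected,
# where `expected` is an estimate that can be too low.
def percent_left(balanced: int, expected: int) -> int:
    return round(100 - 100 * balanced / expected)

# Numbers from the status output above: 306 balanced, ~145 expected.
print(percent_left(306, 145))  # -111, matching the "-111% left" shown
```

With an accurate estimate the value stays in 0-100; the negative number by itself suggests a bad chunk-count estimate, not on-disk damage.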
I upgraded today to kernel 4.1.0-1.gfcf8349-default and btrfs-progs v4.1+20150622, then kicked off a btrfs balance with no -dusage filter. So far it is running fine and is at 28% left. Keeping my fingers crossed that this update fixes the issue.
--
--Moby
They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety. -- Benjamin Franklin