On 12/23/2016 02:04 PM, Paul Neuwirth wrote:
> maybe another reason to switch back to ext4. btrfsck failed IIRC with
> exit code 234. ran btrfsck --repair /dev/md2 . now I started
> btrfsck --repair -p /dev/md2 and waiting for result (tons of lines).
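A side note on the quoted commands before the RAID discussion: running btrfsck --repair (btrfs check --repair in current tooling) straight away is risky, because --repair can make a damaged filesystem worse. A read-only check first, and a scrub if the filesystem still mounts, is the safer order. A hedged sketch, with /dev/md2 taken from the quoted mail and /mnt as an assumed mount point; the DRYRUN=echo guard only prints the commands, so clear it to actually run them:

```shell
#!/bin/sh
# Hedged sketch of a safer recovery order -- not a guaranteed fix.
# DRYRUN=echo prints each command instead of executing it; set DRYRUN=
# (empty) to run for real. Adjust DEV and MNT to your own setup.
DRYRUN=echo
DEV=/dev/md2
MNT=/mnt

# 1. Read-only check on the UNMOUNTED filesystem; this never writes.
$DRYRUN btrfs check "$DEV"

# 2. If the filesystem still mounts, scrub verifies all data against
#    its checksums and, on btrfs RAID 1, repairs bad blocks from the
#    good mirror. -B runs it in the foreground until it finishes.
$DRYRUN mount "$DEV" "$MNT"
$DRYRUN btrfs scrub start -B "$MNT"

# 3. Only as a last resort, after a backup of anything recoverable:
#    the repair the quoted mail ran.
$DRYRUN btrfs check --repair "$DEV"
```

With the guard left in place the script is safe to read and run anywhere; the point is the ordering, not the exact flags.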
The chances of btrfs RAID 1 itself failing are very slim, near zero, unless there is a controller issue or a problem with the drive(s) themselves. mdadm is software RAID, and it's not surprising that people run into issues with it; software RAID ("poor man's RAID") just isn't production-grade. If you want real RAID, you need a true controller card, something like a PERC, in a server with ECC RAM. A card like that has a real battery-backed cache, so if the server or workstation loses power, whatever was sitting in the buffer and hadn't been written to disk gets written the next time it powers up. It's easy to point the finger at the filesystem when the real culprit is shoddy hardware, or parity that got scrambled by software RAID.

The more recent btrfs RAID problems were with RAID 5/6, and supposedly those issues have been mostly fixed. btrfs does have numerous advantages over ext4, checksumming being one and snapshots another.

This is just my opinion, but I will only run btrfs with any type of RAID on a true RAID controller card. With mdadm, and with btrfs's own built-in software RAID, I believe users end up having more problems in the long run.

-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org