On 11/02/18 06:01, Andrei Borzenkov wrote:
> 10.02.2018 22:35, Wols Lists wrote:
>> On 10/02/18 19:04, jdd@dodin.org wrote:
>>>> What small change? Provided you've set the md mirror up correctly, you can take a single disk, stick it in a new machine, and boot. The only problem is that the boot may "fail" with "can't mount root" at first, until you force the array to assemble degraded. At that point you can add a new disk and rebuild the array.
>>> this is what I mean by "non obvious". I can't just connect it to a USB dock and read it
>> md won't assemble a broken array unless it was already broken when it was shut down.
> It will (or rather, during the boot sequence it will wait for some time and then force the degraded array to be started).
> "It" here does not mean the kernel md driver, but rather all the other components, including the user-level ones, that come with the mdadm package.
Well, I think that most of the md guys would consider that a distro (mis)feature.

If it's the root fs with a separate home, then it's not too bad - lose the OS and you're cursing, but it's recoverable. If it's home ... :-(

However, behind all of this is: you need to monitor! The system breaking on boot is not going to help you detect a broken raid/btrfs if the reason the system crashed is that raid/btrfs broke underneath it! I put a comment on the wiki about a datacentre with - probably - a petabyte array, where a tech just happened to notice that the raid-6 array had two "disk fail" red lights showing...

That's what triggered my outburst against the btrfs defaults - a naive user would think adding a second drive is a good idea. If you don't know what you're doing, IT ISN'T! If techies can screw up, what hope do lusers have?

Cheers,
Wol
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
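[On the "you need to monitor" point: mdadm itself can do this - run `mdadm --monitor` (most distros ship it as an mdmonitor service) with MAILADDR set in /etc/mdadm.conf and it will mail you on a disk failure. Failing that, even a cron job grepping /proc/mdstat beats red lights nobody looks at. A sketch of the latter - the sample mdstat text below is hard-coded so it runs anywhere; on a real box point MDSTAT at /proc/mdstat instead:]

```shell
#!/bin/sh
# Degraded-array check: in /proc/mdstat, each active array shows a
# status field like [UU] (all members up) or [U_] (a member missing).
# Any '_' inside that field means the array is running degraded.
MDSTAT="${MDSTAT:-/tmp/mdstat.sample}"

# Hard-coded sample of a degraded two-disk raid1 (one member gone),
# so the script is runnable without a real array:
cat > "$MDSTAT" <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda1[0]
      1048512 blocks [2/1] [U_]
unused devices: <none>
EOF

if grep -E '\[[U_]+\]' "$MDSTAT" | grep -q '_'; then
    echo "DEGRADED"
else
    echo "OK"
fi
```

Running it against the built-in sample prints "DEGRADED"; against a healthy [UU] array it prints "OK".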