On 05/01/2017 05:00 AM, jdd@dodin.org wrote:
Hello,
I tested a new server install with three disks as raid 1. No problem so far.
But for reasons not related to the hardware, I finally decided not to use this server as the main one as originally planned, and so I want to reclaim some of the disks.
I found various but not very clear hints on the net, so I simply powered off the computer, removed the disk (easy, as this one was external eSATA), and rebooted.
The reboot happens without any warning, and yast2 does not show any problem.
As the removed disk is to be reformatted anyway, is there anything else I have to do to get a clean system on the server?
I only see this in the logs:
md/raid1:md127: active with 2 out of 3 mirrors
and
Started Activate md array even though degraded
Maybe I can reduce the boot time in some way?
thanks jdd
You actually should 'fail' and 'remove' the drive you are removing from the array; it will continue to run in degraded mode if you just pull the drive. It is simple to do with `mdadm /dev/mdX --fail /dev/partition`. For example, to remove the disk providing /dev/sdb1 to the md0 array, you can use:

  # mdadm /dev/md0 --fail /dev/sdb1

This places that disk in the 'failed' state so that it can be permanently removed from the array with:

  # mdadm /dev/md0 --remove /dev/sdb1

(Repeat for all sdbX partitions contained in the mdX array.) This properly removes the drives from the array metadata rather than leaving references to drives in a failed state.

You can check to confirm your array status, as always, with:

  # cat /proc/mdstat

for array specifics:

  # mdadm -D /dev/mdX

and for drive/partition specifics, e.g. for the sda1, sdb1, sdc1 partitions that comprise an array:

  # mdadm -E /dev/sd[abc]1

There are a number of good howtos out there. When all else fails, linux-raid@vger.kernel.org and the lead developer, Neil Brown, will happily walk you through any issues.

--
David C. Rankin, J.D., P.E.
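P.S. If you want to fail and remove in one pass for every partition the departing disk contributes, here is a minimal sketch (assuming the array is md0 and the disk provides sdb1 and sdb2; those names are just placeholders, adjust to your layout). mdadm's manage mode processes the options in order, so --fail followed by --remove in one invocation works:

  # for part in /dev/sdb1 /dev/sdb2; do
  >     mdadm /dev/md0 --fail "$part" --remove "$part"
  > done

And since the pulled disk is to be reformatted anyway, wiping its old RAID superblocks keeps mdadm from trying to auto-assemble it if it is ever reattached:

  # mdadm --zero-superblock /dev/sdb1 /dev/sdb2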