[opensuse] removing a disk on raid 1 array
Hello,

I tested a new server install with three disks as raid 1. No problem so far.

But for reasons not related to the hardware, I finally decided not to use this server as the main one as planned, and so I want to reclaim some disks.

I found various but not very clear clues on the net, so I simply powered off the computer, removed the disk (easy, as this one was external eSATA) and rebooted.

The reboot happens without any warning, and YaST2 does not show any problem. As the removed disk is to be reformatted anyway, is there anything else I have to do to get a clean system on the server?

I only see this in the logs:

md/raid1:md127: active with 2 out of 3 mirrors

and

Started Activate md array even though degraded

Maybe I can reduce boot time in some way?

thanks
jdd
On 2017-05-01 12:00, jdd@dodin.org wrote:
Hello,
I tested a new server install with three disks as raid 1. No problem so far.
But for reasons not related to the hardware, I finally decided not to use this server as the main one as planned, and so I want to reclaim some disks.
I found various but not very clear clues on the net, so I simply powered off the computer, removed the disk (easy, as this one was external eSATA) and rebooted.
The reboot happens without any warning, and YaST2 does not show any problem
as the removed disk is to be reformatted anyway,
There is something you have to overwrite, so that the disk is not detected as part of a raid array. I forget which. -- Cheers / Saludos, Carlos E. R. (from 42.2 x86_64 "Malachite" at Telcontar)
On 01/05/2017 at 12:18, Carlos E. R. wrote:
There is something you have to overwrite, so that the disk is not detected as part of a raid array. I forget which.
Yes, but YaST2 does a very good job. It was able to remove the previous partition and rebuild a fully workable ext4 partition. YaST had to remove LVM components.

jdd
jdd@dodin.org wrote:
Hello,
I tested a new server install with three disks as raid 1. No problem so far.
But for reasons not related to the hardware, I finally decided not to use this server as the main one as planned, and so I want to reclaim some disks.
I found various but not very clear clues on the net, so I simply powered off the computer, removed the disk (easy, as this one was external eSATA) and rebooted.
The reboot happens without any warning, and YaST2 does not show any problem.
As the removed disk is to be reformatted anyway, is there anything else I have to do to get a clean system on the server?
Remove that drive from the raid config.
I only see this in the logs:
md/raid1:md127: active with 2 out of 3 mirrors
So you still have a raid1 with two drives?
and
Started Activate md array even though degraded
Right, that is exactly what is happening. -- Per Jessen, Zürich (7.8°C) http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 01/05/17 11:00, jdd@dodin.org wrote:
Hello,
I tested a new server install with three disks as raid 1. No problem so far.
But for reasons not related to the hardware, I finally decided not to use this server as the main one as planned, and so I want to reclaim some disks.
I found various but not very clear clues on the net, so I simply powered off the computer, removed the disk (easy, as this one was external eSATA) and rebooted.
The reboot happens without any warning, and YaST2 does not show any problem.
As the removed disk is to be reformatted anyway, is there anything else I have to do to get a clean system on the server?
mdadm --remove /dev/sdx
I only see this in the logs:
md/raid1:md127: active with 2 out of 3 mirrors
and
Started Activate md array even though degraded
Maybe I can reduce boot time in some way?
Oh - and reformatting the removed drive will probably result in a very confused computer when you put it in the new computer. Or a trashed array if you put it back in the original server! Make sure you WIPE it.

There's an mdadm command (--zero-superblock, I think) that will actually delete the array metadata off the drive, or if you've got the time, "dd if=/dev/zero of=/dev/sdx" is best.

Don't forget also that things like GPT keep backups now - making sure your drive is clean is no longer as simple as overwriting the first 512 bytes of the drive. I used to do that all the time back in the days of BIOS and partition tables - it doesn't work any more :-(

Cheers,
Wol
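A minimal sketch of wiping a disk that used to be an md member, assuming the removed member partition is /dev/sdX1 on disk /dev/sdX (placeholder names - double-check the device before running anything, these commands destroy data):

# erase the md superblock so the partition is no longer detected as a raid member
mdadm --zero-superblock /dev/sdX1
# wipe the remaining filesystem / raid / partition-table signatures
# (depending on the wipefs version this may or may not catch the backup GPT at the end of the disk)
wipefs --all /dev/sdX
# or, if you have the time, zero the whole drive
dd if=/dev/zero of=/dev/sdX bs=1M status=progress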
On 01/05/2017 at 13:11, Wols Lists wrote:
mdadm --remove /dev/sdx
There is no longer any /dev/sdx.

For Per: yes, I had three disks previously, now two.

Most of the info comes from mdadm --detail /dev/md127, but where is this info stored?

jdd

# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.0
  Creation Time : Wed Dec 21 22:43:31 2016
     Raid Level : raid1
     Array Size : 976758592 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976758592 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Mon May  1 15:29:48 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : any:jdd-raid
           UUID : 4d57c009:5d7fc724:07376d9a:8ecb39a2
         Events : 9592

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       -       0        0        2      removed
On 01/05/17 09:44 AM, jdd@dodin.org wrote:
Most of the info comes from mdadm --detail /dev/md127, but where is this info stored?
When I need to answer questions like that, and this is generic, not specific to this case, I start with

	--help

Then the man page. Yes, man pages are often badly written, but they also give clues. Often, at the end of the man page, there is a "FILES" section that mentions config files and others, as is the case here.

Then, quite probably, I run the 'strings' command looking for any file references; probably that involves

	strings <binary> | grep "/"

Sometimes that's not the definitive answer. Sometimes you need to run "ldd" to find what libraries it uses. Just about everything uses libc and ld-linux; what you need to check out are the libraries that are specific to that application.

Like anything to do with computers in general and Linux in particular, this requires a bit of sense and thought; it's not a neat, deterministic algorithm.

In this case

	strings /sbin/mdadm | grep -e "/"

turns up quite a few items that are worth investigating. Heck, it even shows part of the "--help" that might be useful! Compared to some, I think this is an easy one.

-- 
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
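As a small illustration of that technique, assuming mdadm lives in /sbin as above:

# list path-like strings embedded in the binary
strings /sbin/mdadm | grep -e "^/"
# list the shared libraries it links against
ldd /sbin/mdadm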
On 01/05/2017 at 16:25, Anton Aylward wrote:
On 01/05/17 09:44 AM, jdd@dodin.org wrote:
Most of the info comes from mdadm --detail /dev/md127, but where is this info stored?
When I need to answer questions like that, and this is generic, not specific to this case, I start with
--help
I did, without finding what I was looking for :-(
Then the man page. Yes, man pages are often badly written, but they also give clues. Often, at the end of the man page, there is a "FILES" section that mentions config files and others, as is the case here.
Nope. mdadm.conf does not keep this info.
Then, quite probably, I run the 'strings' command looking for any file references; probably that involves
strings <binary> | grep "/"
less is enough, then search with /. But what should I search for? There is a lot of text in this binary.
In this case strings /sbin/mdadm | grep -e "/"
only gives me a > prompt? I didn't find the solution in the raid wiki either (but I should have read it before).

jdd
jdd@dodin.org wrote:
On 01/05/2017 at 13:11, Wols Lists wrote:
mdadm --remove /dev/sdx
There is no longer any /dev/sdx.
For Per: yes, I had three disks previously, now two.
It seems that the "failed" drive was automagically removed, so you just need to reduce the number of drives in the array to 2 (instead of 3). There is no doubt some mdadm --manage incantation for that. -- Per Jessen, Zürich (7.8°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 01/05/17 15:31, Per Jessen wrote:
jdd@dodin.org wrote:
On 01/05/2017 at 13:11, Wols Lists wrote:
mdadm --remove /dev/sdx
There is no longer any /dev/sdx.
For Per: yes, I had three disks previously, now two.
It seems that the "failed" drive was automagically removed, so you just need to reduce the number of drives in the array to 2 (instead of 3).
There is no doubt some mdadm --manage incantation for that.
Yep - my mistake with --remove. The magic incantation, iirc, is --raid-devices=2.

Cheers,
Wol
On 01/05/2017 at 12:00, jdd@dodin.org wrote:
md/raid1:md127: active with 2 out of 3 mirrors
the "3" seems to be in the superblock # mdadm -E /dev/sda1 /dev/sda1: Magic : a92b4efc Version : 1.0 Feature Map : 0x1 Array UUID : 4d57c009:5d7fc724:07376d9a:8ecb39a2 Name : any:jdd-raid Creation Time : Wed Dec 21 22:43:31 2016 Raid Level : raid1 Raid Devices : 3 (...) jdd -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On 01/05/2017 at 12:00, jdd@dodin.org wrote:
md/raid1:md127: active with 2 out of 3 mirrors
It was not that simple to find the linux-raid mailing list (it's here: http://vger.kernel.org/vger-lists.html#linux-raid), but it was very effective.

The answer is:

mdadm --grow /dev/md127 --raid-devices=2

"grow" can also shrink...

Thanks to Roman.

jdd
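For anyone following the same steps later, a minimal sketch of the shrink plus a quick check afterwards (md127 is the array name from this thread; adapt it to yours):

# tell md the mirror now has only two members
mdadm --grow /dev/md127 --raid-devices=2
# the array should now report 2 out of 2 devices and state "clean"
mdadm --detail /dev/md127
cat /proc/mdstat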
On Mon, 1 May 2017 19:47:03 +0200 "jdd@dodin.org" <jdd@dodin.org> wrote:
On 01/05/2017 at 12:00, jdd@dodin.org wrote:
md/raid1:md127: active with 2 out of 3 mirrors
It was not that simple to find the linux raid mailing list (it's here: http://vger.kernel.org/vger-lists.html#linux-raid), but very effective.
It's the first hit when I type linux raid mailing list into google?! But I'm glad they were able to help you.
On 01/05/2017 at 22:02, Dave Howorth wrote:
It's the first hit when I type linux raid mailing list into google?! But I'm glad they were able to help you.
But is it normal to have to use Google to find a mailing list? I found no link on the linux raid wiki.

thanks
jdd
On 01/05/17 22:06, jdd@dodin.org wrote:
On 01/05/2017 at 22:02, Dave Howorth wrote:
It's the first hit when I type linux raid mailing list into google?! But I'm glad they were able to help you.
But is it normal to have to use Google to find a mailing list? I found no link on the linux raid wiki.
From the home page of the raid wiki - and only a couple of lines down ... "Linux RAID issues are discussed in the linux-raid mailing list to be found at http://vger.kernel.org/vger-lists.html#linux-raid" Cheers, Wol
On Mon, 1 May 2017 22:46:39 +0100 Anthony Youngman <antlists@youngman.org.uk> wrote:
On 01/05/17 22:06, jdd@dodin.org wrote:
On 01/05/2017 at 22:02, Dave Howorth wrote:
It's the first hit when I type linux raid mailing list into google?! But I'm glad they were able to help you.
But is it normal to have to use Google to find a mailing list?
Why not? That's the purpose of search engines! People even use search engines to find files they've put on their own machines.
I found no link on the linux raid wiki
From the home page of the raid wiki - and only a couple of lines down ...
"Linux RAID issues are discussed in the linux-raid mailing list to be found at http://vger.kernel.org/vger-lists.html#linux-raid"
Cheers, Wol
On 01/05/2017 at 23:46, Anthony Youngman wrote:
From the home page of the raid wiki - and only a couple of lines down ...
"Linux RAID issues are discussed in the linux-raid mailing list to be found at http://vger.kernel.org/vger-lists.html#linux-raid"
Yes, I searched in the TOC and in the left menu, not in the upper part.

As I tend to fix things myself, I asked for an edit login to the linux raid wiki and added the mailing list title to the bottom of the page (I didn't remove the previous link), just for dumb people like me.

I also slightly modified the wiki page
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Removing_a_disk_from...
and by doing so noticed that you are the main author of these pages. Congratulations, this wiki is very good :-))

jdd
On 02/05/17 08:47, jdd@dodin.org wrote:
On 01/05/2017 at 23:46, Anthony Youngman wrote:
From the home page of the raid wiki - and only a couple of lines down ...
"Linux RAID issues are discussed in the linux-raid mailing list to be found at http://vger.kernel.org/vger-lists.html#linux-raid"
yes
I searched in the TOC and in the left menu, not in the upper part.
As I tend to fix things myself, I asked for an edit login to the linux raid wiki and added the mailing list title to the bottom of the page (I didn't remove the previous link), just for dumb people like me.
I also slightly modified the wiki page
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm#Removing_a_disk_from...
and by doing so noticed that you are the main author of these pages. Congratulations, this wiki is very good :-))
"If I can see so far, it is only because I stand on the shoulders of giants". Yes, I've done a lot of hard work updating the wiki. But most of the grunt work was done by the people who originally created it - I've only taken their work and brought it in to the current day. But thanks for the compliment :-) for them as well as me. Cheers, Wol -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On 05/01/2017 05:00 AM, jdd@dodin.org wrote:
Hello,
I tested a new server install with three disks as raid 1. No problem so far.
But for reasons not related to the hardware, I finally decided not to use this server as the main one as planned, and so I want to reclaim some disks.
I found various but not very clear clues on the net, so I simply powered off the computer, removed the disk (easy, as this one was external eSATA) and rebooted.
The reboot happens without any warning, and YaST2 does not show any problem.
As the removed disk is to be reformatted anyway, is there anything else I have to do to get a clean system on the server?
I only see this in the logs:
md/raid1:md127: active with 2 out of 3 mirrors
and
Started Activate md array even though degraded
Maybe I can reduce boot time in some way?
thanks jdd
You actually should 'fail' and 'remove' the drive you are removing from the array. It will continue to run in degraded mode if you just pull the drive.

Simple to do, just `mdadm /dev/mdX --fail /dev/partition`, e.g. to remove the disk providing /dev/sdb1 to the md0 array, you can use:

# mdadm /dev/md0 --fail /dev/sdb1

This will place that disk in the 'failed' state so that it can be permanently removed from the array with

# mdadm /dev/md0 --remove /dev/sdb1

(repeat for all sdbX partitions contained in mdX arrays). This will properly remove the drives from the array metadata rather than leaving references to drives in a failed state.

You can check to confirm your array status, as always, with:

# cat /proc/mdstat

and for array specifics

# mdadm -D /dev/mdX

and for drive/partition specifics, e.g. for the sda1, sdb1, sdc1 partitions that comprise an array:

# mdadm -E /dev/sd[abc]1

There are a number of good howtos out there. When all else fails, linux-raid@vger.kernel.org and the lead developer, Neil Brown, will happily walk you through any issues.

-- 
David C. Rankin, J.D., P.E.
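Putting the advice from this thread together, a minimal sketch of retiring a disk cleanly before pulling it. The names /dev/md127, /dev/sdc1 and /dev/sdc are only placeholders for the array and the member being removed, so double-check yours first:

# mark the member failed, then drop it from the array
mdadm /dev/md127 --fail /dev/sdc1
mdadm /dev/md127 --remove /dev/sdc1
# shrink the mirror so md stops expecting a third device
mdadm --grow /dev/md127 --raid-devices=2
# on the pulled disk, erase the md superblock and other signatures so it is not detected as a raid member elsewhere
mdadm --zero-superblock /dev/sdc1
wipefs --all /dev/sdc
# verify
cat /proc/mdstat
mdadm --detail /dev/md127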
participants (8)
- Anthony Youngman
- Anton Aylward
- Carlos E. R.
- Dave Howorth
- David C. Rankin
- jdd@dodin.org
- Per Jessen
- Wols Lists