[opensuse] cannot delete a RAID drive
I am trying to free up a partition on my work desktop. One of the partitions on the extra disk is part of an old RAID-1 configuration, and I am trying to delete the RAID device from the system and re-format that partition. However, whenever I try to use fdisk, it re-activates the RAID even though I have already removed it. Here is my partition table:

# fdisk -l /dev/sdb
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x12c2c333

Device     Boot     Start        End    Sectors   Size Id Type
/dev/sdb1            2048     206847     204800   100M  7 HPFS/NTFS/exFAT
/dev/sdb2          206848  324458495  324251648 154.6G  7 HPFS/NTFS/exFAT
/dev/sdb3       324458496  358275071   33816576  16.1G 82 Linux swap / Solaris
/dev/sdb4  *    358275072 1953523711 1595248640 760.7G  5 Extended
/dev/sdb5       358277120  419717119   61440000  29.3G 83 Linux
/dev/sdb6       419719168  481159167   61440000  29.3G 83 Linux
/dev/sdb7       481161216 1953523711 1472362496 702.1G 83 Linux

The device in question is /dev/sdb5. Note how, if I look at the mdstat file, it shows it is active. So I try to fail the drive, but it won't let me. Then I stop the RAID device and remove it, and it shows it is no longer there. Seems like it should work.
facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb5[0]
      30703616 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdb7[0]
      736050176 blocks super 1.2 [2/1] [U_]
      bitmap: 6/6 pages [24KB], 65536KB chunk

unused devices: <none>

facofficeeng02:/home/george # cat /etc/mdadm.conf
ARRAY /dev/md/facofficeeng02:1 UUID=a8872a97:eecd6828:f759773c:cb18678e

facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb5[0]
      30703616 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdb7[0]
      736050176 blocks super 1.2 [2/1] [U_]
      bitmap: 6/6 pages [24KB], 65536KB chunk

unused devices: <none>

facofficeeng02:/home/george # mdadm --fail /dev/md0 /dev/sdb5
mdadm: set device faulty failed for /dev/sdb5: Device or resource busy
facofficeeng02:/home/george # mdadm --stop /dev/md0
mdadm: stopped /dev/md0
facofficeeng02:/home/george # mdadm --remove /dev/md0
facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb7[0]
      736050176 blocks super 1.2 [2/1] [U_]
      bitmap: 6/6 pages [24KB], 65536KB chunk

unused devices: <none>

However, when I then run fdisk again, just to try and mess with it, the array re-activates itself:

facofficeeng02:/home/george # fdisk /dev/sdb

Welcome to fdisk (util-linux 2.28).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): q

facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb5[0]
      30703616 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sdb7[0]
      736050176 blocks super 1.2 [2/1] [U_]
      bitmap: 6/6 pages [24KB], 65536KB chunk

unused devices: <none>

I wanted to use fdisk to try and do something to that partition, but the array is reactivated as soon as I run fdisk. I tried gdisk as an experiment (I can't actually use gdisk, because the drive is MBR, not GPT), and it does the same thing: the array re-activates itself.
How do I completely stop my system from seeing the RAID in this partition and reactivating it? I removed all the information for that array from the /etc/mdadm.conf file, so it is not getting the information from there. I am on a Leap 42.2 system.
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On 31.01.2017 06:33, george from the tribe wrote: ...
Note how if I look at the mdstat file, it shows it is active. So I try to fail the drive, but it won't let me. Then I stop the raid drive and remove it, and it shows it is no longer there. Seems like it should work.
facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb5[0]
      30703616 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdb7[0]
      736050176 blocks super 1.2 [2/1] [U_]
      bitmap: 6/6 pages [24KB], 65536KB chunk
unused devices: <none>

facofficeeng02:/home/george # cat /etc/mdadm.conf
ARRAY /dev/md/facofficeeng02:1 UUID=a8872a97:eecd6828:f759773c:cb18678e

facofficeeng02:/home/george # mdadm --fail /dev/md0 /dev/sdb5
mdadm: set device faulty failed for /dev/sdb5: Device or resource busy
facofficeeng02:/home/george # mdadm --stop /dev/md0
mdadm: stopped /dev/md0
facofficeeng02:/home/george # mdadm --remove /dev/md0
This does not remove anything, so it comes back after the next disk rescan. Use "mdadm --zero-superblock" or "wipefs" to remove the Linux MD signature from that partition.
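A minimal sketch of the removal sequence Andrei describes, using the device names from this thread (run as root, and only after double-checking the partition name, since zeroing the superblock is destructive):

```shell
# Stop the degraded array so the kernel releases its member partition.
mdadm --stop /dev/md0

# Erase the MD superblock on the member itself (the partition, not the
# md device), so auto-assembly finds nothing at the next partition rescan.
mdadm --zero-superblock /dev/sdb5

# Alternatively, wipefs can list and then remove all known signatures:
wipefs /dev/sdb5        # with no options, only lists what it finds
wipefs --all /dev/sdb5  # erases the detected signatures
```

The key point is that the superblock lives on the member partition; stopping or "removing" the assembled md device never touches it.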
On 01/31/2017 11:51 AM, Andrei Borzenkov wrote:
On 31.01.2017 06:33, george from the tribe wrote: ...
Note how if I look at the mdstat file, it shows it is active. So I try to fail the drive, but it won't let me. Then I stop the raid drive and remove it, and it shows it is no longer there. Seems like it should work.
This does not remove anything, so it comes back after the next disk rescan. Use "mdadm --zero-superblock" or "wipefs" to remove the Linux MD signature from that partition.
Thanks, that did it! Here are my results:

facofficeeng02:/home/george # mdadm --stop /dev/md0
mdadm: stopped /dev/md0
facofficeeng02:/home/george # mdadm --remove /dev/md0
facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda7[2] sdb7[0]
      736050176 blocks super 1.2 [2/2] [UU]
      bitmap: 2/6 pages [8KB], 65536KB chunk

unused devices: <none>

facofficeeng02:/home/george # mdadm --zero-superblock /dev/sdb5
facofficeeng02:/home/george # fdisk /dev/sdb

Welcome to fdisk (util-linux 2.28).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x12c2c333

Device     Boot     Start        End    Sectors   Size Id Type
/dev/sdb1            2048     206847     204800   100M  7 HPFS/NTFS/exFAT
/dev/sdb2          206848  324458495  324251648 154.6G  7 HPFS/NTFS/exFAT
/dev/sdb3       324458496  358275071   33816576  16.1G 82 Linux swap / Solaris
/dev/sdb4  *    358275072 1953523711 1595248640 760.7G  5 Extended
/dev/sdb5       358277120  419717119   61440000  29.3G 83 Linux
/dev/sdb6       419719168  481159167   61440000  29.3G 83 Linux
/dev/sdb7       481161216 1953523711 1472362496 702.1G 83 Linux

Command (m for help): q

facofficeeng02:/home/george # cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda7[2] sdb7[0]
      736050176 blocks super 1.2 [2/2] [UU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

unused devices: <none>

Note that the "zero-superblock" option has to specify the partition itself, not the md array, I found out.
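As a sanity check after zeroing (a sketch; note that `wipefs` with no options only reads, it does not erase anything):

```shell
# List any remaining filesystem/RAID signatures on the partition.
# Empty output means md auto-assembly has nothing left to detect there.
wipefs /dev/sdb5

# The whole disk should now show only its partition-table signature.
wipefs /dev/sdb
```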
* george from the tribe
I am trying to free up a partition on my work desktop. One of the partitions on the extra disk is part of an old RAID-1 configuration, and I am trying to delete the raid drive from the system and re-format that partition. However, whenever I try to use fdisk, it re-activates the RAID even though I have already removed it.
try as root:

  yast2 disk

--
(paka) Patrick Shanahan, Plainfield, Indiana, USA  @ptilopteri
http://en.opensuse.org  openSUSE Community Member  facebook/ptilopteri
Photos: http://wahoo.no-ip.org/gallery2  Registered Linux User #207535
Photos: http://wahoo.no-ip.org/piwigo @ http://linuxcounter.net
On 31/01/17 03:33, george from the tribe wrote:
How do I completely get rid of my system seeing the raid in this partition and reactivating itself? I removed all the information from the /etc/mdadm.conf file for that drive, so it is not getting information from there.
As you've found, mdadm.conf is optional. mdadm by default scans the drives looking for superblocks. My system doesn't even have an mdadm.conf.

Cheers, Wol
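To see exactly what that scan would find, independent of anything in mdadm.conf, standard mdadm options can be used (a sketch; the device name is the one from this thread):

```shell
# Print an ARRAY line for every array whose member superblocks are
# currently detectable on any block device.
mdadm --examine --scan

# Inspect the MD superblock on one specific member partition.
mdadm --examine /dev/sdb5
```

If `--examine --scan` still lists the old array, its superblock is still on disk and auto-assembly will keep resurrecting it.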
On 01/31/2017 06:45 PM, Wols Lists wrote:
On 31/01/17 03:33, george from the tribe wrote:
How do I completely get rid of my system seeing the raid in this partition and reactivating itself? I removed all the information from the /etc/mdadm.conf file for that drive, so it is not getting information from there.
As you've found, mdadm.conf is optional. mdadm by default scans the drives looking for superblocks. My system doesn't even have an mdadm.conf.
Cheers, Wol
Yes, good to know now. I had always just assumed that the system scanned for superblocks and looked to mdadm.conf to correlate. I am wondering, though: does it get the device numbers from mdadm.conf? When I boot into another Linux-based system, like gparted or the rescue system, it gives me funny md numbers, like md126 and md127. My mdadm.conf looks like this in Leap:

# cat /etc/mdadm.conf
ARRAY /dev/md/facofficeeng02:1 UUID=a8872a97:eecd6828:f759773c:cb18678e

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda7[2] sdb7[0]
      736050176 blocks super 1.2 [2/2] [UU]
      bitmap: 2/6 pages [8KB], 65536KB chunk

unused devices: <none>

So it makes the device md1. I suppose I might test it at some point and see whether changing :1 to :2 in mdadm.conf would change the device number from md1 to md2.
On 01/02/2017 00:15, george from the tribe wrote:
On 01/31/2017 06:45 PM, Wols Lists wrote:
As you've found, mdadm.conf is optional. mdadm by default scans the drives looking for superblocks. My system doesn't even have an mdadm.conf.
Cheers, Wol
Yes, good to know now. I had always just assumed that the system scanned for superblocks and looked to mdadm.conf to correlate. I am wondering, though, does it get the drive numbers from mdadm.conf? When I boot into another linux based system, like gparted or the rescue system, it gives me funny mdadm numbers, like md126 and md127. My mdadm.conf looks like this in Leap:
# cat /etc/mdadm.conf
ARRAY /dev/md/facofficeeng02:1 UUID=a8872a97:eecd6828:f759773c:cb18678e

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda7[2] sdb7[0]
      736050176 blocks super 1.2 [2/2] [UU]
      bitmap: 2/6 pages [8KB], 65536KB chunk

unused devices: <none>
So it makes the device md1. I suppose I might test it at some point and see whether changing :1 to :2 in mdadm.conf would change the device number from md1 to md2.
You're supposed to create named arrays so they are unique - a bit like drives get UUIDs.

The reason is simple: as the system boots, it detects the hard drives and allocates sda, sdb, et al. in the order it finds them. THIS IS EXPLICITLY NOT GUARANTEED TO BE REPRODUCIBLE. Then udev comes along and detects the array components, passing them to mdadm. THIS ORDER IS EXPLICITLY NOT GUARANTEED TO BE REPRODUCIBLE. And mdadm allocates numbers - from 127 downwards - to the arrays in the order they are passed to it.

It just so happens - on x86 - that most systems have all these detects occur in a reproducible order, which fools us into thinking that everything should be the same every boot. But even on x86, as soon as you add hot-plug or whatever, this breaks down.

Looking at your mdadm.conf, I'm guessing that the computer *the array was created on* was called facofficeeng02? And that this was the first (only) array on that machine? I suspect you can use that name safely, you can certainly use the UUID safely, and you're best off using your own names. The computer will use - *random* - sda, sdb, md126, md127, but it will always provide symlinks using the UUID or the name you provide, and it will keep those symlinks correct for you.

You might find your md1 and md2 are reproducible, but as I don't have an mdadm.conf, mine count down from 127. And I'm a bit surprised you have md1 and md2 etc.

Cheers, Wol
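The naming scheme Wol describes can be captured with standard mdadm usage (a sketch; the paths are the conventional udev locations, not output from this system):

```shell
# Emit an ARRAY line (name + UUID) for every running array; appending
# these lines to /etc/mdadm.conf pins the array names across boots,
# regardless of the order in which udev hands members to mdadm.
mdadm --detail --scan

# The kernel device number (md1, md126, md127, ...) may vary per boot,
# but udev maintains stable symlinks under /dev/md/ that follow the
# array's name and UUID; scripts and fstab should reference those.
ls -l /dev/md/
```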
participants (5)
-
Andrei Borzenkov
-
Anthonys Lists
-
george from the tribe
-
Patrick Shanahan
-
Wols Lists