Re: [opensuse] Root and boot partitions on a sw raid volume (opensuse 12.2 - 12.3 rc1 64bit)

>> 2) /dev/md0 (which is mounted as /boot) is not correctly initialized
>> and, at first reboot, only /dev/sdb1 is an active part of the raid
>> volume.
>
> sda1 is not even part of the config? Your array is running degraded?
> I'm asking because it takes very specific commands to set up a RAID1
> with only _one_ drive.
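
For reference, deliberately creating a two-way RAID1 with only one member takes the literal keyword "missing" as the second device; a minimal sketch, with illustrative device names:

********************
# create a degraded RAID1 on purpose: the second slot is "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# add the real second member later and let it resync
mdadm --manage /dev/md0 --add /dev/sda1
********************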

After the first reboot this is the situation:

********************
linux:~ # cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
23069568 blocks super 1.0 [2/2] [UU]
resync=PENDING
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sdb1[1]
697280 blocks super 1.0 [2/1] [_U]
bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
1052608 blocks super 1.0 [2/2] [UU]
resync=PENDING
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
********************
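
Note that the arrays come up "active (auto-read-only)" and the two complete ones show resync=PENDING; md starts the resync only on the first write. If needed, it can be kicked off by hand, something along these lines:

********************
# switch the array to read-write, which releases the pending resync
mdadm --readwrite /dev/md125
********************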

Please note that /dev/md126 is the RAID volume of interest here; I configured it as /dev/md0, to be mounted as /boot.
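
By the way, the md125-md127 numbering usually means the arrays were auto-assembled without matching entries in /etc/mdadm.conf. Assuming that is what happened here, a line like the following (with the array UUID that mdadm --detail reports below) should pin the preferred name:

********************
# /etc/mdadm.conf -- UUID taken from the mdadm --detail output below
ARRAY /dev/md0 metadata=1.0 UUID=70fb55c6:47dfef14:7f280172:f5642bcd
********************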

********************
linux:~ # mdadm --detail /dev/md126
/dev/md126:
Version : 1.0
Creation Time : Fri Mar 22 10:50:00 2013
Raid Level : raid1
Array Size : 697280 (681.05 MiB 714.01 MB)
Used Dev Size : 697280 (681.05 MiB 714.01 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Fri Mar 22 11:01:17 2013
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : linux:0
UUID : 70fb55c6:47dfef14:7f280172:f5642bcd
Events : 8

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
********************

So the array is in degraded mode, and /dev/sda1 looks like it was removed.

If I examine the two individual devices directly, I get this:

********************
linux:~ # mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 70fb55c6:47dfef14:7f280172:f5642bcd
Name : linux:0
Creation Time : Fri Mar 22 10:50:00 2013
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 1394664 (681.10 MiB 714.07 MB)
Array Size : 697280 (681.05 MiB 714.01 MB)
Used Dev Size : 1394560 (681.05 MiB 714.01 MB)
Super Offset : 1394672 sectors
State : clean
Device UUID : c9a37b38:98bda5c8:8e38f11b:6701829e

Internal Bitmap : -8 sectors from superblock
Update Time : Fri Mar 22 11:01:17 2013
Checksum : 976c3f59 - correct
Events : 8


Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing)




linux:~ # mdadm --examine /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 70fb55c6:47dfef14:7f280172:f5642bcd
Name : linux:0
Creation Time : Fri Mar 22 10:50:00 2013
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 1394664 (681.10 MiB 714.07 MB)
Array Size : 697280 (681.05 MiB 714.01 MB)
Used Dev Size : 1394560 (681.05 MiB 714.01 MB)
Super Offset : 1394672 sectors
State : active
Device UUID : de82b920:cd0eb164:f7fbfd57:7c27ca4d

Internal Bitmap : -8 sectors from superblock
Update Time : Fri Mar 22 10:50:05 2013
Checksum : 709567b - correct
Events : 1


Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing)
********************

So I can conclude that the array was created with both sda1 and sdb1, but, for some unknown reason, sda1 was pulled out before the first synchronization. The event counters above support this: sdb1 is at 8 events while sda1 is still at 1, so mdadm treated sda1 as stale and left it out at assembly.
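
If so, re-adding the stale member and letting it resync should repair the array; a sketch, using the device names from the output above (double-check them before running):

********************
# put /dev/sda1 back into the /boot array and watch the rebuild
mdadm --manage /dev/md126 --add /dev/sda1
cat /proc/mdstat
********************

On openSUSE it is probably also worth regenerating the initrd afterwards (mkinitrd), so the array is assembled complete at the next boot.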

> Yes please, open a bug report. Your work-arounds look good to me, but
> they're less important. Judging by the lack of response to your
> postings here, very few people are installing onto RAID1.


I'm going to open the bug report.

I can't explain why so few people use a RAID configuration. In the beginning it was honestly quite complicated, but since around 2007-2008 many distros have supported it directly in the installer, which makes configuration much easier.


