Mailinglist Archive: opensuse (1231 mails)

Re: [opensuse] Root and boot partitions on a sw raid volume (opensuse 12.2 - 12.3 rc1 64bit)
2) /dev/md0 (which is mounted as /boot) is not correctly initialized
and, at first reboot, only /dev/sdb1 is an active member of the raid
volume.

sda1 is not even part of the config? Your array is running degraded?
I'm asking because it takes very specific commands to set up a RAID1
with only _one_ drive.
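For reference, the "very specific commands" mentioned here: creating a RAID1 that starts out with only one member requires the literal `missing` keyword, so a plain two-device create would not silently leave sda1 out. A sketch (device and array names are only examples):

```shell
# Create a RAID1 with one real member and one deliberately absent slot.
# The literal word "missing" stands in for the absent device.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# Later, hot-add the second partition; mdadm then starts a resync.
mdadm --manage /dev/md0 --add /dev/sda1
```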

After the first reboot this is the situation:

********************
linux:~ # cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
23069568 blocks super 1.0 [2/2] [UU]
resync=PENDING
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sdb1[1]
697280 blocks super 1.0 [2/1] [_U]
bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
1052608 blocks super 1.0 [2/2] [UU]
resync=PENDING
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
********************

Please note that /dev/md126 is the raid volume of interest here, which
I configured as /dev/md0, to be mounted as /boot.

I have seen situations with those numbers (md125/6/7) before - I am not
sure, but I think they were caused by left-over superblocks on drives
no longer configured/used for raid.
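If left-over superblocks are the suspect, they can be inspected and cleared before reinstalling; a sketch (only run the zeroing step on partitions you are certain should not be raid members, as it is destructive to the md metadata):

```shell
# Show any md superblock mdadm can find on the partition
mdadm --examine /dev/sda1

# If it is a leftover from an old array, erase it
mdadm --zero-superblock /dev/sda1
```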


I'm sorry, I had not fully explained the situation. The mdstat output displayed above is what I obtained after booting the machine with the recovery CD (the system alone was not yet able to boot properly). Device names like md125/126/127 are used when raid volumes are auto-detected by the recovery CD, which does not have an mdadm.conf file with the right definitions.
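For what it's worth, the auto-assembled md125/126/127 names can usually be pinned back to the intended ones by recording the running arrays into mdadm.conf; a sketch, assuming you can chroot into or otherwise edit the installed system:

```shell
# Append the currently assembled arrays to the config file, then
# edit the /dev/mdNNN names in it to the intended md0/md1/md2
mdadm --detail --scan >> /etc/mdadm.conf
```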


So I can conclude that the array was created with both sda1 and sdb1,
but, for some unknown reason, sda1 was pulled out before the first
synchronization.

Or rather, it was never added properly. Those device numbers are a clear
indication that something went wrong. If you search bugzilla for e.g.
md126, I'm sure you'll find a couple of hits.
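Whatever the original cause, the degraded /boot mirror can be completed by re-adding the missing partition; a sketch, assuming md126 is the auto-detected name of the boot array as in the mdstat output above:

```shell
# Hot-add sda1 back into the degraded mirror; the rebuild starts at once
mdadm --manage /dev/md126 --add /dev/sda1

# Watch the rebuild progress
watch cat /proc/mdstat
```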

If you can repeat the exercise, I would do a normal installation start,
then swap to a console (Ctrl-Alt-F1) and check the md status - if you
see any unwanted arrays, stop them, then swap back (Alt-F7) and
continue with the partitioner. Also, the yast logs will very likely
tell you what the problem is.
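The console check suggested above could look like this (md126 is only an example of an unwanted auto-assembled array):

```shell
# On tty1 during installation (Ctrl-Alt-F1):
cat /proc/mdstat          # list any arrays the installer auto-assembled

# Stop an unwanted array before returning to the partitioner (Alt-F7)
mdadm --stop /dev/md126
```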


I'll do some more tests, but I'm already quite sure that the disks were completely zeroed before the installation, because in a previous test I found it was almost impossible to use the yast partitioner on an already-partitioned disk (probably another yast bug).
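To be certain the disks really are clean before the next attempt, both the partition signatures and any raid metadata can be wiped; a sketch (destructive, so double-check the device names first):

```shell
# Remove all filesystem/raid signatures from the whole disk
wipefs -a /dev/sda

# Alternatively, zero just the md superblocks on individual partitions
mdadm --zero-superblock /dev/sda1
```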


--
To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
To contact the owner, e-mail: opensuse+owner@xxxxxxxxxxxx
