mdadm --zero-superblock fails on md raid
SLES10, x86_64. Desired setup: disks 1 & 2 each have 2 partitions mirrored, resulting in two md devices (md0 as raw RAID1 and md1 with an LVM VG on top of it); disks 3-5 are in RAID5 with a single LVM VG on top. This setup is easily achieved during manual installation and works nicely. Attempting to do the same during autoinstallation fails, using a profile generated from the system that was previously installed manually. YaST complains three times that "A logical volume with the requested size could not be created." during the partition plan creation step, then fails with a -4004 error during the actual creation step.

Looking at y2log, I found some weird output. First, everything seems fine:

INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 1 " 1 logical volume(s) in volume group "data" now active"
INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 2 " 3 logical volume(s) in volume group "system" now active"

Later, both VGs are successfully added and have correct sizes:

INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 1 " "data" 273.43 GB [273.40 GB used / 24.00 MB free]"
INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 2 " "system" 33.86 GB [33.80 GB used / 56.00 MB free]"

Then, at some point, YaST has problems accessing md1, recreates it, and decides to grab the first VG's space (md1, system) for the second VG (data, which is supposed to use md2 only):

INFO libstorage(6082) - MdCo.cc(getMdData):163 mdstat line:md1 : active raid1 cciss/c0d1p2[1] cciss/c0d0p2[0]
INFO libstorage(6082) - LvmVg.cc(doCreatePv):1261 dev:/dev/md1
INFO libstorage(6082) - SystemCmd.cc(execute):160 SystemCmd Executing:"mdadm --zero-superblock /dev/md1"
INFO libstorage(6082) - SystemCmd.cc(addLine):625 Adding Line 1 "mdadm: Unrecognised md component device - /dev/md1"
INFO libstorage(6082) - SystemCmd.cc(logOutput):636 stderr:mdadm: Unrecognised md component device - /dev/md1
...
INFO libstorage(6082) - Storage.cc(removeDmMapsTo):3364 dm:Lv Device:/dev/data/var Name:var SizeK:286683136 Node <0:0> created format fs:reiserfs det_fs:none alt_names:</dev/mapper/data-var /dev/mapper/data-var> LE:69991 Table:data-var Target: pe_map:</dev/md1:8667 /dev/md2:61324>
...
INFO libstorage(6082) - SystemCmd.cc(addLine):625 Adding Line 1 " Physical volume "/dev/md1" successfully created"
INFO libstorage(6082) - Storage.cc(removeDmMapsTo):3364 dm:Lv Device:/dev/data/var Name:var SizeK:286683136 Node <0:0> created format fs:reiserfs det_fs:none alt_names:</dev/mapper/data-var /dev/mapper/data-var> LE:69991 Table:data-var Target: pe_map:</dev/md1:8667 /dev/md2:61324>
...
INFO libstorage(6082) - Container.cc(commitChanges):111 vol:Device:/dev/md1 Nr:1 SizeK:35503650 Node <9:1> created UsedBy:lvm[data]
...
INFO libstorage(6082) - Storage.cc(setUsedBy):4099 dev:/dev/md1 usedBy 1:data ret:1
INFO libstorage(6082) - Storage.cc(setUsedBy):4099 dev:/dev/md2 usedBy 1:data ret:1

So the main problem seems to be mdadm being unable to work with /dev/md1, which is supposedly active (according to mdstat) and has an LVM VG on top of it. Is this a problem with the profile or any of the support tools? How could this be resolved?

--
Rich
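PS: for reference, the manual setup that works is roughly the following. The device names are taken from the y2log above; the partitions used for the RAID5 set and the VG assignment are my best guess at what I actually did:

  # mirrors on disks 1 and 2
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/cciss/c0d0p1 /dev/cciss/c0d1p1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/cciss/c0d0p2 /dev/cciss/c0d1p2
  # raid5 over disks 3-5
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/cciss/c0d2p1 /dev/cciss/c0d3p1 /dev/cciss/c0d4p1
  # one VG per array: "system" on md1, "data" on md2
  pvcreate /dev/md1 /dev/md2
  vgcreate system /dev/md1
  vgcreate data /dev/md2

As far as I can tell, mdadm --zero-superblock expects a member device (e.g. /dev/cciss/c0d0p2), not the assembled /dev/md1 itself, since /dev/md1 carries no md superblock of its own - which would explain the "Unrecognised md component device" message.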
I currently have SLES10 set up to support x86 (i586) processors. Now I need to configure the repository to also support the x86_64 and IA64 architectures in addition to my x86 processors. Does anyone have any recommendations for setting up a single repository for all 3 architectures, or do they need to be kept separate? Also, since I'm new to the other 2 architectures this is probably a stupid question, but is it possible to use the same boot image to install all 3, or does each need its own boot image (CD or PXE, preferably CD)?
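What I had in mind is one server with a separate tree per architecture, something along these lines (paths are only an example, not what I have in place today):

  mkdir -p /srv/install/sles10/i586     # copy of the x86 media I use today
  mkdir -p /srv/install/sles10/x86_64
  mkdir -p /srv/install/sles10/ia64

Would that work, or can the three architectures share a single tree?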
On Wednesday 15 November 2006 12:41, Rich wrote:
Is this a problem with the profile or any of the support tools? How could this be resolved?
Have you tried the driverupdate for SLES10 from ftp://ftp.suse.com/pub/people/ug/autoyast/ yet? There was a bug with multiple LVM groups. But I can't say if that driverupdate solves your problem or if there is another problem with RAID too.

--
ciao, Uwe Gansert

Uwe Gansert, Server Technologies Team
SUSE LINUX Products GmbH, Maxfeldstrasse 5, D-90409 Nuernberg, Germany
Business: http://www.suse.de/~ug
now playing Diary Of Dreams - Giftraum
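PS: in case the mechanism isn't obvious - if I remember right, you copy the downloaded file under the name "driverupdate" into the root directory of your installation source (next to boot/ and media.1/) and linuxrc picks it up automatically at the next installation. Roughly (the path is only an example):

  # /srv/install/sles10-x86_64 stands for wherever your installation source lives
  cp driverupdate /srv/install/sles10-x86_64/driverupdate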
Uwe Gansert wrote:
On Wednesday 15 November 2006 12:41, Rich wrote:
Is this a problem with the profile or any of the support tools? How could this be resolved?
Have you tried the driverupdate for SLES10 from ftp://ftp.suse.com/pub/people/ug/autoyast/ yet? There was a bug with multiple LVM groups.
The driverupdate has fixed the problem, thanks. Btw, are updates like these (concerning autoyast) announced somewhere in a uniform way? :)
But I can't say if that driverupdate solves your problem or if there is another problem with RAID too.

--
Rich
On Friday 17 November 2006 11:51, Rich wrote:
The driverupdate has fixed the problem, thanks. Btw, are updates like these (concerning autoyast) announced somewhere in a uniform way? :)
It's always a good idea to keep an eye on this list for that. Sometimes I announce stuff like that here, or at least mention it in a thread. Reading the changes files on www.suse.de/~ug is always good too.

--
ciao, Uwe Gansert

Uwe Gansert, Server Technologies Team
SUSE LINUX Products GmbH, Maxfeldstrasse 5, D-90409 Nuernberg, Germany
Business: http://www.suse.de/~ug
now playing :Wumpscut: - Dying Culture [Second Movement]
participants (3)
- Rich
- Stephens, Bill {PBSG}
- Uwe Gansert