SLES10, x86_64. Desired setup: disks 1 & 2 each have two partitions mirrored, resulting in two md devices (md0 as raw RAID1 and md1 with an LVM VG on top of it); disks 3-5 are in RAID5 with a single LVM VG on top. This setup is easily achieved during a manual installation and works nicely.

Attempting the same during autoinstallation fails, using a profile generated from the system that was previously installed manually. YaST complains three times that "A logical volume with the requested size could not be created." during the partition-plan creation step, then fails with a -4004 error during the actual creation step.

Looking at y2log, I found some weird output. At first, everything seems fine:

INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 1 " 1 logical volume(s) in volume group "data" now active"
INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 2 " 3 logical volume(s) in volume group "system" now active"

Later, the two VGs are successfully added and have correct sizes:

INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 1 " "data" 273.43 GB [273.40 GB used / 24.00 MB free]"
INFO libstorage(3297) - SystemCmd.cc(addLine):625 Adding Line 2 " "system" 33.86 GB [33.80 GB used / 56.00 MB free]"

Then, at some point, YaST has problems accessing md1, recreates it, and decides to grab the first VG's space (md1, system) for the second VG (data, which is supposed to use md2 only):

INFO libstorage(6082) - MdCo.cc(getMdData):163 mdstat line:md1 : active raid1 cciss/c0d1p2[1] cciss/c0d0p2[0]
INFO libstorage(6082) - LvmVg.cc(doCreatePv):1261 dev:/dev/md1
INFO libstorage(6082) - SystemCmd.cc(execute):160 SystemCmd Executing:"mdadm --zero-superblock /dev/md1"
INFO libstorage(6082) - SystemCmd.cc(addLine):625 Adding Line 1 "mdadm: Unrecognised md component device - /dev/md1"
INFO libstorage(6082) - SystemCmd.cc(logOutput):636 stderr:mdadm: Unrecognised md component device - /dev/md1
...
INFO libstorage(6082) - Storage.cc(removeDmMapsTo):3364 dm:Lv Device:/dev/data/var Name:var SizeK:286683136 Node <0:0> created format fs:reiserfs det_fs:none alt_names:</dev/mapper/data-var /dev/mapper/data-var> LE:69991 Table:data-var Target: pe_map:</dev/md1:8667 /dev/md2:61324>
...
INFO libstorage(6082) - SystemCmd.cc(addLine):625 Adding Line 1 " Physical volume "/dev/md1" successfully created"
INFO libstorage(6082) - Storage.cc(removeDmMapsTo):3364 dm:Lv Device:/dev/data/var Name:var SizeK:286683136 Node <0:0> created format fs:reiserfs det_fs:none alt_names:</dev/mapper/data-var /dev/mapper/data-var> LE:69991 Table:data-var Target: pe_map:</dev/md1:8667 /dev/md2:61324>
...
INFO libstorage(6082) - Container.cc(commitChanges):111 vol:Device:/dev/md1 Nr:1 SizeK:35503650 Node <9:1> created UsedBy:lvm[data]
...
INFO libstorage(6082) - Storage.cc(setUsedBy):4099 dev:/dev/md1 usedBy 1:data ret:1
INFO libstorage(6082) - Storage.cc(setUsedBy):4099 dev:/dev/md2 usedBy 1:data ret:1

So the main problem seems to be mdadm being unable to work with /dev/md1, which is supposedly active (according to mdstat) and has an LVM VG on top of it. Is this a problem with the profile, or with one of the supporting tools? How could this be resolved?

-- Rich
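
PS: for comparison, this is roughly what the manual setup boils down to (a sketch only; the partition numbers on the RAID5 disks and the LV size are assumptions, the cciss names and VG/LV names are taken from the log above):

  # RAID1 pairs on disks 1 & 2: md0 stays raw, md1 carries the "system" VG
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/cciss/c0d0p1 /dev/cciss/c0d1p1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/cciss/c0d0p2 /dev/cciss/c0d1p2
  # RAID5 across disks 3-5: md2 carries the "data" VG
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/cciss/c0d2p1 /dev/cciss/c0d3p1 /dev/cciss/c0d4p1
  # separate VGs: "system" on md1 only, "data" on md2 only
  pvcreate /dev/md1 /dev/md2
  vgcreate system /dev/md1
  vgcreate data /dev/md2
  # example LV in "data" (size illustrative)
  lvcreate -L 270G -n var data
  mkfs.reiserfs /dev/data/var

The expectation is md1 -> system only and md2 -> data only, whereas the pe_map in the log shows data/var ending up on both /dev/md1 and /dev/md2.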