Hi list,

today I installed five new 9.1 GB disks in my server and want to run a software RAID5 on them. I partitioned all of the disks (one partition each, spanning the whole disk, type 8e). My /etc/raidtab looks like this:

raiddev /dev/md0
    raid-level              5
    nr-raid-disks           5
    nr-spare-disks          0
    persistent-superblock   1
    parity-algorithm        left-symmetric
    chunk-size              4
    device                  /dev/sdb1
    raid-disk               0
    device                  /dev/sdc1
    raid-disk               1
    device                  /dev/sdd1
    raid-disk               2
    device                  /dev/sde1
    raid-disk               3
    device                  /dev/sdf1
    raid-disk               4

Then I tried to create the RAID:

zvhc:~ # mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 8908011kB, raid superblock at 8907904kB
disk 1: /dev/sdc1, 8908011kB, raid superblock at 8907904kB
disk 2: /dev/sdd1, 8908011kB, raid superblock at 8907904kB
disk 3: /dev/sde1, 8908011kB, raid superblock at 8907904kB
disk 4: /dev/sdf1, 8908011kB, raid superblock at 8907904kB
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

Well, the syslog says nothing.

zvhc:~ # cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive
--snip--

zvhc:~ # lvmdiskscan
lvmdiskscan -- reading all disks / partitions (this may take a while...)
lvmdiskscan -- /dev/sda1 [ 23.5 MB] Primary LINUX native partition [0x83]
lvmdiskscan -- /dev/sda2 [ 4.23 GB] DOS extended partition [0x5]
lvmdiskscan -- /dev/sda5 [ 133.32 MB] Extended LINUX swap partition [0x82]
lvmdiskscan -- /dev/sda6 [ 133.32 MB] Extended LINUX swap partition [0x82]
lvmdiskscan -- /dev/sda7 [ 3.97 GB] Extended LINUX native partition [0x83]
lvmdiskscan -- /dev/sdb1 [ 8.5 GB] Primary LVM partition [0x8E]
lvmdiskscan -- /dev/sdc1 [ 8.5 GB] Primary LVM partition [0x8E]
lvmdiskscan -- /dev/sdd1 [ 8.5 GB] Primary LVM partition [0x8E]
lvmdiskscan -- /dev/sde1 [ 8.5 GB] Primary LVM partition [0x8E]
lvmdiskscan -- /dev/sdf1 [ 8.5 GB] Primary LVM partition [0x8E]
lvmdiskscan -- 8 disks
lvmdiskscan -- 0 whole disks
lvmdiskscan -- 0 loop devices
lvmdiskscan -- 0 multiple devices
lvmdiskscan -- 0 network block devices
lvmdiskscan -- 10 partitions
lvmdiskscan -- 5 LVM physical volume partitions
--snip--

zvhc:~ # vgscan
vgscan -- LVM driver/module not loaded?

What am I doing wrong?

Olli

---------------------------------------------------------------------
To unsubscribe, e-mail: suse-linux-unsubscribe@suse.com
For additional commands, e-mail: suse-linux-help@suse.com
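
PS: For what it's worth, the superblock locations that mkraid reports do check out. If I understand the raidtools behaviour correctly, the persistent (MD 0.90) superblock sits in the last 64 kB-aligned 64 kB block of each partition, so for the sizes above:

# MD 0.90 persistent superblock offset: round the device size (in kB)
# down to a 64 kB boundary, then step back one 64 kB block.
size_kb=8908011                          # partition size from the mkraid output
sb_offset=$(( size_kb / 64 * 64 - 64 ))
echo "$sb_offset"                        # prints 8907904, matching mkraid

So mkraid reads the partitions fine and the abort seems to happen after the superblock analysis.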
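
PS: One detail from the lvmdiskscan output above: the five new partitions carry type 0x8E, which fdisk lists as Linux LVM, while 0xFD is the type used for Linux software RAID autodetection. A quick lookup of the IDs involved (names as in fdisk's "l" listing; the helper function is just for illustration, not a real tool):

# Map an MS-DOS partition type ID to its name, to double-check
# what 0x8e vs. 0xfd mean (IDs/names as shown by fdisk's "l" command).
ptype_name() {
    case "$1" in
        82) echo "Linux swap" ;;
        83) echo "Linux native" ;;
        8e) echo "Linux LVM" ;;
        fd) echo "Linux raid autodetect" ;;
        *)  echo "unknown" ;;
    esac
}

ptype_name 8e    # -> Linux LVM (what the five new partitions have)
ptype_name fd    # -> Linux raid autodetect

Whether that type is actually what makes mkraid (and vgscan) unhappy here, I don't know — hence the question.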