On Mon January 2 2006 00:56, Carlos E. R. wrote:
> The Sunday 2006-01-01 at 19:15 -0000, Con Hennessy wrote:
>
> > Yast2 Disk was used to make the RAID disk. It is supposed to be a
> > reiserfs disk, but I'm not sure whether that means each partition is
> > done independently or ... ?
> >
> > My partition table (created using "yast2 disk") has:
> >
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/hda6            2600        3641     8369833+  fd  Linux raid autodetect
> > ...
> > /dev/hdc6           76975       93411     8284216+  fd  Linux raid autodetect
> >
> > And my /etc/raidtab is:
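(The raidtab contents did not survive in this copy of the mail. For reference only, a typical raidtools /etc/raidtab for a two-disk RAID1 over these partitions would look roughly like the following — a sketch, not the poster's actual file:)

```
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdc6
        raid-disk               1
```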
> Ok, it seems you have a raid device directly formatted as reiserfs.

The /etc/fstab entry for it is:

/dev/md0    /raid0    reiserfs    acl,user_xattr    1 2
> Can you mount/umount it manually?

No, I still get the error:

# mount /raid0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

And dmesg shows me:

ReiserFS: md0: warning: sh-2006: read_super_block: bread failed (dev md0, block 2, size 4096)
ReiserFS: md0: warning: sh-2006: read_super_block: bread failed (dev md0, block 16, size 4096)
ReiserFS: md0: warning: sh-2021: reiserfs_fill_super: can not find reiserfs on md0
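(Those "bread failed" lines are what reiserfs prints when it cannot read its superblock from the device. Since /proc/mdstat later in this mail shows md0 as *inactive*, the array exposes no data at all, so any mount will fail exactly like this. A quick sanity check before blaming the filesystem — the mdstat line below is pasted from the poster's own output; on a live system you would read /proc/mdstat directly:)

```shell
# "mdstat" here is the poster's own /proc/mdstat line, pasted as a string;
# on a live system you would read /proc/mdstat instead.
mdstat='md0 : inactive hda6[0] hdc6[1]'

case "$mdstat" in
  *inactive*) state="not running" ;;
  *)          state="running" ;;
esac
echo "md0 is $state"
```

If the array is inactive, starting it first (e.g. `raidstart /dev/md0` with raidtools, or `mdadm --assemble /dev/md0 /dev/hda6 /dev/hdc6`) should make the reiserfs superblock readable again, assuming the array itself is healthy.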
> If there are problems, run a fsck on it while umounted.

I can do this for both partitions involved in the raid, and no problems were detected. I then tried to mount one of the partitions, but it failed:

# mkdir /tmp/xx
# mount /dev/hda6 /tmp/xx
mount: /dev/hda6 already mounted or /tmp/xx busy
> If the md0 has problems being activated during boot up, it could be that
> you need to activate some modules in initrd - but as your raid seems to
> be a data disk not needed for booting up, it doesn't matter; it will
> activate just a little bit later during boot.
> Then, if it will not mount automatically during boot, you can configure
> it "noauto" in fstab, so that you can manually mount it later.

I also tried this, but the kernel still insists on trying to mount it during boot. The only way I could find to disable that was to comment out the line in fstab :(
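(For reference, the "noauto" variant of the fstab line quoted earlier would look like this — based on the poster's own entry, with only the option added:)

```
/dev/md0    /raid0    reiserfs    noauto,acl,user_xattr    1 2
```

Setting the last field to 0 as well would additionally keep the boot-time fsck from touching it.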
> I'm just guessing, but it seems you don't have problems with the raid
> per se, but with the reiserfs on it. Treat it as any other reiserfs
> problem.

I'm really not 100% sure about that. My /proc/mdstat shows me:

Personalities : [raid1]
md0 : inactive hda6[0] hdc6[1]
      16653824 blocks
But when I look at the size of the partitions I see:

/dev/hdc6           76975       93411     8284216+  fd  Linux raid autodetect

and

/dev/hda6            2600        3641     8369833+  fd  Linux raid autodetect

From my understanding of raid, the maximum blocks shown by /proc/mdstat should be 8284216 and not nearly twice that!

Thanks,
Con Hennessy
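(A likely explanation for the doubled figure — an assumption about md behaviour, worth verifying: for an *inactive* array, /proc/mdstat reports the sum of the component devices' sizes, not the usable RAID1 size. The two partitions from the fdisk listing add up to almost exactly the reported number:)

```shell
# Sum of the two component partitions from the fdisk listing above:
sum=$((8369833 + 8284216))
echo "$sum"   # prints 16654049, within superblock rounding of the 16653824 shown by mdstat
```

Once the array is assembled and active, a RAID1 of these two partitions should report roughly the smaller component's size (about 8284216 blocks), matching the poster's expectation.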