On Thu, 05 Sep 2013 18:19:10 +0200 Claudio ML <claudioml@mediaservice.net> wrote:
Hello all,
After replacing a failed disk, I am trying to verify that everything is OK, and I have noticed a strange thing:
cat /proc/mdstat
Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[2] sda2[0]
      31455160 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md3 : active raid1 sdb4[2] sda4[0]
      200164280 blocks super 1.0 [2/2] [UU]
      bitmap: 2/2 pages [8KB], 65536KB chunk

md2 : active raid1 sdb3[2] sda3[0]
      2096116 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdb1[2]
      10481592 blocks super 1.0 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
grub> find /boot/grub/stage1
 (hd0,0)
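For comparison: when stage1 is present and readable on both mirror members, `find` in the GRUB legacy shell normally lists both locations. A hypothetical transcript from a fully mirrored /boot (device names assumed) would look like:

```
grub> find /boot/grub/stage1
 (hd0,0)
 (hd1,0)
```

Seeing only (hd0,0) therefore suggests that GRUB cannot read stage1 via the second disk (or that the BIOS / device.map does not expose it), even though the md arrays themselves are in sync.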
Are you calling it during boot, from the GRUB legacy command line, or on the booted system in the GRUB legacy shell?
Notice that `grub find` only saw one drive?
My question is: why? I have always seen both drives on RAID 1 systems... Now I am worried: if the first drive fails, can the second one boot? I can't test this, because that system is in production. The system is openSUSE 12.2 (x86_64). Am I doing something wrong?
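If the goal is to make sure the second disk can boot on its own, the usual remedy with GRUB legacy is to install the boot loader onto the MBR of the second disk as well, since md RAID 1 only mirrors the partitions, not the MBR boot code. A minimal sketch, assuming /boot lives on md0 (sda1 + sdb1) and that sdb is the replaced disk; device names here are this example's assumptions, so adjust them to the real layout before running anything:

```
# Run as root. Temporarily map the replaced disk (sdb) as hd0 so the
# embedded stage1 points at its own copy of stage2, then write it to
# sdb's MBR. Destructive to the MBR: double-check device names first.
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```

Afterwards, `find /boot/grub/stage1` from the GRUB shell should report both (hd0,0) and (hd1,0), and either disk should be able to boot if the other fails (assuming the BIOS falls back to the surviving disk).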
Thanks,
Claudio.