[opensuse] Strangeness with openSUSE RAID 1 Grub system
Hello all,

After replacing a failed disk, I am trying to verify that everything is OK, and I have noticed something strange:

cat /proc/mdstat
Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[2] sda2[0]
      31455160 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md3 : active raid1 sdb4[2] sda4[0]
      200164280 blocks super 1.0 [2/2] [UU]
      bitmap: 2/2 pages [8KB], 65536KB chunk
md2 : active raid1 sdb3[2] sda3[0]
      2096116 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md0 : active raid1 sda1[0] sdb1[2]
      10481592 blocks super 1.0 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>

grub> find /boot/grub/stage1
(hd0,0)

Notice that grub's find sees only one drive. My question is: why? On RAID 1 systems I have always seen both drives listed. Now I am worried: if the first drive fails, can the second one still boot? I can't test this, because the system is in production. The system is openSUSE 12.2 (x86_64). Am I doing something wrong?

Thanks,
Claudio.
On Thu, 05 Sep 2013 18:19:10 +0200, Claudio ML <claudioml@mediaservice.net> wrote:
grub> find /boot/grub/stage1
(hd0,0)
Are you calling it during boot, at the grub legacy command line, or on the booted system, in the grub legacy shell?
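For clarity, a quick sketch of the two places I mean (the boot-time key assumes the standard grub legacy menu):

# on the booted system, as root:
grub
grub> find /boot/grub/stage1

# or during boot: press 'c' at the grub menu to get the command line,
# then run the same find command there.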
On 05/09/2013 18:42, Andrey Borzenkov wrote:
Are you calling it during boot, at the grub legacy command line, or on the booted system, in the grub legacy shell?

I am calling it on the booted system, in a grub legacy shell.
On Fri, 06 Sep 2013 09:32:07 +0200, Claudio ML <claudioml@mediaservice.net> wrote:
I am calling it on the booted system, in a grub legacy shell.
OK, and what is in /boot/grub/device.map?
Notice that grub's find sees only one drive. My question is: why?
Grub legacy does not normally search anything - it takes whatever is in device.map.
On 06/09/2013 10:00, Andrey Borzenkov wrote:
OK, and what is in /boot/grub/device.map?

cat /boot/grub/device.map
(hd0) /dev/disk/by-id/ata-GB0250EAFYK_WCAT1J075574
(hd1) /dev/disk/by-id/ata-GB0250EAFYK_WCAT1H882820
On 06/09/2013 10:05, Claudio ML wrote:
(hd0) /dev/disk/by-id/ata-GB0250EAFYK_WCAT1J075574
(hd1) /dev/disk/by-id/ata-GB0250EAFYK_WCAT1H882820

I have just noticed that device.map is wrong: it still reports the old disk as (hd1), not the new one. On my system I now have:
/dev/disk/by-id/ata-GB0250EAFYK_WCAT1J075574
/dev/disk/by-id/ata-ST3360320AS_6QF1N350

Do I have to change the wrong entry in device.map manually, or is there a more "SuSE-like" method to do that?
On Fri, 06 Sep 2013 10:12:20 +0200, Claudio ML <claudioml@mediaservice.net> wrote:
Do I have to change the wrong entry in device.map manually, or is there a more "SuSE-like" method to do that?
Unfortunately, I could not find any way to do it using YaST (I had a chance to play with this after replacing a dying hard drive). I think there was a bug report for this, but I cannot find it right now... or maybe I just intended to file one and got distracted. So you will need to change it manually.
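Something like this should do, as a minimal sketch; the by-id names below are the ones quoted in this thread, so verify them against your own system first:

# list the whole-disk by-id links, ignoring the per-partition ones
ls -l /dev/disk/by-id/ata-* | grep -v part

# then edit /boot/grub/device.map so each (hdN) entry points at a link
# that actually exists, e.g.:
#   (hd0) /dev/disk/by-id/ata-GB0250EAFYK_WCAT1J075574
#   (hd1) /dev/disk/by-id/ata-ST3360320AS_6QF1N350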
On 06/09/2013 10:17, Andrey Borzenkov wrote:
So you will need to change it manually.

OK, after manually editing the file, grub's find now sees both disks:
grub> find /boot/grub/stage1
(hd0,0)
(hd1,0)

But if I run grub-install, it gets installed on only one disk:

grub-install

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename. ]

grub> setup --stage2=/boot/grub/stage2 --force-lba (hd0) (hd0,0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 17 sectors are embedded.
succeeded
 Running "install --force-lba --stage2=/boot/grub/stage2 /boot/grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.
grub> quit

How can I get it installed on both disks? Manually, from grub, with setup --stage2=/boot/grub/stage2 --force-lba (hd1) (hd1,0)?
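(My guess is that on openSUSE the legacy grub-install script just replays the commands saved in /etc/grub.conf, which here would name only one target. If so, that file would contain something like the following; this is an assumption based on the session above, not a listing of my actual file:)

setup --stage2=/boot/grub/stage2 --force-lba (hd0) (hd0,0)
quit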
On 06/09/2013 10:26, Claudio ML wrote:
How can I get it installed on both disks? Manually, from grub, with setup --stage2=/boot/grub/stage2 --force-lba (hd1) (hd1,0)?
OK, I think I have found the right solution, in this thread: http://forums.opensuse.org/english/get-technical-help-here/install-boot-logi...

1) Find the stage1 file:

grub> find /boot/grub/stage1
(hd0,0)
(hd1,0)
grub>

The output could be different, depending on the partition where /boot is located.

2) Assuming your disks are /dev/sda (hd0) and /dev/sdb (hd1), and grub is installed in the MBR of /dev/sda, do the following to install grub into the MBR of /dev/sdb:
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
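Put together, a minimal sketch of the whole session (assuming /boot is the first partition of each disk, as the find output above suggests):

grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

The device command temporarily remaps (hd0) to /dev/sdb, so the stage1 written to the second disk's MBR refers to the disk it lives on; that way the second disk should still be bootable if it ends up being the only disk the BIOS sees.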
Can anyone confirm that this is the right procedure to install grub on the secondary disk?

Cordially,
Claudio.
On 2013-09-06 10:52 (GMT+0200) Claudio ML composed:
OK, I think I have found the right solution, in this thread:
http://forums.opensuse.org/english/get-technical-help-here/install-boot-logi...
That same thread includes: "SuSE recommends not using the MBR, but the boot sector."
Can anyone confirm that this is the right procedure to install grub on the secondary disk?
Depends what you want. If you want Grub on the MBR, then you only want hd0 within the setup commands. If you want Grub on a partition, you need hd0,0, or whatever applies to the actual partition targets. If the targets are sda1 and sdb1, then you want hd0,0 and hd1,0 within your setup commands.

Whether this ever actually works on partitions comprising RAID devices I don't remember, but I don't think it does. My /boot partitions on my MD RAID systems are never RAID devices. I just create partitions for /boot as if they would become components of a RAID device, but keep them independent, and don't even mount any of them on /boot. Instead, I keep Grub and menu.lst on those "boot" partitions, install Grub there from the Grub shell, and maintain menu.lst manually, using the same method as the quoted thread.
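A sketch of that per-partition variant in the Grub shell, assuming the boot partitions are sda1 and sdb1 (adjust to your own layout):

grub
grub> root (hd0,0)
grub> setup (hd0,0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0,0)
grub> quit

The device line remaps (hd0) to the second disk so that its boot sector also refers to itself as the first disk. Note that with stage1 in a partition's boot sector you still need generic boot code in the MBR and the boot flag set on that partition, or the BIOS will never reach it.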
participants (3): Andrey Borzenkov, Claudio ML, Felix Miata