https://bugzilla.novell.com/show_bug.cgi?id=752869

https://bugzilla.novell.com/show_bug.cgi?id=752869#c0


           Summary: md raid1 doesn't boot after removing 1 disk, when the
                    server is turned off
    Classification: openSUSE
           Product: openSUSE 12.1
           Version: Final
          Platform: x86-64
        OS/Version: openSUSE 12.1
            Status: NEW
          Severity: Major
          Priority: P5 - None
         Component: Other
        AssignedTo: bnc-team-screening@forge.provo.novell.com
        ReportedBy: wvvelzen@gmail.com
         QAContact: qa-bugs@suse.de
          Found By: ---
           Blocker: ---


Created an attachment (id=482002)
 --> (http://bugzilla.novell.com/attachment.cgi?id=482002)
State after booting with 1 raid disk removed

User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:10.0.2) Gecko/20100101
Firefox/10.0.2

I was testing an md raid1 setup against different failure scenarios and came
across the following problem. Given the following raid setup:

# grep '/dev/md' /etc/fstab
/dev/md0    swap     swap    defaults                                   0 0
/dev/md2    /        ext4    noatime,data=writeback,noacl,user_xattr    1 1
/dev/md1    /boot    ext4    noatime,data=writeback,noacl,user_xattr    1 2
/dev/md3    /home    ext4    noatime,data=writeback,noacl,user_xattr    1 2

# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4]
md3 : active raid1 sdb5[2] sda5[0]
      248460152 blocks super 1.0 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

md1 : active raid1 sdb2[2] sda2[0]
      522228 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md0 : active raid1 sdb1[2] sda1[0]
      2095092 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sdb3[2] sda3[0]
      41946040 blocks super 1.0 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

When I turn off the server and remove one disk (the second, in this case),
the server doesn't boot properly and ends up in the emergency console. I
have attached a screen photograph of this situation.

Switching between systemd and SysVinit with F5 on the boot screen makes no
difference. Starting in 'failsafe' mode doesn't help either, nor does
replacing the removed disk with an empty or pre-partitioned one.

On a previous version of openSUSE (10.3) this test case worked just fine.

However, when I hot-remove one disk of the raid1 array on a running server,
so that md knows the array is degraded and can save this state to the md
superblock, I can reboot just fine without any problems.

Reproducible: Always

Steps to Reproduce:
1. Install openSUSE 12.1 with all filesystems on md raid1 arrays (setup as
   above).
2. Turn the server off and physically remove one of the two disks.
3. Turn the server back on.

Expected Results:
The raid1 in this degraded state should have just booted the OS. That is the
purpose of having a raid1 in the first place!
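
For reference, the hot remove described above is done with mdadm roughly as
follows (a sketch only; the exact commands used aren't recorded in this
report, the device names follow the md2 array above, and the same steps
would be repeated for md0, md1 and md3):

# mdadm --manage /dev/md2 --fail /dev/sdb3
# mdadm --manage /dev/md2 --remove /dev/sdb3

After this, /proc/mdstat shows md2 as [2/1] [U_], the degraded state is
written to the md superblocks, and the next boot succeeds.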
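
From the emergency console, the state of the array members can be inspected
with standard mdadm commands (a generic sketch, not something that was tried
for this report):

# mdadm --examine /dev/sda3
# mdadm --detail /dev/md2

--examine reads the md superblock of a member partition; --detail shows the
state of the (possibly half-assembled) array device.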
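
As a possible workaround from the emergency console (again only a sketch of
standard mdadm usage, untested in this setup), an array that was assembled
but refuses to start because a member is missing can usually be forced to
run degraded:

# mdadm --run /dev/md2

or stopped and force-assembled from the surviving disk:

# mdadm --stop /dev/md2
# mdadm --assemble --run --force /dev/md2 /dev/sda3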