https://bugzilla.novell.com/show_bug.cgi?id=775746
https://bugzilla.novell.com/show_bug.cgi?id=775746#c1

--- Comment #1 from John Langley <j.langley@gmx.net> 2012-08-13 22:35:49 UTC ---

mdadm /dev/md1 --add /dev/sdb4

starts the array again, and it works perfectly for hours until the next
reboot. After rebooting today I ended up in a shell:

[ 11.263189] input: Venus USB2.0 Camera as /devices/pci0000:00/0000:00:0b.1/usb1/1-6/1-6:1.0/input/input13
[ 11.263360] usbcore: registered new interface driver uvcvideo
[ 11.263363] USB Video Class driver (1.1.1)
[ 11.343139] Adding 1051644k swap on /dev/sda1. Priority:0 extents:1 across:1051644k
[ 11.354278] md: bind<sda4>
[ 11.418756] Adding 1051644k swap on /dev/sdb1. Priority:0 extents:1 across:1051644k
[ 11.515482] nvidia 0000:03:00.0: PCI INT A -> Link[APC5] -> GSI 16 (level, low) -> IRQ 16
[ 11.515492] nvidia 0000:03:00.0: setting latency timer to 64
[ 11.515498] vgaarb: device changed decodes: PCI:0000:03:00.0,olddecodes=io+mem,decodes=none:owns=io+mem
[ 11.515806] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 295.59 Wed Jun 6 21:19:40 PDT 2012
[ 11.550355] md: could not open unknown-block(8,20).
[ 11.550455] md: md_import_device returned -16
[ 11.550766] md: could not open unknown-block(8,20).
[ 11.550857] md: md_import_device returned -16
[ 11.601393] boot.md[448]: Starting MD RAID mdadm: /dev/md/1 is already in use.
[ 11.601940] boot.md[448]: ..failed
[ 11.602543] systemd[1]: md.service: control process exited, code=exited status=1
[ 11.608358] systemd[1]: Unit md.service entered failed state.
[ 97.101581] systemd[1]: Job dev-md1.device/start timed out.
[ 97.101819] systemd[1]: Job remote-fs-pre.target/start failed with result 'dependency'.
[ 97.101826] systemd[1]: Job local-fs.target/start failed with result 'dependency'.
[ 97.101831] systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
[ 97.102766] systemd[1]: Job home.mount/start failed with result 'dependency'.
[ 97.102774] systemd[1]: Job dev-md1.device/start failed with result 'timeout'.
[ 97.390477] systemd[1]: Startup finished in 5s 50ms 326us (kernel) + 1min 32s 340ms 36us (userspace) = 1min 37s 390ms 362us.
[ 351.569324] md/raid1:md1: active with 1 out of 2 mirrors
[ 351.569509] created bitmap (4 pages) for device md1
[ 351.569729] md1: bitmap initialized from disk: read 1/1 pages, set 7 of 6701 bits
[ 351.605682] md1: detected capacity change from 0 to 449637638144
[ 369.175955] md1: unknown partition table

But the individual disks of the array were actually fine:

/dev/sda4:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 909d58d0:3a4ee94e:8897cfe8:d7aefeea
           Name : linux-99ig:1
  Creation Time : Fri Oct 21 21:36:33 2011
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 878198512 (418.76 GiB 449.64 GB)
     Array Size : 878198512 (418.76 GiB 449.64 GB)
   Super Offset : 878198768 sectors
          State : clean
    Device UUID : 82cdb9ef:19aaa29b:1d7b1d7c:ea687817
Internal Bitmap : -8 sectors from superblock
    Update Time : Mon Aug 13 22:46:08 2012
       Checksum : d62737a5 - correct
         Events : 93354
    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing)

/dev/sdb4:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 909d58d0:3a4ee94e:8897cfe8:d7aefeea
           Name : linux-99ig:1
  Creation Time : Fri Oct 21 21:36:33 2011
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 878198512 (418.76 GiB 449.64 GB)
     Array Size : 878198512 (418.76 GiB 449.64 GB)
   Super Offset : 878198768 sectors
          State : clean
    Device UUID : 46360291:b7806b6c:ce1f892e:fa56fc78
Internal Bitmap : -8 sectors from superblock
    Update Time : Mon Aug 13 22:46:08 2012
       Checksum : 5c2809c9 - correct
         Events : 93354
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing)

"mdadm --run /dev/md1" started the array.
"mdadm /dev/md1 --add /dev/sdb4" fixed the array, and it works well again.
The array only becomes degraded on a reboot.
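For reference, the manual recovery described above can be collected into one short script. This is only a sketch of what I run by hand (the device names /dev/md1, /dev/sda4 and /dev/sdb4 are specific to this machine, and the real commands need root); it is written as a dry run that echoes each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of the manual recovery after a boot where one mirror
# was dropped. Adjust MD / GOOD / MISSING for the affected array.
MD=/dev/md1          # the degraded RAID1 array
GOOD=/dev/sda4       # the member that came up
MISSING=/dev/sdb4    # the member that was dropped at boot

# 'run' only prints the command; drop the echo to actually execute.
run() { echo "+ $*"; }

# 1. Confirm both member superblocks are clean and share one Array UUID.
run mdadm --examine "$GOOD" "$MISSING"

# 2. Start the array even though it is degraded (one mirror missing).
run mdadm --run "$MD"

# 3. Re-add the dropped member; the internal bitmap keeps the resync short.
run mdadm "$MD" --add "$MISSING"

# 4. Watch the rebuild progress.
run cat /proc/mdstat
```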
Mostly it is sdb4 that is missing, but I have also seen sda4 (md1), or sda2 or sdb2 (md0), dropped. It is always just one partition!

-- 
Configure bugmail: https://bugzilla.novell.com/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.