[opensuse] DegradedArray event
On an openSUSE 12.3 system, I have started getting the following:

This is an automatically generated mail message from mdadm running on acme

A DegradedArray event had been detected on md device /dev/md127.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md127 : active raid1 sdf1[0]
      1953513280 blocks super 1.0 [2/1] [U_]
      bitmap: 15/15 pages [60KB], 65536KB chunk

md1 : active raid1 sde1[1] sdd1[0]
      1953513280 blocks super 1.0 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md0 : active raid1 sdc1[1] sdb1[0]
      1953513280 blocks super 1.0 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md2 : active raid1 sdg1[1]
      1953513280 blocks super 1.0 [2/1] [_U]
      bitmap: 15/15 pages [60KB], 65536KB chunk

unused devices: <none>

This comprises a number of SATA 3 disks. It seems that the complaint is for sdf1. How can I get more details about the nature of the problem? It is a new software RAID1 setup I am not yet using.

--
Yours sincerely,
Roger Oberholtzer
Ramböll RST / Systems
Office: Int +46 10-615 60 20
Mobile: Int +46 70-815 1696
roger.oberholtzer@ramboll.se

Ramböll Sverige AB
Krukmakargatan 21
P.O. Box 17009
SE-104 62 Stockholm, Sweden
www.rambollrst.se
On 04/09/13 15:33, Roger Oberholtzer wrote:
This comprises a number of SATA 3 disks. It seems that the complaint is for sdf1. How can I get more details about the nature of the problem? It is a new software RAID1 setup I am not yet using.
$ mdadm -Q -D /dev/md127

should give you more details about what is wrong (which one of the mirrored disks is missing, for instance). You can also add a '-v' for increased verbosity.

HTH

Cheers.
Bye.
Ph. A.

--
Philippe Andersson
Unix System Administrator
IBA Particle Therapy
Tel: +32-10-475.983
Fax: +32-10-487.707
eMail: pan@iba-group.com
http://www.iba-worldwide.com
On Wednesday, September 04, 2013 03:48:00 PM Philippe Andersson wrote:
mdadm -Q -D /dev/md127
Sadly, I am none the wiser:

# mdadm -v -Q -D /dev/md127
/dev/md127:
        Version : 1.0
  Creation Time : Mon Apr 8 14:06:44 2013
     Raid Level : raid1
     Array Size : 1953513280 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953513280 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Sep 4 17:09:16 2013
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : acme.pacific:2
           UUID : c6b77f62:9e280889:353ed7d7:9c8686d1
         Events : 84030

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       0        0        1      removed

--
Yours sincerely,
Roger Oberholtzer
On Wed, 4 Sep 2013 17:10:50 Roger Oberholtzer wrote:
On Wednesday, September 04, 2013 03:48:00 PM Philippe Andersson wrote:
mdadm -Q -D /dev/md127
Sadly, I am none the wiser:
# mdadm -v -Q -D /dev/md127
/dev/md127:
        Version : 1.0
  Creation Time : Mon Apr 8 14:06:44 2013
     Raid Level : raid1
     Array Size : 1953513280 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953513280 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Sep 4 17:09:16 2013
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : acme.pacific:2
           UUID : c6b77f62:9e280889:353ed7d7:9c8686d1
         Events : 84030

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       0        0        1      removed
This last line tells you the answer: /dev/md127 has only one active member, /dev/sdf1. Whatever else the system thinks was part of that array has been "removed" from the array, therefore it is reported as running in degraded mode.

It could be, though, that /dev/sdf1 used to be part of one of your other arrays and for some reason is now detected as separate (but still having the Linux RAID partition type). Did you document which partitions you used when first assembling the RAID array(s)?

--
==============================================================
Rodney Baker VK5ZTV rodney.baker@iinet.net.au
==============================================================
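If the original layout was never written down, mdadm can reconstruct it from the on-disk superblocks. A minimal sketch, assuming the members are the sd[b-g]1 partitions listed in the /proc/mdstat output earlier in this thread:

# Show which array each partition's superblock claims membership of:
mdadm --examine /dev/sd[b-g]1 | egrep '/dev/|Name|UUID|State'

# Or emit ARRAY lines in /etc/mdadm.conf format:
mdadm --examine --scan

Comparing the Array UUID each member reports against the UUIDs of the assembled arrays shows where a stray partition actually belongs.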
On 4 September 2013 at 15:33, Roger Oberholtzer <roger@opq.se> wrote:
On an openSUSE 12.3 system, I have started getting the following:
This is an automatically generated mail message from mdadm running on acme
A DegradedArray event had been detected on md device /dev/md127.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md127 : active raid1 sdf1[0]
      1953513280 blocks super 1.0 [2/1] [U_]
      bitmap: 15/15 pages [60KB], 65536KB chunk

md1 : active raid1 sde1[1] sdd1[0]
      1953513280 blocks super 1.0 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md0 : active raid1 sdc1[1] sdb1[0]
      1953513280 blocks super 1.0 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

md2 : active raid1 sdg1[1]
      1953513280 blocks super 1.0 [2/1] [_U]
      bitmap: 15/15 pages [60KB], 65536KB chunk
unused devices: <none>
This comprises a number of SATA 3 disks. It seems that the complaint is for sdf1. How can I get more details about the nature of the problem? It is a new software RAID1 setup I am not yet using.
It is hard to tell what the problem is. Look at your /etc/mdadm.conf file. It should list the arrays and the disks belonging to them. Also look at dmesg and/or /var/log/messages and search for "md:" lines. It also seems that you have a problem with the md2 raid1 array as well.

Most likely the arrays have become degraded due to a non-clean shutdown. If you see messages in dmesg like:

Aug 2 18:01:30 linux kernel: [ 16.152421] md: md0 stopped.
Aug 2 18:01:30 linux kernel: [ 16.153198] md: bind<sdc3>
Aug 2 18:01:30 linux kernel: [ 16.153313] md: bind<sdb3>
Aug 2 18:01:30 linux kernel: [ 16.153323] md: kicking non-fresh sdc3 from array!
Aug 2 18:01:30 linux kernel: [ 16.153326] md: unbind<sdc3>
Aug 2 18:01:30 linux kernel: [ 16.164094] md: export_rdev(sdc3)
Aug 2 18:01:30 linux kernel: [ 16.165335] md/raid1:md0: active with 1 out of 2 mirrors
Aug 2 18:01:30 linux kernel: [ 16.165434] created bitmap (1 pages) for device md0
Aug 2 18:01:30 linux kernel: [ 16.165533] md0: bitmap initialized from disk: read 1/1 pages, set 166 of 324 bits
Aug 2 18:01:30 linux kernel: [ 16.185522] md0: detected capacity change from 0 to 21731115008
Aug 2 18:01:30 linux kernel: [ 16.193240] md0: unknown partition table

then search for 'kicking'. In such a case you can easily fix the array by adding the missing device back to the array. In the above case the command would be:

# mdadm /dev/md0 --add /dev/sdc3

You will have to adjust the array name and device name. In /proc/mdstat you can check that the array is synchronizing and follow its progress.

By the way, the Linux RAID wiki is here:

https://raid.wiki.kernel.org/index.php/Linux_Raid

Unfortunately the site is rather badly organized; it is hard to locate information on specific cases.

Istvan
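The whole re-add-and-watch workflow fits in a few commands. A minimal sketch using Istvan's example names (/dev/md0 and /dev/sdc3; substitute whatever array and member your own dmesg reports as kicked):

# Find the member the kernel kicked at boot:
dmesg | grep -i kicking

# Re-add it; with an internal write-intent bitmap (as in the mdstat
# output above) the resync may only need the blocks changed since
# the member was kicked:
mdadm /dev/md0 --add /dev/sdc3

# Follow the recovery progress, refreshing every two seconds:
watch -n 2 cat /proc/mdstat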
Roger Oberholtzer wrote:
On an openSUSE 12.3 system, I have started getting the following:
This is an automatically generated mail message from mdadm running on acme
A DegradedArray event had been detected on md device /dev/md127.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md127 : active raid1 sdf1[0]
      1953513280 blocks super 1.0 [2/1] [U_]
      bitmap: 15/15 pages [60KB], 65536KB chunk
Was the system rebooted, after which you started receiving these messages? I suspect the md startup found superblock data on sdf1, which caused the automatic creation of md127 (notice the odd number) with only a single drive and no mention of a second one. It's a RAID1 array with only one drive, hence degraded.

--
Per Jessen, Zürich (16.8°C)
http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
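If that diagnosis holds, the usual remedy is to stop the stray array, clear the stale superblock, and give the partition back to the array it was meant to mirror. A sketch only: the target array /dev/md2 is an assumption based on the [_U] status shown earlier, and --zero-superblock is destructive, so verify with --examine first:

# See which array sdf1's superblock claims to belong to:
mdadm --examine /dev/sdf1

# Stop the auto-assembled single-disk array:
mdadm --stop /dev/md127

# DESTRUCTIVE: erase the stale md metadata on sdf1 (only once you are
# sure nothing on it is needed):
mdadm --zero-superblock /dev/sdf1

# Hand the partition back to its intended mirror (assumed /dev/md2):
mdadm /dev/md2 --add /dev/sdf1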
Seems there was a bad cable that caused one disk to get this status. I replaced the cable and redid the RAID/LVM. The LVM is mounted. The next step is to see that there are no complaints. I guess I will run something that generates a lot of disk activity to confirm there are none.

On Thursday, September 05, 2013 08:05:21 AM Per Jessen wrote:
Roger Oberholtzer wrote:
On an openSUSE 12.3 system, I have started getting the following:
This is an automatically generated mail message from mdadm running on acme
A DegradedArray event had been detected on md device /dev/md127.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md127 : active raid1 sdf1[0]
      1953513280 blocks super 1.0 [2/1] [U_]
      bitmap: 15/15 pages [60KB], 65536KB chunk
Was the system rebooted, after which you started receiving these messages? I suspect the md startup found superblock data on sdf1, which caused the automatic creation of md127 (notice the odd number) with only a single drive and no mention of a second one. It's a RAID1 array with only one drive, hence degraded.

--
Yours sincerely,
Roger Oberholtzer
Ramböll RST / Systems
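As a more deliberate test than general disk activity, md can be asked to read and compare both halves of the mirror directly through sysfs. A minimal sketch; the array name /dev/md2 is an assumption, so substitute whatever name the rebuilt array came up as:

# Kick off a full consistency check of the mirror:
echo check > /sys/block/md2/md/sync_action

# Progress appears in /proc/mdstat; any inconsistencies found so far:
cat /sys/block/md2/md/mismatch_cnt

A mismatch_cnt of 0 after the check completes means the two halves of the mirror read back identically.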
participants (5)

- Istvan Gabor
- Per Jessen
- Philippe Andersson
- Rodney Baker
- Roger Oberholtzer