[opensuse] MD RAID output
cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[2] sdb1[0]
      976760640 blocks super 1.0 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

unused devices: <none>

Is this a correctly working raid? I would say yes. UU should show it is working as expected. But I do not understand sdc1[2] and sdb1[0]. Could somebody shed some light on how to read this output correctly? What do the [2] and the [0] stand for?

_________________________________________________________________
Your e-mail mailboxes, secure & central in one place. Switch now and keep your old e-mail address! https://www.eclipso.de
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On Monday 14 January 2019 10:20:13 CET, stakanov wrote:
cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[2] sdb1[0]
      976760640 blocks super 1.0 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk
unused devices: <none>
Is this a correctly working raid? I would say yes. UU should show it is working as expected. But I do not understand sdc1[2] and sdb1[0]. Could somebody shed some light on how to read this output correctly? What do the [2] and the [0] stand for?

So I continue to read all over that the status md127 shows a degraded raid. And both discs are in use. So I do not understand:
a) which disk is broken / compromised / needs to rebuild.
b) how to check why this actually happened. I did look in journalctl and dmesg but did not find an error message (so probably I do not know what to look for).
c) how to rebuild it without making further damage.
stakanov wrote:
On Monday 14 January 2019 10:20:13 CET, stakanov wrote:
cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[2] sdb1[0]
      976760640 blocks super 1.0 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk
unused devices: <none>
Is this a correctly working raid? I would say yes. UU should show it is working as expected. But I do not understand sdc1[2] and sdb1[0]. Could somebody shed some light on how to read this output correctly? What do the [2] and the [0] stand for?
So I continue to read all over that the status md127 shows a degraded raid. And both discs are in use. So I do not understand a) which disk is broken / compromised / needs to rebuild.
None.
b) how to check why this actually happened. I did look in journalctl and dmesg but did not find an error message (so probably I do not know what to look for).
log files.
c) how to rebuild it without making further damage.
Just don't touch it. :-)

The 127 number is because it was automatically generated, based on superblock information found. If you want it reset to e.g. md0, you have to update /etc/mdadm.conf. Try running "mdadm --examine --scan" and see what it says.

--
Per Jessen, Zürich (4.2°C)
http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
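For illustration, this is the kind of ARRAY line "mdadm --examine --scan" emits, which can be pasted into /etc/mdadm.conf to pin the array device name. The name and UUID below are taken from the superblock output that appears later in this thread; treat the exact fields as an example, not a prescription:

```
ARRAY /dev/md/homeraid metadata=1.0 name=any:homeraid UUID=d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
```

On openSUSE the initrd may also need regenerating afterwards for the name to stick at boot.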
stakanov wrote:
cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[2] sdb1[0]
      976760640 blocks super 1.0 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk
unused devices: <none>
Is this a correctly working raid? I would say yes. UU should show it is working as expected.
Yes, it's working.
But I do not understand sdc1[2] and sdb1[0]. Could somebody shed some light on how to read this output correctly? What do the [2] and the [0] stand for?
It's an enumeration of the devices used in the array. In this array, it used to be 0 and 1; 1 then went away and 2 was added. See also https://raid.wiki.kernel.org/index.php/Mdstat

--
Per Jessen, Zürich (4.1°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
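For what it's worth, those slot numbers can be picked out of /proc/mdstat mechanically. A small shell sketch against the line quoted above — the mdstat line is hard-coded here purely for illustration:

```shell
# The mdstat device line quoted earlier in this thread.
line='md127 : active raid1 sdc1[2] sdb1[0]'

# Pull out each member with its slot number, e.g. "sdc1[2]".
for member in $(echo "$line" | grep -oE '[a-z]+[0-9]+\[[0-9]+\]'); do
  dev=${member%%\[*}                  # device name before the bracket
  slot=${member#*\[}; slot=${slot%]}  # number inside the brackets
  echo "$dev occupies slot $slot"
done
# prints:
#   sdc1 occupies slot 2
#   sdb1 occupies slot 0
```

On a live system one would read the line from /proc/mdstat instead of a hard-coded string.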
On 01/14/2019 03:20 AM, stakanov wrote:
cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[2] sdb1[0]
      976760640 blocks super 1.0 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk
unused devices: <none>
Is this a correctly working raid? I would say yes. UU should show it is working as expected. But I do not understand sdc1[2] and sdb1[0]. Could somebody shed some light on how to read this output correctly? What do the [2] and the [0] stand for?
The [2] and [0] are "device numbers", and it means that for your raid1 array (/dev/md127)

sdc1 [is attached as device 2]

and

sdb1 [is attached as device 0]

(and no, it doesn't matter that there is no device 1 -- which was most likely used with your old sdd1)

--
David C. Rankin, J.D., P.E.
On 01/14/2019 03:46 PM, David C. Rankin wrote:
The [2] and [0] are "device numbers" and it means that for your raid1 array (/dev/md127)
sdc1[is attached as device 2]
and
sdb1[is attached as device 0]
(and no it doesn't matter that there is no device 1 -- which was most likely used with your old sdd1)
See: e.g. https://raid.wiki.kernel.org/index.php/Mdstat

--
David C. Rankin, J.D., P.E.
On Monday 14 January 2019 22:48:25 CET, David C. Rankin wrote:
On 01/14/2019 03:46 PM, David C. Rankin wrote:
The [2] and [0] are "device numbers" and it means that for your raid1 array (/dev/md127)
sdc1[is attached as device 2]
and
sdb1[is attached as device 0]
(and no it doesn't matter that there is no device 1 -- which was most likely used with your old sdd1)
See: e.g. https://raid.wiki.kernel.org/index.php/Mdstat

Something is wrong with that disc sdc. I have now:

cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[0]
      976760640 blocks super 1.0 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk
This means the raid is broken again. So I did:

mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
Name : any:homeraid
Creation Time : Mon Jan 5 02:03:18 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953521384 (931.51 GiB 1000.20 GB)
Array Size : 976760640 (931.51 GiB 1000.20 GB)
Used Dev Size : 1953521280 (931.51 GiB 1000.20 GB)
Super Offset : 1953521648 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : fb0c2b23:77a6057c:2147a4c4:a53a04c4
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Jan 15 11:26:42 2019
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 3836aa00 - correct
Events : 1037106
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
Name : any:homeraid
Creation Time : Mon Jan 5 02:03:18 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953521632 (931.51 GiB 1000.20 GB)
Array Size : 976760640 (931.51 GiB 1000.20 GB)
Used Dev Size : 1953521280 (931.51 GiB 1000.20 GB)
Super Offset : 1953521648 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : 5dd7fc0b:78a17f7f:38dcfece:e90c420f
Internal Bitmap : -16 sectors from superblock
Update Time : Mon Jan 14 19:00:17 2019
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 7a18e077 - correct
Events : 1032992
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

BUT when I

mdadm /dev/md127 --fail /dev/sdc1 --remove /dev/sdc1
mdadm: set device faulty failed for /dev/sdc1: No such device

Is this hdd dead?
On Tue, Jan 15, 2019 at 1:38 PM stakanov <stakanov@eclipso.eu> wrote:
On Monday 14 January 2019 22:48:25 CET, David C. Rankin wrote:
On 01/14/2019 03:46 PM, David C. Rankin wrote:
The [2] and [0] are "device numbers" and it means that for your raid1 array (/dev/md127)
sdc1[is attached as device 2]
and
sdb1[is attached as device 0]
(and no it doesn't matter that there is no device 1 -- which was most likely used with your old sdd1)
See: e.g. https://raid.wiki.kernel.org/index.php/Mdstat

Something is wrong with that disc sdc. I have now:

cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[0]
      976760640 blocks super 1.0 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk
This means the raid is broken again. So I did
mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
Name : any:homeraid
Creation Time : Mon Jan 5 02:03:18 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953521384 (931.51 GiB 1000.20 GB)
Array Size : 976760640 (931.51 GiB 1000.20 GB)
Used Dev Size : 1953521280 (931.51 GiB 1000.20 GB)
Super Offset : 1953521648 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : fb0c2b23:77a6057c:2147a4c4:a53a04c4
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Jan 15 11:26:42 2019
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 3836aa00 - correct
Events : 1037106
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
Name : any:homeraid
Creation Time : Mon Jan 5 02:03:18 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953521632 (931.51 GiB 1000.20 GB)
Array Size : 976760640 (931.51 GiB 1000.20 GB)
Used Dev Size : 1953521280 (931.51 GiB 1000.20 GB)
Super Offset : 1953521648 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : 5dd7fc0b:78a17f7f:38dcfece:e90c420f
Internal Bitmap : -16 sectors from superblock
Update Time : Mon Jan 14 19:00:17 2019
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 7a18e077 - correct
Events : 1032992
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
BUT when I
mdadm /dev/md127 --fail /dev/sdc1 --remove /dev/sdc1
mdadm: set device faulty failed for /dev/sdc1: No such device
Of course. In multiple places it clearly shows that the array is currently running with a single device. You cannot fail or remove a device that is not part of the array.
Is this hdd dead?
How should we know? It is obviously not dead, otherwise you could not display the superblock on it.

Show the output of "journalctl -b" if the system was booted before Mon Jan 14 19:00:17 2019, or at least "journalctl --since '2019-01-14 19:00'" if the system was rebooted in between. At Mon Jan 14 19:00:17 2019 you had a good array consisting of two devices.
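Incidentally, the Events counters in the two --examine outputs quoted above tell the same story: the member whose counter stopped climbing is the one that left the array. A minimal sketch of the comparison, with the values copied from this thread:

```shell
# Events counters as shown by 'mdadm --examine' on the two members
# (values copied from the outputs quoted in this thread).
events_sdb1=1037106   # still-active member
events_sdc1=1032992   # member that dropped out

# The member with the lower counter stopped being updated first;
# the difference shows how far behind it is.
lag=$((events_sdb1 - events_sdc1))
if [ "$events_sdc1" -lt "$events_sdb1" ]; then
  echo "sdc1 is stale by $lag events"
fi
# prints: sdc1 is stale by 4114 events
```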
On Tue, Jan 15, 2019 at 1:38 PM stakanov <stakanov@eclipso.eu> wrote:
On Monday 14 January 2019 22:48:25 CET, David C. Rankin wrote:
On 01/14/2019 03:46 PM, David C. Rankin wrote:
The [2] and [0] are "device numbers" and it means that for your raid1 array (/dev/md127)
sdc1[is attached as device 2]
and
sdb1[is attached as device 0]
(and no it doesn't matter that there is no device 1 -- which was most likely used with your old sdd1)
Something is wrong with that disc sdc. I have now:

cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[0]
      976760640 blocks super 1.0 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk
This means the raid is broken again. So I did
mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
Name : any:homeraid
Creation Time : Mon Jan 5 02:03:18 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953521384 (931.51 GiB 1000.20 GB)
Array Size : 976760640 (931.51 GiB 1000.20 GB)
Used Dev Size : 1953521280 (931.51 GiB 1000.20 GB)
Super Offset : 1953521648 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : fb0c2b23:77a6057c:2147a4c4:a53a04c4
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Jan 15 11:26:42 2019
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 3836aa00 - correct
Events : 1037106
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d9640ee4:3a9d7b72:68fa6b80:1b61dc7d
Name : any:homeraid
Creation Time : Mon Jan 5 02:03:18 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953521632 (931.51 GiB 1000.20 GB)
Array Size : 976760640 (931.51 GiB 1000.20 GB)
Used Dev Size : 1953521280 (931.51 GiB 1000.20 GB)
Super Offset : 1953521648 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : 5dd7fc0b:78a17f7f:38dcfece:e90c420f
Internal Bitmap : -16 sectors from superblock
Update Time : Mon Jan 14 19:00:17 2019
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 7a18e077 - correct
Events : 1032992
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
BUT when I
mdadm /dev/md127 --fail /dev/sdc1 --remove /dev/sdc1
mdadm: set device faulty failed for /dev/sdc1: No such device
Of course. In multiple places it clearly shows that the array is currently running with a single device. You cannot fail or remove a device that is not part of the array.
Is this hdd dead?
How should we know? It is obviously not dead, otherwise you could not display the superblock on it. Show the output of "journalctl -b" if the system was booted before Mon Jan 14 19:00:17 2019, or at least "journalctl --since '2019-01-14 19:00'" if the system was rebooted in between. At Mon Jan 14 19:00:17 2019 you had a good array consisting of two devices.

On Tuesday 15 January 2019 11:55:53 CET, you wrote the above. At that time I did run a smart on that drive. Is it possible that the smart procedure caused the drive to drop from the array because the raid wanted to sync while the control was running? When trying the smart on the drive right now, it blocks at 10% or at times it does not respond. The output of journalctl:

Jan 14 16:59:26 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 16:59:22 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:15:01 azzurro.fritz.box systemd[1]: Stopping User Manager for UID 0...
Jan 14 19:15:01 azzurro.fritz.box CRON[14935]: pam_unix(crond:session): session closed for user root
Jan 14 19:15:01 azzurro.fritz.box systemd[1]: Started User Manager for UID 0.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: Startup finished in 21ms.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: Reached target Default.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: Reached target Basic System.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: Reached target Sockets.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: Reached target Timers.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: Reached target Paths.
Jan 14 19:15:01 azzurro.fritz.box systemd[14936]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Jan 14 19:15:01 azzurro.fritz.box systemd[1]: Started Session 40 of user root.
Jan 14 19:15:01 azzurro.fritz.box systemd[1]: Starting User Manager for UID 0...
Jan 14 19:15:01 azzurro.fritz.box systemd[1]: Created slice User Slice of root.
Jan 14 19:15:01 azzurro.fritz.box cron[14935]: pam_unix(crond:session): session opened for user root by (uid=0)
Jan 14 19:14:38 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:14:34 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:14:32 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:14:31 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:13:36 azzurro.fritz.box bluetoothd[1154]: connect error: Host is down (112)
Jan 14 19:13:36 azzurro.fritz.box bluetoothd[1154]: Unable to get Headset Voice gateway SDP record: Host is down
Jan 14 19:13:31 azzurro.fritz.box bluetoothd[1154]: Reconnecting services failed: Device or resource busy (16)
Jan 14 19:13:31 azzurro.fritz.box bluetoothd[1154]: connect error: Host is down (112)
Jan 14 19:11:16 azzurro.fritz.box postfix/qmgr[2102]: 9BE629CD5: removed
Jan 14 19:11:16 azzurro.fritz.box postfix/local[14868]: 9BE629CD5: to=<root@localhost.site>, orig_to=<root@localhost>, relay=local, delay=0.02, delays=0.02/0/0/0, dsn=2.0.0, sta
Jan 14 19:11:16 azzurro.fritz.box postfix/qmgr[2102]: 9BE629CD5: from=<root@azzurro.site>, size=819, nrcpt=1 (queue active)
Jan 14 19:11:16 azzurro.fritz.box postfix/cleanup[14866]: 9BE629CD5: message-id=<20190114181116.9BE629CD5@azzurro.site>
Jan 14 19:11:16 azzurro.fritz.box postfix/trivial-rewrite[14867]: using backwards-compatible default setting append_dot_mydomain=yes to rewrite "localhost" to "localhost.site";
Jan 14 19:11:16 azzurro.fritz.box postfix/pickup[13381]: 9BE629CD5: uid=0 from=<root>
Jan 14 19:11:16 azzurro.fritz.box udisksd[2855]: Error updating ATA smart for /org/freedesktop/UDisks2/drives/SAMSUNG_HD103UI_S1LMJ90QC21981 while polling during self-test: Erro
0000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
(g-io-error-quark, 0)
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: md/raid1:md127: Disk failure on sdc1, disabling device.
md/raid1:md127: Operation continuing on 1 devices.
Jan 14 19:11:16 azzurro.fritz.box kernel: md: super_written gets error=-5
Jan 14 19:11:16 azzurro.fritz.box kernel: blk_update_request: I/O error, dev sdc, sector 1953523680
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#28 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4: EH complete
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4.00: disabled
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4: reset failed, giving up
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:11:11 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:11:11 azzurro.fritz.box kernel: ata4: limiting SATA link speed to 1.5 Gbps
Jan 14 19:11:11 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:10:46 azzurro.fritz.box kernel: ata4: link is slow to respond, please be patient (ready=0)
Jan 14 19:10:36 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:10:36 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:10:26 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:10:26 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: status: { DRDY }
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 26 res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: failed command: FLUSH CACHE EXT
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Jan 14 19:09:26 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:09:22 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:09:20 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:09:19 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:07:10 azzurro.fritz.box clamd[3177]: SelfCheck: Database status OK.
Jan 14 19:06:36 azzurro.fritz.box smartd[1107]: Device: /dev/sdc [SAT], self-test in progress, 90% remaining
Jan 14 19:06:36 azzurro.fritz.box smartd[1107]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 72 to 71
Jan 14 19:04:14 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:04:10 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:04:08 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:04:07 azzurro.fritz.box kernel: SFW2-INext-DROP-DEFLT IN=eth1 OUT= MAC=52:57:c9:4b:dc:d0:c0:25:06:f3:48:f7:08:00 SRC=192.168.178.1 DST=192.168.178.29 LEN=60 TOS=0x00 P
Jan 14 19:00:01 azzurro.fritz.box systemd[1]: Removed slice User Slice of root.
Jan 14 19:00:01 azzurro.fritz.box systemd[1]: Stopped User Manager for UID 0.
Jan 14 19:00:01 azzurro.fritz.box systemd[14620]: Received SIGRTMIN+24 from PID 14671 (kill).
Jan 14 19:00:01 azzurro.fritz.box systemd[14620]: Stopped target Sockets.
Jan 14 19:00:01 azzurro.fritz.box systemd[14620]: Stopped target Timers.
Jan 14 19:00:01 azzurro.fritz.box systemd[14620]: Stopped target Paths.
stakanov wrote:
Jan 14 19:11:16 azzurro.fritz.box udisksd[2855]: Error updating ATA smart for /org/freedesktop/UDisks2/drives/SAMSUNG_HD103UI_S1LMJ90QC21981 while polling during self-test: Erro
[snip]
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: md/raid1:md127: Disk failure on sdc1, disabling device.
md/raid1:md127: Operation continuing on 1 devices.
sdc is clearly not good.

--
Per Jessen, Zürich (4.8°C)
http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
On Tue, Jan 15, 2019 at 3:37 PM Per Jessen <per@computer.org> wrote:
stakanov wrote:
Jan 14 19:11:16 azzurro.fritz.box udisksd[2855]: Error updating ATA smart for /org/freedesktop/UDisks2/drives/SAMSUNG_HD103UI_S1LMJ90QC21981 while polling during self-test: Erro
[snip]
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: md/raid1:md127: Disk failure on sdc1, disabling device.
md/raid1:md127: Operation continuing on 1 devices.
sdc is clearly not good.
Well, at the same time a self-test was running; I do not know what impact that has on the HDD. I would not run any maintenance operation on a drive while using it.
Andrei Borzenkov wrote:
On Tue, Jan 15, 2019 at 3:37 PM Per Jessen <per@computer.org> wrote:
stakanov wrote:
Jan 14 19:11:16 azzurro.fritz.box udisksd[2855]: Error updating ATA smart for /org/freedesktop/UDisks2/drives/SAMSUNG_HD103UI_S1LMJ90QC21981 while polling during self-test: Erro
[snip]
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: md/raid1:md127: Disk failure on sdc1, disabling device.
md/raid1:md127: Operation continuing on 1 devices.
sdc is clearly not good.
Well, at the same time self test was running; I do not know what impact it has on HDD. I would not run any maintenance operation on drive while using it.
We do that every day, on some 40+ machines with RAID1 - every day a short self-test, during the weekend a long self-test. Have done so since 2006 - the drives are mixed makes and sizes, and not particularly server drives. YMMV.

--
Per Jessen, Zürich (4.0°C)
http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
On Tuesday 15 January 2019 14:06:34 CET, Per Jessen wrote:
Andrei Borzenkov wrote:
On Tue, Jan 15, 2019 at 3:37 PM Per Jessen <per@computer.org> wrote:
stakanov wrote:
Jan 14 19:11:16 azzurro.fritz.box udisksd[2855]: Error updating ATA smart for /org/freedesktop/UDisks2/drives/SAMSUNG_HD103UI_S1LMJ90QC21981 while polling during self-test: Erro
[snip]
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: md/raid1:md127: Disk failure on sdc1, disabling device.
md/raid1:md127: Operation continuing on 1 devices.
sdc is clearly not good.
Well, at the same time a self-test was running; I do not know what impact that has on the HDD. I would not run any maintenance operation on a drive while using it.
We do that every day, on some 40+ machines with RAID1 - every day a short self-test, during the weekend a long self-test. Have done so since 2006 - the drives are mixed makes and sizes, and not particularly server drives. YMMV.

O.K. I have a new disc. Following the indications of David Rankin:

recreate the partition before you add it back to the array. I would generally use sfdisk -d to dump the partition information from /dev/sdc1 and then use that to repartition /dev/sdd1 for use in the array.

I have now the brand new disc without a partition as /dev/sdb, and the working partition as /dev/sdc1. In order to follow the advice:

sfdisk -d /dev/sdc | sfdisk /dev/sdb

Please advise in case of "bullshit alert". Then I will ask the raid to manage the newly created sdb1 and the raid will sync. So far O.K.?
stakanov wrote:
O.K. I have a new disc. On the indications of David Rankin:
recreate the partition before you add it back to the array. I would
generally use sfdisk -d to dump the partition information from /dev/sdc1 and then use that to repartition /dev/sdd1 for use in the array.
I have now the brand new disc without a partition as /dev/sdb, and the working partition as /dev/sdc1. In order to follow the advice:

sfdisk -d /dev/sdc | sfdisk /dev/sdb
That looks good. I usually use 'dd', but sfdisk is probably better.
Please advise in case of "bullshit alert". Then I will ask the raid to manage the newly created sdb1 and the raid will sync. So far O.K.?
Sounds good, yes.

--
Per Jessen, Zürich (1.1°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
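As a recap, the replacement steps agreed in this thread can be written as a dry-run shell sketch. The device names are the ones from this thread (verify yours with lsblk first); nothing below touches a disk — it only builds and prints the commands that would be run as root:

```shell
# Dry-run sketch of the disk-replacement workflow discussed above.
src=/dev/sdc    # disk whose partition layout we copy
dst=/dev/sdb    # brand-new replacement disk
md=/dev/md127   # the degraded array

clone="sfdisk -d $src | sfdisk $dst"  # replay the partition table onto the new disk
add="mdadm $md --add ${dst}1"         # hand the new partition to md for resync
echo "$clone"
echo "$add"
# After the real --add, 'cat /proc/mdstat' shows the recovery progress.
```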
On 15.01.2019 22:51, Per Jessen wrote:
stakanov wrote:
O.K. I have a new disc. On the indications of David Rankin:
recreate the partition before you add it back to the array. I would
generally use sfdisk -d to dump the partition information from /dev/sdc1 and then use that to repartition /dev/sdd1 for use in the array.
I have now the brand new disc without a partition as /dev/sdb, and the working partition as /dev/sdc1. In order to follow the advice:

sfdisk -d /dev/sdc | sfdisk /dev/sdb
That looks good. I usually use 'dd', but sfdisk is probably better.
Both will duplicate the MBR signature or GPT GUIDs/labels. While Linux probably does not use this information directly, it may potentially confuse other software/firmware. I am not sure if there is a tool that allows you to easily "create new partitions of the same size as on another disk". Hmm ... the YaST2 "Clone disk" feature comes to mind. Otherwise, for GPT, gdisk offers "Randomize the disk's GUID and all partitions' unique GUIDs" after cloning the partition table.
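The GUID concern can be addressed from the command line as well. A dry-run sketch, assuming gptfdisk's sgdisk is installed and /dev/sdb is the freshly cloned GPT disk (adjust the device name to your system):

```shell
# Sketch: after cloning a GPT table, randomize the clone's GUIDs so the
# two disks no longer share identity. sgdisk's -G option randomizes the
# disk GUID and all partition unique GUIDs in place.
disk=/dev/sdb
cmd="sgdisk -G $disk"
echo "$cmd"   # dry run; drop the echo and run as root when you are sure
```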
Please advise in case of "bullshit alert". Then I will ask the raid to manage the newly created sdb1, and the raid will sync. So far OK?
Sounds good, yes.
On Tuesday, 15 January 2019 21:05:44 CET, Andrei Borzenkov wrote:
On 15.01.2019 22:51, Per Jessen wrote:
stakanov wrote:
OK, I have a new disc.
Following the indications of David Rankin:
recreate the partition before you add it back to the array. I would generally use sfdisk -d to dump the partition information from /dev/sdc1 and then use that to repartition /dev/sdd1 for use in the array.
I now have the brand-new disc, without partitions, as /dev/sdb, and the working partition as /dev/sdc1. To follow the advice: sfdisk -d /dev/sdc | sfdisk /dev/sdb
That looks good. I usually use 'dd', but sfdisk is probably better.
Both will duplicate the MBR signature or the GPT GUIDs/labels. While Linux probably does not use this information directly, it may potentially confuse other software/firmware. I am not sure there is a tool that lets you easily "create new partitions of the same size as on the other disk". Hmm ... the YaST2 "Clone disk" feature comes to mind. Otherwise, for GPT, gdisk offers "Randomize the disk's GUID and all partitions' unique GUIDs" after cloning the partition table.
Please advise in case of "bullshit alert". Then I will ask the raid to manage the newly created sdb1, and the raid will sync. So far OK?
Sounds good, yes.

Well, I have to say that the sfdisk command worked as expected. As I write this, the disc is syncing, and it all looks good so far. Hoping for the best.
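While the array rebuilds, the resync progress can be watched. A sketch, using the array name md127 from this thread; as elsewhere, the commands are built as strings and printed rather than executed:

```shell
MD=/dev/md127   # array name as reported earlier in this thread

PROGRESS="cat /proc/mdstat"   # shows a [=>.....] recovery bar with speed and ETA
DETAIL="mdadm -D $MD"         # per-array detail, including rebuild status
WAIT="mdadm --wait $MD"       # blocks until the resync has finished

echo "$PROGRESS"; echo "$DETAIL"; echo "$WAIT"
```

`mdadm --wait` is handy in scripts that must not continue (e.g. reboot) until the mirror is fully redundant again.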
On 15/01/2019 12.51, stakanov wrote:

There are errors on the disk sdc, and not only from RAID or SMART. (I'll try to unwrap for easier reading.)
Jan 14 19:11:16 azzurro.fritz.box udisksd[2855]: Error updating ATA smart for /org/freedesktop/UDisks2/drives/SAMSUNG_HD103UI_S1LMJ90QC21981 while polling during self-test: Erro
0000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................
(g-io-error-quark, 0)
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jan 14 19:11:16 azzurro.fritz.box kernel: md/raid1:md127: Disk failure on sdc1, disabling device. md/raid1:md127: Operation continuing on 1 devices.
Jan 14 19:11:16 azzurro.fritz.box kernel: md: super_written gets error=-5
Jan 14 19:11:16 azzurro.fritz.box kernel: blk_update_request: I/O error, dev sdc, sector 1953523680
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#28 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Jan 14 19:11:16 azzurro.fritz.box kernel: sd 3:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
And then more: the kernel has problems accessing the disk.
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4: EH complete
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4.00: disabled
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4: reset failed, giving up
Jan 14 19:11:16 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:11:11 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:11:11 azzurro.fritz.box kernel: ata4: limiting SATA link speed to 1.5 Gbps
Jan 14 19:11:11 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:10:46 azzurro.fritz.box kernel: ata4: link is slow to respond, please be patient (ready=0)
Jan 14 19:10:36 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:10:36 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:10:26 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:10:26 azzurro.fritz.box kernel: ata4: softreset failed (device not ready)
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4: hard resetting link
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: status: { DRDY }
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 26 res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: failed command: FLUSH CACHE EXT
Jan 14 19:10:16 azzurro.fritz.box kernel: ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Maybe a firmware problem on the disk? Perhaps there is an upgrade available. -- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)
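Following up on the firmware idea: SMART health and the drive's firmware revision can be checked with smartctl from smartmontools. A sketch using the device name from this thread; the commands are built as strings and printed rather than executed:

```shell
DISK=/dev/sdc   # the failing drive in this thread; verify with lsblk first

HEALTH="smartctl -H $DISK"          # overall SMART health self-assessment
IDENT="smartctl -i $DISK"           # model, serial, and firmware revision
SELFTEST="smartctl -t short $DISK"  # start a short offline self-test

echo "$HEALTH"; echo "$IDENT"; echo "$SELFTEST"
```

The firmware revision printed by `smartctl -i` is what you would compare against the vendor's download page to see whether an update exists.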
On 01/15/2019 04:35 AM, stakanov wrote:
mdadm /dev/md127 --fail /dev/sdc1 --remove /dev/sdc1
mdadm: set device faulty failed for /dev/sdc1: No such device
That is 100% expected output. There is no harm in removing an already-failed device (in fact, I would do it anyway, just to ensure all the internal state unambiguously reflects that /dev/sdc1 was removed).
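The remove-and-replace sequence can be sketched like this. The array and failed-member names are the ones from this thread; the replacement partition name (/dev/sdd1) is a hypothetical, since it depends on how the new disk enumerates. Commands are built as strings and printed, not executed:

```shell
MD=/dev/md127
FAILED=/dev/sdc1   # the member that dropped out of the array
NEW=/dev/sdd1      # hypothetical: partition on the replacement disk (check lsblk)

REMOVE="mdadm $MD --fail $FAILED --remove $FAILED"  # harmless if already gone
ADD="mdadm $MD --add $NEW"                          # kicks off the resync

echo "$REMOVE"; echo "$ADD"
```

After the `--add`, `cat /proc/mdstat` should show the new member with a recovery progress bar.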
cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[0]            <-- /dev/sdb1 in use, device[0]
      976760640 blocks super 1.0 [2/1] [U_]
                                  ^ ^    ^
                                  | |    +- device missing from array
                                  +-+- 2 disks / 1 in use
mdadm --examine /dev/sdb1
/dev/sdb1:
<snip>
Events : 1037106

Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
              ^^
              |+- '.' device missing from array
              +-- 'A' one active device
mdadm --examine /dev/sdc1
/dev/sdc1:
<snip>
Events : 1032992
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
              ^^
              |+- 'A' second active device
              +-- 'A' first active device

You need to show 'cat /proc/partitions | grep md' and then show the output of:

# mdadm -D /dev/md...

for each md device listed in partitions. I have a hard time believing both sdb1 and sdc1 are in the same array. Possible that they are now in separate arrays. Look at the differing "Events" count for each device.

--
David C. Rankin, J.D., P.E.
On 01/15/2019 07:23 PM, David C. Rankin wrote:
You need to show 'cat /proc/partitions | grep md' and then show the output of:
# mdadm -D /dev/md...
for each md device listed in partitions. I have a hard time believing both sdb1 and sdc1 are in the same array. Possible that they are now in separate arrays. Look at the differing "Event" count for each device.
Later posts show you have repartitioned a new drive and it is syncing (good). There is no need for the further output if all is well.

--
David C. Rankin, J.D., P.E.
On Wednesday, 16 January 2019 02:34:43 CET, David C. Rankin wrote:
On 01/15/2019 07:23 PM, David C. Rankin wrote:
You need to show 'cat /proc/partitions | grep md' and then show the output of: # mdadm -D /dev/md...
for each md device listed in partitions. I have a hard time believing both sdb1 and sdc1 are in the same array. Possible that they are now in separate arrays. Look at the differing "Event" count for each device.
Later posts show you have repartitioned a new drive and it is syncing (good). There is no need for the further output if all is well.

All is well. In fact, one of the discs had probably been faulty for some time. It appears that the PSU was failing, probably not starting both discs but a random one of them. It seems the second disc was damaged by this and in the end was randomly dropping out of the array, even when set up anew, because the disc would lock up whenever the BIOS attempted a SMART self-test, or whenever a heavy disc load was going on. The cables were good. It was tricky to understand that the disc was faulty, because for a long time it literally loved to fool me: once rebuilt, the array would work for a few days, and then the disc would disappear. Of course, it could be that the failing PSU was merely concomitant with the failing of the HDD. When I left it, the new disc was in sync and the system stable. The faulty disc (unmounted) had a higher temperature than the other. I will do a /dev/urandom pass over it before setting it to "waste". To be honest, I am happier now that the two discs are of different ages and from different manufacturers; logic says the probability that both fail at the same time is lower. Thank you for the command to create the partition, very powerful; it goes into the "savvy advice box". Thanks to all who helped make my life less miserable :-)
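For reference, the /dev/urandom wipe mentioned above can be done with dd (or shred). The device name below is a deliberate placeholder, since wiping the wrong disk is irreversible; as with the other sketches, the commands are only built as strings and printed:

```shell
DISK=/dev/sdX   # PLACEHOLDER for the retired disk -- confirm via lsblk/serial first!

WIPE="dd if=/dev/urandom of=$DISK bs=1M status=progress"  # one random overwrite
ALT="shred -v -n 1 $DISK"                                 # alternative: shred, one random pass

echo "$WIPE"; echo "$ALT"
```

One full random pass is generally considered sufficient for modern drives before disposal.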
participants (5)
- Andrei Borzenkov
- Carlos E. R.
- David C. Rankin
- Per Jessen
- stakanov