http://bugzilla.novell.com/show_bug.cgi?id=599885
http://bugzilla.novell.com/show_bug.cgi?id=599885#c3
Franz Huber changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEEDINFO |NEW
CC| |miketc@kabsi.at
InfoProvider|miketc@kabsi.at |
--- Comment #3 from Franz Huber 2010-04-27 11:32:30 UTC ---
First, I switched to the console before YaST started anything RAID-related.
Output of "mdadm -Esvvv":
/dev/sde:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.02
Orig Family : f5301bd5
Family : f5301bd5
Generation : 00143187
UUID : 7d149a87:a33f3f14:9b5655b2:2e1d1490
Checksum : bdd84f90 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk01 Serial : WD-WMASY2708759
State : active
Id : 00040000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
[Raid5]:
UUID : 3d4409ed:21143660:bda67464:b6c5a60b
RAID Level : 5
Members : 4
This Slot : 1
Array Size : 3750776832 (1788.51 GiB 1920.40 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 9767648
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : WD-WMASY2694410
State : active
Id : 00020000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk02 Serial : WD-WMASY6437503
State : active
Id : 00010000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk03 Serial : WD-WCASY5619508
State : active
Id : 00030000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
/dev/sdc2:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.02
Orig Family : f5301bd5
Family : f5301bd5
Generation : 00143187
UUID : 7d149a87:a33f3f14:9b5655b2:2e1d1490
Checksum : bdd84f90 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk00 Serial : WD-WMASY2694410
State : active
Id : 00020000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
[Raid5]:
UUID : 3d4409ed:21143660:bda67464:b6c5a60b
RAID Level : 5
Members : 4
This Slot : 0
Array Size : 3750776832 (1788.51 GiB 1920.40 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 9767648
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk01 Serial : WD-WMASY2708759
State : active
Id : 00040000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk02 Serial : WD-WMASY6437503
State : active
Id : 00010000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk03 Serial : WD-WCASY5619508
State : active
Id : 00030000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
/dev/sdc:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.02
Orig Family : f5301bd5
Family : f5301bd5
Generation : 00143187
UUID : 7d149a87:a33f3f14:9b5655b2:2e1d1490
Checksum : bdd84f90 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk00 Serial : WD-WMASY2694410
State : active
Id : 00020000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
[Raid5]:
UUID : 3d4409ed:21143660:bda67464:b6c5a60b
RAID Level : 5
Members : 4
This Slot : 0
Array Size : 3750776832 (1788.51 GiB 1920.40 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 9767648
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk01 Serial : WD-WMASY2708759
State : active
Id : 00040000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk02 Serial : WD-WMASY6437503
State : active
Id : 00010000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk03 Serial : WD-WCASY5619508
State : active
Id : 00030000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
/dev/sdd2:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.02
Orig Family : f5301bd5
Family : f5301bd5
Generation : 00143187
UUID : 7d149a87:a33f3f14:9b5655b2:2e1d1490
Checksum : bdd84f90 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk03 Serial : WD-WCASY5619508
State : active
Id : 00030000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
[Raid5]:
UUID : 3d4409ed:21143660:bda67464:b6c5a60b
RAID Level : 5
Members : 4
This Slot : 3
Array Size : 3750776832 (1788.51 GiB 1920.40 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 9767648
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : WD-WMASY2694410
State : active
Id : 00020000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk01 Serial : WD-WMASY2708759
State : active
Id : 00040000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk02 Serial : WD-WMASY6437503
State : active
Id : 00010000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
/dev/sdd:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.02
Orig Family : f5301bd5
Family : f5301bd5
Generation : 00143187
UUID : 7d149a87:a33f3f14:9b5655b2:2e1d1490
Checksum : bdd84f90 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk03 Serial : WD-WCASY5619508
State : active
Id : 00030000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
[Raid5]:
UUID : 3d4409ed:21143660:bda67464:b6c5a60b
RAID Level : 5
Members : 4
This Slot : 3
Array Size : 3750776832 (1788.51 GiB 1920.40 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 9767648
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : WD-WMASY2694410
State : active
Id : 00020000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk01 Serial : WD-WMASY2708759
State : active
Id : 00040000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk02 Serial : WD-WMASY6437503
State : active
Id : 00010000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.02
Orig Family : f5301bd5
Family : f5301bd5
Generation : 00143187
UUID : 7d149a87:a33f3f14:9b5655b2:2e1d1490
Checksum : bdd84f90 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk02 Serial : WD-WMASY6437503
State : active
Id : 00010000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
[Raid5]:
UUID : 3d4409ed:21143660:bda67464:b6c5a60b
RAID Level : 5
Members : 4
This Slot : 2
Array Size : 3750776832 (1788.51 GiB 1920.40 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 9767648
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : WD-WMASY2694410
State : active
Id : 00020000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk01 Serial : WD-WMASY2708759
State : active
Id : 00040000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Disk03 Serial : WD-WCASY5619508
State : active
Id : 00030000
Usable Size : 1250259214 (596.17 GiB 640.13 GB)
Output of "dmraid -gs" ("dmraid -ssg" apparently would not work; I suppose you
meant this listing?):
*** Superset
name : isw_ebbdfhahhd
size : 5001054900
stride : 0
type : GROUP
status : setup
subsets: 1
devs : 4
spares : 0
--> Subset
name : isw_ebbdfhahhd_Raid5
size : 3750776832
stride : 128
type : raid5_la
status : ok
subsets: 0
devs : 4
spares : 0
Output of "dmraid -d -v -a y":
DEBUG: _find_set: searching isw_ebbdfhahhd
DEBUG: _find_set: not found isw_ebbdfhahhd
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: not found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: not found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd
DEBUG: _find_set: found isw_ebbdfhahhd
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd
DEBUG: _find_set: found isw_ebbdfhahhd
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd
DEBUG: _find_set: found isw_ebbdfhahhd
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: searching isw_ebbdfhahhd_Raid5
DEBUG: _find_set: found isw_ebbdfhahhd_Raid5
DEBUG: _find_set: found isw_ebbdfhahhd_Raid5
DEBUG: set status of set "isw_ebbdfhahhd_Raid5" to 16
RAID set "isw_ebbdfhahhd_Raid5" was not activated
ERROR: device "isw_ebbdfhahhd_Raid5" could not be found
INFO: Activating GROUP raid set "isw_ebbdfhahhd"
DEBUG: freeing devices of RAID set "isw_ebbdfhahhd_Raid5"
DEBUG: freeing device "isw_ebbdfhahhd_Raid5", path "/dev/sdc"
DEBUG: freeing device "isw_ebbdfhahhd_Raid5", path "/dev/sde"
DEBUG: freeing device "isw_ebbdfhahhd_Raid5", path "/dev/sdb"
DEBUG: freeing device "isw_ebbdfhahhd_Raid5", path "/dev/sdd"
DEBUG: freeing devices of RAID set "isw_ebbdfhahhd"
DEBUG: freeing device "isw_ebbdfhahhd", path "/dev/sdc"
DEBUG: freeing device "isw_ebbdfhahhd", path "/dev/sde"
DEBUG: freeing device "isw_ebbdfhahhd", path "/dev/sdb"
DEBUG: freeing device "isw_ebbdfhahhd", path "/dev/sdd"
After that I tried letting YaST configure mdraid for my RAID.
I quickly switched to the kernel messages because the system becomes
unresponsive after only a few seconds. I could not capture a log, but there
seems to be an infinite loop, with this kernel message recurring:
kernel: BUG: soft lockup - CPU#1 stuck for 61s! [sh:4083]
After rebooting, I selected "No" for mdraid in YaST.
The "mdadm -Esvvv" output was the same (as far as I could tell), but dmraid
would not produce any output, regardless of the options.
There was also a kernel BUG in dmesg, the relevant lines being:
[ 171.918250] md: raid0 personality registered for level 0
[ 171.928164] md: raid1 personality registered for level 1
[ 171.937602] async_tx: api initialized (async)
[ 171.937996] xor: automatically using best checksumming function: generic_sse
[ 171.956511] generic_sse: 12100.000 MB/sec
[ 171.956512] xor: using function: generic_sse (12100.000 MB/sec)
[ 172.024535] raid6: int64x1 2486 MB/s
[ 172.092512] raid6: int64x2 3178 MB/s
[ 172.160515] raid6: int64x4 2470 MB/s
[ 172.228522] raid6: int64x8 2404 MB/s
[ 172.296516] raid6: sse2x1 5552 MB/s
[ 172.364512] raid6: sse2x2 5710 MB/s
[ 172.432511] raid6: sse2x4 9363 MB/s
[ 172.432512] raid6: using algorithm sse2x4 (9363 MB/s)
[ 172.435276] md: raid6 personality registered for level 6
[ 172.435277] md: raid5 personality registered for level 5
[ 172.435279] md: raid4 personality registered for level 4
[ 172.454060] md: raid10 personality registered for level 10
[ 172.463329] md: multipath personality registered for level -4
[ 177.248363] device-mapper: uevent: version 1.0.3
[ 177.248531] device-mapper: ioctl: 4.17.0-ioctl (2010-03-05) initialised:
dm-devel@redhat.com
[ 177.292901] device-mapper: dm-raid45: initialized v0.2431
[ 177.516067] device-mapper: dm-raid45: /dev/sdc is raid disk 0
[ 177.516069] device-mapper: dm-raid45: /dev/sde is raid disk 1
[ 177.516071] device-mapper: dm-raid45: /dev/sdb is raid disk 2
[ 177.516073] device-mapper: dm-raid45: /dev/sdd is raid disk 3
[ 177.516075] device-mapper: dm-raid45: 128/128/256 sectors chunk/io/recovery
size, 64 stripes
[ 177.516078] device-mapper: dm-raid45: algorithm "xor_32", 5 chunks with
6750MB/s
[ 177.516080] device-mapper: dm-raid45: RAID5 (left asymmetric) set with net
3/4 devices
[ 177.528195] ------------[ cut here ]------------
[ 177.528204] kernel BUG at
/usr/src/packages/BUILD/kernel-default-2.6.34/linux-2.6.33/drivers/md/dm.c:2213!
[ 177.528214] invalid opcode: 0000 [#1] SMP
[ 177.528223] last sysfs file: /sys/module/dm_mod/initstate
[ 177.528231] CPU 0
[ 177.528232] Modules linked in: dm_raid45 dm_region_hash dm_log dm_memcache
dm_message dm_snapshot dm_mod multipath raid10 raid456 async_raid6_recov
async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 parport_pc
parport nls_utf8 vfat fat sbp2 arc4 ecb acpi_cpufreq fan nfs nfs_acl lockd
fscache auth_rpcgss sunrpc nls_iso8859_1 nls_cp437 af_packet st sr_mod sg cdrom
sd_mod usbhid usb_storage ohci1394 uhci_hcd radeon ahci thermal ttm
drm_kms_helper libata ieee1394 processor skge thermal_sys button intel_agp drm
sky2 i2c_algo_bit floppy scsi_mod ehci_hcd usbcore edd squashfs loop [last
unloaded: parport]
[ 177.528305]
[ 177.528313] Pid: 3962, comm: dmraid Not tainted 2.6.34-rc5-6-default #1
P5Q-E/P5Q-E
[ 177.528321] RIP: 0010:[<ffffffffa0497ba5>] [<ffffffffa0497ba5>]
dm_put+0x115/0x120 [dm_mod]
[ 177.528339] RSP: 0018:ffff880114b93e18 EFLAGS: 00010202
[ 177.528347] RAX: 0000000000000000 RBX: ffff88012c0ed000 RCX:
0000000000000000
[ 177.528356] RDX: 0000000000000000 RSI: 0000000000000296 RDI:
ffff88012c0ed000
[ 177.528364] RBP: ffffc90011285000 R08: 0000000000000000 R09:
ffffffff817d664c
[ 177.528372] R10: 0000000000000001 R11: 0000000000000000 R12:
0000000000000000
[ 177.528381] R13: 0000000000000000 R14: 0000000000000000 R15:
0000000000609150
[ 177.528390] FS: 00007fadf15e57a0(0000) GS:ffff880001e00000(0000)
knlGS:0000000000000000
[ 177.528393] device-mapper: dm-raid45: No regions to recover
[ 177.528405] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 177.528413] CR2: 00007f7d658033f0 CR3: 000000012c1c2000 CR4:
00000000000406f0
[ 177.528422] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 177.528430] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[ 177.528439] Process dmraid (pid: 3962, threadinfo ffff880114b92000, task
ffff88012169c840)
[ 177.528447] Stack:
[ 177.528453] ffff88012c0ed000 ffffc90011285000 0000000000000000
ffffffffa049cdc1
[ 177.528458] <0> 0000000000609100 ffff88012169c840 0000000000609150
ffffffffa049ce90
[ 177.528468] <0> 0000000000000006 ffffffffa049dadc 0000000000000000
ffff88012c0b83c8
[ 177.528483] Call Trace:
[ 177.528509] [<ffffffffa049cdc1>] do_resume+0xe1/0x1b0 [dm_mod]
[ 177.528532] [<ffffffffa049dadc>] ctl_ioctl+0x1ac/0x250 [dm_mod]
[ 177.528556] [<ffffffffa049db8e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
[ 177.528578] [<ffffffff8113e4b9>] vfs_ioctl+0x39/0xc0
[ 177.528590] [<ffffffff8113e9ef>] do_vfs_ioctl+0x5f/0x3c0
[ 177.528601] [<ffffffff8113edd0>] sys_ioctl+0x80/0xa0
[ 177.528613] [<ffffffff81002efb>] system_call_fastpath+0x16/0x1b
[ 177.528626] [<00007fadf0ce2247>] 0x7fadf0ce2247
[ 177.528633] Code: 48 8b 1c 24 4c 8b 64 24 10 48 83 c4 18 e9 84 dc ff ff 0f
1f 40 00 48 89 ef e8 78 2c 00 00 48 89 ef e8 90 2c 00 00 e9 7b ff ff ff <0f> 0b
66 0f 1f 84 00 00 00 00 00 48 83 ec 08 48 8b bf c8 02 00
[ 177.528670] RIP [<ffffffffa0497ba5>] dm_put+0x115/0x120 [dm_mod]
[ 177.528682] RSP <ffff880114b93e18>
[ 177.528691] ---[ end trace 15a83921c1ee1eb7 ]---
[ 177.650701] BUG: unable to handle kernel NULL pointer dereference at
0000000000000231
[ 177.650715] IP: [<ffffffffa049c7e5>] list_devices+0xf5/0x190 [dm_mod]
[ 177.650726] PGD 108c37067 PUD 108c36067 PMD 0
[ 177.650732] Oops: 0000 [#2] SMP
[ 177.650737] last sysfs file:
/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sr0/dev
[ 177.650744] CPU 1
[ 177.650745] Modules linked in: dm_raid45 dm_region_hash dm_log dm_memcache
dm_message dm_snapshot dm_mod multipath raid10 raid456 async_raid6_recov
async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 parport_pc
parport nls_utf8 vfat fat sbp2 arc4 ecb acpi_cpufreq fan nfs nfs_acl lockd
fscache auth_rpcgss sunrpc nls_iso8859_1 nls_cp437 af_packet st sr_mod sg cdrom
sd_mod usbhid usb_storage ohci1394 uhci_hcd radeon ahci thermal ttm
drm_kms_helper libata ieee1394 processor skge thermal_sys button intel_agp drm
sky2 i2c_algo_bit floppy scsi_mod ehci_hcd usbcore edd squashfs loop [last
unloaded: parport]
[ 177.650793]
[ 177.650798] Pid: 3976, comm: vgscan Tainted: G D
2.6.34-rc5-6-default #1 P5Q-E/P5Q-E
[ 177.650803] RIP: 0010:[<ffffffffa049c7e5>] [<ffffffffa049c7e5>]
list_devices+0xf5/0x190 [dm_mod]
[ 177.650812] RSP: 0018:ffff880114a0de18 EFLAGS: 00010246
[ 177.650817] RAX: 0000000000000001 RBX: ffff880121581480 RCX:
0000000000003ec8
[ 177.650822] RDX: ffffffffa04a5b20 RSI: 0000000000004000 RDI:
ffff88012c0ed000
[ 177.650827] RBP: ffffc900110b1138 R08: ffff880127a56e80 R09:
ffff880114a0dc54
[ 177.650832] R10: 0000160000000000 R11: 0000000000000001 R12:
ffffffffa04a5b20
[ 177.650837] R13: 0000000000000026 R14: 000000000000001a R15:
0000000000000000
[ 177.650843] FS: 00007f164a7d37a0(0000) GS:ffff880001e80000(0000)
knlGS:0000000000000000
[ 177.650849] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 177.650854] CR2: 0000000000000231 CR3: 0000000108c63000 CR4:
00000000000406e0
[ 177.650859] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 177.650864] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[ 177.650870] Process vgscan (pid: 3976, threadinfo ffff880114a0c000, task
ffff8801217b40c0)
[ 177.650875] Stack:
[ 177.650879] 0000000000000000 0000000000004000 0000000000000000
ffff8801217b40c0
[ 177.650882] <0> 00000000006d8550 ffffffffa049c6f0 0000000000000002
0000000000000000
[ 177.650888] <0> 00000000006d8550 ffffffffa049dadc ffff88012f77a4c0
0000000000000296
[ 177.650898] Call Trace:
[ 177.650921] [<ffffffffa049dadc>] ctl_ioctl+0x1ac/0x250 [dm_mod]
[ 177.650936] [<ffffffffa049db8e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
[ 177.650951] [<ffffffff8113e4b9>] vfs_ioctl+0x39/0xc0
[ 177.650959] [<ffffffff8113e9ef>] do_vfs_ioctl+0x5f/0x3c0
[ 177.650965] [<ffffffff8113edd0>] sys_ioctl+0x80/0xa0
[ 177.650973] [<ffffffff81002efb>] system_call_fastpath+0x16/0x1b
[ 177.650981] [<00007f1649899247>] 0x7f1649899247
[ 177.650986] Code: ea 48 c1 e2 04 48 8b 9a c0 58 4a a0 4c 8d a2 c0 58 4a a0
eb 67 90 48 85 c0 74 07 89 ea 29 c2 89 50 08 48 8b 7b 30 e8 7b 8d ff ff <8b> 90
30 02 00 00 c7 45 08 00 00 00 00 48 8d 7d 0c 89 d0 0f b6
[ 177.651009] RIP [<ffffffffa049c7e5>] list_devices+0xf5/0x190 [dm_mod]
[ 177.651016] RSP <ffff880114a0de18>
[ 177.651021] CR2: 0000000000000231
[ 177.651026] ---[ end trace 15a83921c1ee1eb8 ]---
[ 178.344450] end_request: I/O error, dev fd0, sector 0
I was, however, not able to reproduce the segfaulting of dmraid with this
sequence of commands.