[Bug 1123328] New: obs-arm-7 hangs on umount/mount (obs worker kvm builds) with kernel >= 4.19
http://bugzilla.suse.com/show_bug.cgi?id=1123328

Bug ID: 1123328
Summary: obs-arm-7 hangs on umount/mount (obs worker kvm builds) with kernel >= 4.19
Classification: openSUSE
Product: openSUSE Tumbleweed
Version: Current
Hardware: Other
OS: Other
Status: NEW
Severity: Normal
Priority: P5 - None
Component: Kernel
Assignee: kernel-maintainers@forge.provo.novell.com
Reporter: dmueller@suse.com
QA Contact: qa-bugs@suse.de
Found By: ---
Blocker: ---

When obs-arm-7 (Cavium ThunderX CN99xx) builds with a kernel >= 4.19, the initial umount/mount of the worker filesystems hangs:

# ps -elf | grep umou
4 D root  60909      1  0 80 0 -  995 blk_mq Jan22 ?      00:00:00 umount /var/cache/obs/worker/root_3/.mount
0 D root 102921 102894  0 80 0 -  998 blkdev 10:11 pts/20 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_5/root /var/cache/obs/worker/root_5/.mount
0 D root 102949 102922  0 80 0 -  998 blkdev 10:11 pts/23 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_8/root /var/cache/obs/worker/root_8/.mount
0 D root 102977 102950  0 80 0 -  998 blkdev 10:11 pts/22 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_7/root /var/cache/obs/worker/root_7/.mount
0 D root 103005 102978  0 80 0 -  998 blkdev 10:11 pts/25 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_10/root /var/cache/obs/worker/root_10/.mount
0 D root 103063 103036  0 80 0 -  998 blkdev 10:12 pts/24 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_9/root /var/cache/obs/worker/root_9/.mount
0 D root 103137 103110  0 80 0 -  998 blkdev 10:13 pts/17 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_2/root /var/cache/obs/worker/root_2/.mount
0 D root 103253 103217  0 80 0 -  998 blkdev 10:14 pts/26 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_11/root /var/cache/obs/worker/root_11/.mount
0 D root 103273 103226  0 80 0 -  998 blkdev 10:14 pts/27 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_12/root /var/cache/obs/worker/root_12/.mount
0 D root 103324 103292  0 80 0 -  998 blkdev 10:14 pts/30 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_15/root /var/cache/obs/worker/root_15/.mount
0 D root 103875 103843  0 80 0 -  998 blkdev 10:21 pts/16 00:00:00 mount -o noatime,loop /var/cache/obs/worker/root_1/root /var/cache/obs/worker/root_1/.mount

sysrq-W gives:

[1213689.321930] task                        PC stack   pid father
[1213689.325022] umount          D    0 60909      1 0x00000009
[1213689.325027] Call trace:
[1213689.325035]  __switch_to+0x9c/0xd8
[1213689.325042]  __schedule+0x2a0/0x888
[1213689.325045]  schedule+0x30/0x88
[1213689.325050]  blk_mq_freeze_queue_wait+0x54/0xa0
[1213689.325053]  blk_freeze_queue+0x30/0x58
[1213689.325055]  blk_mq_freeze_queue+0x20/0x30
[1213689.325064]  loop_clr_fd+0x4c/0x288 [loop]
[1213689.325067]  lo_release+0xa4/0xc8 [loop]
[1213689.325072]  __blkdev_put+0x22c/0x260
[1213689.325074]  blkdev_put+0xe4/0x118
[1213689.325079]  kill_block_super+0x44/0x58
[1213689.325082]  deactivate_locked_super+0x50/0x90
[1213689.325084]  deactivate_super+0x74/0x80
[1213689.325087]  cleanup_mnt+0x44/0x88
[1213689.325089]  __cleanup_mnt+0x20/0x30
[1213689.325093]  task_work_run+0xb8/0xe8
[1213689.325096]  do_notify_resume+0x2f4/0x390
[1213689.325097]  work_pending+0x8/0x14
[1213689.325101] mount           D    0 61620      1 0x00000001
[1213689.325104] Call trace:
[1213689.325106]  __switch_to+0x9c/0xd8
[1213689.325109]  __schedule+0x2a0/0x888
[1213689.325111]  schedule+0x30/0x88
[1213689.325113]  schedule_preempt_disabled+0x14/0x20
[1213689.325115]  __mutex_lock.isra.1+0x2c0/0x4b8
[1213689.325117]  __mutex_lock_slowpath+0x24/0x30
[1213689.325120]  mutex_lock+0x48/0x50
[1213689.325122]  __blkdev_get+0x70/0x488
[1213689.325124]  blkdev_get+0x10c/0x368
[1213689.325125]  blkdev_open+0x9c/0xb0
[1213689.325128]  do_dentry_open+0x11c/0x340
[1213689.325130]  vfs_open+0x38/0x48
[1213689.325133]  do_last+0x224/0x830
[1213689.325135]  path_openat+0x68/0x238
[1213689.325137]  do_filp_open+0x70/0xd0
[1213689.325139]  do_sys_open+0x15c/0x1e8
[1213689.325141]  __arm64_sys_openat+0x2c/0x38
[1213689.325145]  el0_svc_common+0x98/0x100
[1213689.325147]  el0_svc_handler+0x38/0x78
[1213689.325149]  el0_svc+0x8/0xc

The remaining mount tasks (PIDs 62347, 62405, 62568, 62723, 66786, 72349, 75917, 77793, 79041, 79830 and 83698), all in D state, show the identical call trace as PID 61620, blocked in mutex_lock via __blkdev_get.

...

--
You are receiving this mail because:
You are on the CC list for the bug.
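The traces above show one consistent pattern: the umount task holds bd_mutex inside __blkdev_put while blk_mq_freeze_queue_wait waits for in-flight queue references to drain, and every subsequent mount task then piles up on the same bd_mutex in __blkdev_get. A minimal, hypothetical Python model of that state (toy names, not kernel code) reproduces the pile-up whenever one queue reference is never dropped:

```python
import threading
import time

class ToyBlockDev:
    """Toy model of the state in the traces above (not kernel code):
    release() plays __blkdev_put -> loop_clr_fd -> blk_mq_freeze_queue,
    try_open() plays __blkdev_get blocking on bd_mutex."""

    def __init__(self):
        self.bd_mutex = threading.Lock()   # models bdev->bd_mutex
        self.cond = threading.Condition()
        self.queue_refs = 0                # models in-flight queue references

    def queue_enter(self):
        with self.cond:
            self.queue_refs += 1

    def queue_exit(self):
        with self.cond:
            self.queue_refs -= 1
            self.cond.notify_all()

    def release(self, timeout):
        # umount path: take bd_mutex, then wait for the queue to drain
        with self.bd_mutex:
            with self.cond:
                return self.cond.wait_for(lambda: self.queue_refs == 0,
                                          timeout=timeout)

    def try_open(self, timeout):
        # mount path: blocks here while release() still holds bd_mutex
        if self.bd_mutex.acquire(timeout=timeout):
            self.bd_mutex.release()
            return True
        return False

dev = ToyBlockDev()
dev.queue_enter()   # a queue reference that is never dropped
umount = threading.Thread(target=dev.release, args=(1.0,))
umount.start()
time.sleep(0.1)     # let release() grab bd_mutex first
opener_blocked = not dev.try_open(timeout=0.3)
umount.join()
print(opener_blocked)   # True: every new opener is stuck, like the D-state mounts
```

The model only illustrates why one stuck drain is enough to wedge every later opener; it says nothing about why the queue reference fails to drain on this machine, which is the actual bug.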
http://bugzilla.suse.com/show_bug.cgi?id=1123328#c10
--- Comment #10 from Jiri Slaby
Proposed patch:
https://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs.git/commit/?h=fixes&id=d4f4de5e5ef8efde85febb6876cd3c8ab1631999

queued for 5.3.7 (to be released tomorrow).
http://bugzilla.suse.com/show_bug.cgi?id=1123328#c15
--- Comment #15 from Jiri Slaby
ah .. we started compressing the modules, so kiwi is not finding any; and with no modules found, no firmware files are pulled in either ...
FWIW we compress firmware files too. Dunno if it matters for kiwi...
http://bugzilla.suse.com/show_bug.cgi?id=1123328#c17
--- Comment #17 from Ruediger Oertel
> FWIW we compress firmware files too. Dunno if it matters for kiwi...
well, that will probably break things again when it hits: file selection is done by parsing the output of modinfo, and since the files are listed there ending in .bin, the code will probably not find anything any more.
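The failure mode described above can be sketched as follows. The function names and directory layout here are hypothetical, not kiwi's actual code; they only illustrate why an exact-name lookup of the firmware names reported by modinfo breaks once the files are stored compressed:

```python
import os
import tempfile

def find_firmware(fw_name, fw_root):
    # naive exact-name lookup, as implied by parsing modinfo output
    path = os.path.join(fw_root, fw_name)
    return path if os.path.exists(path) else None

def find_firmware_tolerant(fw_name, fw_root):
    # also try the common compression suffixes
    for suffix in ("", ".xz", ".zst", ".gz"):
        path = os.path.join(fw_root, fw_name + suffix)
        if os.path.exists(path):
            return path
    return None

with tempfile.TemporaryDirectory() as fw_root:
    # the file on disk is compressed, but modinfo still reports "nic.bin"
    open(os.path.join(fw_root, "nic.bin.xz"), "w").close()
    miss = find_firmware("nic.bin", fw_root)
    hit = find_firmware_tolerant("nic.bin", fw_root)

print(miss)             # None: the exact match no longer finds anything
print(hit is not None)  # True: tolerating the suffix recovers the file
```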