Markus Koßmann changed bug 1155836
What         Removed     Added
Status       RESOLVED    REOPENED
Resolution   FIXED       ---

Comment # 16 on bug 1155836 from
Unfortunately I have to reopen. I have noticed that the problem still exists
with Tumbleweed's 5.3.11 kernel, which according to the source code has the fix
applied. On startup the syslog shows:

2019-11-21T04:11:21.813272+01:00 linux-2kgy kernel: [    0.000000] microcode:
microcode updated early to revision 0xd4, date = 2019-08-14
2019-11-21T04:11:21.813275+01:00 linux-2kgy kernel: [    0.000000] Linux
version 5.3.11-1-default (geeko@buildhost) (gcc version 9.2.1 20190903
[gcc-9-branch revision 275330] (SUSE Linux)) #1 SMP Tue Nov 12 18:57:39 UTC
2019 (0a195a8)
2019-11-21T04:11:21.813277+01:00 linux-2kgy kernel: [    0.000000] Command
line: BOOT_IMAGE=/boot/vmlinuz-5.3.11-1-default
root=/dev/mapper/system--nvme-root_factory
resume=/dev/disk/by-uuid/151d9a92-bb3f-40f9-a89c-21bd30ce3017 splash=silent
quiet showopts

and then about 50 seconds later:

2019-11-21T04:12:14.892023+01:00 linux-2kgy kernel: [   62.541351] BUG:
workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 52s!
2019-11-21T04:12:14.892032+01:00 linux-2kgy kernel: [   62.541358] Showing busy
workqueues and worker pools:
2019-11-21T04:12:14.892033+01:00 linux-2kgy kernel: [   62.541359] workqueue
events: flags=0x0
2019-11-21T04:12:14.892034+01:00 linux-2kgy kernel: [   62.541360]   pwq 2:
cpus=1 node=0 flags=0x0 nice=0 active=3/256
2019-11-21T04:12:14.892035+01:00 linux-2kgy kernel: [   62.541362]    
in-flight: 157:snd_hdac_bus_process_unsol_events [snd_hda_core]
snd_hdac_bus_process_unsol_events [snd_hda_core]
2019-11-21T04:12:14.892046+01:00 linux-2kgy kernel: [   62.541368]     pending:
push_to_pool
2019-11-21T04:12:14.892047+01:00 linux-2kgy kernel: [   62.541385]   pwq 0:
cpus=0 node=0 flags=0x0 nice=0 active=1/256
2019-11-21T04:12:14.892047+01:00 linux-2kgy kernel: [   62.541386]     pending:
check_corruption
2019-11-21T04:12:14.892048+01:00 linux-2kgy kernel: [   62.541390] workqueue
events_unbound: flags=0x2
2019-11-21T04:12:14.892049+01:00 linux-2kgy kernel: [   62.541390]   pwq 16:
cpus=0-7 flags=0x4 nice=0 active=1/512
2019-11-21T04:12:14.892049+01:00 linux-2kgy kernel: [   62.541394]    
in-flight: 200:fsnotify_mark_destroy_workfn BAR(1)
2019-11-21T04:12:14.892062+01:00 linux-2kgy kernel: [   62.541402] workqueue
events_freezable: flags=0x4
2019-11-21T04:12:14.892063+01:00 linux-2kgy kernel: [   62.541403]   pwq 0:
cpus=0 node=0 flags=0x0 nice=0 active=1/256
2019-11-21T04:12:14.892064+01:00 linux-2kgy kernel: [   62.541404]     pending:
pci_pme_list_scan
2019-11-21T04:12:14.892066+01:00 linux-2kgy kernel: [   62.541406] workqueue
events_power_efficient: flags=0x80
2019-11-21T04:12:14.892066+01:00 linux-2kgy kernel: [   62.541407]   pwq 2:
cpus=1 node=0 flags=0x0 nice=0 active=2/256
2019-11-21T04:12:14.892067+01:00 linux-2kgy kernel: [   62.541408]     pending:
neigh_periodic_work, neigh_periodic_work
2019-11-21T04:12:14.892068+01:00 linux-2kgy kernel: [   62.541412] workqueue
rcu_gp: flags=0x8
2019-11-21T04:12:14.892069+01:00 linux-2kgy kernel: [   62.541413]   pwq 2:
cpus=1 node=0 flags=0x0 nice=0 active=1/256
2019-11-21T04:12:14.892070+01:00 linux-2kgy kernel: [   62.541414]     pending:
srcu_invoke_callbacks
2019-11-21T04:12:14.892070+01:00 linux-2kgy kernel: [   62.541416] workqueue
mm_percpu_wq: flags=0x8
2019-11-21T04:12:14.892071+01:00 linux-2kgy kernel: [   62.541417]   pwq 2:
cpus=1 node=0 flags=0x0 nice=0 active=1/256
2019-11-21T04:12:14.892072+01:00 linux-2kgy kernel: [   62.541418]     pending:
vmstat_update
2019-11-21T04:12:14.892073+01:00 linux-2kgy kernel: [   62.541433] pool 2:
cpus=1 node=0 flags=0x0 nice=0 hung=52s workers=4 idle: 300 743 19
2019-11-21T04:12:14.892076+01:00 linux-2kgy kernel: [   62.541440] pool 16:
cpus=0-7 flags=0x4 nice=0 hung=0s workers=12 idle: 194 193 69 177 192 195 191
190 8 201 203

At that point I rebooted the system with the power switch because I had no time
to look into it further. Since then the problem has not occurred again, up to
and including today (I boot the system at least twice a day and am now running
5.3.12). Before that there were two occurrences with 5.3.9 and at least one
with 5.3.8.
Is there anything I could do the next time the problem occurs to collect more
information about it than what is currently written to the syslog?

