jdd wrote:
> I don't know what cpu 4 is doing :-)
> may I change kernel or any kernel config?
Dec 08 15:45:32 linux-uegt kernel: BUG: soft lockup - CPU#1 stuck for 22s! [kwin:1542]
Dec 08 15:45:32 linux-uegt kernel: BUG: soft lockup - CPU#2 stuck for 22s! [Xorg:1054]
Dec 08 15:45:32 linux-uegt kernel: BUG: soft lockup - CPU#3 stuck for 23s! [systemd-journal:470]
====
In **my mind**, 22-23 seconds is an awfully low threshold. From the rest of the log, most things seem to be waiting on flush-I/O timeouts, which indicates you are running more disk-I/O-intensive processes than your disk can keep up with -- any higher-priority process gets first dibs, and systemd puts itself at the top of the food chain. From PAST discussion, the designers of systemd-journal designed it for a high-speed SSD, and using it on a normal spinning disk can bring many systems to a crawl.
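If journald's flushing turns out to be the bottleneck, one thing to try is confining or shrinking the journal. A hedged sketch of /etc/systemd/journald.conf settings -- the sizes here are illustrative, not tested recommendations:

    [Journal]
    # Keep the journal in RAM only (lost at reboot) -- avoids constant disk flushes
    Storage=volatile
    # Cap the runtime (tmpfs) journal so it cannot fill the filesystem
    RuntimeMaxUse=64M
    # Or keep it on disk, but bounded:
    #Storage=persistent
    #SystemMaxUse=200M

Then restart the daemon ("systemctl restart systemd-journald") to pick up the change.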
What do you mean? If you mean building your own kernel: the best thing to do is to copy a working "config" into your source tree, then run "make oldconfig" to make the new config compatible with the old, then run "make xconfig". That gives you a nice GUI you can use to go through the 200-400 or so options; most you won't need, as they apply to hardware you won't have or to other platforms. (A sketch of the sequence is at the end of this message.)

For comparison: I have a relatively beefy I/O system (RAID 10 for all data disks) with about 11 separate RAID groups -- which means the system can do UP TO around 10-11 separate reads/writes at the same time -- AND I have my timeout set to 180 seconds (yeah, I try to go for over-engineered solutions, and often find out I was lucky, because worst cases happen). (The kernel knobs involved are also sketched below.)

While you are running your regular system load, try running 'latencytop -c >& ~/latency.log'. That may be able to tell you which commands are causing the worst latency problems -- the ones causing a **cascade** of timeouts that could be behind your issue.

Also, cpu '4' is likely a red herring (next time it might be 3 or 0... whatever). You **could** use schedtool to set affinities for groups of processes (all background daemons on one core, all real-time ones on cores 2+3, all user procs on 4... etc.). Manipulating which cores they can run on might help gather data, but hopefully the latency logging will be sufficient to point the way. (A schedtool sketch is below, too.)

Unless systemd-journal has been improved a lot, it had a reputation for being poorly designed for normal disks, so you might check whether that has been fixed, and/or use another syslog daemon for the actual output.

Another thing -- if you are logging to a tmpfs filesystem, could it be filling up, and slowing down as it fragments and hunts for free space?

BTW -- I hate bugs like these -- so hard to know what direction to go.
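For the kernel-rebuild route, a minimal sketch of the sequence I mean -- the paths are assumptions, adjust them to wherever your source tree and old config actually live:

    cd /usr/src/linux                     # your kernel source tree
    cp /boot/config-$(uname -r) .config   # start from the running kernel's config
    make oldconfig                        # prompts only for options new to this kernel
    make xconfig                          # GUI for browsing/tuning the rest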
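On raising the warning thresholds rather than just chasing the I/O: a hedged sketch of the two sysctls involved; the values are illustrative, not recommendations:

    # Soft-lockup watchdog: the "stuck for 22s" warning fires at roughly
    # twice this threshold (default 10s, which matches the messages above)
    sysctl -w kernel.watchdog_thresh=30

    # Hung-task detector: the "blocked for more than N seconds" warnings
    sysctl -w kernel.hung_task_timeout_secs=180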
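And the affinity idea with schedtool -- a sketch only; the process names and masks are made-up examples for your 4-core box:

    # Affinity is a bitmask: 0x1 = CPU0, 0x2 = CPU1, 0x4 = CPU2, 0x8 = CPU3.
    # Pin the journal daemon to core 0:
    schedtool -a 0x1 $(pidof systemd-journald)
    # Pin Xorg and kwin to cores 1-2:
    schedtool -a 0x6 $(pidof Xorg) $(pidof kwin)
    # util-linux equivalent, one PID at a time:
    taskset -p 0x8 <pid>

If different processes then start (or stop) triggering the lockups, that tells you something about who is fighting over the disk and CPUs.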