[Bug 1232402] New: wake up after hibernation stopped working after switching to 6.4.0-150600.23.25 from 6.4.0-150600.23.22
https://bugzilla.suse.com/show_bug.cgi?id=1232402

            Bug ID: 1232402
           Summary: wake up after hibernation stopped working after
                    switching to 6.4.0-150600.23.25 from 6.4.0-150600.23.22
    Classification: openSUSE
           Product: openSUSE Distribution
           Version: Leap 15.6
          Hardware: Other
                OS: Other
            Status: NEW
          Severity: Normal
          Priority: P5 - None
         Component: Kernel
          Assignee: kernel-bugs@opensuse.org
          Reporter: oleksii.prudkyi@gmail.com
        QA Contact: qa-bugs@suse.de
  Target Milestone: ---
          Found By: ---
           Blocker: ---

Hi,

in the logs I got:

```
[ T1040] Freezing user space processes failed after 20.003 seconds (2 tasks refusing to freeze, wq_busy=0):
```

with processes like udev-worker and mdadm refusing to freeze. On 6.4.0-150600.23.22 everything is fine.

```
[ T1040] PM: Image signature found, resuming
[ T1040] PM: hibernation: resume from hibernation
[ T1040] random: crng reseeded on system resumption
[ T1040] Freezing user space processes
[ T1040] Freezing user space processes failed after 20.003 seconds (2 tasks refusing to freeze, wq_busy=0):
[ T1040] task:mdadm state:D stack:0 pid:985 tgid:985 ppid:642 flags:0x00000006
[ T1040] Call Trace:
[ T1040]  <TASK>
[ T1040]  __schedule+0x381/0x1540
[ T1040]  ? kmem_cache_alloc+0x102/0x300
[ T1040]  ? mempool_alloc+0x64/0x180
[ T1040]  ? __pfx_autoremove_wake_function+0x10/0x10
[ T1040]  schedule+0x24/0xb0
[ T1040]  _md_handle_reqeust+0x7e/0x1e0 [md_mod dc71c1fc6aeb160a219080a6f7f64626cf165910]
[ T1040]  ? __pfx_autoremove_wake_function+0x10/0x10
[ T1040]  __submit_bio+0xa8/0x150
[ T1040]  submit_bio_noacct_nocheck+0x153/0x370
[ T1040]  ? block_read_full_folio+0x206/0x350
[ T1040]  block_read_full_folio+0x206/0x350
[ T1040]  ? __pfx_blkdev_get_block+0x10/0x10
[ T1040]  ? __mod_memcg_lruvec_state+0x9d/0xf0
[ T1040]  ? __mod_lruvec_page_state+0x74/0xb0
[ T1040]  ? __pfx_blkdev_read_folio+0x10/0x10
[ T1040]  filemap_read_folio+0x41/0x2b0
[ T1040]  ? __pfx_workingset_update_node+0x10/0x10
[ T1040]  ? __pfx_blkdev_read_folio+0x10/0x10
[ T1040]  do_read_cache_folio+0x108/0x390
[ T1040]  ? slab_post_alloc_hook+0x69/0x2e0
[ T1040]  read_part_sector+0x32/0xa0
[ T1040]  read_lba+0xe5/0x180
[ T1040]  efi_partition+0xed/0x7e0
[ T1040]  ? vsnprintf+0x102/0x4c0
[ T1040]  ? snprintf+0x45/0x70
[ T1040]  ? __pfx_efi_partition+0x10/0x10
[ T1040]  ? bdev_disk_changed+0x228/0x570
[ T1040]  bdev_disk_changed+0x228/0x570
[ T1040]  blkdev_get_whole+0x8e/0x90
[ T1040]  blkdev_get_by_dev+0x298/0x2f0
[ T1040]  ? __pfx_blkdev_open+0x10/0x10
[ T1040]  blkdev_open+0x45/0xb0
[ T1040]  do_dentry_open+0x22f/0x420
[ T1040]  path_openat+0xde8/0x1050
[ T1040]  ? security_inode_alloc+0x24/0x90
[ T1040]  do_filp_open+0xc5/0x140
[ T1040]  ? kmem_cache_alloc+0x163/0x300
[ T1040]  ? getname_flags+0x46/0x1e0
[ T1040]  ? do_sys_openat2+0x248/0x320
[ T1040]  do_sys_openat2+0x248/0x320
[ T1040]  do_sys_open+0x57/0x80
[ T1040]  do_syscall_64+0x58/0x80
[ T1040]  ? syscall_exit_to_user_mode+0x1e/0x40
[ T1040]  ? do_syscall_64+0x67/0x80
[ T1040]  ? syscall_exit_to_user_mode+0x1e/0x40
[ T1040]  ? do_syscall_64+0x67/0x80
[ T1040]  ? syscall_exit_to_user_mode+0x1e/0x40
[ T1040]  ? do_syscall_64+0x67/0x80
[ T1040]  entry_SYSCALL_64_after_hwframe+0x7c/0xe6
[ T1040] RIP: 0033:0x7fc5d274e17e
[ T1040] RSP: 002b:00007fff11a04e60 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
[ T1040] RAX: ffffffffffffffda RBX: 0000000000004000 RCX: 00007fc5d274e17e
[ T1040] RDX: 0000000000004000 RSI: 00007fff11a04ef0 RDI: 00000000ffffff9c
[ T1040] RBP: 00007fff11a04ef0 R08: 0000000000000064 R09: 0000000000000000
[ T1040] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000009
[ T1040] R13: 0000000000000002 R14: 000055ad8ac57ae0 R15: 00007fff11a05e50
[ T1040]  </TASK>
[ T1040] task:mdadm state:D stack:0 pid:1046 tgid:1046 ppid:626 flags:0x00000006
[ T1040] Call Trace:
[ T1040]  <TASK>
[ T1040]  __schedule+0x381/0x1540
[ T1040]  ? release_pages+0x151/0x660
[ T1040]  schedule+0x24/0xb0
[ T1040]  schedule_preempt_disabled+0x11/0x20
[ T1040]  __mutex_lock.isra.15+0x1c0/0x700
[ T1040]  ? ilookup+0x79/0x110
[ T1040]  ? blkdev_get_by_dev+0xb6/0x2f0
[ T1040]  blkdev_get_by_dev+0xb6/0x2f0
[ T1040]  ? __pfx_blkdev_open+0x10/0x10
[ T1040]  blkdev_open+0x45/0xb0
[ T1040]  do_dentry_open+0x22f/0x420
[ T1040]  path_openat+0xde8/0x1050
[ T1040]  ? __alloc_pages+0x18b/0x350
[ T1040]  do_filp_open+0xc5/0x140
[ T1040]  ? kmem_cache_alloc+0x163/0x300
[ T1040]  ? getname_flags+0x46/0x1e0
[ T1040]  ? do_sys_openat2+0x248/0x320
[ T1040]  do_sys_openat2+0x248/0x320
[ T1040]  do_sys_open+0x57/0x80
[ T1040]  do_syscall_64+0x58/0x80
[ T1040]  ? exc_page_fault+0x69/0x150
[ T1040]  entry_SYSCALL_64_after_hwframe+0x7c/0xe6
[ T1040] RIP: 0033:0x7f4870e8717e
[ T1040] RSP: 002b:00007ffdbacffa80 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
[ T1040] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4870e8717e
[ T1040] RDX: 0000000000000000 RSI: 00007ffdbad04f0a RDI: 00000000ffffff9c
[ T1040] RBP: 00007ffdbad04f0a R08: 00007ffdbad02ad0 R09: 00007ffdbad02c50
[ T1040] R10: 0000000000000000 R11: 0000000000000202 R12: 000055db601292a0
[ T1040] R13: 00007ffdbad03300 R14: 0000000000000000 R15: 00007ffdbad02e50
[ T1040]  </TASK>
[ T1040] OOM killer enabled.
[ T1040] Restarting tasks ... done.
[ T985] md2: p1
[ T1040] PM: hibernation: resume failed (-16)
```
https://bugzilla.suse.com/show_bug.cgi?id=1232402
https://bugzilla.suse.com/show_bug.cgi?id=1232402#c2

--- Comment #2 from Oleksii Prudkyi <oleksii.prudkyi@gmail.com> ---
Hi Takashi Iwai,

I have checked, and both kernels work fine, i.e. the ones from
http://download.opensuse.org/repositories/Kernel:/SLE15-SP6/pool/ and
http://download.opensuse.org/repositories/home:/tiwai:/bsc1232402/pool/.

I also gave 6.4.0-150600.23.25 a second try (booted it by mistake) and it worked as well, although it had failed twice in a row before. So there seems to be some hard-to-track intermittent issue involved as well (or sometimes this 20-second timeout is simply too short). Before this I had no hibernation problems for a long time, i.e. on openSUSE 13/14/15 (I did have hibernation issues with the nvidia drivers, but that is another story).
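For reference, the 20-second figure in the log corresponds to the kernel's userspace-freezer timeout, which on kernels built with CONFIG_FREEZER is exposed in milliseconds via /sys/power/pm_freeze_timeout (default 20000). A minimal sketch for checking and, as an experiment, raising it, assuming that sysfs file is present on this kernel (raising it only shows whether the stuck tasks eventually make progress; it does not explain why the md I/O stalls in the first place):

```python
#!/usr/bin/env python3
"""Inspect and optionally raise the userspace-freezer timeout.

Assumes /sys/power/pm_freeze_timeout exists (CONFIG_FREEZER);
writing requires root. The value is in milliseconds (default 20000).
"""
import sys

PATH = "/sys/power/pm_freeze_timeout"

def read_timeout_ms() -> int:
    # Current timeout used by "Freezing user space processes"
    with open(PATH) as f:
        return int(f.read().strip())

def write_timeout_ms(ms: int) -> None:
    # Set a new timeout, e.g. 60000 for 60 seconds
    with open(PATH, "w") as f:
        f.write(str(ms))

if __name__ == "__main__":
    print(f"current freeze timeout: {read_timeout_ms()} ms")
    if len(sys.argv) > 1:
        write_timeout_ms(int(sys.argv[1]))
        print(f"new freeze timeout: {read_timeout_ms()} ms")
```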
https://bugzilla.suse.com/show_bug.cgi?id=1232402
https://bugzilla.suse.com/show_bug.cgi?id=1232402#c4

--- Comment #4 from Oleksii Prudkyi <oleksii.prudkyi@gmail.com> ---
I also had the same error with udev-worker on 6.4.0-150600.23.25-default (I mentioned it in the first description, but without logs). So it is probably not md alone; maybe something in the freezing of all processes is involved.

```
[ T998] Freezing user space processes failed after 20.004 seconds (1 tasks refusing to freeze, wq_busy=0):
[ T998] task:(udev-worker) state:D stack:0 pid:620 tgid:620 ppid:430 flags:0x00000006
[ T998] Call Trace:
[ T998]  <TASK>
[ T998]  __schedule+0x381/0x1540
[ T998]  ? __pfx_autoremove_wake_function+0x10/0x10
[ T998]  schedule+0x24/0xb0
[ T998]  _md_handle_reqeust+0x7e/0x1e0 [md_mod dc71c1fc6aeb160a219080a6f7f64626cf165910]
[ T998]  ? __pfx_autoremove_wake_function+0x10/0x10
[ T998]  __submit_bio+0xa8/0x150
[ T998]  submit_bio_noacct_nocheck+0x153/0x370
[ T998]  ? mpage_readahead+0x10c/0x140
[ T998]  mpage_readahead+0x10c/0x140
[ T998]  ? __pfx_blkdev_get_block+0x10/0x10
[ T998]  read_pages+0x5a/0x220
[ T998]  page_cache_ra_unbounded+0x131/0x180
[ T998]  filemap_get_pages+0xff/0x5a0
[ T998]  ? do_filp_open+0xd9/0x140
[ T998]  filemap_read+0xcc/0x330
[ T998]  ? aa_file_perm+0x125/0x500
[ T998]  blkdev_read_iter+0xb8/0x150
[ T998]  vfs_read+0x22d/0x2e0
[ T998]  ksys_read+0xa5/0xe0
[ T998]  do_syscall_64+0x58/0x80
[ T998]  ? exc_page_fault+0x69/0x150
[ T998]  entry_SYSCALL_64_after_hwframe+0x7c/0xe6
[ T998] RIP: 0033:0x7fa36ba44b6d
[ T998] RSP: 002b:00007ffc6c1351e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ T998] RAX: ffffffffffffffda RBX: 000056397d6a8dd0 RCX: 00007fa36ba44b6d
[ T998] RDX: 0000000000000040 RSI: 000056397d6713d8 RDI: 000000000000000c
[ T998] RBP: 00000064515e0000 R08: 0000000000000070 R09: 0000000000000001
[ T998] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000040
[ T998] R13: 000056397d6a8e28 R14: 000056397d6713c8 R15: 000056397d6713b0
[ T998]  </TASK>
[ T998] OOM killer enabled.
[ T998] Restarting tasks ... done.
[ T998] PM: hibernation: resume failed (-16)
```
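Both traces show the refusing tasks in state D (uninterruptible sleep) inside md I/O, which is exactly what the freezer cannot freeze. A crude, purely diagnostic sketch (my own helper, not something from the bug) that scans /proc for D-state tasks, e.g. to run right before hibernating:

```python
#!/usr/bin/env python3
"""List tasks in uninterruptible sleep (state D), the kind of task
that refuses to freeze. Diagnostic aid only."""
import os

def d_state_tasks():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as f:
                data = f.read()
        except OSError:
            continue  # task exited while scanning
        # /proc/<pid>/stat: "<pid> (<comm>) <state> ..."; comm may contain spaces
        comm = data[data.index("(") + 1:data.rindex(")")]
        state = data[data.rindex(")") + 2:].split()[0]
        if state == "D":
            yield int(pid), comm

if __name__ == "__main__":
    for pid, comm in d_state_tasks():
        print(f"{pid}\t{comm}")
```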
https://bugzilla.suse.com/show_bug.cgi?id=1232402
https://bugzilla.suse.com/show_bug.cgi?id=1232402#c5

--- Comment #5 from Oleksii Prudkyi <oleksii.prudkyi@gmail.com> ---
Though I do have a relatively busy mdadm setup, so it may simply have a lot to do:

```
Personalities : [raid1]
md4 : active raid1 sdc4[6](W)(S) sda5[5](W) sdb5[3](W) nvme0n1p3[4]
      419298304 blocks super 1.2 [3/3] [UUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md3 : active raid1 nvme0n1p4[3] sdb4[4](W) nvme1n1p4[2]
      8379392 blocks super 1.2 [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sdc3[3](W)(S) sda3[0](W) sdb3[2](W) nvme1n1p3[1]
      420763584 blocks [3/3] [UUU]
      bitmap: 3/4 pages [12KB], 65536KB chunk

md0 : active raid1 sdc1[4](W) sda1[2] sdb1[1] nvme1n1p1[3] nvme0n1p1[0]
      512960 blocks [5/5] [UUUUU]

md1 : active raid1 sdc2[3](W)(S) sda2[2](W) nvme1n1p2[0] nvme0n1p2[1]
      67108800 blocks [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
```
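On a setup like this it may be worth checking whether any array is in the middle of a resync/check/repair when hibernation starts, since that keeps md busy with I/O. A small sketch reading the standard md sysfs attributes (/sys/block/mdX/md/sync_action and sync_completed); the script itself is just an illustration:

```python
#!/usr/bin/env python3
"""Report the sync state of every md array before hibernating.
'idle' in sync_action means no resync/check/repair is running."""
import glob
import os

def md_sync_state():
    for md_dir in sorted(glob.glob("/sys/block/md*/md")):
        name = md_dir.split("/")[3]  # e.g. "md2"

        def attr(a):
            try:
                with open(os.path.join(md_dir, a)) as f:
                    return f.read().strip()
            except OSError:
                return "n/a"  # attribute absent for this array type

        yield name, attr("sync_action"), attr("sync_completed")

if __name__ == "__main__":
    for name, action, completed in md_sync_state():
        print(f"{name}: sync_action={action} sync_completed={completed}")
```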