[Bug 1188896] New: "BUG: Bad rss-counter state mm:00000000e555b579 type:MM_FILEPAGES val:-256" during build of cross-epiphany-gcc11-bootstrap
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 Bug ID: 1188896 Summary: "BUG: Bad rss-counter state mm:00000000e555b579 type:MM_FILEPAGES val:-256" during build of cross-epiphany-gcc11-bootstrap Classification: openSUSE Product: openSUSE Tumbleweed Version: Current Hardware: S/390-64 OS: Other Status: NEW Severity: Normal Priority: P5 - None Component: Kernel Assignee: kernel-bugs@opensuse.org Reporter: ada.lovelace@gmx.de QA Contact: qa-bugs@suse.de Found By: --- Blocker: --- This bug report is a mixture of gcc11 and Linux kernel. The build process of cross-epiphany-gcc11-bootstrap is hanging since 1 day with following information: [ 865s] g++ -fno-PIE -c -DIN_GCC_FRONTEND -O2 -g -fmessage-length=0 -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -U_FORTIFY_SOURCE -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-error=format-diag -Wmissing-format-attribute -Woverloaded-virtual -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -I. -Ic -I../../gcc -I../../gcc/c -I../../gcc/../include -I../../gcc/../libcpp/include -I../../gcc/../libcody -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/dpd -I../libdecnumber -I../../gcc/../libbacktrace -o c/c-aux-info.o -MT c/c-aux-info.o -MMD -MP -MF c/.deps/c-aux-info.TPo ../../gcc/c/c-aux-info.c [ 866s] In file included from ./tm.h:23, [ 866s] from ../../gcc/c/c-aux-info.c:26: [ 866s] ../../gcc/config/elfos.h:201: warning: "READONLY_DATA_SECTION_ASM_OP" redefined [ 866s] 201 | #define READONLY_DATA_SECTION_ASM_OP "\t.section\t.rodata" [ 866s] | [ 866s] In file included from ./tm.h:21, [ 866s] from ../../gcc/c/c-aux-info.c:26: [ 866s] ../../gcc/config/epiphany/epiphany.h:671: note: this is the location of the previous definition [ 866s] 671 | #define READONLY_DATA_SECTION_ASM_OP "\t.section .rodata" [ 866s] | [ 871s] g++ -fno-PIE -c -DIN_GCC_FRONTEND -O2 -g -fmessage-length=0 -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -U_FORTIFY_SOURCE -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-error=format-diag -Wmissing-format-attribute -Woverloaded-virtual -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -I. -Ic -I../../gcc -I../../gcc/c -I../../gcc/../include -I../../gcc/../libcpp/include -I../../gcc/../libcody -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/dpd -I../libdecnumber -I../../gcc/../libbacktrace -o c/c-objc-common.o -MT c/c-objc-common.o -MMD -MP -MF c/.deps/c-objc-common.TPo ../../gcc/c/c-objc-common.c [ 876s] [ 846.778933] BUG: Bad rss-counter state mm:00000000e555b579 type:MM_FILEPAGES val:-256 ... [ 2241s] g++ -fno-PIE -c -O2 -g -fmessage-length=0 -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -U_FORTIFY_SOURCE -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-error=format-diag -Wmissing-format-attribute -Woverloaded-virtual -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/. 
-I../../gcc/../include -I../../gcc/../libcpp/include -I../../gcc/../libcody -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/dpd -I../libdecnumber -I../../gcc/../libbacktrace -o reload1.o -MT reload1.o -MMD -MP -MF ./.deps/reload1.TPo ../../gcc/reload1.c [ 2241s] [ 2211.961786] BUG: Bad page state in process cc1plus pfn:11d401 [ 2241s] [ 2211.967831] BUG: Bad page state in process cc1plus pfn:11d402 [ 2241s] [ 2211.968097] BUG: Bad page state in process cc1plus pfn:11d403 [ 2241s] [ 2211.968458] BUG: Bad page state in process cc1plus pfn:11d404 [ 2241s] [ 2211.968668] BUG: Bad page state in process cc1plus pfn:11d405 [ 2241s] [ 2211.968878] BUG: Bad page state in process cc1plus pfn:11d406 [ 2241s] [ 2211.969081] BUG: Bad page state in process cc1plus pfn:11d407 [ 2241s] [ 2211.969287] BUG: Bad page state in process cc1plus pfn:11d408 . [ 865s] g++ -fno-PIE -c -DIN_GCC_FRONTEND -O2 -g -fmessage-length=0 -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -U_FORTIFY_SOURCE -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-error=format-diag -Wmissing-format-attribute -Woverloaded-virtual -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -I. -Ic -I../../gcc -I../../gcc/c -I../../gcc/../include -I../../gcc/../libcpp/include -I../../gcc/../libcody -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/dpd -I../libdecnumber -I../../gcc/../libbacktrace -o c/c-aux-info.o -MT c/c-aux-info.o -MMD -MP -MF c/.deps/c-aux-info.TPo ../../gcc/c/c-aux-info.c [ 866s] In file included from ./tm.h:23, [ 866s] from ../../gcc/c/c-aux-info.c:26: [ 866s] ../../gcc/config/elfos.h:201: warning: "READONLY_DATA_SECTION_ASM_OP" redefined [ 866s] 201 | #define READONLY_DATA_SECTION_ASM_OP "\t.section\t.rodata" [ 866s] | [ 866s] In file included from ./tm.h:21, [ 866s] from ../../gcc/c/c-aux-info.c:26: [ 866s] ../../gcc/config/epiphany/epiphany.h:671: note: this is the location of the previous definition [ 866s] 671 | #define READONLY_DATA_SECTION_ASM_OP "\t.section .rodata" [ 866s] | [ 871s] g++ -fno-PIE -c -DIN_GCC_FRONTEND -O2 -g -fmessage-length=0 -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -U_FORTIFY_SOURCE -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-error=format-diag -Wmissing-format-attribute -Woverloaded-virtual -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -I. 
-Ic -I../../gcc -I../../gcc/c -I../../gcc/../include -I../../gcc/../libcpp/include -I../../gcc/../libcody -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/dpd -I../libdecnumber -I../../gcc/../libbacktrace -o c/c-objc-common.o -MT c/c-objc-common.o -MMD -MP -MF c/.deps/c-objc-common.TPo ../../gcc/c/c-objc-common.c [ 876s] [ 846.778933] BUG: Bad rss-counter state mm:00000000e555b579 type:MM_FILEPAGES val:-256 [ 2241s] In file included from ./tm.h:23, [ 2241s] from ../../gcc/backend.h:28, [ 2241s] from ../../gcc/reload1.c:23: [ 2241s] ../../gcc/config/elfos.h:201: warning: "READONLY_DATA_SECTION_ASM_OP" redefined [ 2241s] 201 | #define READONLY_DATA_SECTION_ASM_OP "\t.section\t.rodata" [ 2241s] | [ 2241s] In file included from ./tm.h:21, [ 2241s] from ../../gcc/backend.h:28, [ 2241s] from ../../gcc/reload1.c:23: [ 2241s] ../../gcc/config/epiphany/epiphany.h:671: note: this is the location of the previous definition [ 2241s] 671 | #define READONLY_DATA_SECTION_ASM_OP "\t.section .rodata" ... [ 2243s] g++ -fno-PIE -c -O2 -g -fmessage-length=0 -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -U_FORTIFY_SOURCE -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -fno-exceptions -fno-rtti -fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-error=format-diag -Wmissing-format-attribute -Woverloaded-virtual -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -I. -I. -I../../gcc -I../../gcc/. -I../../gcc/../include -I../../gcc/../libcpp/include -I../../gcc/../libcody -I../../gcc/../libdecnumber -I../../gcc/../libdecnumber/dpd -I../libdecnumber -I../../gcc/../libbacktrace -o resource.o -MT resource.o -MMD -MP -MF ./.deps/resource.TPo ../../gcc/resource.c [ 2243s] In file included from ./tm.h:23, [ 2243s] from ../../gcc/backend.h:28, [ 2243s] from ../../gcc/resource.c:23: [ 2243s] ../../gcc/config/elfos.h:201: warning: "READONLY_DATA_SECTION_ASM_OP" redefined [ 2243s] 201 | #define READONLY_DATA_SECTION_ASM_OP "\t.section\t.rodata" [ 2243s] | [ 2243s] In file included from ./tm.h:21, [ 2243s] from ../../gcc/backend.h:28, [ 2243s] from ../../gcc/resource.c:23: [ 2243s] ../../gcc/config/epiphany/epiphany.h:671: note: this is the location of the previous definition [ 2243s] 671 | #define READONLY_DATA_SECTION_ASM_OP "\t.section .rodata" [ 2243s] | [ 2301s] [ 2272.129159] rcu: INFO: rcu_sched self-detected stall on CPU [ 2301s] [ 2272.129930] rcu: 1-...!: (2004 ticks this GP) idle=cda/1/0x4000000000000002 softirq=72272/72272 fqs=21 [ 2301s] [ 2272.130195] rcu: rcu_sched kthread starved for 5803 jiffies! g57669 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 2301s] [ 2272.130474] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2301s] [ 2272.130715] rcu: RCU grace-period kthread stack dump: [ 2301s] [ 2272.130909] rcu: Stack dump where RCU GP kthread last ran: [ 2481s] [ 2452.178521] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2481s] [ 2452.179275] rcu: 1-...!: (8411 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 2481s] [ 2452.179617] rcu: rcu_sched kthread starved for 23808 jiffies! g57669 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 2481s] [ 2452.179949] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. 
[ 2481s] [ 2452.180241] rcu: RCU grace-period kthread stack dump: [ 2481s] [ 2452.180457] rcu: Stack dump where RCU GP kthread last ran: [ 2661s] [ 2632.370659] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2661s] [ 2632.422457] rcu: 1-...!: (15430 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 2661s] [ 2632.422835] rcu: rcu_sched kthread starved for 41827 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 2661s] [ 2632.423099] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2661s] [ 2632.423322] rcu: RCU grace-period kthread stack dump: [ 2661s] [ 2632.423498] rcu: Stack dump where RCU GP kthread last ran: [ 2841s] [ 2812.408987] rcu: INFO: rcu_sched self-detected stall on CPU [ 2841s] [ 2812.409246] rcu: 1-...!: (21988 ticks this GP) idle=cda/1/0x4000000000000002 softirq=72272/72272 fqs=21 [ 2841s] [ 2812.409342] rcu: rcu_sched kthread starved for 59831 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 2841s] [ 2812.464705] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2841s] [ 2812.464890] rcu: RCU grace-period kthread stack dump: [ 2841s] [ 2812.465021] rcu: Stack dump where RCU GP kthread last ran: [ 3022s] [ 2992.448529] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3022s] [ 2992.568724] rcu: 1-...!: (29161 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 3022s] [ 2992.568847] rcu: rcu_sched kthread starved for 77835 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 3022s] [ 2992.568940] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 3022s] [ 2992.569021] rcu: RCU grace-period kthread stack dump: [ 3022s] [ 2992.569104] rcu: Stack dump where RCU GP kthread last ran: [ 3202s] [ 3172.579399] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3202s] [ 3172.638172] rcu: 1-...!: (36644 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 3202s] [ 3172.638356] rcu: rcu_sched kthread starved for 95848 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 3202s] [ 3172.638486] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 3202s] [ 3172.638608] rcu: RCU grace-period kthread stack dump: [ 3202s] [ 3172.638733] rcu: Stack dump where RCU GP kthread last ran: [ 3382s] [ 3352.635605] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3382s] [ 3352.785267] rcu: 1-...!: (44755 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 3382s] [ 3352.796719] rcu: rcu_sched kthread starved for 113853 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 3382s] [ 3352.807909] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 3382s] [ 3352.808153] rcu: RCU grace-period kthread stack dump: [ 3382s] [ 3352.809425] rcu: Stack dump where RCU GP kthread last ran: [ 3562s] [ 3532.659925] rcu: INFO: rcu_sched self-detected stall on CPU [ 3562s] [ 3532.671463] rcu: 1-...!: (53554 ticks this GP) idle=cda/1/0x4000000000000002 softirq=72272/72272 fqs=21 [ 3562s] [ 3532.697304] rcu: rcu_sched kthread starved for 131859 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 3562s] [ 3532.697588] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. 
[ 3562s] [ 3532.697733] rcu: RCU grace-period kthread stack dump: [ 3562s] [ 3532.697877] rcu: Stack dump where RCU GP kthread last ran: [ 3742s] [ 3712.708722] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3742s] [ 3712.709943] rcu: 1-...!: (61764 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 3742s] [ 3712.710087] rcu: rcu_sched kthread starved for 149861 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 3742s] [ 3712.710195] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 3742s] [ 3712.710286] rcu: RCU grace-period kthread stack dump: [ 3742s] [ 3712.710380] rcu: Stack dump where RCU GP kthread last ran: [ 3922s] [ 3892.766529] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3922s] [ 3892.770375] rcu: 1-...!: (70277 ticks this GP) idle=cda/1/0x4000000000000000 softirq=72272/72272 fqs=21 [ 3922s] [ 3892.805972] rcu: rcu_sched kthread starved for 167870 jiffies! g57669 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [ 3922s] [ 3892.806225] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 3922s] [ 3892.806499] rcu: RCU grace-period kthread stack dump: Kernel version: 5.13.4-1.1: https://github.com/openSUSE/kernel/releases/tag/v5.13.4 Logs: https://build.opensuse.org/build/openSUSE:Factory:zSystems/standard/s390x/cr... Latest updates for s390x July 21: https://build.opensuse.org/package/view_file/openSUSE:Factory:zSystems/cross... -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c1

Sarah Kriesch <ada.lovelace@gmx.de> changed:

           What     |Removed |Added
           ----------------------------------------------------------------------------
           CC       |        |azouhr@opensuse.org, ihno@suse.com, tstaudt@de.ibm.com
           Found By |---     |Community User

--- Comment #1 from Sarah Kriesch <ada.lovelace@gmx.de> ---
Hi Thomas,
it seems we have got the next kernel bug. This time it is in combination with gcc...

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c2 --- Comment #2 from Sarah Kriesch <ada.lovelace@gmx.de> --- The important line has been swallowed: [ 876s] [ 846.778933] BUG: Bad rss-counter state mm:00000000e555b579 type:MM_FILEPAGES val:-256 -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 Sarah Kriesch <ada.lovelace@gmx.de> changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|P5 - None |P2 - High -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c8 --- Comment #8 from Sarah Kriesch <ada.lovelace@gmx.de> --- Here is the link to the rpm of kernel-default: http://download.opensuse.org/ports/zsystems/tumbleweed/repo/oss/s390x/kernel... -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c9

Sarah Kriesch <ada.lovelace@gmx.de> changed:

           What |Removed |Added
           ----------------------------------------------------------------------------
           CC   |        |ro@suse.com

--- Comment #9 from Sarah Kriesch <ada.lovelace@gmx.de> ---
@Rüdiger: Last time there were discussions because of system dumps and kdump. Can we retry that if I restart the build of cross-epiphany-gcc11-bootstrap?

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c10

--- Comment #10 from Sarah Kriesch <ada.lovelace@gmx.de> ---
I tried adding the dmesg command to the spec file (of the build process) in my home repo in several ways this weekend. It was not possible to reproduce the issue there. I will look into another way of doing that today.

-- You are receiving this mail because: You are the assignee for the bug.
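One way such a dmesg hook could look in a branched spec file, as a rough sketch (whether dmesg output is readable inside the OBS build VM at all depends on the worker setup):

  # hypothetical debugging hook in the %build section of a branched package:
  # print the kernel log into the build log before and after the compile step
  %build
  dmesg || true        # may be restricted inside the build VM
  make %{?_smp_mflags}
  dmesg || true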
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c11

--- Comment #11 from Sarah Kriesch <ada.lovelace@gmx.de> ---
@Rüdiger: bcc is now showing the same kernel error messages. Can you take a dump, please?
https://build.opensuse.org/package/show/openSUSE:Factory:zSystems/bcc

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c12

--- Comment #12 from Ruediger Oertel <ro@suse.com> ---
Hi ... I'll try to find help in my team, as I'm on vacation starting tomorrow. I wanted to get back to this earlier but was blocked by too many other things (including setting up the new Z yesterday ...).

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c14 --- Comment #14 from Sarah Kriesch <ada.lovelace@gmx.de> --- We had additional issues, but I had a problem receiving a dump for analyzing. Christian Borntr�ger gave me the hint with "dump-guest-memory" istead of vmstate. Here is the dmesg output inside of the crash tool: WARNING: kernel relocated [3150MB]: patching 91218 gdb minimal_symbol values WARNING: kernel version inconsistency between vmlinux and dumpfile WARNING: forcing cpu count to: 3 please wait... (gathering task table data) crash: invalid kernel virtual address: 2aa3abc7a50 type: "fill_task_struct" KERNEL: ./boot/vmlinux-5.13.8-2-default.xz DEBUGINFO: /usr/lib/debug/boot/vmlinux-5.13.8-2-default.debug DUMPFILE: s390zl21.memory.dump CPUS: 3 [OFFLINE: 1] DATE: Thu Aug 19 05:30:29 EDT 2021 UPTIME: 2 days, 16:23:24 LOAD AVERAGE: 9.00, 9.00, 9.00 TASKS: 73 NODENAME: s390zl21 RELEASE: 5.13.8-1-default VERSION: #1 SMP Thu Aug 5 08:56:22 UTC 2021 (967c6a8) MACHINE: s390x (unknown Mhz) MEMORY: 4 GB PANIC: "" PID: 0 COMMAND: "swapper/0" TASK: c60c3a00 (1 of 2) [THREAD_INFO: c60c3a00] CPU: 0 STATE: TASK_RUNNING INFO: no panic task found crash> dmesg [223279.797135] [<00000000c50061ec>] rcu_gp_kthread+0xabc/0x1280 [223279.797139] [<00000000c4fa1ecc>] kthread+0x14c/0x160 [223279.797143] [<00000000c4f2b56c>] __ret_from_fork+0x3c/0x60 [223279.797146] [<00000000c58bbc3a>] ret_from_fork+0xa/0x30 [223279.797147] rcu: Stack dump where RCU GP kthread last ran: [223279.797221] Task dump for CPU 1: [223279.797222] task:cc1plus state:R running task stack: 0 pid: 2431 ppid: 2430 flags:0x00000004 [223279.797226] Call Trace: [223279.797226] [<00000000c58a6774>] show_stack+0x8c/0xd8 [223279.797229] [<00000000c4fad58c>] sched_show_task+0x14c/0x170 [223279.797231] [<00000000c58a84a6>] rcu_check_gp_kthread_starvation+0x15e/0x180 [223279.797233] [<00000000c500889c>] rcu_sched_clock_irq+0x7fc/0x990 [223279.797235] [<00000000c5019caa>] update_process_times+0xba/0xf0 [223279.797239] [<00000000c502c67a>] tick_sched_handle+0x4a/0x70 [223279.797241] [<00000000c502ca76>] tick_sched_timer+0x76/0xe0 [223279.797242] [<00000000c501a856>] __hrtimer_run_queues+0x136/0x290 [223279.797244] [<00000000c501b8a0>] hrtimer_interrupt+0x150/0x380 [223279.797246] [<00000000c4f34d56>] do_IRQ+0x56/0x70 [223279.797248] [<00000000c4f35134>] do_irq_async+0x54/0xb0 [223279.797251] [<00000000c58adade>] do_ext_irq+0x9e/0x150 [223279.797252] [<00000000c58bbe80>] ext_int_handler+0xc0/0xf0 [223279.797253] [<00000000c555d024>] xas_load+0x44/0xc0 [223279.797259] [<00000000c5145452>] pagecache_get_page+0x82/0x500 [223279.797264] [<00000000c5145f14>] filemap_fault+0x84/0x880 [223279.797265] [<00000000c532bd4c>] ext4_filemap_fault+0x4c/0x70 [223279.797269] [<00000000c518bc0c>] __do_fault+0x4c/0xf0 [223279.797273] [<00000000c519114c>] __handle_mm_fault+0xc1c/0x1300 [223279.797274] [<00000000c5191904>] handle_mm_fault+0xd4/0x220 [223279.797276] [<00000000c4f506ac>] do_exception+0x18c/0x420 [223279.797278] [<00000000c4f50cda>] do_dat_exception+0x2a/0x60 [223279.797280] [<00000000c58ad6a2>] __do_pgm_check+0x132/0x250 [223279.797281] [<00000000c58bbd78>] pgm_check_handler+0x118/0x160 [223459.838813] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [223459.840823] (detected by 0, t=22300491 jiffies, g=11737, q=1164) [223459.840828] rcu: All QSes seen, last rcu_sched kthread activity 22300488 (4317283024-4294982536), jiffies_till_next_fqs=1, root ->qsmask 0x0 [223459.840897] rcu: rcu_sched 
kthread starved for 22300488 jiffies! g11737 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [223459.840950] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [223459.840996] rcu: RCU grace-period kthread stack dump: [223459.841024] task:rcu_sched state:R running task stack: 0 pid: 13 ppid: 2 flags:0x00000000 [223459.841029] Call Trace: [223459.841031] [<00000000c58b5336>] __schedule+0x366/0x9b0 [223459.841040] [<00000000c58b59e0>] schedule+0x60/0x100 [223459.841042] [<00000000c58bab9c>] schedule_timeout+0x8c/0x170 [223459.841045] [<00000000c50061ec>] rcu_gp_kthread+0xabc/0x1280 [223459.841048] [<00000000c4fa1ecc>] kthread+0x14c/0x160 [223459.841052] [<00000000c4f2b56c>] __ret_from_fork+0x3c/0x60 [223459.841055] [<00000000c58bbc3a>] ret_from_fork+0xa/0x30 [223459.841057] rcu: Stack dump where RCU GP kthread last ran: [223459.841084] Task dump for CPU 1: [223459.841085] task:cc1plus state:R running task stack: 0 pid: 2431 ppid: 2430 flags:0x00000004 [223459.841088] Call Trace: [223459.841089] [<0000000000000000>] 0x0 [223639.888971] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [223639.948065] (detected by 1, t=22318496 jiffies, g=11737, q=1164) [223639.948072] rcu: All QSes seen, last rcu_sched kthread activity 22318493 (4317301029-4294982536), jiffies_till_next_fqs=1, root ->qsmask 0x0 [223639.948141] rcu: rcu_sched kthread starved for 22318493 jiffies! g11737 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [223639.948194] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [223639.948239] rcu: RCU grace-period kthread stack dump: [223639.948265] task:rcu_sched state:R running task stack: 0 pid: 13 ppid: 2 flags:0x00000000 [223639.948270] Call Trace: [223639.948272] [<00000000c58b5336>] __schedule+0x366/0x9b0 [223639.948281] [<00000000c58b59e0>] schedule+0x60/0x100 [223639.948283] [<00000000c58bab9c>] schedule_timeout+0x8c/0x170 [223639.948286] [<00000000c50061ec>] rcu_gp_kthread+0xabc/0x1280 [223639.948289] [<00000000c4fa1ecc>] kthread+0x14c/0x160 [223639.948293] [<00000000c4f2b56c>] __ret_from_fork+0x3c/0x60 [223639.948296] [<00000000c58bbc3a>] ret_from_fork+0xa/0x30 [223639.948298] rcu: Stack dump where RCU GP kthread last ran: [223639.948325] Task dump for CPU 1: [223639.948326] task:cc1plus state:R running task stack: 0 pid: 2431 ppid: 2430 flags:0x00000004 [223639.948329] Call Trace: [223639.948329] [<00000000c58a6774>] show_stack+0x8c/0xd8 [223639.948332] [<00000000c4fad58c>] sched_show_task+0x14c/0x170 [223639.948334] [<00000000c58a84a6>] rcu_check_gp_kthread_starvation+0x15e/0x180 [223639.948336] [<00000000c500889c>] rcu_sched_clock_irq+0x7fc/0x990 [223639.948338] [<00000000c5019caa>] update_process_times+0xba/0xf0 [223639.948342] [<00000000c502c67a>] tick_sched_handle+0x4a/0x70 [223639.948344] [<00000000c502ca76>] tick_sched_timer+0x76/0xe0 [223639.948346] [<00000000c501a856>] __hrtimer_run_queues+0x136/0x290 [223639.948347] [<00000000c501b8a0>] hrtimer_interrupt+0x150/0x380 [223639.948349] [<00000000c4f34d56>] do_IRQ+0x56/0x70 [223639.948352] [<00000000c4f35134>] do_irq_async+0x54/0xb0 [223639.948355] [<00000000c58adade>] do_ext_irq+0x9e/0x150 [223639.948356] [<00000000c58bbe80>] ext_int_handler+0xc0/0xf0 [223639.948357] [<00000000c555d070>] xas_load+0x90/0xc0 [223639.948362] ([<00000000c5145452>] pagecache_get_page+0x82/0x500) [223639.948366] [<00000000c5145f14>] filemap_fault+0x84/0x880 [223639.948368] [<00000000c532bd4c>] ext4_filemap_fault+0x4c/0x70 [223639.948371] [<00000000c518bc0c>] __do_fault+0x4c/0xf0 
[223639.948375] [<00000000c519114c>] __handle_mm_fault+0xc1c/0x1300 [223639.948377] [<00000000c5191904>] handle_mm_fault+0xd4/0x220 [223639.948378] [<00000000c4f506ac>] do_exception+0x18c/0x420 [223639.948381] [<00000000c4f50cda>] do_dat_exception+0x2a/0x60 [223639.948382] [<00000000c58ad6a2>] __do_pgm_check+0x132/0x250 [223639.948383] [<00000000c58bbd78>] pgm_check_handler+0x118/0x160 [223819.938812] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [223819.939049] (detected by 1, t=22336501 jiffies, g=11737, q=1164) [223819.939052] rcu: All QSes seen, last rcu_sched kthread activity 22336498 (4317319034-4294982536), jiffies_till_next_fqs=1, root ->qsmask 0x0 [223819.939172] rcu: rcu_sched kthread starved for 22336498 jiffies! g11737 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1 [223819.939284] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [223819.939383] rcu: RCU grace-period kthread stack dump: [223819.939434] task:rcu_sched state:R running task stack: 0 pid: 13 ppid: 2 flags:0x00000000 -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c15 --- Comment #15 from Sarah Kriesch <ada.lovelace@gmx.de> --- PID: 2431 TASK: 82726000 CPU: 1 COMMAND: "cc1plus" LOWCORE INFO: -psw : 0x0704e00180000000 0x00000000c555d044 -function : xas_load at c555d044 -prefix : 0x00408000 -cpu timer: 0x7fff92df5f45fa18 -clock cmp: 0x2f67c657b01d5e00 -general registers: 0x00000372025d4000 0x000000000000001c 0x0000000093a8024a 0x00000000000000e0 0x0000000093a80248 0x00000372025d4034 0x00000000025d0000 0x00000380025b7ab0 0x0000000000000708 000000000000000000 000000000000000000 0x00000380025b7ab0 0x0000000082726000 0x00000000c58d8248 0x00000000c555d000 0x00000380025b79c0 -access registers: 0x000003ff 0xac1efa80 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 -control registers: 0x00a0000014966a10 0x00000000c6518007 0x000000000001d080 000000000000000000 0x000000000000ffff 0x000000000001d080 0x0000000032000000 0x00000000827d81c7 0x0000000000008000 000000000000000000 0000000 0x0000000002dde014 0x0000000000000040 000000000000000000 0x0000000200000028 0x0000000000000007 0xffffffff00000001 0x0000000002dde020 0x00000007000003e8 0x0000000002e5b750 0x000003fffc47c188 0x00000000025a1cf8 000000000000000000 0x000003fffd7fc85e 0x00000000012dee10 0x000003fffc47c158 000000000000000000 #0 [380025b79f0] pagecache_get_page at c5145452 #1 [380025b7a98] filemap_fault at c5145f14 #2 [380025b7b48] ext4_filemap_fault at c532bd4c #3 [380025b7b80] __do_fault at c518bc0c #4 [380025b7bb8] __handle_mm_fault at c519114c #5 [380025b7c88] handle_mm_fault at c5191904 #6 [380025b7ce0] do_exception at c4f506ac #7 [380025b7d40] do_dat_exception at c4f50cda #8 [380025b7d70] __do_pgm_check at c58ad6a2 #9 [380025b7ea0] pgm_check_handler at c58bbd78 PSW: 0705200180000000 00000000017083e8 (user space) GPRS: 0000000000000042 000000000259b7c0 000003ffab5005a0 000003fffd7fc640 0000000000000001 0000000000000001 00000000025a1cd0 00000000025337a0 00000000025a4748 0000000002535198 0000000000000001 000003ffab525000 000003ffac1a8f78 0000000000000002 0000000001e83d42 000003fffd7fc5a0 000000000000000000 0x00000000c6518007 0x00000000db000000 0x000000000001d0c0 -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c16

--- Comment #16 from Sarah Kriesch <ada.lovelace@gmx.de> ---
Used command:

  sudo crash -c /dev/ttyS0 --cpus 3 ./boot/vmlinux-5.13.8-2-default.xz s390zl21.memory.dump

kernel-default and kernel-default-debuginfo for the attached memory.dump are available under:
https://build.opensuse.org/package/binaries/home:AdaLovelace:branches:openSU... (user account required)

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c17 --- Comment #17 from Sarah Kriesch <ada.lovelace@gmx.de> --- The file is too large for Bugzilla. Here is a download link: https://drive.google.com/drive/folders/121uOWtpx8q20Gde-O1Sf-1NOPP6Wrm_F?usp... -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c18 --- Comment #18 from Sarah Kriesch <ada.lovelace@gmx.de> --- crash> vm -p 2431 PID: 2431 TASK: 82726000 CPU: 1 COMMAND: "cc1plus" MM PGD RSS TOTAL_VM 81495900 827d8000 13372k 32496k VMA START END FLAGS FILE 82473640 1000000 2483000 8000875 /usr/lib64/gcc/s390x-suse-linux/11/cc1plus VIRTUAL PHYSICAL 1000000 b7a00000 1001000 b7a01000 1002000 b7a02000 1003000 b7a03000 1004000 b7a04000 1005000 b7a05000 1006000 b7a06000 1007000 b7a07000 1008000 b7a08000 1009000 b7a09000 100a000 b7a0a000 100b000 b7a0b000 100c000 b7a0c000 100d000 b7a0d000 100e000 b7a0e000 100f000 b7a0f000 1010000 b7a10000 1011000 b7a11000 1012000 b7a12000 1013000 b7a13000 1014000 b7a14000 1015000 b7a15000 1016000 b7a16000 1017000 b7a17000 1018000 b7a18000 1019000 b7a19000 101a000 b7a1a000 101b000 b7a1b000 101c000 b7a1c000 101d000 b7a1d000 101e000 b7a1e000 101f000 b7a1f000 1020000 b7a20000 1021000 b7a21000 1022000 b7a22000 1023000 b7a23000 1024000 b7a24000 1025000 b7a25000 1026000 b7a26000 1027000 b7a27000 1028000 b7a28000 1029000 b7a29000 102a000 b7a2a000 102b000 b7a2b000 102c000 b7a2c000 102d000 b7a2d000 102e000 b7a2e000 102f000 b7a2f000 1030000 b7a30000 1031000 b7a31000 1032000 b7a32000 1033000 b7a33000 1034000 b7a34000 1035000 b7a35000 1036000 b7a36000 1037000 b7a37000 1038000 b7a38000 1039000 b7a39000 103a000 b7a3a000 103b000 b7a3b000 103c000 b7a3c000 103d000 b7a3d000 103e000 b7a3e000 103f000 b7a3f000 1040000 b7a40000 1041000 b7a41000 1042000 b7a42000 1043000 b7a43000 1044000 b7a44000 1045000 b7a45000 1046000 b7a46000 1047000 b7a47000 1048000 b7a48000 1049000 b7a49000 104a000 b7a4a000 104b000 b7a4b000 104c000 b7a4c000 104d000 b7a4d000 104e000 b7a4e000 104f000 b7a4f000 1050000 b7a50000 1051000 b7a51000 1052000 b7a52000 1053000 b7a53000 1054000 b7a54000 1055000 b7a55000 1056000 b7a56000 1057000 b7a57000 1058000 b7a58000 1059000 b7a59000 105a000 b7a5a000 105b000 b7a5b000 105c000 b7a5c000 105d000 b7a5d000 105e000 b7a5e000 105f000 b7a5f000 1060000 b7a60000 1061000 b7a61000 1062000 b7a62000 1063000 b7a63000 1064000 b7a64000 1065000 b7a65000 1066000 b7a66000 1067000 b7a67000 1068000 b7a68000 1069000 b7a69000 106a000 b7a6a000 106b000 b7a6b000 106c000 b7a6c000 106d000 b7a6d000 106e000 b7a6e000 106f000 b7a6f000 1070000 b7a70000 1071000 b7a71000 1072000 b7a72000 1073000 b7a73000 1074000 b7a74000 -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c25

Sarah Kriesch <ada.lovelace@gmx.de> changed:

           What  |Removed |Added
           ----------------------------------------------------------------------------
           Flags |        |needinfo?(bugproxy@us.ibm.com)

--- Comment #25 from Sarah Kriesch <ada.lovelace@gmx.de> ---
What is the status of the bugfix? Fabian gave you feedback.

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c35 --- Comment #35 from Sarah Kriesch <ada.lovelace@gmx.de> --- The package radamsa has got this kernel error message: [ 85s] + /usr/bin/make -O -j2 V=1 VERBOSE=1 'CFLAGS=-O2 -g -m64 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables' [ 1635s] [ 1611.720478][ T32] BUG: Bad rss-counter state mm:000000001e3d3f7c type:MM_FILEPAGES val:-48 [ 1637s] mkdir -p bin [ 1637s] cc -O2 -g -m64 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -o bin/radamsa radamsa.c [ 1654s] mkdir -p tmp [ 1655s] sh tests/run bin/radamsa [ 1655s] Testing bin/radamsa: [ 1655s] o tests/ab.sh: ok [ 1655s] o tests/bd.sh: ok [ 1655s] o tests/blank.sh: ok [ 1655s] o tests/fail.sh: bad first failure [ 1655s] bad first failure [ 1655s] bad first failure [ 1655s] bad first failure [ 1655s] bad first failure [ 1655s] bad first failure -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c36 Sarah Kriesch <ada.lovelace@gmx.de> changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ada.lovelace@gmx. | |de) | --- Comment #36 from Sarah Kriesch <ada.lovelace@gmx.de> --- watchman can be more interesting with the information for Gerald: [ 200s] Your build configuration: [ 200s] [ 200s] CC = gcc [ 200s] CPPFLAGS = -D_REENTRANT -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE [ 200s] CFLAGS = -O2 -g -m64 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -fno-strict-aliasing -Wall -Wextra -Wdeclaration-after-statement -g -gdwarf-2 -fno-omit-frame-pointer [ 200s] CXX = g++ [ 200s] CXXFLAGS = -O2 -g -m64 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -Wno-format-truncation -Wall -Wextra -g -gdwarf-2 -fno-omit-frame-pointer [ 200s] LDFLAGS = -flto=auto [ 200s] prefix: /usr [ 200s] version: 4.9.0 [ 200s] state directory: /run/watchman [ 200s] [ 200s] + make -j2 [ 201s] make all-am [ 201s] make[1]: Entering directory '/home/abuild/rpmbuild/BUILD/watchman-4.9.0' [ 201s] CXX watchman-ChildProcess.o [ 201s] CXX watchman-ContentHash.o [ 223s] CXX watchman-CookieSync.o [ 233s] CXX watchman-FileDescriptor.o [ 237s] [ 217.346928][ T31] BUG: Bad rss-counter state mm:000000004df6d83a type:MM_FILEPAGES val:-704 [ 238s] CXX watchman-FileInformation.o [ 240s] CXX watchman-InMemoryView.o [ 242s] CXX watchman-Pipe.o [ 252s] CXX watchman-PubSub.o [ 266s] CXX watchman-ThreadPool.o [ 273s] CXX watchman-QueryableView.o [ 281s] [ 261.409910][ T4933] BUG: Bad page state in process cc1plus pfn:a5101 [ 281s] [ 261.548175][ T4933] BUG: Bad page state in process cc1plus pfn:a5102 [ 281s] [ 261.548328][ T4933] BUG: Bad page state in process cc1plus pfn:a5103 [ 281s] [ 261.548420][ T4933] BUG: Bad page state in process cc1plus pfn:a5104 [ 281s] [ 261.548511][ T4933] BUG: Bad page state in process cc1plus pfn:a5105 [ 281s] [ 261.548625][ T4933] BUG: Bad page state in process cc1plus pfn:a5106 [ 281s] [ 261.548704][ T4933] BUG: Bad page state in process cc1plus pfn:a5107 [ 281s] [ 261.548781][ T4933] BUG: Bad page state in process cc1plus pfn:a5108 [ 281s] [ 261.548858][ T4933] BUG: Bad page state in process cc1plus pfn:a5109 [ 281s] [ 261.549006][ T4933] BUG: Bad page state in process cc1plus pfn:a510a [ 281s] [ 261.549091][ T4933] BUG: Bad page state in process cc1plus pfn:a510b [ 281s] [ 261.549172][ T4933] BUG: Bad page state in process cc1plus pfn:a510c [ 281s] [ 261.549251][ T4933] BUG: Bad page state in process cc1plus pfn:a510d [ 281s] [ 261.549329][ T4933] BUG: Bad page state in process cc1plus pfn:a510e [ 281s] [ 261.549407][ T4933] BUG: Bad page state in process cc1plus pfn:a510f [ 281s] [ 261.549485][ T4933] BUG: Bad page state in process cc1plus pfn:a5110 [ 281s] [ 261.549562][ T4933] BUG: Bad page state in process cc1plus pfn:a5111 [ 281s] [ 261.549639][ T4933] BUG: Bad page state in process cc1plus pfn:a5112 [ 281s] [ 261.549716][ T4933] BUG: Bad page state in process cc1plus pfn:a5113 [ 281s] [ 261.549793][ T4933] BUG: Bad page state in process cc1plus pfn:a5114 [ 281s] [ 261.549870][ T4933] BUG: Bad page state in process cc1plus pfn:a5115 [ 281s] [ 261.549947][ T4933] BUG: Bad page state in process cc1plus pfn:a5116 [ 281s] [ 261.550024][ T4933] BUG: Bad page state in process cc1plus pfn:a5117 [ 281s] [ 261.550111][ T4933] 
BUG: Bad page state in process cc1plus pfn:a5118 [ 281s] [ 261.550190][ T4933] BUG: Bad page state in process cc1plus pfn:a5119 [ 281s] [ 261.550269][ T4933] BUG: Bad page state in process cc1plus pfn:a511a [ 281s] [ 261.550348][ T4933] BUG: Bad page state in process cc1plus pfn:a511b [ 281s] [ 261.550427][ T4933] BUG: Bad page state in process cc1plus pfn:a511c [ 281s] [ 261.550505][ T4933] BUG: Bad page state in process cc1plus pfn:a511d [ 281s] [ 261.550583][ T4933] BUG: Bad page state in process cc1plus pfn:a511e [ 281s] [ 261.550661][ T4933] BUG: Bad page state in process cc1plus pfn:a511f [ 281s] [ 261.550740][ T4933] BUG: Bad page state in process cc1plus pfn:a5120 [ 281s] [ 261.550818][ T4933] BUG: Bad page state in process cc1plus pfn:a5121 [ 281s] [ 261.550896][ T4933] BUG: Bad page state in process cc1plus pfn:a5122 [ 281s] [ 261.550974][ T4933] BUG: Bad page state in process cc1plus pfn:a5123 [ 281s] [ 261.551052][ T4933] BUG: Bad page state in process cc1plus pfn:a5124 [ 281s] [ 261.551131][ T4933] BUG: Bad page state in process cc1plus pfn:a5125 [ 281s] [ 261.551209][ T4933] BUG: Bad page state in process cc1plus pfn:a5126 [ 281s] [ 261.551287][ T4933] BUG: Bad page state in process cc1plus pfn:a5127 [ 281s] [ 261.551366][ T4933] BUG: Bad page state in process cc1plus pfn:a5128 [ 281s] [ 261.551444][ T4933] BUG: Bad page state in process cc1plus pfn:a5129 [ 281s] [ 261.551523][ T4933] BUG: Bad page state in process cc1plus pfn:a512a [ 281s] [ 261.551602][ T4933] BUG: Bad page state in process cc1plus pfn:a512b [ 281s] [ 261.551679][ T4933] BUG: Bad page state in process cc1plus pfn:a512c [ 281s] [ 261.551757][ T4933] BUG: Bad page state in process cc1plus pfn:a512d [ 281s] [ 261.551836][ T4933] BUG: Bad page state in process cc1plus pfn:a512e [ 281s] [ 261.551914][ T4933] BUG: Bad page state in process cc1plus pfn:a512f [ 281s] [ 261.551993][ T4933] BUG: Bad page state in process cc1plus pfn:a5130 [ 281s] [ 261.552071][ T4933] BUG: Bad page state in process cc1plus pfn:a5131 [ 281s] [ 261.552150][ T4933] BUG: Bad page state in process cc1plus pfn:a5132 [ 281s] [ 261.552228][ T4933] BUG: Bad page state in process cc1plus pfn:a5133 [ 281s] [ 261.552307][ T4933] BUG: Bad page state in process cc1plus pfn:a5134 [ 281s] [ 261.552385][ T4933] BUG: Bad page state in process cc1plus pfn:a5135 [ 281s] [ 261.552463][ T4933] BUG: Bad page state in process cc1plus pfn:a5136 [ 281s] [ 261.552541][ T4933] BUG: Bad page state in process cc1plus pfn:a5137 [ 281s] [ 261.552619][ T4933] BUG: Bad page state in process cc1plus pfn:a5138 [ 281s] [ 261.552697][ T4933] BUG: Bad page state in process cc1plus pfn:a5139 [ 281s] [ 261.552775][ T4933] BUG: Bad page state in process cc1plus pfn:a513a [ 281s] [ 261.552853][ T4933] BUG: Bad page state in process cc1plus pfn:a513b [ 281s] [ 261.552931][ T4933] BUG: Bad page state in process cc1plus pfn:a513c [ 281s] CXX watchman-spawn.o [ 281s] CXX watchman-opt.o [ 341s] [ 321.678657][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 341s] [ 321.835415][ C1] rcu: All QSes seen, last rcu_sched kthread activity 6009 (4294969041-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 341s] [ 321.999873][ C1] rcu: rcu_sched kthread starved for 6009 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 341s] [ 321.999941][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. 
[ 341s] [ 321.999991][ C1] rcu: RCU grace-period kthread stack dump: [ 341s] [ 322.000063][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 521s] [ 501.720088][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 521s] [ 501.745510][ C1] rcu: All QSes seen, last rcu_sched kthread activity 24014 (4294987046-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 521s] [ 501.745589][ C1] rcu: rcu_sched kthread starved for 24014 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 521s] [ 501.745651][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 521s] [ 501.745703][ C1] rcu: RCU grace-period kthread stack dump: [ 521s] [ 501.745780][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 701s] [ 681.773572][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 701s] [ 682.108309][ C1] rcu: All QSes seen, last rcu_sched kthread activity 42019 (4295005051-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 701s] [ 682.108442][ C1] rcu: rcu_sched kthread starved for 42019 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 701s] [ 682.108525][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 701s] [ 682.108591][ C1] rcu: RCU grace-period kthread stack dump: [ 701s] [ 682.108684][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 881s] [ 861.874041][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 881s] [ 861.882314][ C1] rcu: All QSes seen, last rcu_sched kthread activity 60029 (4295023061-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 881s] [ 861.882572][ C1] rcu: rcu_sched kthread starved for 60029 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 881s] [ 861.882727][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 881s] [ 861.882871][ C1] rcu: RCU grace-period kthread stack dump: [ 881s] [ 861.883009][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 1061s] [ 1042.112078][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 1061s] [ 1042.283713][ C1] rcu: All QSes seen, last rcu_sched kthread activity 78053 (4295041085-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 1061s] [ 1042.283807][ C1] rcu: rcu_sched kthread starved for 78053 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 1061s] [ 1042.283870][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 1061s] [ 1042.283932][ C1] rcu: RCU grace-period kthread stack dump: [ 1061s] [ 1042.284015][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 1241s] [ 1222.171679][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 1241s] [ 1222.283107][ C1] rcu: All QSes seen, last rcu_sched kthread activity 96059 (4295059091-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 1241s] [ 1222.283256][ C1] rcu: rcu_sched kthread starved for 96059 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 1241s] [ 1222.283888][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. 
[ 1241s] [ 1222.284576][ C1] rcu: RCU grace-period kthread stack dump: [ 1241s] [ 1222.284728][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 1421s] [ 1402.231104][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 1421s] [ 1402.340906][ C1] rcu: All QSes seen, last rcu_sched kthread activity 114065 (4295077097-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 1421s] [ 1402.341051][ C1] rcu: rcu_sched kthread starved for 114065 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 1421s] [ 1402.341128][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 1421s] [ 1402.341181][ C1] rcu: RCU grace-period kthread stack dump: [ 1421s] [ 1402.341278][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 1602s] [ 1582.298865][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 1602s] [ 1582.888473][ C1] rcu: All QSes seen, last rcu_sched kthread activity 132071 (4295095103-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 1602s] [ 1582.888571][ C1] rcu: rcu_sched kthread starved for 132071 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 1602s] [ 1582.888644][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 1602s] [ 1582.888705][ C1] rcu: RCU grace-period kthread stack dump: [ 1602s] [ 1582.888794][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 1782s] [ 1762.387579][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 1782s] [ 1762.387727][ C1] rcu: All QSes seen, last rcu_sched kthread activity 150080 (4295113112-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 1782s] [ 1762.387796][ C1] rcu: rcu_sched kthread starved for 150080 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 1782s] [ 1762.387850][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 1782s] [ 1762.387895][ C1] rcu: RCU grace-period kthread stack dump: [ 1782s] [ 1762.388019][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 1962s] [ 1942.446760][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 1962s] [ 1942.485300][ C1] rcu: All QSes seen, last rcu_sched kthread activity 168086 (4295131118-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 1962s] [ 1942.485375][ C1] rcu: rcu_sched kthread starved for 168086 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 1962s] [ 1942.485429][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 1962s] [ 1942.485475][ C1] rcu: RCU grace-period kthread stack dump: [ 1962s] [ 1942.485546][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 2142s] [ 2122.502705][ C0] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2142s] [ 2122.894815][ C0] rcu: All QSes seen, last rcu_sched kthread activity 186131 (4295149163-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 2142s] [ 2122.936517][ C0] rcu: rcu_sched kthread starved for 186135 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 2142s] [ 2122.936628][ C0] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. 
[ 2142s] [ 2122.936718][ C0] rcu: RCU grace-period kthread stack dump: [ 2142s] [ 2122.936848][ C0] rcu: Stack dump where RCU GP kthread last ran: [ 2323s] [ 2302.995674][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2323s] [ 2303.517840][ C1] rcu: All QSes seen, last rcu_sched kthread activity 204141 (4295167173-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 2323s] [ 2303.517936][ C1] rcu: rcu_sched kthread starved for 204141 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 2323s] [ 2303.518007][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2323s] [ 2303.518068][ C1] rcu: RCU grace-period kthread stack dump: [ 2323s] [ 2303.518146][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 2503s] [ 2483.050239][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2503s] [ 2483.422236][ C1] rcu: All QSes seen, last rcu_sched kthread activity 222147 (4295185179-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 2503s] [ 2483.422331][ C1] rcu: rcu_sched kthread starved for 222147 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 2503s] [ 2483.422407][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2503s] [ 2483.422471][ C1] rcu: RCU grace-period kthread stack dump: [ 2503s] [ 2483.422556][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 2682s] [ 2663.146691][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2682s] [ 2663.146864][ C1] rcu: All QSes seen, last rcu_sched kthread activity 240156 (4295203188-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 2683s] [ 2663.355686][ C1] rcu: rcu_sched kthread starved for 240156 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 2683s] [ 2663.540275][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2683s] [ 2663.540327][ C1] rcu: RCU grace-period kthread stack dump: [ 2683s] [ 2663.540402][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 2862s] [ 2843.200089][ C0] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 2862s] [ 2843.297826][ C0] rcu: All QSes seen, last rcu_sched kthread activity 258171 (4295221203-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 2862s] [ 2843.297958][ C0] rcu: rcu_sched kthread starved for 258171 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 2862s] [ 2843.298021][ C0] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 2862s] [ 2843.298076][ C0] rcu: RCU grace-period kthread stack dump: [ 2862s] [ 2843.298154][ C0] rcu: Stack dump where RCU GP kthread last ran: [ 3043s] [ 3023.402002][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3043s] [ 3023.402199][ C1] rcu: All QSes seen, last rcu_sched kthread activity 276182 (4295239214-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 3043s] [ 3023.402283][ C1] rcu: rcu_sched kthread starved for 276182 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 3043s] [ 3023.402399][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. 
[ 3043s] [ 3023.402490][ C1] rcu: RCU grace-period kthread stack dump: [ 3043s] [ 3023.402584][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 3223s] [ 3203.450092][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3223s] [ 3203.557258][ C1] rcu: All QSes seen, last rcu_sched kthread activity 294187 (4295257219-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 3223s] [ 3203.608733][ C1] rcu: rcu_sched kthread starved for 294187 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 3223s] [ 3203.608899][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. [ 3223s] [ 3203.609082][ C1] rcu: RCU grace-period kthread stack dump: [ 3223s] [ 3203.609171][ C1] rcu: Stack dump where RCU GP kthread last ran: [ 3403s] [ 3383.506111][ C1] rcu: INFO: rcu_sched detected stalls on CPUs/tasks: [ 3403s] [ 3383.509427][ C1] rcu: All QSes seen, last rcu_sched kthread activity 312192 (4295275224-4294963032), jiffies_till_next_fqs=1, root ->qsmask 0x0 [ 3403s] [ 3383.540655][ C1] rcu: rcu_sched kthread starved for 312192 jiffies! g6677 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0 [ 3403s] [ 3383.540723][ C1] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior. -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c37

--- Comment #37 from Sarah Kriesch <ada.lovelace@gmx.de> ---
Anyway, it seems that many builds on our build nodes have been damaged. I had to rebuild Perl, and LLVM does not build successfully. I also saw strange behaviour with security issues in tls and zip (reported this weekend).

@Takashi: I don't have a good feeling about the existing bugfix. Can we revert it again, please?

@Gerald: Sorry! But it seems that we are now getting more bugs of this kind at the application and compiler level than before. cross-epiphany-gcc11-bootstrap has no problems anymore; the issue seems to have been handed over to the next level (incl. these kernel error messages).

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c39

--- Comment #39 from Sarah Kriesch <ada.lovelace@gmx.de> ---
Thank you for this hint, Fabian. If we did not have the bugfix in Tumbleweed until now, then this seems to be the same issue as before.

@Gerald: Sorry! I thought the commit would also be in our master branch if it is also being backported for SLE.

@kernel team: Can you apply the patch to Tumbleweed, please?

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c42 --- Comment #42 from Sarah Kriesch <ada.lovelace@gmx.de> --- (In reply to LTC BugProxy from comment #41)
> ------- Comment From geraldsc@de.ibm.com 2022-01-24 07:10 EDT-------
> FTR (again), the mentioned upstream commit is by no means a qualified bugfix for this issue. It is a fix for a theoretical race, which has been present for many years (basically forever), and never showed in practice. There is no indication that it would / should help with this issue, it is just "grasping at straws", as mentioned earlier here, due to lack of any good starting points for debugging (e.g. kernel dump or log).
>
> I would suggest that you take the opportunity, if you can still easily reproduce the problem, and implement the advice from an earlier comment to get a kernel dump at the time when the first "bad page state" error occurs:
>
> The "bad page state" warning causes a taint, so panic_on_taint=5 should help there. Needs an OBS admin to add that option and get a dump though, or alternatively someone who can reproduce it (somewhat reliably) with osc build --vm-type=kvm.
I had a similar idea in the past with log files and discussed it with @Adrian. Saving dumps is an additional item on top of that. Do you know roughly how large such a kernel dump is, and perhaps the log as well?

I thought about this request overnight. It only makes sense if these dumps can also be removed from the SUSE infrastructure again after an automated handover to you. If you look at my last comments, you can see that "many" such dumps could then be reproduced on a daily basis, and I have received such example dumps via Adrian in the past.

There are also some requirements on the IBM side then. I know the IBM security rules; it is difficult to get data from outside to the developer side. I had such a data exchange with Sandy via the Box in the past. Can you ask Thomas or your People Lead Matthias about the possibility of write access to the Box for our systems, so that you can receive these data automatically and they can be removed again on our side after a successful upload?

I would say it would be a win-win situation for you as an IBMer as well as for openSUSE and SUSE. BUT IBM should know that it would then be an investment from the SUSE side for all of us. My requirement (in the name of the Linux Distributions Working Group) is that you speak with SUSE in the same tone as here in the future (again). It does not make sense to invest in such features if they have no future in the next years.

@Adrian: What do you say about this idea of creating such dumps automatically, uploading them to the Box on the IBM side, and removing the data from the SUSE infrastructure again afterwards?

@Ihno: Is such a handshake OK if SUSE and openSUSE are treated well by IBM again?

P.S. Yes, I am acting in favour of all Linux distributions (incl. our Enterprise partner) via the Open Mainframe Project.

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 Sarah Kriesch <ada.lovelace@gmx.de> changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ihno@suse.com) -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c45

Sarah Kriesch <ada.lovelace@gmx.de> changed:

           What  |Removed                  |Added
           ----------------------------------------------------------------------------
           Flags |needinfo?(ihno@suse.com) |needinfo?(bugproxy@us.ibm.com)

--- Comment #45 from Sarah Kriesch <ada.lovelace@gmx.de> ---
Thank you for the dump, Rudi! We are lucky that nodejs is affected. Everything with dependencies on programming languages has had a higher priority at IBM since last year (because mainframes should also be used for programming).

(In reply to Ruediger Oertel from comment #44)
> now the question is just how to get this 610M file out to whoever needs it

There is only Gerald as a kernel developer for SUSE and openSUSE at IBM. We are sharing our resources. I expect that your referenced link also works via the IBM Bugzilla. Therefore, this bug report was the correct place for forwarding the link. :)

-- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c48

--- Comment #48 from Sarah Kriesch <ada.lovelace@gmx.de> ---
Christian Borntraeger gave me and Adrian the hint to use "dump-guest-memory" in the monitor for reproducing it at IBM. Gerald has to say how we should transfer all the data to him (that would also apply to the automated way in the future). Ihno has also suggested the SUSE FTP server as a temporary solution.

-- You are receiving this mail because: You are the assignee for the bug.
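For reference, a minimal sketch of taking such a dump from the QEMU monitor of the affected build guest (the domain name and target path are examples):

  # via libvirt, using the human monitor protocol
  virsh qemu-monitor-command --hmp <guest-domain> 'dump-guest-memory /var/tmp/guest.memory.dump'
  # or directly at a (qemu) monitor prompt:
  #   dump-guest-memory /var/tmp/guest.memory.dump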
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c60

--- Comment #60 from Sarah Kriesch <ada.lovelace@gmx.de> ---
I have installed the debuginfo package (in the past) and extracted the kernel with:

  rpm2cpio kernel-default-*.s390x.rpm | cpio -idmv

Then I was able to use crash with:

  crash -c /dev/ttyS0 --cpus 3 boot/vmlinux-*-default.xz s390zl21.memory.dump

(* has to be replaced by the kernel version)

If Rudi has used the referenced method, then it should be possible to open the dump this way. If the dump is from SLE/Leap, you need the kernel download link for SLE 15 SP4 Beta or Leap.

-- You are receiving this mail because: You are the assignee for the bug.
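Putting that together, a possible end-to-end sequence (file name patterns as in this comment; whether the .debug file has to be passed explicitly depends on whether the debuginfo package is installed system-wide):

  # extract kernel image and debuginfo into the current directory
  rpm2cpio kernel-default-*.s390x.rpm | cpio -idmv
  rpm2cpio kernel-default-debuginfo-*.s390x.rpm | cpio -idmv
  # open the dump; the extracted debug info can be passed as an extra namelist
  crash -c /dev/ttyS0 --cpus 3 \
        boot/vmlinux-*-default.xz \
        usr/lib/debug/boot/vmlinux-*-default.debug \
        s390zl21.memory.dump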
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c61 --- Comment #61 from Ruediger Oertel <ro@suse.com> --- I am currently preparing another dump including kernel and kernel-debuginfo and buildlog ... still waiting for the 8G dump to compress ... all will be at https://beta.suse.com/private/ro/s390-tw/ -- You are receiving this mail because: You are the assignee for the bug.
http://bugzilla.opensuse.org/show_bug.cgi?id=1188896 http://bugzilla.opensuse.org/show_bug.cgi?id=1188896#c62

--- Comment #62 from Ruediger Oertel <ro@suse.com> ---
# cat SUMS
1cedf33c19e6b8d13ed2dbd3c3ba9c8af583dffab40d4a8a5f33945a68fbcccd buildlog
11b9d2db42d402f09189e7f23dfd1277377e132a79bdfdd20612436f7175cd21 kernel-default-5.16.8-1.1.s390x.rpm
8bcc4b8d549eeffaead729e3bdceb366ae48bde5363f6c5a773ec0dbecc9ea0a kernel-default-debuginfo-5.16.8-1.1.s390x.rpm
248c624f7f4e31f93bc0759af126f2cb56f780a7f4ba37b2a10a1af1da766cf8 kernel-default-debugsource-5.16.8-1.1.s390x.rpm
dfdd1ddf05d7858e8d5349afc73247b54adc4344fa31198685ff9c4d8a8f429b qemu-vm-dump-qt6-qt5compat.gz
b1fbf65214e41d4f7c501a50fc6c708816506508031fef6d6ebe2ce800caed33 README

-- You are receiving this mail because: You are the assignee for the bug.