Bug ID 1187701
Summary Ten64 fails to boot (kernel panic in initramfs) from NVMe after upgrade to kernel 5.3.18-59.5-default
Classification openSUSE
Product openSUSE Distribution
Version Leap 15.3
Hardware aarch64
OS openSUSE Leap 15.3
Status NEW
Severity Critical
Priority P5 - None
Component Kernel
Assignee kernel-bugs@opensuse.org
Reporter matt@traverse.com.au
QA Contact qa-bugs@suse.de
Found By ---
Blocker ---

Created attachment 850544 [details]
Failed boot on kernel 5.3.18-59.5-default

Hello,

After the Leap 15.3 appliance images were updated recently, I have found that the
newer versions no longer boot on the Ten64.

Working image build:
openSUSE-Leap-15.3-ARM-JeOS-efi.aarch64-2021.05.21-Build9.94.raw.xz / kernel
5.3.18-57-default 

Not working:
openSUSE-Leap-15.3-ARM-JeOS-efi.aarch64-2021.05.21-Build9.106.raw.xz / kernel
5.3.18-59.5-default

If I upgrade the kernel inside Build9.94 with 'zypper up', the same issue
occurs.

If I then choose the previous snapshot from GRUB, the system can once again be
booted with 5.3.18-57.


Boot fails with the following kernel panic during the initramfs/dracut stage:

[    6.227357] Internal error: synchronous external abort: 96000210 [#1] SMP
[    6.232443] mmc0: SDHCI controller on 2140000.esdhc [2140000.esdhc] using ADMA 64-bit
[    6.234152] Modules linked in: nvme nvme_core dwc3 sdhci_of_esdhc(+) sdhci_pltfm sdhci t10_pi mmc_core ulpi udc_core rtc_fsl_ftm_alarm i2c_imx gpio_keys sg scsi_mod
[    6.256675] Supported: Yes
[    6.259380] CPU: 6 PID: 7 Comm: kworker/u16:0 Not tainted 5.3.18-59.5-default #1 SLE15-SP3
[    6.267643] Hardware name: traverse ten64/ten64, BIOS 2020.07-rc1-gb47b96d4 06/25/2021
[    6.275581] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
[    6.275587] pstate: a0000005 (NzCv daif -PAN -UAO)
[    6.275594] pc : nvme_reset_work+0x16c/0x12f8 [nvme]
[    6.275600] lr : nvme_reset_work+0x164/0x12f8 [nvme]
[  OK  ] Reached target Basic System.
[    6.275601] sp : ffff8000100a3c80
[    6.275603] x29: ffff8000100a3c80 x28: ffff0732958af0c0
[    6.275606] x27: ffff0732958af0c0 x26: ffff0732954412c0
[    6.275609] x25: ffff073295441300 x24: ffff073295441710
[    6.275611] x23: ffff0732958af000 x22: ffff073295441000
[    6.275614] x21: ffffb9bd5bd89000 x20: ffff073295440f10
[    6.275617] x19: ffff073295441000 x18: ffffffffffffffff
[    6.275620] x17: 0000000000000000 x16: ffffb9bd5a7a35a0
[    6.275622] x15: ffffb9bd5bd89908 x14: 0000000000000040
[    6.275625] x13: 0000000000000228 x12: 0000000000000000
[    6.275628] x11: 0000000000000000 x10: 0000000000001a50
[    6.275630] x9 : ffff8000100a3d10 x8 : 000000000000007d
[    6.275633] x7 : 0000000000000006 x6 : 0000000000010000
[    6.275635] x5 : 0000000000000000 x4 : 0000000000000000
[    6.275638] x3 : 0000000080000000 x2 : 0000000000000000
[    6.275641] x1 : ffffffffffffffff x0 : ffff8000105b201c
[    6.275644] Call trace:
[    6.275651]  nvme_reset_work+0x16c/0x12f8 [nvme]
[    6.275659]  process_one_work+0x200/0x458
[    6.275662]  worker_thread+0x144/0x4f0
[    6.275666]  kthread+0x130/0x138
[    6.275670]  ret_from_fork+0x10/0x18
[    6.275675] Code: aa1c03e0 940005af f9414ac0 91007000 (b9400000)
[    6.275678] ---[ end trace c33296e2e9bf08c4 ]---
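
For what it's worth, decoding the oops: the trapped instruction b9400000 is
"ldr w0, [x0]", the preceding code bytes include "add x0, x0, #0x1c", and the
faulting address in x0 (ffff8000105b201c) ends in 0x1c, which is the offset of
NVME_REG_CSTS in the mainline nvme driver. So this looks like a readl() of the
controller status register through the ioremapped BAR; on arm64 such a read can
abort synchronously if the PCIe link behind the mapping is not up, instead of
returning all-ones as on x86. A minimal sketch of the kind of access I mean
(not the actual SUSE kernel source; the exact faulting read is my
interpretation of the disassembly):

#include <linux/io.h>
#include <linux/nvme.h>
#include <linux/types.h>

/*
 * Sketch only: nvme_reset_work polls controller registers such as
 * CSTS through the ioremapped BAR. If the PCIe link behind that
 * mapping is not up when the work runs, a readl() like the one
 * below is the kind of access that raises the synchronous external
 * abort seen in the trace above.
 */
static bool nvme_ctrl_ready(void __iomem *bar)
{
        u32 csts = readl(bar + NVME_REG_CSTS);  /* NVME_REG_CSTS == 0x1c */

        return csts & NVME_CSTS_RDY;
}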

I have confirmed this issue on multiple Ten64 units and with different SSD
models, so it does not appear to be hardware-related.

