http://bugzilla.suse.com/show_bug.cgi?id=1113289
http://bugzilla.suse.com/show_bug.cgi?id=1113289#c6

Oliver Kurz <okurz@suse.com> changed:

           What    |Removed     |Added
----------------------------------------------------------------------------
             Status|RESOLVED    |REOPENED
         Resolution|WORKSFORME  |---

--- Comment #6 from Oliver Kurz <okurz@suse.com> ---
Sorry, I think I misread my own bug. The issue still seems to be present even
though the symptoms have changed; I suspect it is the same underlying problem:
I have had problems with btrfs turning the root fs read-only multiple times
over the past months, effectively crashing my whole computer, with next to no
possibility to interact with it besides magic-sysrq, and even that only to a
limited extent. This happened on 4.19.0, 4.18.10 and 4.12.14-lp150.12.4-default.
The computer showed constant disk activity and was completely stalled; the only
message in the journal after reboot was: "Feb 02 08:34:46 linux-28d6.suse
smartd[2172]: Device: /dev/sda [SAT], starting scheduled Short Self-Test.".
magic-sysrq had no visible effect except that sysrq-b caused the disk activity
LED to go off after about 1-2 seconds. These crashes also seem more likely to
appear in the morning, for reasons unknown.

On 2019-03-04 this happened again in the morning at around 08:43 (the clock in
the X11 window manager stopped at this time). I let the computer run until
12:15 and nothing changed; the disk LED was still on (the disk is an SSD).

On 2019-05-13 I was lucky and had access to a root terminal long enough,
because I had not called any commands that would rely on SSD I/O, so I could
execute "cat /proc/kmsg" and see messages about a btrfs balance failing with
ENOSPC. I could also see the output of magic-sysrq actions, which was helpful
for confirming that they were being processed. I reset the computer, freed up
quite some space and ran the balance commands from the cron job again. This
time there were no errors.

On 2019-05-15 I saw a message about a smartd test, but also that it completed
successfully.

Side note: I learned that "dmesg -T" shows misleading timestamps because they
are derived from the kernel uptime clock, which does not continue over
suspend.

The problem reproduced on 5.1.5 just now, on 2019-06-11.
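For context, the balance commands from the cron job are essentially
usage-filtered balances. A minimal sketch of what they do, assuming typical
btrfsmaintenance-style filter values (the exact percentages are an assumption,
not copied from the actual configuration):

    # Relocate nearly empty chunks first; low usage filters need little
    # free space to succeed and reclaim the most unallocated space.
    btrfs balance start -dusage=5 -musage=5 /
    btrfs balance start -dusage=30 -musage=30 /

    # Compare allocated vs. unallocated space before and after:
    btrfs filesystem usage /

When such a balance aborts with ENOSPC, freeing space first and re-running the
same commands, as described above, is what resolved it here.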