On 07.01.2016 19:22, tomtomme wrote:
"Cristian Rodríguez" <crrodriguez@opensuse.org>schrieb: On Wed, Jan 6, 2016 at 4:47 PM, Frank Kunz <mailinglists@kunz-im-inter.net> wrote:
Hi,
I have seen some strange behavior while testing Tumbleweed over the last two weeks. I have seen it with different versions, including the latest 20160105 a few minutes ago.
When running 'fstrim -v /' and then rebooting, I get a 'Non-System disc or disk error,...'
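For reference, it may be worth confirming that the device advertises discard support at all before trimming. A minimal check, assuming the affected SSD is /dev/sda (hypothetical device name, adjust as needed):

  lsblk --discard /dev/sda   # non-zero DISC-GRAN / DISC-MAX means the device accepts TRIM
  sudo fstrim -v /           # reports how many bytes were trimmed on the root filesystem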
I have the same problem with an oldish Samsung SSD. It is not that the disk is wiped (at least in my case), but the grub2 installation gets corrupted. Reinstalling grub2 from a rescue system "fixes" the problem. I opted to just "not do that then"..

I had the same experience 2 times 4-5 weeks ago, so also no logs, but I hope the following information is still useful. At first I thought it was BTRFS or my 6-year-old OCZ Agility SSD (Sandforce controller), since 2 other Tumbleweed systems with newer SSDs (no Sandforce), ext4 and the same repos / configs had no problems. I regularly invoke fstrim manually on all 3 machines once a month or so, but I do not remember if I trimmed right before GRUB2 was gone on that BTRFS machine. However, I got the similar BIOS message "no operating system found". So I put in my Tumbleweed USB stick and chose the upgrade option. The installer found my root system and the package db without issues, and I then chose to re-install grub2 on root where it was before (plus some base packages like kernel and systemd).
That fixed it until it happened again a week later after a "normal" zypper dup. I am fairly sure that I did not trim that time - maybe the kernel does this automatically these days, but I am not sure whether that needs the "discard" option in /etc/fstab, which I do NOT have.
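For context: continuous (online) discard only happens when the filesystem is mounted with the discard option. A hypothetical /etc/fstab line with it enabled would look roughly like this (UUID is a placeholder):

  UUID=xxxx-xxxx  /  btrfs  defaults,discard  0 0

Without that option nothing is trimmed automatically unless some periodic job runs fstrim, like the one mentioned below.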
See the btrfsmaintenance package - it installs a cron job that trims btrfs filesystems.
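Whether that job is actually enabled on a given machine should be visible in its sysconfig file; a quick check, assuming the package's usual config location:

  grep -i trim /etc/sysconfig/btrfsmaintenance   # shows the configured trim period / mountpoints
  ls /etc/cron.*/btrfs*                          # shows which periodic btrfs scripts are installed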
Again I fixed the missing grub with the above method, but this time installed it in the MBR AND root. No grub2 problems since then - zypper problems instead, again only on this machine.
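For anyone hitting this without installation media at hand, the rescue-system reinstall mentioned above can also be done from a chroot. A minimal sketch, assuming the root filesystem is on /dev/sda2 and grub2 goes into the MBR of /dev/sda (both hypothetical; a separate /boot or non-default btrfs subvolume layout needs extra mounts):

  mount /dev/sda2 /mnt
  for d in dev proc sys; do mount --bind /$d /mnt/$d; done
  chroot /mnt
  grub2-install /dev/sda
  grub2-mkconfig -o /boot/grub2/grub.cfg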
Well, this indirectly confirms the hypothesis that btrfs trim deletes the bootloader area. I'm still not sure where the error comes from. Would someone who can reproduce it record the state of the btrfs bootloader area (the 64 KiB at the beginning of the device) when booting fails?
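A minimal way to capture that area from a rescue system or live USB, assuming the btrfs filesystem is on /dev/sda2 (hypothetical; adjust to the real partition and save the dump somewhere persistent):

  dd if=/dev/sda2 of=/root/btrfs-boot-area.bin bs=64K count=1
  hexdump -C /root/btrfs-boot-area.bin | head   # an all-zero dump would suggest the area was discarded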
When I started using it again 1 week ago, "zypper dup" installed only half of the new packages - then it gave up, saying it could not extract any further packages. There were 5 GB of space left on the SSD, but I nonetheless tried emptying the trash and deleting .thumbnails, and that helped zypper to extract / install some more rpms, but still not all, and that borked my system, so I have now re-installed root with ext4.
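One thing that might be worth ruling out here: on btrfs, the free space that plain df reports can be misleading when metadata chunks are exhausted, so an extraction failure with "5 GB left" can still be an out-of-space condition. Checking would look something like:

  df -h /
  sudo btrfs filesystem df /
  sudo btrfs filesystem usage /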
I really thought my SSD was dying, although the logs showed no errors for /dev/sdb. But after your reports I really don't know what's going on.
- Which controllers do your SSDs use?
- Did you all trim manually or with discard option?
- Did you all use BTRFS?
- How old are your SSDs?
- Do you see SSD-write-related errors in the logs? (a quick way to check this and the controller model is sketched below)
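To answer the controller and error questions, something like the following should do, assuming smartmontools is installed and the SSD is /dev/sdb (hypothetical):

  sudo smartctl -i /dev/sdb                 # model / firmware, which usually identifies the controller family
  sudo smartctl -A /dev/sdb                 # SMART attributes, e.g. reallocated sectors and wear level
  journalctl -k | grep -iE 'sdb|ata[0-9]'   # kernel messages for the device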