[Bug 1222313] btrfs storage discrepancy between used and free space of about 1/3 of the total disk capacity
https://bugzilla.suse.com/show_bug.cgi?id=1222313
https://bugzilla.suse.com/show_bug.cgi?id=1222313#c15

--- Comment #15 from Felix Niederwanger <felix.niederwanger@suse.com> ---
Moving the disk images to an external medium and back made a HUGE difference - I got almost 200 GiB back:
/dev/mapper/system-root  932G  586G  338G  64%  /.snapshots
/dev/mapper/system-root  932G  586G  338G  64%  /boot/grub2/i386-pc
/dev/mapper/system-root  932G  586G  338G  64%  /boot/grub2/x86_64-efi
/dev/mapper/system-root  932G  586G  338G  64%  /home
/dev/mapper/system-root  932G  586G  338G  64%  /opt
/dev/mapper/system-root  932G  586G  338G  64%  /root
/dev/mapper/system-root  932G  586G  338G  64%  /srv
/dev/mapper/system-root  932G  586G  338G  64%  /usr/local
/dev/mapper/system-root  932G  586G  338G  64%  /var
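As a side note on diagnosing such a discrepancy: plain `df` cannot distinguish btrfs chunk allocation from actual usage, but btrfs-progs can. A minimal sketch, assuming `/` is the btrfs mount point in question:

```shell
# Overall allocation vs. actual use: shows how much of the device is
# allocated to data/metadata chunks and how much of that is really used.
btrfs filesystem usage /

# Per-profile breakdown (Data, System, Metadata).
btrfs filesystem df /

# List subvolumes/snapshots, which can pin old extents and
# account for space that df attributes to nothing visible.
btrfs subvolume list /
```

These commands need root on a mounted btrfs filesystem; the output makes it visible when snapshot-pinned or COW-fragmented extents are eating the "missing" space.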
I have now also disabled COW for the filesystem in question via `chattr +C -R /srv` and hope this prevents the problem from recurring. The "storage hole" was my VM disk images, which I keep on this laptop for testing. All VMs are updated once per day during lunch time, and I keep them until the products reach EOL. We apply the +C attribute to the /var partition by default, and that is also the case here. I wonder whether it would make sense to apply the same default to /srv, which likely holds similar kinds of data to /var.

--
You are receiving this mail because:
You are on the CC list for the bug.
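One caveat worth noting about the workaround above: `chattr +C` only takes effect for files created after the attribute is set, so existing VM images keep their COW extents until they are rewritten (which is why moving the images off and back freed the space). A hedged sketch, where `disk.img` is a hypothetical image name, not from this report:

```shell
# Mark /srv (recursively) so newly created files are NOCOW (btrfs only).
# NOTE: existing files are NOT converted; they must be rewritten.
chattr +C -R /srv

# Rewrite an existing image so it picks up the directory's +C attribute:
# the copy is a new file and therefore created NOCOW, the mv replaces
# the old COW-extent file. (disk.img is a hypothetical example name.)
cp /srv/vm/disk.img /srv/vm/disk.img.new && mv /srv/vm/disk.img.new /srv/vm/disk.img

# Verify the attribute on the directory ('C' should appear in the flags).
lsattr -d /srv
```

Note also that NOCOW files on btrfs are not checksummed and cannot be compressed, which is usually an acceptable trade-off for VM images and databases.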