Bug ID | 1222313 |
---|---|
Summary | btrfs storage discrepancy between used and free space of about 1/3 of the total disk capacity |
Classification | openSUSE |
Product | openSUSE Tumbleweed |
Version | Current |
Hardware | Other |
OS | Other |
Status | NEW |
Severity | Normal |
Priority | P5 - None |
Component | Kernel:Filesystems |
Assignee | kernel-fs@suse.de |
Reporter | felix.niederwanger@suse.com |
QA Contact | qa-bugs@suse.de |
Target Milestone | --- |
Found By | --- |
Blocker | --- |
I have a btrfs filesystem of about 1 TiB on my laptop where I'm missing about 1/3 of the disk's capacity. According to the output of `du`, my data should occupy about 540 GiB of storage. This includes the snapper snapshots and ignores any space savings from shared extents, so it is a generous upper limit. However, `df` reports that 778 GiB are currently in use, leaving a discrepancy of about 240 GiB, almost 1/4 of the total SSD capacity, that I am unable to account for. If I take the used space reported by snapper instead of the `du` output of /.snapshots, the discrepancy grows to 314 GiB, about 1/3 of the total SSD capacity.

I wrote to the research@suse.de mailing list on Tuesday (https://mailman.suse.de/mlarch/SuSE/research/2024/research.2024.04/msg00000.html) and we were unable to find where the missing storage went. Two other users confirmed the problem by reporting similar issues, one on-list and one in a private conversation. In both cases they see increased disk usage with no obvious consumer present.

To rule out the known btrfs metadata bug from kernel 6.7, I ran a full balance this week and also checked the output of `btrfs fi df`, which shows that metadata occupies only 4 GiB. The balance did not change the overall picture. I will attach all logs and information collected as requested in the mail thread linked above.

## System description

Running Tumbleweed 20240402 with kernel 6.8.1-1-default. I'm using the full disk encryption layout suggested by the installer in November 2023, i.e. a single btrfs volume on top of a LUKS-encrypted LVM volume. I run scrubs manually and otherwise left the btrfs maintenance scripts in their default configuration (balance/trim enabled).
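For reference, these are roughly the commands I used to collect the numbers above (a sketch; the exact invocations and the list of subvolumes may have differed slightly, this assumes the default Tumbleweed subvolume layout):

```sh
# Apparent usage according to du, summed over the relevant top-level trees.
# du counts extents shared between snapshots once per reference, so this
# is a generous upper bound.
sudo du -sch /usr /var /opt /srv /home /root /.snapshots

# What the filesystem itself reports as used/free
df -h /

# btrfs' own accounting: per-profile data/metadata/system allocation
# and raw device usage
sudo btrfs filesystem df /
sudo btrfs filesystem usage /

# Per-snapshot used space as shown by snapper (the used-space column
# requires btrfs quota support to be enabled)
snapper list
```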
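The full balance mentioned above was done along these lines (again a sketch, not the exact invocation):

```sh
# Filterless balance to rule out over-allocated metadata chunks; this can
# take hours on a ~1 TiB SSD and temporarily needs unallocated space
sudo btrfs balance start --full-balance /

# Check metadata allocation afterwards; in my case it stayed at about 4 GiB
sudo btrfs filesystem df /
```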