First off, thanks for taking the time to sift through this info-dump…
and sorry for the bad judgment calls wrt filtering output.
Replying point-by-point.
Andrei Borzenkov writes:

> On 13.02.2022 20:13, Kévin Le Gouguec wrote:
>> * The / subvolume is 40GB, and has almost no free space left.[2]
>> * My recent upgrade left me with a 15GB snapshot that *I think* is responsible for / being so full?[3]
> Yes. This snapshot consumes 15GB of unique (non-shared) data. Showing
>
> btrfs qgroup show /
>
> may give some more information.
Attached below, along with the full output of 'btrfs subvolume list /'.
>> (2) Is this 2018 setup still salvageable, or should I reinstall? I try to follow factory@lists.opensuse.org and project@lists.opensuse.org, so I know that Tumbleweed underwent some Big Changes between February 2018 and April 2021; I assume that reinstalling should nonetheless not be necessary, and I just messed something up?
> Delete the offending snapshot (as long as you are sure you will not need it to roll back):
>
> snapper delete 2
>
> Note that this may move the accounting of this data to a different snapshot; in the worst case you may need to delete all of them.
OK; I haven't done that yet, since I'd like to make sure I haven't elided important information in my opening post.
>> (3) What does this huge snapshot represent? The dates in snapper list make me think that it's a diff between the first installation (2018-02-20) and the last upgrade (2022-01-24), in which case, yup, of course the diff would be huge.
> So look inside this snapshot. A snapshot is the filesystem content at some point in time. A snapshot "grows" when the content of the actual (root) filesystem changes and starts to differ from what is in the snapshot. If you have 15G of packaged files and did a huge update that touched *all* packages (there were several not long ago), then the snapshot will have the old files and your root will have the new files.
Mmm, 'ls -lt /.snapshots/[23]/snapshot/usr/bin' says that the most recent file in snapshot 2 dates from 2021-06-23 (that would be the last upgrade I ran on the desktop before shelving it), while the most recent file in snapshot 3 dates from 2022-01-24 (the "wake-up call" upgrade). So maybe the date on snapshot 1 (2018-02-20) is a red herring, and everything is Perfectly Normal™: the diff between 2021-06-23 and 2022-01-24 (snapshots 2 and 3) really does weigh as much as ¾ of /usr.
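In case it helps: here is a rough sketch of how I could total up what actually differs between the two snapshot trees, rather than eyeballing 'ls -lt'. The helper name and example paths are mine; it assumes GNU diffutils/coreutils, and file names with whitespace would confuse xargs, so it's only meant as a quick sanity check:

```shell
# Hypothetical helper (name is mine): total the size of files that exist
# only in, or differ between, two snapshot trees.
snapdiff_size() {
  old=$1 new=$2
  diff -rq "$old" "$new" 2>/dev/null |
    awk -v old="$old" '
      # "Only in <dir>: <file>"  -> keep files unique to the old tree
      $1 == "Only" && index($3, old) == 1 { sub(/:$/, "", $3); print $3 "/" $4 }
      # "Files <a> and <b> differ" -> keep the old copy
      $1 == "Files" { print $2 }' |
    xargs -r du -ch -- 2>/dev/null | tail -n1
}

# e.g.: snapdiff_size /.snapshots/2/snapshot/usr/bin /.snapshots/3/snapshot/usr/bin
```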
>> Empirically though deleting this huge snapshot just moves all the gigabytes into snapshot 1 (that's what I remember happening the last time I tried to reclaim disk space), which I cannot delete. Is there a way to tell snapper to just forget about 2018 and use the last upgrade as the oldest point of reference?
> No, that should not happen. Snapshot 1 is your actual root filesystem. You cannot delete it.
>
> So show the actual results after "snapper delete 2".
That will be my next move. Hopefully this "moving into snapshot 1" stuff is just a figment of my deranged mind.
> 40G is a bit tight and suitable for the basic installation only.
(FWIW, as I mentioned in my message, I don't think I've messed with Tumbleweed's defaults there)
>> Also, the subvolume setup seems different?[6] Not sure why the older setup has /tmp listed, since findmnt /tmp tells me it's a tmpfs; why is it still on btrfs's radar? Also, why does the newer setup have entries for /root and /home, and not the older one?
> Yes, things change over time.
Sure; but is there a point at which these residual discrepancies could become a problem, requiring a full reinstall?
>> [3] On the older desktop:
>>
>> # snapper --iso \
>>     list --columns number,type,date,used-space,cleanup,description
>>  # | Type   | Date                | Used Space | Cleanup | Description
>> ---+--------+---------------------+------------+---------+----------------------
>>  0 | single |                     |            |         | current
>> 1* | single | 2018-02-20 20:13:41 | 237.26 MiB |         | first root filesystem
>>  2 | pre    | 2022-01-24 07:39:54 | 15.98 GiB  | number  | zypp(zypper)
>>  3 | post   | 2022-01-24 12:58:38 | 12.64 MiB  | number  |
>>  4 | pre    | 2022-01-24 13:46:28 | 960.00 KiB | number  | zypp(zypper)
>>  5 | post   | 2022-01-24 13:49:01 | 73.57 MiB  | number  |
...
>> [6] On the older desktop:
>>
>> # btrfs subvolume list / | grep -v snapshots/
>> ID 257 gen 622718 top level 5 path @
>> ID 258 gen 638632 top level 257 path @/var
>> ID 259 gen 637339 top level 257 path @/usr/local
>> ID 260 gen 622718 top level 257 path @/tmp
>> ID 261 gen 637339 top level 257 path @/srv
>> ID 262 gen 637339 top level 257 path @/opt
>> ID 263 gen 629216 top level 257 path @/boot/grub2/x86_64-efi
>> ID 264 gen 637331 top level 257 path @/boot/grub2/i386-pc
>> ID 265 gen 637331 top level 257 path @/.snapshots
>> ID 269 gen 637339 top level 258 path @/var/lib/machines
> That does not match. Please, NEVER filter output. Now we have no idea whether your system is seriously broken or you just decided not to show this data.
Apologies; the point of this last footnote was to compare the subvolume layout with the second desktop I installed in April; I did not expect it to be useful for debugging the free space issue. Full output below, followed by 'btrfs qgroup show /':

ID 257 gen 622718 top level 5 path @
ID 258 gen 640101 top level 257 path @/var
ID 259 gen 640082 top level 257 path @/usr/local
ID 260 gen 622718 top level 257 path @/tmp
ID 261 gen 637339 top level 257 path @/srv
ID 262 gen 637339 top level 257 path @/opt
ID 263 gen 629216 top level 257 path @/boot/grub2/x86_64-efi
ID 264 gen 639631 top level 257 path @/boot/grub2/i386-pc
ID 265 gen 640082 top level 257 path @/.snapshots
ID 266 gen 640082 top level 265 path @/.snapshots/1/snapshot
ID 269 gen 637339 top level 258 path @/var/lib/machines
ID 1958 gen 629495 top level 265 path @/.snapshots/2/snapshot
ID 1972 gen 639631 top level 265 path @/.snapshots/3/snapshot
ID 1973 gen 639631 top level 265 path @/.snapshots/4/snapshot
ID 1974 gen 639631 top level 265 path @/.snapshots/5/snapshot

qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/257        16.00KiB     16.00KiB
0/258         1.28GiB      1.28GiB
0/259       296.71MiB    296.71MiB
0/260        16.00KiB     16.00KiB
0/261        16.00KiB     16.00KiB
0/262       275.77MiB    275.77MiB
0/263        16.00KiB     16.00KiB
0/264         2.69MiB      2.69MiB
0/265        15.45MiB     15.45MiB
0/266        17.95GiB    252.15MiB
0/269        16.00KiB     16.00KiB
0/1958       17.35GiB     15.98GiB
0/1972       18.04GiB     12.64MiB
0/1973       18.04GiB    960.00KiB
0/1974       17.89GiB     75.93MiB

IIUC the highest count of "bytes owned exclusively" goes to qgroupid 0/1958, which would be snapshot 2. Not sure how to interpret this information though; e.g. is there a way to reconcile 'btrfs filesystem usage /' (which says "Used: 37.47GiB") with the "exclusive" count?

# btrfs qgroup show / --raw | tr -s ' ' | cut -d' ' -f3 | tail -n+3 | paste -sd+ | bc | numfmt --to=si
20G

My takeaways so far:

* This is probably all par for the course, since my / subvolume is barely twice as big as all the stuff I put in /usr.
* In these conditions, any full-system upgrade is bound to fill the subvolume.
* I guess I was doubtful that the diff before/after any given dist-upgrade would be so big; thanks for lifting the veil from my eyes.
* Next time I install Tumbleweed, I should consider not doing a… how did Stefan put it? “a stupid "use all defaults => next => next => next => finish" installation” 🤪

Again, thanks for your time; I'll wait a bit to make sure no-one sees something obviously wrong with the information I've added in this reply, then go ahead and delete snapshot 2. I'll try to follow up afterward, if only to confirm that this was all much ado about nothing.
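P.S. For my own notes, the tr|cut|paste|bc|numfmt pipeline can be collapsed into a single awk call. Also, if I understand qgroups right, extents shared between two or more subvolumes count toward each one's rfer but toward nobody's excl, which would explain why the summed exclusive bytes (~20G) fall short of the 37.47GiB "Used" figure. The helper name below is mine:

```shell
# Hypothetical helper: read "btrfs qgroup show --raw" output on stdin,
# skip the two header lines, sum the excl column (column 3), print GiB.
sum_excl() {
  awk 'NR > 2 { sum += $3 } END { printf "%.2f GiB\n", sum / 2^30 }'
}

# Usage (on the affected box):
#   btrfs qgroup show --raw / | sum_excl
```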