
On 2018-09-03 19:51, Andrei Borzenkov wrote:
On 03.09.2018 20:36, Albert Oszkó wrote:
On 2018-09-03 19:18, Andrei Borzenkov wrote:
On 03.09.2018 19:40, Albert Oszkó wrote:
Hi all,
This Asus laptop (with TW and KDE) was not used for ten days. Yesterday evening I switched it on and saw that I had more than 800 updates. I tried to install them with zypper dup, but it failed because root got full. This was with the 4.18.0 kernel. I managed to boot into the 4.17.14 kernel (got some complaints, but no showstoppers) and manually removed all downloaded rpm packages from /var/cache/zypp/packages with mc. Dolphin did not start in superuser mode. With all of these deleted and only the 0 and 1 snapshots left, I have less than 600 MB free on a 40 GB root, and I have no idea where the rest of the space went.
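For what it's worth, the cache can usually be purged without deleting files by hand; a minimal sketch, assuming the default cache location under /var/cache/zypp:

# check how much space the downloaded packages take
du -sh /var/cache/zypp/packages

# purge cached packages and raw metadata (run as root)
zypper clean --all

That frees the same space as removing the rpm files manually, but keeps zypper in charge of its own cache directories.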
Start with showing output of

btrfs fi us /
btrfs su li /
btrfs qgroup show /
Here they are:
linux-olq5:/home/berci # btrfs fi us /
Overall:
    Device size:                  40.00GiB
    Device allocated:             40.00GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                         38.51GiB
    Free (estimated):            564.43MiB      (min: 564.43MiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              107.84MiB      (used: 0.00B)

Data,single: Size:36.44GiB, Used:35.89GiB
   /dev/sda3      36.44GiB

Metadata,DUP: Size:1.75GiB, Used:1.31GiB
   /dev/sda3       3.50GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sda3      64.00MiB

Unallocated:
   /dev/sda3       1.00MiB

linux-olq5:/home/berci # btrfs su li /
ID 257 gen 171200 top level 5 path @
ID 258 gen 512167 top level 257 path @/.snapshots
ID 259 gen 512383 top level 258 path @/.snapshots/1/snapshot

I presume this is your "1" snapshot. What
btrfs su get-default /
grep ' / ' /proc/self/mounts

say?
ID 260 gen 184118 top level 257 path @/boot/grub2/i386-pc
ID 261 gen 440004 top level 257 path @/boot/grub2/x86_64-efi
ID 262 gen 440407 top level 257 path @/opt
ID 263 gen 440011 top level 257 path @/srv
ID 264 gen 512383 top level 257 path @/tmp
ID 265 gen 387780 top level 257 path @/usr/local
ID 266 gen 512176 top level 257 path @/var/cache
ID 267 gen 387725 top level 257 path @/var/crash
ID 268 gen 168243 top level 257 path @/var/lib/libvirt/images
ID 269 gen 171200 top level 257 path @/var/lib/machines
ID 270 gen 168243 top level 257 path @/var/lib/mailman
ID 271 gen 168243 top level 257 path @/var/lib/mariadb
ID 272 gen 168243 top level 257 path @/var/lib/mysql
ID 273 gen 168243 top level 257 path @/var/lib/named
ID 274 gen 168243 top level 257 path @/var/lib/pgsql
ID 275 gen 512383 top level 257 path @/var/log
ID 276 gen 387725 top level 257 path @/var/opt
ID 277 gen 512382 top level 257 path @/var/spool
ID 278 gen 512383 top level 257 path @/var/tmp
ID 754 gen 265167 top level 258 path @/.snapshots/64/snapshot
ID 876 gen 336471 top level 258 path @/.snapshots/142/snapshot
ID 883 gen 336471 top level 258 path @/.snapshots/148/snapshot
ID 904 gen 341810 top level 258 path @/.snapshots/164/snapshot

You do have older snapshots even if they are "lost" for snapper. What

snapper list

says?
linux-olq5:/home/berci # btrfs qgroup show /
qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/257        16.00KiB     16.00KiB
0/258        48.00KiB     48.00KiB
0/259        11.14GiB      5.35GiB

Your (likely) current root consumes 11GiB.

0/260        16.00KiB     16.00KiB
0/261         3.38MiB      3.38MiB
0/262       546.26MiB    546.26MiB
0/263        60.00KiB     60.00KiB
0/264       423.66MiB    423.66MiB
0/265        16.00KiB     16.00KiB
0/266       182.83MiB    182.83MiB
0/267        16.00KiB     16.00KiB
0/268        16.00KiB     16.00KiB
0/269        16.00KiB     16.00KiB
0/270        16.00KiB     16.00KiB
0/271        16.00KiB     16.00KiB
0/272        16.00KiB     16.00KiB
0/273        16.00KiB     16.00KiB
0/274        16.00KiB     16.00KiB
0/275         1.18GiB      1.18GiB
0/276        16.00KiB     16.00KiB
0/277        68.00KiB     68.00KiB
0/278       188.95MiB    188.95MiB

Starting from here ...
0/403           0.00B        0.00B
0/439           0.00B        0.00B
0/580         3.17GiB     16.00EiB
0/581       413.15MiB     16.00EiB
0/638           0.00B        0.00B

... to here there are stale entries. Those subvolumes no longer exist. You should delete them:

btrfs qgroup destroy 0/403 /
...

And then update the quota information with

btrfs quota rescan -w /

after which the qgroup listing should be closer to reality.
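A sketch that covers all five stale entries from the listing above in one go (the qgroup IDs are taken from that listing, so adjust them if yours differ):

for q in 0/403 0/439 0/580 0/581 0/638; do
    btrfs qgroup destroy "$q" /
done
btrfs quota rescan -w /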
0/754        10.40GiB      9.86GiB
0/876        10.96GiB      3.59GiB
0/883        11.00GiB      2.24GiB
0/904        10.92GiB      2.96GiB
1/0          29.23GiB     23.45GiB
255/269      16.00KiB     16.00KiB

And those "lost" snapshots consume 23.45GiB together.
I did what you said:

linux-olq5:~ # btrfs qgroup destroy 0/403 /
linux-olq5:~ # btrfs quota rescan -w /
quota rescan started
linux-olq5:~ # btrfs qgroup show /
qgroupid         rfer         excl
--------         ----         ----
0/5          16.00KiB     16.00KiB
0/257        16.00KiB     16.00KiB
0/258        48.00KiB     48.00KiB
0/259        11.12GiB      5.33GiB
0/260        16.00KiB     16.00KiB
0/261         3.38MiB      3.38MiB
0/262       546.26MiB    546.26MiB
0/263        60.00KiB     60.00KiB
0/264       423.63MiB    423.63MiB
0/265        16.00KiB     16.00KiB
0/266       182.83MiB    182.83MiB
0/267        16.00KiB     16.00KiB
0/268        16.00KiB     16.00KiB
0/269        16.00KiB     16.00KiB
0/270        16.00KiB     16.00KiB
0/271        16.00KiB     16.00KiB
0/272        16.00KiB     16.00KiB
0/273        16.00KiB     16.00KiB
0/274        16.00KiB     16.00KiB
0/275         1.19GiB      1.19GiB
0/276        16.00KiB     16.00KiB
0/277        68.00KiB     68.00KiB
0/278       190.15MiB    190.15MiB
0/439           0.00B        0.00B
0/580         3.17GiB     16.00EiB
0/581       413.15MiB     16.00EiB
0/638           0.00B        0.00B
0/754        10.40GiB      9.86GiB
0/876        10.96GiB      3.59GiB
0/883        11.00GiB      2.24GiB
0/904        10.92GiB      2.96GiB
1/0          29.23GiB     23.45GiB
255/269      16.00KiB     16.00KiB

linux-olq5:~ # btrfs fi us /
Overall:
    Device size:                  40.00GiB
    Device allocated:             40.00GiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Used:                         38.50GiB
    Free (estimated):            580.96MiB      (min: 580.96MiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              107.80MiB      (used: 0.00B)

Data,single: Size:36.44GiB, Used:35.87GiB
   /dev/sda3      36.44GiB

Metadata,DUP: Size:1.75GiB, Used:1.31GiB
   /dev/sda3       3.50GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sda3      64.00MiB
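The listing still shows the remaining stale qgroups (0/439, 0/580, 0/581, 0/638) and, more importantly, the four orphaned snapshot subvolumes (@/.snapshots/64, 142, 148 and 164) that hold the missing ~23GiB between them. Since snapper no longer tracks them, they would have to be removed directly with btrfs; a sketch, assuming the default openSUSE layout where @/.snapshots is mounted at /.snapshots, and that nothing in those old snapshots is still needed:

# destroy the remaining stale qgroups
for q in 0/439 0/580 0/581 0/638; do
    btrfs qgroup destroy "$q" /
done

# delete the orphaned snapshot subvolumes (numbers from the subvolume listing above)
for n in 64 142 148 164; do
    btrfs subvolume delete /.snapshots/$n/snapshot
done

# refresh quota accounting afterwards
btrfs quota rescan -w /

Once the deleted subvolumes have been cleaned up in the background, btrfs fi us / should report most of that space as free again.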