I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why. One PC with a 40GB hard drive and another with a 40GB SSD, both with the same number of snapshots, but only the SSD complaining of no space left.

After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10), which now shows only 3.9GB used.

Later I'll file a bug as this could force someone who is not very Linux savvy to do a fresh install or even move to another distro.

Regards
Sid.
--
Sid Boyce ... Hamradio License G3VBV, Licensed Private Pilot
Emeritus IBM/Amdahl Mainframes and Sun/Fujitsu Servers Tech Support
Senior Staff Specialist, Cricket Coach
Microsoft Windows Free Zone - Linux used for all Computing Tasks
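For reference, this kind of snapshot pruning is done with snapper on openSUSE. A minimal sketch, assuming the default "root" configuration; the snapshot numbers are purely illustrative:

   # snapper list
   # snapper delete 11-20
   # btrfs balance start -dusage=50 /

"snapper list" shows the existing snapshots and their numbers, "snapper delete" removes a range of them, and the optional balance run can help btrfs hand the freed chunks back to the unallocated pool on a nearly full filesystem.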
On 07/05/2021 22.58, Sid Boyce wrote:
I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why.
One PC with a 40GB hard drive and another with a 40GB SSD both with the
same number of snapshots but only the SSD complaining of no space left.
After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10) which now shows only 3.9GB used.
Later I'll file a bug as this could force someone who is not very Linux
savvy to do a fresh install or even move to another distro.
SSD or rotating rust makes no difference. Further analysis would have found the real cause.

It is a perfectly known issue that a small hard disk with snapshots is a bad idea. You need at least 3 times the size you would normally use.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
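Where the disk really is that small, snapper's own limits can also be tightened so its cleanup algorithms keep less history. A hedged sketch against the default "root" config; the limit values below are only examples, not recommendations:

   # snapper -c root get-config
   # snapper -c root set-config "NUMBER_LIMIT=2-5" "NUMBER_LIMIT_IMPORTANT=2-3" "SPACE_LIMIT=0.3"
   # snapper -c root cleanup number

NUMBER_LIMIT bounds the pre/post snapshots created around zypper runs, SPACE_LIMIT caps the fraction of the filesystem that snapshots may occupy before cleanup starts deleting them, and the final command triggers the number-based cleanup immediately.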
On Saturday 08 May 2021, Carlos E. R. wrote:
On 07/05/2021 22.58, Sid Boyce wrote:
I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why.
One PC with a 40GB hard drive and another with a 40GB SSD both with the
same number of snapshots but only the SSD complaining of no space left.
After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10) which now shows only 3.9GB used.
Later I'll file a bug as this could force someone who is not very Linux
savvy to do a fresh install or even move to another distro.
SSD or rotating rust has no difference. Further analysis would have found the real cause.
It is perfectly known issue that a small hard disk with snapshots is a bad idea. You need at least 3 times the size you would normally use.
The installer should restrict or warn users about the unusual space requirements for OpenSUSE btrfs-root. Many people coming to OpenSUSE may not have encountered btrfs or snapshots before. The issue can be punted to RTFM, but most people do not initially RTFM and will have an unpleasant surprise waiting for them way downstream of a poor choice during installation.

While this is a "perfectly known issue" to experienced OpenSUSE players, I imagine we want to avoid unpleasant surprises for new users, users who are perhaps only kicking the tires on OpenSUSE and who could easily move down the road to some other distro or OS.

Michael
On 08/05/2021 00.33, Michael Hamilton wrote:
On Saturday 08 May 2021, Carlos E. R. wrote:
On 07/05/2021 22.58, Sid Boyce wrote:
I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why.
One PC with a 40GB hard drive and another with a 40GB SSD both with the
same number of snapshots but only the SSD complaining of no space left.
After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10) which now shows only 3.9GB used.
Later I'll file a bug as this could force someone who is not very Linux
savvy to do a fresh install or even move to another distro.
SSD or rotating rust has no difference. Further analysis would have found the real cause.
It is perfectly known issue that a small hard disk with snapshots is a bad idea. You need at least 3 times the size you would normally use.
The installer should restrict or warn users about the unusual space requirements for OpenSUSE btrfs-root. Many people coming to OpenSUSE may not have encountered btrfs or snapshots before.
The issue can be punted to RTFM, but most people do not initially RTFM and will have an unpleasant surprise waiting for them way downstream of a poor choice during installation.
I agree. I have been told that YaST warns users with small disks during installation.
While this is "perfectly known issue" to experienced OpenSUSE players, I imagine we want to avoid unpleasant surprises for new users, users who are perhaps only kicking the tires on OpenSUSE, and who could easily move down the road to some other distro or OS.
IMO, inexperienced users should not be installing Tumbleweed but Leap. On the other hand, most now-seasoned users made poor choices on their initial installs, when they were novices, and subsequently reinstalled with better-informed selections. It is part of the process of learning.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
On Sat, May 08, Michael Hamilton wrote:
The installer should restrict or warn users about the unusual space requirements for OpenSUSE btrfs-root. Many people coming to OpenSUSE may not have encountered btrfs or snapshots before.
The installer is doing this. But YaST cannot look into the future and see what the user will do with the system afterwards. So assume somebody is doing a standard installation: 40GB is clearly enough for this. Now he installs a lot of additional software afterwards and does additional things like video recording or running VMs, which are all very disk-consuming -> the user runs into problems that were not predictable by the installer.

Thorsten

--
Thorsten Kukuk, Distinguished Engineer, Senior Architect SLES & MicroOS
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
Managing Director: Felix Imendoerffer (HRB 36809, AG Nürnberg)
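When that happens, the first step is usually to find out which directories actually grew, because on the default layout /home is separate while things like VM images, databases and logs live under /var on the btrfs root. A rough, illustrative check:

   # du -xsh /var/* /opt /srv /usr/local 2>/dev/null | sort -h
   # btrfs filesystem du -s /var/lib/libvirt/images

Plain du shows the apparent per-directory usage; "btrfs filesystem du" additionally reports how much of that data is shared with snapshots or other subvolumes, which du cannot see.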
On 07/05/2021 22:28, Carlos E. R. wrote:
On 07/05/2021 22.58, Sid Boyce wrote:
I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why.
One PC with a 40GB hard drive and another with a 40GB SSD both with the
same number of snapshots but only the SSD complaining of no space left.
After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10) which now shows only 3.9GB used.
Later I'll file a bug as this could force someone who is not very Linux
savvy to do a fresh install or even move to another distro.
SSD or rotating rust has no difference. Further analysis would have found the real cause.
It is perfectly known issue that a small hard disk with snapshots is a bad idea. You need at least 3 times the size you would normally use.
Understood. That's why I normally go for as large as I can get, and over time 1TB -> 2TB -> 5TB, and I have a 6TB on standby in case of a failure.

The user installed openSUSE Leap because he doesn't like Windows or Ubuntu or other distros; added to that is the performance of modest hardware. The doc says >16GB and snapshots will be enabled, which gives the impression that 40GB would hit no problems.

What puzzled him was that the 40GB SSD was experiencing the problem and the 40GB hard drive was OK .... obviously the SSD was loaded with more apps, videos, etc. than the hard drive. Life was sweet until suddenly "zypper dup" complained.

Regards
Sid.
--
Sid Boyce ... Hamradio License G3VBV, Licensed Private Pilot
Emeritus IBM/Amdahl Mainframes and Sun/Fujitsu Servers Tech Support
Senior Staff Specialist, Cricket Coach
Microsoft Windows Free Zone - Linux used for all Computing Tasks
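For anyone in the same position it is worth checking what snapper is actually configured to keep on such a system; the defaults vary between releases, so treat this only as a sketch:

   # snapper -c root get-config | grep -E 'NUMBER|TIMELINE|SPACE'

The NUMBER_* settings control how many zypper pre/post snapshots are kept, TIMELINE_CREATE controls the hourly/daily timeline snapshots (reportedly off by default for the root config on recent releases), and SPACE_LIMIT caps the share of the filesystem that snapshots may use.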
On 08/05/2021 16.19, Sid Boyce wrote:
On 07/05/2021 22:28, Carlos E. R. wrote:
On 07/05/2021 22.58, Sid Boyce wrote:
I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why.
One PC with a 40GB hard drive and another with a 40GB SSD both with the
same number of snapshots but only the SSD complaining of no space left.
After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10) which now shows only 3.9GB used.
Later I'll file a bug as this could force someone who is not very Linux
savvy to do a fresh install or even move to another distro.
SSD or rotating rust has no difference. Further analysis would have found the real cause.
It is perfectly known issue that a small hard disk with snapshots is a
bad idea. You need at least 3 times the size you would normally use.
Understood. That's why I normally go for as large as I can get and over
time 1TB -> 2TB --> 5TB and I have a 6TB on standby in case of a failure.
The user installed openSUSE Leap because he doesn't like Windows or Ubuntu or other distros, added to that is the performance of modest hardware.
Yep.
The doc says >16GB and snapshots will be enabled which gives the impression that 40GB would hit no problems.
Sigh... My educated opinion is different...
What puzzled him was that the 40GB SSD was experiencing the problem and the 40GB hard drive was OK .... obviously the SSD was loaded with more apps, videos, etc. than the hard drive.
I doubt videos would be a problem; they typically go to the /home partition. Unless there is no /home partition and it is just a directory, a practice I strongly dislike for "actual" use by people. It could be more apps, or simply a different zypper dup usage pattern.

If the total hard disk space is that limited (40GB) I would have selected a single ext4 partition, plus possibly swap. And Leap, unless there is a specific reason for using factory. In that case, I would have the XFCE rescue USB ready.
Life was sweet until suddenly "zypper dup" complained.
Yes, you are not the first one hit by this issue. And using factory makes the issue worse, as there are many more "dups".

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
On 08/05/2021 15:39, Carlos E. R. wrote:
On 08/05/2021 16.19, Sid Boyce wrote:
On 07/05/2021 22:28, Carlos E. R. wrote:
On 07/05/2021 22.58, Sid Boyce wrote:
I have a colleague (erstwhile) in Australia who ran into the problem and couldn't understand why.
One PC with a 40GB hard drive and another with a 40GB SSD both with the
same number of snapshots but only the SSD complaining of no space left.
After a lot of checking I asked him to delete a number of snapshots (reduced from 20 and 20 to 10 and 10) which now shows only 3.9GB used.
Later I'll file a bug as this could force someone who is not very Linux
savvy to do a fresh install or even move to another distro.
SSD or rotating rust has no difference. Further analysis would have found the real cause.
It is perfectly known issue that a small hard disk with snapshots is a
bad idea. You need at least 3 times the size you would normally use.
Understood. That's why I normally go for as large as I can get and over
time 1TB -> 2TB --> 5TB and I have a 6TB on standby in case of a failure.
The user installed openSUSE Leap because he doesn't like Windows or Ubuntu or other distros, added to that is the performance of modest hardware.
Yep.
The doc says >16GB and snapshots will be enabled which gives the impression that 40GB would hit no problems.
Sigh... My educated opinion is different...
What puzzled him was that the 40GB SSD was experiencing the problem and the 40GB hard drive was OK .... obviously the SSD was loaded with more apps, videos, etc. than the hard drive.
I doubt videos would be a problem, they typically go to the /home partition. Unless there is no /home partition and is a directory, a practice I strongly dislike for "actual" use by people.
It could be more apps, or simply different zypper dup usage pattern.
If the total hard disk space is that limited (40GB) I would have selected a single ext4 partition, plus possibly swap. And Leap, unless having a specific reason for using factory. In that case, I would have ready the XFCE rescue USB.
Life was sweet until suddenly "zypper dup" complained.
Yes, you are not the first one hit by this issue. And using factory makes the issue worse as there are many more "dups".
I gather he accepted the installation defaults. I think he got started with Leap 14 and upgraded through to 15.2. After deleting half the snapshots this is what it now reports; used went from 41GB to only 3.9GB.

linux-sl6n:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G  4.0K  3.9G   1% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  1.6M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2        41G  9.3G   31G  24% /
/dev/sda2        41G  9.3G   31G  24% /var/opt
/dev/sda2        41G  9.3G   31G  24% /var/lib/mariadb
/dev/sda2        41G  9.3G   31G  24% /var/lib/libvirt/images
/dev/sda2        41G  9.3G   31G  24% /var/lib/machines
/dev/sda2        41G  9.3G   31G  24% /usr/local
/dev/sda2        41G  9.3G   31G  24% /boot/grub2/x86_64-efi
/dev/sda2        41G  9.3G   31G  24% /var/crash
/dev/sda2        41G  9.3G   31G  24% /var/lib/mailman
/dev/sda2        41G  9.3G   31G  24% /opt
/dev/sda2        41G  9.3G   31G  24% /srv
/dev/sda2        41G  9.3G   31G  24% /tmp
/dev/sda2        41G  9.3G   31G  24% /var/spool
/dev/sda2        41G  9.3G   31G  24% /var/lib/pgsql
/dev/sda2        41G  9.3G   31G  24% /boot/grub2/i386-pc
/dev/sda2        41G  9.3G   31G  24% /var/log
/dev/sda2        41G  9.3G   31G  24% /var/lib/named
/dev/sda2        41G  9.3G   31G  24% /.snapshots
/dev/sda2        41G  9.3G   31G  24% /var/lib/mysql
/dev/sda2        41G  9.3G   31G  24% /var/tmp
/dev/sda4       424G   48G  377G  12% /home
tmpfs           786M   12K  786M   1% /run/user/1000
/dev/sdb1        30G   22G  8.3G  73% /run/media/sav/Lexar

Regards
Sid.
--
Sid Boyce ... Hamradio License G3VBV, Licensed Private Pilot
Emeritus IBM/Amdahl Mainframes and Sun/Fujitsu Servers Tech Support
Senior Staff Specialist, Cricket Coach
Microsoft Windows Free Zone - Linux used for all Computing Tasks
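As an aside, /dev/sda2 appears on so many lines because each of those mount points is a btrfs subvolume of the same 41GB filesystem, so df repeats the whole-filesystem figures for every one of them. The subvolumes, including the snapshots under /.snapshots, can be listed with:

   # btrfs subvolume list /

Every snapshot is itself a subvolume, which is why deleting snapshots rather than files is what frees space in this situation.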
On 08.05.2021 20:06, Bernhard M. Wiedemann wrote:
On 08/05/2021 17.34, Sid Boyce wrote:
linux-sl6n:~ # df -h
Try instead btrfs filesystem df /
because the stat syscall used by df cannot reflect all the details of btrfs with its snapshots and copy-on-write.
Oh, really? Yet another urban legend about btrfs ...

bor@tw:~> df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2        39G   25G   14G  66% /
bor@tw:~>

So you say it lies and I must use "btrfs fi df"?

bor@tw:~> sudo /usr/sbin/btrfs filesystem df /
Data, single: total=30.01GiB, used=22.77GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.50GiB, used=987.25MiB
GlobalReserve, single: total=78.38MiB, used=0.00B
bor@tw:~>

Oops. How is a novice user supposed to interpret it? Where is my free space? So you claim that "df" lies and I have only 7.3GB of available space?

"btrfs fi df" IS NOT ENOUGH to get information about available space, because btrfs uses a two-stage allocator and "btrfs fi df" only shows the second half of it. It can only be sensibly used together with information about the first half, which is given by:

bor@tw:~> sudo /usr/sbin/btrfs filesystem show /
Label: none  uuid: cc072e56-f671-4388-a4a0-2ffee7c98fdb
        Total devices 1 FS bytes used 23.73GiB
        devid    1 size 38.91GiB used 33.07GiB path /dev/vda2
bor@tw:~>

So I have a 40GB disk, 33GB of which is allocated by the first stage of the allocator, and OF THIS ALLOCATED SPACE 22.77GB is used by data allocated by the second stage. And if I had a 1TB disk with 33GB allocated, "btrfs fi df" would STILL show the same 30GB without any indication of how much additional disk space is available.

But now in the "btrfs fi df" output we have apparently "lost" 1.5GB - the "total" figures shown amount to 31.5GB while "used" in "btrfs fi sh" is 33GB. To explain that we need to know what the "DUP" allocation profile is and that it allocates two copies of everything, and that "btrfs fi df" shows the LOGICAL size, not the PHYSICAL one. Don't you think this is a wee bit too much to demand from a novice user who has met btrfs for the first time?

The Swiss army knife today is "btrfs fi us":

bor@tw:~> sudo /usr/sbin/btrfs filesystem us -T /
Overall:
    Device size:                  38.91GiB
    Device allocated:             33.07GiB
    Device unallocated:            5.84GiB
    Device missing:                  0.00B
    Used:                         24.70GiB
    Free (estimated):             13.08GiB      (min: 10.16GiB)
    Free (statfs, df):            13.08GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               78.38MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data      Metadata  System
Id Path      single    DUP       DUP      Unallocated
-- --------- --------- --------- -------- -----------
 1 /dev/vda2  30.01GiB   3.00GiB 64.00MiB     5.84GiB
-- --------- --------- --------- -------- -----------
   Total      30.01GiB   1.50GiB 32.00MiB     5.84GiB
   Used       22.77GiB 987.25MiB 16.00KiB
bor@tw:~>

It shows the total disk size, how much of it is allocated by the first stage, how much of that allocation is consumed by the second stage, and finally the same estimation that plain df shows. All in all, for btrfs on a single device "df" is pretty much accurate.

And neither df nor "btrfs fi df" cares about snapshots at all. They show you how much space is used by the CURRENT DATA. They do not care about the logical relationships between those data (which is what snapshotting/cloning is about). Now if you have a 1TB disk with 20GB of data in the root filesystem and it is shown as full, you of course need to investigate where the space has gone - but none of the above tools will help you.

So putting "snapshots" and "df" in one sentence is simply wrong. Sorry.
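What can attribute space to individual snapshots is the quota-group information that snapper's space-aware cleanup uses where quotas are enabled (the installer normally sets this up via snapper). A hedged sketch; the "exclusive" column is roughly what deleting that snapshot would free:

   # btrfs qgroup show -p /
   # snapper list

Matching the 0/<subvolume-id> qgroups against "btrfs subvolume list /" (or the snapshot numbers shown by "snapper list") points to the snapshots holding the most exclusive data.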
On 08/05/2021 17.34, Sid Boyce wrote:
On 08/05/2021 15:39, Carlos E. R. wrote:
On 08/05/2021 16.19, Sid Boyce wrote:
...
Life was sweet until suddenly "zypper dup" complained.
Yes, you are not the first one hit by this issue. And using factory makes the issue worse as there are many more "dups".
I gather he accepted the installation defaults. I think he got started with Leap 14 and upgraded through to 15.2. After deleting half the snapshots this is what it now reports, used went from 41GB to only 3.9GB.
linux-sl6n:~ # df -h
df is not useful on btrfs partitions. Anyway, I can see there is a separate /home partition of 424 GB.
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G  4.0K  3.9G   1% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  1.6M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2        41G  9.3G   31G  24% /
...
/dev/sda2        41G  9.3G   31G  24% /var/tmp
/dev/sda4       424G   48G  377G  12% /home
tmpfs           786M   12K  786M   1% /run/user/1000
/dev/sdb1        30G   22G  8.3G  73% /run/media/sav/Lexar
--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
participants (6)
- Andrei Borzenkov
- Bernhard M. Wiedemann
- Carlos E. R.
- Michael Hamilton
- Sid Boyce
- Thorsten Kukuk