[opensuse-factory] Zypper, btrfs, large updates and how to better handle the possibility of running out of disk space
My system uses a 40 GB btrfs root partition. I just updated to 20190612 and it did not go entirely smoothly. zypper dup reported that 3200 packages were to be updated. I checked the available space with (I have it bound to an alias):

/usr/sbin/btrfs fi usage / 2>/dev/null | grep "Free (estimated)"

It reported 7.5 GB free, so I thought that should be sufficient for such a massive update. I was wrong: zypper downloaded all the packages fine, then proceeded to install them, and midway through there were a bunch of out-of-disk-space errors. It then halted with a prompt to retry/abort/etc. I deleted the oldest snapshot, which was taking several GB, with 'snapper delete', then resumed, and the update finished without error.

My question is: how could this situation have been avoided in the first place? Having a system with no disk space left is always dangerous, as the system can behave unpredictably. This also happened to me a few days ago after installing a bunch of individual packages: suddenly journald and syslogd went berserk, repeatedly attempting to write to disk. I recovered by deleting a snapshot, but the situation was a bit stressful because it seemed the system could become unresponsive and unusable at any time (bash complained, etc.).

So I think it would be good to have two things:

- zypper checking that it has enough space to perform the full update, and at least emitting a warning if not. This is mostly important for 'zypper dup'. Estimating the required space might be tricky.
- some facility at the system level that warns the user when disk space is running out.

Also, I think a 40 GB partition is not big enough to handle these super large updates. If you have to preventively delete snapshots manually, there's a size problem. I understand these massive updates are rare, but still.

--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
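The pre-flight check proposed above could be sketched in shell. This is a minimal illustration, not zypper's actual behavior: the estimated installed size is a hypothetical input (a real implementation would get it from package metadata), and the 30% headroom factor is a guess. Note also that free-space figures on btrfs can be optimistic, since snapshots share extents and deleting files does not always free space immediately.

```shell
#!/bin/sh
# Sketch of a pre-update space check. $1 is a hypothetical estimated
# installed size in MB (zypper would derive this from package metadata);
# the 30% headroom for snapshots/metadata/logs is an illustrative guess.
INSTALL_MB=${1:-4000}                         # estimated install size, MB
FREE_MB=$(df -Pm / | awk 'NR==2 {print $4}')  # free MB on the root fs
NEEDED_MB=$(( INSTALL_MB * 130 / 100 ))       # add 30% headroom
if [ "$FREE_MB" -lt "$NEEDED_MB" ]; then
  echo "WARNING: ${FREE_MB} MB free, estimated need ${NEEDED_MB} MB"
else
  echo "OK: ${FREE_MB} MB free, estimated need ${NEEDED_MB} MB"
fi
```

Run before a large 'zypper dup'; a WARNING here is the cue to delete old snapshots or clean the package cache first.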
Greetings. On Sat, 15 Jun 2019 16:13:16 +0200, Michael Pujos <pujos.michael@gmail.com> wrote:
Also, I think a 40 GB partition is not big enough to handle these super large updates. If you have to preventively delete snapshots manually, there's a size problem. I understand these massive updates are rare, but still.
On one of my machines, I encounter this problem with nearly every update. I've gotten into the habit of manually deleting snapshots before running zypper dup, but even that isn't always enough to prevent running out of disk space. (FWIW, my root partition is 50 GB; free space fluctuates between 5 GB and 20 GB.) I've taken to clearing out /tmp and /var/tmp as well, though of course that usually requires rebooting into runlevel 3 to make sure I'm not deleting anything in use.

Regards,
Tristan

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Tristan Miller
Free Software developer, ferret herder, logologist
https://logological.org/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
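The manual cleanup routine described above boils down to a few commands. A sketch, with the destructive snapper/zypper lines left commented since they need root and an openSUSE system (the snapshot numbers are examples only):

```shell
#!/bin/sh
# Manual cleanup sketch before a large 'zypper dup' (openSUSE assumed).
# Destructive commands are commented out; snapshot numbers are examples.
#   snapper list               # inspect snapshots and their numbers
#   snapper delete 120-130     # delete a range of old snapshots
#   zypper clean --all         # drop cached packages under /var/cache/zypp

# Non-destructive: see how much /tmp and /var/tmp currently hold
TMP_USAGE=$(du -sh /tmp /var/tmp 2>/dev/null)
echo "$TMP_USAGE"
```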
On 15/06/2019 23:43, Michael Pujos wrote:
My system uses a 40 GB btrfs root partition.
I just updated to 20190612 and it did not go entirely smoothly.
zypper dup reported 3200 packages were to be updated. I checked the available space with (I have it bound to an alias):
/usr/sbin/btrfs fi usage / 2>/dev/null | grep "Free (estimated)"
it reported 7.5GB free.
Thus I thought it should be sufficient for that massive update. I was wrong:
Zypper downloaded all the packages fine, then proceeded to install them, and midway through there were a bunch of out-of-disk-space errors. It then halted with a prompt to retry/abort/etc.
I deleted the oldest snapshot, which was taking several GB, with 'snapper delete', then resumed, and the update finished without error.
My question is: how could this situation have been avoided in the first place? Having a system with no disk space left is always dangerous, as the system can behave unpredictably. This also happened to me a few days ago after installing a bunch of individual packages: suddenly journald and syslogd went berserk, repeatedly attempting to write to disk. I recovered by deleting a snapshot, but the situation was a bit stressful because it seemed the system could become unresponsive and unusable at any time (bash complained, etc.).
So I think it would be good to have two things:
- zypper checking that it has enough space to perform the full update, and at least emitting a warning if not. This is mostly important for 'zypper dup'. Estimating the required space might be tricky.
- some facility at the system level that warns the user when disk space is running out.
Also, I think a 40 GB partition is not big enough to handle these super large updates. If you have to preventively delete snapshots manually, there's a size problem. I understand these massive updates are rare, but still.
I'm pretty sure this part has been fixed and the suggested default is now much larger; that doesn't help us with older systems, though.

--
Simon Lees (Simotek) http://simotek.net
Emergency Update Team keybase.io/simotek
SUSE Linux Adelaide Australia, UTC+10:30
GPG Fingerprint: 5B87 DB9D 88DC F606 E489 CEC5 0922 C246 02F0 014B
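For the older systems Simon mentions, one option beyond deleting snapshots is to grow the root filesystem: btrfs supports online resize once the underlying partition has been enlarged. A sketch, with the resize itself left commented since repartitioning is system-specific:

```shell
#!/bin/sh
# Growing a too-small btrfs root (sketch). The resize line assumes the
# underlying partition has already been enlarged with a partitioning
# tool; it is commented out because that step is system-specific.
#   btrfs filesystem resize max /   # grow the fs to fill the partition

# Non-destructive: current size and usage of the root filesystem
ROOT_FS=$(df -h / | awk 'NR==2')
echo "$ROOT_FS"
```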
participants (3)
- Michael Pujos
- Simon Lees
- Tristan Miller