Zypper crashes while loading shared libraries
Hi, during a zypper update, I stoped the update with <ctrl>+<c>. Seams to be that I stopped it during an updated of a required library (there was a segfault message visible). Now if I start "zypper", the following message is visible: zypper: error while loading shared libraries: libabsl_log_internal_check_op.so.2308.0.0: cannot open shared object file: No such file or directory According https://opensuse.pkgs.org/tumbleweed/opensuse-oss-x86_64/libabsl2308_0_0-202... the file should be available at https://ftp.lysator.liu.se/pub/opensuse/tumbleweed/repo/oss/x86_64/libabsl23... but it isn't. OK, so I downloaded the full folder with more than 2M files with: wget --no-parent -r https://ftp.lysator.liu.se/pub/opensuse/tumbleweed/repo/oss/x86_64/ --reject "index.html*" But still not there. I checked the dendencies but was not able to find the related package and installed all packages was there: # rpm -q --whatrequires zypper patterns-base-sw_management-20200505-47.1.x86_64 opi-5.0.0-1.1.noarch zypper-needs-restarting-1.14.68-1.4.noarch zypper-lifecycle-plugin-0.6.1601367426.843fe7a-3.8.noarch zypper-aptitude-1.14.68-1.4.noarch --> OK, I don't know if this is the right ones # rpm -qp zypper-1.14.68-1.4.x86_64.rpm --requires /bin/sh /bin/sh /bin/sh config(zypper) = 1.14.68-1.4 ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libaugeas.so.0()(64bit) libaugeas.so.0(AUGEAS_0.1.0)(64bit) libaugeas.so.0(AUGEAS_0.8.0)(64bit) libaugeas0 >= 1.10.0 libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.28)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.4)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.3.1)(64bit) libreadline.so.8()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.8)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) libxml2.so.2()(64bit) libxml2.so.2(LIBXML2_2.4.30)(64bit) libxml2.so.2(LIBXML2_2.5.0)(64bit) libxml2.so.2(LIBXML2_2.6.0)(64bit) libzypp >= 17.31.31 libzypp.so.1722()(64bit) libzypp.so.1722(ZYPP_plain)(64bit) procps rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsZstd) <= 5.4.18-1 --> This don't tell me the package name :-( Any idea how to repair zypper on a running system? PS: In worst case I need to boot a new install package and look it is possible to repair the system with an upgrade from an install disk Thanks Ulf
* On 3/11/24 19:40, Ulf via openSUSE Factory wrote:
Now if I start "zypper", the following message is visible: zypper: error while loading shared libraries: libabsl_log_internal_check_op.so.2308.0.0: cannot open shared object file: No such file or directory
According https://opensuse.pkgs.org/tumbleweed/opensuse-oss-x86_64/libabsl2308_0_0-202...
Since libabsl2401_0_0-20240116.1-1.1.x86_64.rpm is available and newer, the libabsl package has been upgraded on your system, but other programs or libraries still use the older library and weren't correctly upgraded. You've investigated the issue from the wrong side. The old package is gone now. Unless you can find it in some archive, you will certainly not find it in the TW repo any longer. Unfortunately, there's no easy way to tell which program is at fault. You'll have to check all binaries and libraries manually (via ldd or readelf) and figure out which one is still referencing the old library and then upgrade THIS package specifically. Also hope that the library is not loaded dynamically via dlopen... Mihai
On 2024-03-11 19:50, Mihai Moldovan wrote:
Now if I start "zypper", the following message is visible: zypper: error while loading shared libraries: libabsl_log_internal_check_op.so.2308.0.0: cannot open shared object file: No such file or directory
According https://opensuse.pkgs.org/tumbleweed/opensuse-oss-x86_64/libabsl2308_0_0-202... Since libabsl2401_0_0-20240116.1-1.1.x86_64.rpm is available and newer, the
* On 3/11/24 19:40, Ulf via openSUSE Factory wrote: libabsl package has been upgraded on your system, but other programs or libraries still use the older library and weren't correctly upgraded. You've investigated the issue from the wrong side. The old package is gone now. Unless you can find it in some archive, you will certainly not find it in the TW repo any longer.
Unfortunately, there's no easy way to tell which program is at fault.
You'll have to check all binaries and libraries manually (via ldd or readelf) and figure out which one is still referencing the old library and then upgrade THIS package specifically. Also hope that the library is not loaded dynamically via dlopen...
Some details on what happend, including the manual recovery: https://bugzilla.opensuse.org/show_bug.cgi?id=1221119#c2 Andreas
Many thanks Mihai and Andreas for quick response :-) Unluckily this PC is the only one in my Home Network which have a ext4 Filesystem running (OK, lessons learned, next step is to migrate it to btrfs). Am Montag, 11. März 2024, 19:55:06 CET schrieb Andreas Stieger via openSUSE Factory:
Thanks this was the bugreport I searched several hours and cant find :-/ But it helps to fix the issue :-D Your remark - was the missing info: Now: zypper -> libzypp -> libprotobuf-lite25_2_0 -> libabsl2401_0_0 I installed: # rpm -Uvh libzypp-17.31.31-1.2.x86_64.rpm # rpm -Uvh libprotobuf-lite25_1_0-25.1-9.5.x86_64.rpm # rpm -Uvh libprotobuf-lite3_21_12-21.12-1.5.x86_64.rpm # rpm -Uvh libabsl2401_0_0-20240116.1-1.1.x86_64.rpm Now zypper runs fine again :heart: Ulf
Ulf wrote:
Many thanks Mihai and Andreas for quick response :-)
Unluckily this PC is the only one in my Home Network which have a ext4 Filesystem running (OK, lessons learned, next step is to migrate it to btrfs).
Ulf
Just about all of my linux partitions on at least 3 machines have all used ext4 . . . it was even suggested in openSUSE back when that "ext4 was more stable than btrfs" . . . . I don't see the point to just change the format . . . unless you are doing a fresh install . . . then, OK.
Am Dienstag, 12. März 2024, 01:23:28 CET schrieb Fritz Hudnut:
Ulf wrote:
Many thanks Mihai and Andreas for quick response :-)
Unluckily this PC is the only one in my Home Network which have a ext4 Filesystem running (OK, lessons learned, next step is to migrate it to btrfs).
Ulf
Just about all of my linux partitions on at least 3 machines have all used ext4 . . . it was even suggested in openSUSE back when that "ext4 was more stable than btrfs"
yes..some years back that was probably true, but thats history
. . . . I don't see the point to just change the format . . . unless you are doing a fresh install . . . then, OK.
btrfs offer so many advantages, just looking at the snapshot functionality...boot into an old snapshot, snapper rollback, and issues like broke updates are solved. (with no intend to sart a discussion ext4 vs btrfs...) Axel
Hi Togethern, Am Dienstag, 12. März 2024, 13:55:42 CET schrieb Axel Braun:
Am Dienstag, 12. März 2024, 01:23:28 CET schrieb Fritz Hudnut:
Just about all of my linux partitions on at least 3 machines have all used ext4 . . . it was even suggested in openSUSE back when that "ext4 was more stable than btrfs" yes..some years back that was probably true, but thats history
Yep, this is what I know as well and there is a reason that (open)SUSE install btrfs by default since more than 10 years 🧐
. . . . I don't see the point to just change the format . . . unless you are doing a fresh install . . . then, OK. btrfs offer so many advantages, just looking at the snapshot functionality...boot into an old snapshot, snapper rollback, and issues like broke updates are solved.
Yep, I work on this machine with a ext4 over mdadm with Raid1 for root. According my experiance it is much more easy to use there a btrfs (direct with Raid1 or over mdadm). Especially due to the fact that a resize and check of a btrfs system is possible on a running system and also size reduction is possible (which is sometimes required).
(with no intend to sart a discussion ext4 vs btrfs...)
Me nicer. And especially in this case a rollback would be so much easier than this about 2 days with > 8h to find the fix. Ulf
[QUOTE]btrfs offer so many advantages, just looking at the snapshot functionality...boot into an old snapshot, snapper rollback, and issues like broke updates are solved. (with no intend to sart a discussion ext4 vs btrfs...) Axel [/QUOTE] Not wanting to get this "ext4 vs btrfs" thread to blow up either, I just want to paraphrase Woody Allen from "Manhattan"??? . . . "linux is like a shark, if it isn't constantly moving forward . . . it dies." So I never think about "going back" in my linux installs . . . it's either moving forward, or it's nuked and repaved . . . . : - 0 F
On 2024-03-12 13:55, Axel Braun wrote:
Am Dienstag, 12. März 2024, 01:23:28 CET schrieb Fritz Hudnut:
Ulf wrote:
Many thanks Mihai and Andreas for quick response :-)
Unluckily this PC is the only one in my Home Network which have a ext4 Filesystem running (OK, lessons learned, next step is to migrate it to btrfs).
Just about all of my linux partitions on at least 3 machines have all used ext4 . . . it was even suggested in openSUSE back when that "ext4 was more stable than btrfs"
yes..some years back that was probably true, but thats history
. . . . I don't see the point to just change the format . . . unless you are doing a fresh install . . . then, OK.
btrfs offer so many advantages, just looking at the snapshot functionality...boot into an old snapshot, snapper rollback, and issues like broke updates are solved.
rollback is a wonderful feature. But it comes with some snags: * btrfs is more difficult to repair in case of corruptions (fsck) * we still don't have a procedure to recreate a btrfs partition setup when doing a restore from scratch. Or cloning. item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it. He has to install fresh.
(with no intend to sart a discussion ext4 vs btrfs...)
:-) No, each one has pros and cons, as everything in engineering. We choose according to our respective needs. -- Cheers / Saludos, Carlos E. R. (from 15.5 x86_64 at Telcontar)
Well said Carlos . . . I was going to try to click "thumbs up" on your post, but it doesn't seem to be working, even while running an openSUSE OS . . . perhaps because I'm formatted in ext4??? : - )) F
Am 13.03.24 um 14:07 schrieb Carlos E. R.:
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it.
Ever heard of btrfs send and btrfs receive? I do this for system migrations all the time. I can even migrate old snapshots i when I want to keep them. Just send/receive ro snapshots of what you want to move, clone one into a rw snapshot and mount it as your live volume. It's not a single step, but definitely possible for an everyday admin. - Ben
On 3/13/24 11:22, Ben Greiner wrote:
Am 13.03.24 um 14:07 schrieb Carlos E. R.:
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it.
Ever heard of btrfs send and btrfs receive? I do this for system migrations all the time. I can even migrate old snapshots i when I want to keep them. Just send/receive ro snapshots of what you want to move, clone one into a rw snapshot and mount it as your live volume. It's not a single step, but definitely possible for an everyday admin.
Hi Ben, I was just getting ready to say the same thing. btrfs send/receive can be sent to another btrfs partition or to a regular file. I just wish there was an easy way to specify to get all the subvolumes. Regards, Joe
On 2024-03-13 18:42, Joe Salmeri wrote:
On 3/13/24 11:22, Ben Greiner wrote:
Am 13.03.24 um 14:07 schrieb Carlos E. R.:
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it.
Ever heard of btrfs send and btrfs receive? I do this for system migrations all the time. I can even migrate old snapshots i when I want to keep them. Just send/receive ro snapshots of what you want to move, clone one into a rw snapshot and mount it as your live volume. It's not a single step, but definitely possible for an everyday admin.
Hi Ben,
I was just getting ready to say the same thing. btrfs send/receive can be sent to another btrfs partition or to a regular file.
I just wish there was an easy way to specify to get all the subvolumes.
It doesn't work with rsync. -- Cheers / Saludos, Carlos E. R. (from 15.5 x86_64 at Telcontar)
On Wed, 2024-03-13 at 13:42 -0400, Joe Salmeri wrote:
On 3/13/24 11:22, Ben Greiner wrote:
Am 13.03.24 um 14:07 schrieb Carlos E. R.:
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it.
Ever heard of btrfs send and btrfs receive? I do this for system migrations all the time. I can even migrate old snapshots i when I want to keep them. Just send/receive ro snapshots of what you want to move, clone one into a rw snapshot and mount it as your live volume. It's not a single step, but definitely possible for an everyday admin.
Hi Ben,
I was just getting ready to say the same thing. btrfs send/receive can be sent to another btrfs partition or to a regular file.
I just wish there was an easy way to specify to get all the subvolumes.
Shameless self-marketing: https://github.com/mwilck/btrfs-clone It uses send/receive internally. These days, it's probably simpler to just do a block-level clone of the filesystem with "dd" and (if necessary) change the UUID. Martin
On 16.03.2024 02:25, Martin Wilck via openSUSE Factory wrote:
On Wed, 2024-03-13 at 13:42 -0400, Joe Salmeri wrote:
On 3/13/24 11:22, Ben Greiner wrote:
Am 13.03.24 um 14:07 schrieb Carlos E. R.:
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it.
Ever heard of btrfs send and btrfs receive? I do this for system migrations all the time. I can even migrate old snapshots i when I want to keep them. Just send/receive ro snapshots of what you want to move, clone one into a rw snapshot and mount it as your live volume. It's not a single step, but definitely possible for an everyday admin.
Hi Ben,
I was just getting ready to say the same thing. btrfs send/receive can be sent to another btrfs partition or to a regular file.
I just wish there was an easy way to specify to get all the subvolumes.
Shameless self-marketing: https://github.com/mwilck/btrfs-clone It uses send/receive internally.
Does it adjust grub/fstab/crypttab/add-your-favorite-place for the new UUID?
These days, it's probably simpler to just do a block-level clone of the filesystem with "dd" and (if necessary) change the UUID.
Martin
On Sat, 2024-03-16 at 09:58 +0300, Andrei Borzenkov wrote:
On 16.03.2024 02:25, Martin Wilck via openSUSE Factory wrote:
Shameless self-marketing: https://github.com/mwilck/btrfs-clone It uses send/receive internally.
Does it adjust grub/fstab/crypttab/add-your-favorite-place for the new UUID?
No. It just creates a new btrfs with all subvolumes (well, almost, see the README for an explanation why it isn't possible to replicate the subvolume structure of an existing volume exactly with btrfs send/receive. Martin
On 3/13/24 09:07, Carlos E. R. wrote:
* we still don't have a procedure to recreate a btrfs partition setup when doing a restore from scratch. Or cloning.
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it. He has to install fresh.
Hi Carlos, Have you seen this link ? https://rootco.de/2018-01-19-opensuse-btrfs-subvolumes/ Although it is from a while back it is basically correct with one exception which is that /tmp now uses tmpfs instead of a subvolume now. I have used that as a basis for replicating TW's subvolume layout on other distros I was testing out. -- Regards, Joe
Hi Together, Am Mittwoch, 13. März 2024, 18:38:51 CET schrieb Joe Salmeri:
Have you seen this link ?
Thanks for Link and all the good arguments. I would propose to stop the discussion at this point with the conclusion, that each FS have its own pro and cons - and each one is able to select this one he expect is the right for him/her. Ulf
On 2024-03-13 18:38, Joe Salmeri wrote:
On 3/13/24 09:07, Carlos E. R. wrote:
* we still don't have a procedure to recreate a btrfs partition setup when doing a restore from scratch. Or cloning.
item 2 affects in this case, as the admin can not simply create a new btrfs partition and move the existing system to it. He has to install fresh.
Hi Carlos,
Have you seen this link ?
Maybe. As I don't use btrfs on root, I don't keep that type of instructions in ram ;-)
Although it is from a while back it is basically correct with one exception which is that /tmp now uses tmpfs instead of a subvolume now.
I have used that as a basis for replicating TW's subvolume layout on other distros I was testing out.
I would prefer a script included with all the distributions guaranteed to be kept up to date that creates all the volumes or whatever. Or a YaST module. -- Cheers / Saludos, Carlos E. R. (from 15.5 x86_64 at Telcontar)
participants (10)
-
Andreas Stieger
-
Andrei Borzenkov
-
Axel Braun
-
Ben Greiner
-
Carlos E. R.
-
Fritz Hudnut
-
Joe Salmeri
-
Martin Wilck
-
Mihai Moldovan
-
Ulf