Hi, On Wednesday, 7 December 2022, 22:22:42 CET, Dirk Müller wrote:
Hi Dominique,
sorry for dropping the mailing list from the reply, fixed now again.
On Wed, 7 Dec 2022 at 18:10, Dominique Leuenberger / DimStar <dimstar@opensuse.org> wrote:
Sure - it's a wiki and can only benefit from more input.
I added a pros/cons list from my point of view and expanded a bit on the baselibs idea.
Thanks, that's helpful.
The proposal I favor so far is the 'possible option', i.e. allow building everything (or as much as we want) properly as a different architecture and get this into rpm/zypp.
I agree that this sounds like the "cleanest" approach, but it uses a hammer where a hammer is not necessarily needed, and it therefore also has quite a few drawbacks.
FWICT this problem looks very much like a nail.
In addition, this approach has already been rejected by rpm upstream; I added the link to that discussion to your wiki page.
Not for x86_64_vX; that PR is about CPU-family-specific architectures like znver1. With glibc, gcc etc. supporting x86_64_vX through hwcaps, I think they'd accept it.
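For context, hwcaps-based loading already works without any rpm changes; a quick way to see it on an x86_64 machine (a rough sketch, assuming glibc >= 2.33 and the usual loader path; libfoo is just an illustrative name):

  # the dynamic loader reports which glibc-hwcaps subdirectories it
  # would search on this CPU (exact output wording varies by glibc version)
  /lib64/ld-linux-x86-64.so.2 --help | grep -A4 glibc-hwcaps
  # an optimized build placed in e.g.
  #   /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1   (illustrative path)
  # is preferred over /usr/lib64/libfoo.so.1 on a -v3 capable machine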
It also has a number of downsides. Speaking as one of the openSUSE maintainers with experience on armv6l/armv7l: there is a TON of software that breaks when the rpm architecture does not match the kernel architecture (i.e. when $(uname -m) != %arch, which is the case for armv7l <-> armv7hl).
Do you have some examples? I imagine this only hits software which has to interact with RPM as well as the kernel.
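Just to make the mismatch itself concrete (a quick illustration on an armv7hl installation, not one of the broken packages you mean):

  # kernel and rpm disagree about "the architecture" on 32-bit ARM:
  uname -m                 # prints: armv7l
  rpm --eval '%{_arch}'    # prints: armv7hl
  # any tool deriving the package architecture from uname -m picks
  # the wrong string here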
Maintaining this and getting those fixes upstream is anything but fun, and in this case it only worked because other distributions (Fedora, Ubuntu/Debian) had the same issue, so everyone pushed the various upstreams to accept those patches. And yet, such an assumption keeps creeping into the code, requiring a constant battle of fixes.
It also breaks basic interoperability requirements, especially when targeting -v3 and -v4 and thinking about containerized/cloud deployments, where CPU features can change between restarts or reboots.
Having an x86_64_vX arch in RPM doesn't mean that coinstallability is impossible. We could ship the hwcaps libraries as .x86_64_vX.rpms. For container/cloud deployments I would actually expect those images to be pinned to a specific x86_64 level from the beginning to ensure predictable behaviour. It's no fun to hit an issue only sporadically after an instance happened to run on newer hardware.
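As a sketch of what I mean by co-installable (package name, version and file list below are made up), such a -v3 sub-package would only own files under the hwcaps directory, so it doesn't conflict with the baseline package:

  # hypothetical v3-optimized library package next to the baseline one:
  rpm -qlp libfoo1-1.0-1.x86_64_v3.rpm
  #   /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1
  # no file overlap with libfoo1-1.0-1.x86_64.rpm, which owns
  #   /usr/lib64/libfoo.so.1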
mls has already been lined up and thinks it should be rather easy (for the rpm and OBS part).
Yes, that's true for zypper/libzypp. However, in the non-openSUSE context of SUSE distributions, third-party software is often used for systems management, and that would need to be adapted for this as well. Many of those tools have hardcoded or outright false assumptions (like $(uname -m) being the rpm architecture).
Do you have an example?
zypp itself already has code to treat arch x > y as different but compatible (i586/i686), so if we get x86_64 < x86_64-v3, we're all set there (plus code to detect what machine you have)
with the downsides that it requires patching of about 1000 packages (%ifarch x86_64 -> %ifarch %x86_64)
Most of those packages wouldn't work with the hwcaps-only approach at all because they include binaries.
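For the spec change itself there is already a precedent: 32-bit x86 uses the %ix86 family macro in exactly that way, and a %x86_64 macro (name and architecture list here are my assumption, not something rpm defines today) would follow the same pattern:

  # the existing family macro for 32-bit x86 (exact list may vary by rpm version):
  rpm --eval '%{ix86}'
  #   i386 i486 i586 i686 pentium3 pentium4 athlon geode
  # an analogous macro for the micro-architecture levels could be
  #   %x86_64   x86_64 x86_64_v2 x86_64_v3 x86_64_v4
  # so specs would change "%ifarch x86_64" into "%ifarch %{x86_64}"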
and not providing coinstallability, meaning you can't have single-installation media/machine images, requiring the user to choose which version works across their data centers and clouds (and hope that they do not use anything like Kubernetes that just moves workloads around)
(see above)
plus requiring full builds of the Tumbleweed distribution (while technically we could build smaller ones for the more highly optimized versions, that would mean mix-and-match deployments on the user side and on our installation media, increasing size requirements and complexity)
Yeah, the huge flexibility means we have a lot of open options. At that point we already arrive at "implementation details" though, something we can even change later. Cheers, Fabian
Greetings, Dirk