Re: ALP WG Meeting minutes - July 5th 2022
Hi Lars,

Lars Vogdt <lars@linux-schulserver.de> writes:
On 5 July 2022 15:46:29 UTC, "Dan Čermák" <dcermak@suse.de> wrote:
tl;dr: Fear not, you will not be forced to download random flatpaks or containers from dockerhub/flathub. We will (probably) just change the delivery method, but the packaging workflow will stay the same.
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly, giving you greater security benefits compared to traditional rpms. The main benefit, however, is that we would be able to build certain desktop applications (e.g. Firefox, Thunderbird, LibreOffice, etc.) only on top of Tumbleweed, put them into a flatpak and distribute them to Tumbleweed, Leap and ALP users. This is something that is impossible to achieve with RPMs unless you bundle *everything* into a single RPM, and even then it will probably not work. So this will save our packagers a tremendous amount of work.
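To make the user-facing side concrete: installing such a flatpak on Leap, ALP or Tumbleweed would boil down to adding a remote and installing the app. This is only a sketch; the remote name and URL below are hypothetical placeholders, only the flatpak CLI calls themselves are real.

```python
import subprocess

REMOTE_NAME = "opensuse"                                  # assumed remote name, nothing announced
REMOTE_URL = "https://example.org/opensuse.flatpakrepo"   # placeholder URL

# Add the remote once, then install an application that was built on top of Tumbleweed.
subprocess.run(["flatpak", "remote-add", "--if-not-exists", REMOTE_NAME, REMOTE_URL], check=True)
subprocess.run(["flatpak", "install", "-y", REMOTE_NAME, "org.mozilla.firefox"], check=True)
```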
2) Will ALP still allow one 'zypper up' call to update all software (container, flatpak, rpm) on a system?
3) Will OBS auto-convert current rpms to flatpaks?
Theoretically that could be done, but I think that's at the moment out of scope.
4) Will there still be some kind of repo (incl. package information like version-release, changelog, etc.) for flatpaks?
5) Will Yast-Software manage all these different kind of formats (container, flatpak, rpm) transparently for the enduser?
I don't know what the plans of the Yast team wrt flatpaks are, but GNOME Software center supports flatpaks and rpms seamlessly.
6) Can configuration still be expected at the usual places ($HOME/.config/ : /etc : /usr/etc/)?
That depends on the flatpak, but generally flatpaks put their configuration into ~/.var/app/
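As a small illustration (using org.mozilla.firefox merely as an example application ID), the per-app location looks like this:

```python
from pathlib import Path

app_id = "org.mozilla.firefox"  # example application ID only

classic_config = Path.home() / ".config"                           # traditional rpm-installed apps
flatpak_config = Path.home() / ".var" / "app" / app_id / "config"  # sandboxed flatpak apps

print("classic config dir:", classic_config)
print("flatpak config dir:", flatpak_config)
```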
7) Will current backups (rsync, Bareos) and configuration management (ansible, Salt) still be supported?
8) Will the whole delivery chain stay secure (signing of flatpaks, SSL, DNSSec, ...)?
Yes. This is one of the strict requirements to ALP.
9) Will autoyast still allow automated deployment?
Regards, Lars

-- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
On Wed, 06 Jul 2022 11:48:07 +0200, Dan Čermák <dcermak@suse.com> wrote:
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly giving you greater security benefits in comparison to traditional rpms.
The main benefit however is that we would be able to build certain desktop applications (e.g. Firefox, Thunderbird, LibreOffice, etc.) only on top of Tumbleweed, put them into a flatpak and distribute them to Tumbleweed, Leap and ALP users. This is something that is impossible to achieve with RPMs unless you bundle *everything* into a single RPM and even then it will probably not work. So this will save our packagers a tremendous amount of work.
While I understand that this makes the life of a packager easier (at least in the future, once we have gotten rid of all the old-school distributions), I think that we lose a big benefit of the current way of doing it. Please correct me if my assumptions are wrong:

* A Flatpak will include all needed libraries, which will blow up the needed (installation and mirror) space.
* We need to find a way to track whether a Flatpak contains a vulnerable library and push out security updates for all involved Flatpaks (containers). I think/hope that OBS can help us here?
* For any library update, we increase the needed space on all mirror servers and also require much more bandwidth from all end users, as they need to download a whole copy of all Flatpaks with all their in-tree libraries. -> Is there already a project like "deltarpm", which might help to reduce the amount of 'need to be transferred' Flatpak content for any update?
2) Will ALP still allow one 'zypper up' call to update all software (container, flatpak, rpm) on a system?
3) Will OBS auto-convert current rpms to flatpaks?
Theoretically that could be done, but I think that's at the moment out of scope.
For 2, does this mean we lose the current benefit of updating a whole system with just one single tool? So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
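Just to sketch what I mean by "one single tool" (both commands exist today, but nothing like this wrapper has been announced, it is only an illustration):

```python
import subprocess

def update_everything() -> None:
    """Refresh both package worlds in one go - a rough sketch, not an existing tool."""
    subprocess.run(["zypper", "--non-interactive", "up"], check=True)  # rpm side
    subprocess.run(["flatpak", "update", "-y"], check=True)            # flatpak apps and runtimes

if __name__ == "__main__":
    update_everything()
```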
4) Will there still be some kind of repo (incl. package informations like version-release, changelog, etc.) for flatpaks?
5) Will Yast-Software manage all these different kind of formats (container, flatpak, rpm) transparently for the enduser?
I don't know what the plans of the Yast team wrt flatpaks are, but GNOME Software center supports flatpaks and rpms seamlessly.
So this means more work for the Yast team. Are they aware of this? Have they maybe even already agreed to implement this in the near future (before the first official release of ALP)?
6) Can configuration still be expected at the usual places ($HOME/.config/ : /etc : /usr/etc/)?
That depends on the flatpak, but generally flatpaks put their configuration into ~/.var/app/
How is a migration from RPM to Flatpak expected in this case? Are we expecting all our users to manually copy their ~/.config/* into the new ~/.var/app/* place? Are there maybe even internal differences (YAML vs INI-Style)?
8) Will the whole delivery chain stay secure (signing of flatpaks, SSL, DNSSec, ...)?
Yes. This is one of the strict requirements to ALP.
Thanks! Lars
Lars Vogdt <lars@linux-schulserver.de> writes:
On Wed, 06 Jul 2022 11:48:07 +0200, Dan Čermák <dcermak@suse.com> wrote:
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly giving you greater security benefits in comparison to traditional rpms.
The main benefit however is that we would be able to build certain desktop applications (e.g. Firefox, Thunderbird, LibreOffice, etc.) only on top of Tumbleweed, put them into a flatpak and distribute them to Tumbleweed, Leap and ALP users. This is something that is impossible to achieve with RPMs unless you bundle *everything* into a single RPM and even then it will probably not work. So this will save our packagers a tremendous amount of work.
While I understand that this makes the life of a packager easier (at least in the future, once we have gotten rid of all the old-school distributions), I think that we lose a big benefit of the current way of doing it.
Please correct me if my assumptions are wrong: * A Flatpak will include all needed libraries, which will blow up the needed (installation and mirror) space.
No, flatpaks support runtimes which would contain the required libraries and these are shared between different flatpaks. E.g. on flathub you already have a GNOME runtime and a KDE runtime. We could create our own openSUSE Tumbleweed runtime and base our flatpaks on that.
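You can already see that sharing on any system with flatpaks installed. A quick sketch listing each application next to the runtime it sits on (the --columns option is available in recent flatpak versions; the output shown in the comment is only an example):

```python
import subprocess

# List installed flatpak applications together with their runtime, which makes
# the "shared runtime" point visible.
result = subprocess.run(
    ["flatpak", "list", "--app", "--columns=application,runtime"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. "org.mozilla.firefox  org.freedesktop.Platform/x86_64/23.08"
```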
* We need to find a way to track whether a Flatpak contains a vulnerable library and push out security updates for all involved Flatpaks (containers). I think/hope that OBS can help us here?
The plan is to build *everything* inside OBS. So updates will be distributed the same way as they are currently, only the artifact at the end will change.
* For any library update, we increase the needed space on all mirror servers and also require much more bandwidth from all end users - as they need to download a whole copy of all Flatpaks with all their in-tree libraries. -> Is there already a project like "deltarpm", which might help to reduce the amount of 'need to be transferred' Flatpak content for any update?
Again, this would go into the runtime, which would be shared by all flatpaks. I must admit that I am not familiar enough with the flatpak distribution method, thus I cannot answer whether there is something like deltarpm.
2) Will ALP still allow one 'zypper up' call to update all software (container, flatpak, rpm) on a system?
3) Will OBS auto-convert current rpms to flatpaks?
Theoretically that could be done, but I think that's at the moment out of scope.
For 2, does this mean we lose the current benefit of updating a whole system with just one single tool?
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build. With flatpaks in the mix, you'd have to maintain a single code stream and a flatpak build description.
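Just to give a rough idea of what such a build description contains (all names and values below are made up; only the manifest keys follow the usual flatpak-builder conventions, so treat this as a sketch, not an official openSUSE manifest):

```python
import json

# Illustrative flatpak-builder style manifest; application ID, runtime name and
# source URL are placeholders, not real openSUSE artifacts.
manifest = {
    "app-id": "org.example.SomeApp",
    "runtime": "org.example.Platform",
    "runtime-version": "1.0",
    "sdk": "org.example.Sdk",
    "command": "someapp",
    "modules": [
        {
            "name": "someapp",
            "buildsystem": "meson",
            "sources": [
                {"type": "archive",
                 "url": "https://example.org/someapp-1.0.tar.xz",
                 "sha256": "<checksum>"},
            ],
        }
    ],
}

with open("org.example.SomeApp.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```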
4) Will there still be some kind of repo (incl. package informations like version-release, changelog, etc.) for flatpaks?
5) Will Yast-Software manage all these different kind of formats (container, flatpak, rpm) transparently for the enduser?
I don't know what the plans of the Yast team wrt flatpaks are, but GNOME Software center supports flatpaks and rpms seamlessly.
So this means more work for the Yast team. Are they aware of this? Have they maybe even already agreed to implement this in the near future (before the first official release of ALP)?
I have CC'd the Yast ML to get their take on this.
6) Can configuration still be expected at the usual places ($HOME/.config/ : /etc : /usr/etc/)?
That depends on the flatpak, but generally flatpaks put their configuration into ~/.var/app/
How is a migration from RPM to Flatpak expected in this case?
Afaik there are no plans to support a migration from SLE to ALP.
Are we expecting all our users to manually copy their ~/.config/* into the new ~/.var/app/* place?
As you'd have to reinstall the system anyway, you'd have to move your configs into different places.
Are there maybe even internal differences (YAML vs INI-Style)?
Any internal differences would be caused by the packaged application itself. Flatpaks do not mandate any type of configuration format, as at the end of the day they are just archives that get unpacked on your file system (just like rpms…). Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
On 7/6/22 06:45, Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
<snip>
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build.
This is not generally true and not a function of rpm vs. some other format. That there are forks is imposed by the model of the distribution and the target audience. This has nothing to do with the format of the package.

If there is a desire to have the latest and greatest version of Firefox (insert your favorite package example here) in SLED (insert other distro here), there is nothing preventing SUSE from taking the package from TW and sticking it into SLED. Putting it in a flatpak really has no effect on the intended audience of the distro:

SLE -> as few changes as possible over a long period of time
TW -> as close to the cutting edge as we can get without having a system that is broken all the time

These concepts do not change with .deb, .rpm, flatpak, ... The indiscriminate conflation of topics is concerning.

That said, there is no question that at SUSE we have the goal of reducing the number of packages that fall into the "as few changes as possible over a long period of time" category. Isolation via boxing helps this effort to a certain extent, but it doesn't reduce the burden of maintenance; it might actually increase it. If I have 2 applications and they depend on different versions of the same library, then there are 2 approaches to deal with this:

1.) I port the application that depends on the older version of the library forward to work with the new version of the library
+ this is the current practice, one integrated whole

2.) I maintain 2 versions of the library and stick the library and application, in its respective version, into its isolated environment
+ containers, flatpak, ...

Whether maintaining multiple versions of the same thing is less or more work is yet to be seen. And then there are those cases where 2 is not an option, for example a security bug in OpenSSL does not get fixed in the version one of the applications needs. Forward porting becomes the only option. And of course there is a 3rd choice: only ship one application if there are applications with conflicting dependency requirements. But this has functionality impacts and people don't like to make those kinds of choices.

Anyway, as long as we conflate topics, any solutions will, by necessity, be nothing but a shifting around of problems.

Later, Robert
-- Robert Schweikert MAY THE SOURCE BE WITH YOU Distinguished Engineer LINUX Technical Team Lead Public Cloud rjschwei@suse.com IRC: robjo
Hello, on 06.07.22 at 14:59 Robert Schweikert wrote:
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build.
This is not generally true and not a function of rpm vs. some other format. That there are forks is imposed by the model of the distribution and the target audience. This has nothing to do with the format of the package.
If there is a desire to have the latest and greatest version of Firefox (insert you favorite package example here) in SLED (insert other distro here) there is nothing preventing SUSE from taking the package from TW and sticking it into SLED. Putting it in a flatpak really has no effect on the intended audience of the distro
Actually, from a user perspective, it has. For example, I can put older or newer rpms from older or newer SLES/Leap versions into a flatpak and run that. As a user I can now sometimes circumvent problems: I may need an rpm for some ephemeral task but cannot install it on the system, either because of a version conflict, or because the rpm is old and could have security issues (which can be mitigated by flatpak) and no one did a proper apparmor/selinux profile for it (if it's not the standard firefox). And to be honest: I had that problem quite a few times, both as a private user and working for companies. I was so delighted when containers came around for that. In my opinion we have to put all views into perspective here: the user, developer, maintainer and downstream maintainer views, respect them equally, and find possibilities to enable all of them with as little work as possible. I admit this might be an impossible task, but at least we should strive for it.
SLE -> as few changes as possible over a long period of time TW -> as close to the cutting edge as we can get without having as system that is broken all the time
These concepts do not change .deb, .rpm, flatpak ......
The indiscriminate conflation of topics is concerning.
That said, there is no question that at SUSE we have the goal of reducing the number of packages that fall into the "as few changes as possible over a long period of time" category. Isolation via boxing helps this effort to a certain extend, but it doesn't reduce the burden of maintenance, it might actually increase it. If I have 2 applications and they depend on different versions of the same library then there are 2 approaches to deal with this.
Yes, from a maintainer perspective this is correct. As a maintainer, I want to update all the packages and have them working together. But in practice, as a user, some versions of packages sometimes do not work together, often because packages in the base system block some userland/service/desktop packages. Compartmentalization can help users and downstream maintainers here quite a lot. For example, a graphics library might have a network access problem/vulnerability, but I need it for creating files, and the new version conflicts with... let's take a dumb example, graph2dot. Now with containers or flatpaks I can still use the old vulnerable version in an application which could use it, but due to sandboxing the vulnerability is not a problem.
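As a concrete example of that kind of mitigation (the application ID below is made up; the override option itself is standard flatpak):

```python
import subprocess

app_id = "org.example.LegacyGraphicsTool"  # hypothetical application ID

# Take network access away from this one flatpak, so a network-facing hole in a
# bundled library is no longer reachable, while the app can still create files.
subprocess.run(["flatpak", "override", "--user", "--unshare=network", app_id], check=True)
```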
1.) I port the application that depends on the older version of the library forward to work with the new version of the library + this is the current practice, one integrate whole
2.) I maintain 2 version of the library and stick the library and application, in it's respective version into it's isolated environment + containers, flatpak....
Or we just keep the older vulnerable version available and let the user/downstream maintainer decide whether it's a problem, because they have tools at hand to compartmentalize it. That does not mean we have to maintain the old versions of the library all the way with the same effort; we should differentiate between maintaining the quality/security and keeping them available. The only issue might be having/providing lists of known issues for packages/software versions.
And then there are those cases where 2 is not an option, for example a security bug in OpenSSL does not get fixed in the version one of the applications needs. Forward porting becomes the only option.
Yes, but that is the case because the sole functionality of OpenSSL is security. We do not need to maintain/keep available a version of a graphics library if it cannot create images, do we? There are different classes of bugs.
Anyway as long as we conflate topics any solutions will, by necessity, be nothing but a shifting around of problems.
I really have to suppress myself not to make a joke about shifting problems around between different abstraction layers... :) But I admit I also might conflate topics. The new technologies bundle things together which are traditionally not connected. Kind regards, Dennis -- Dennis Knorr, dennis.knorr@suse.com SUSE Software Solutions Germany GmbH Frankenstraße 146, 90461 Nürnberg, Germany Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman (HRB 36809, AG Nürnberg)
Robert Schweikert <rjschwei@suse.com> writes:
On 7/6/22 06:45, Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
<snip>
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build.
This is not generally true and not a function of rpm vs. some other format. That there are forks is imposed by the model of the distribution and the target audience. This has nothing to do with the format of the package.
If there is a desire to have the latest and greatest version of Firefox (insert you favorite package example here) in SLED (insert other distro here) there is nothing preventing SUSE from taking the package from TW and sticking it into SLED. Putting it in a flatpak really has no effect on the intended audience of the distro
It is: we'd have to bundle everything that Firefox needs into one giant statically linked rpm. This approach would result in more duplication on disk, it would provide none of the sandboxing, and it would be a complete no-go for the security team unless you somehow declare everything that you bundle. If you instead build a flatpak from rpms, you can reuse the existing technology for keeping an SBOM. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
On 7/6/22 11:49, Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes:
On 7/6/22 06:45, Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
<snip>
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build.
This is not generally true and not a function of rpm vs. some other format. That there are forks is imposed by the model of the distribution and the target audience. This has nothing to do with the format of the package.
If there is a desire to have the latest and greatest version of Firefox (insert you favorite package example here) in SLED (insert other distro here) there is nothing preventing SUSE from taking the package from TW and sticking it into SLED. Putting it in a flatpak really has no effect on the intended audience of the distro
It is: we'd have to bundle everything that Firefox needs into one giant statically linked rpm.
Yes, that could be done, but it does not have to be that way. The point I was trying to make, and I apparently was not explicit enough, is that any distribution, as an integrated system, can have any version of any package. That this is not the case is driven by the model of the distribution and not the delivery method of the software.
This approach would result in more duplication on disk,
How? If firefox needs foo-version-A and the system has foo-version-B, then whether I deliver firefox in a flatpak or, following your example, in a big statically linked rpm results in the same duplication. When things get split into isolated units with their own dependencies and the integration work happens at different stages, duplication is inevitable. Need an example? Look at go. Every go binary that depends on the same module has its own copy thereof. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU Distinguished Engineer LINUX Technical Team Lead Public Cloud rjschwei@suse.com IRC: robjo
Robert Schweikert <rjschwei@suse.com> writes:
On 7/6/22 11:49, Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes:
On 7/6/22 06:45, Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
<snip>
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build.
This is not generally true and not a function of rpm vs. some other format. That there are forks is imposed by the model of the distribution and the target audience. This has nothing to do with the format of the package.
If there is a desire to have the latest and greatest version of Firefox (insert you favorite package example here) in SLED (insert other distro here) there is nothing preventing SUSE from taking the package from TW and sticking it into SLED. Putting it in a flatpak really has no effect on the intended audience of the distro
It is: we'd have to bundle everything that Firefox needs into one giant statically linked rpm.
Yes that could be done but it does not have to be that way. The point I was trying to make, and I apparently was not explicit enough, is that any distribution, as an integrated system can have any version of any package. That this is not the case is driven by the model of the distribution and not the delivery method of the software.
Yes, you can do stuff like SCLs with rpms and have multiple versions of the same software in parallel. However, there's certainly a reason why SCLs have been dropped in RHEL 8 and replaced by modularity, which explicitly never targeted co-installation of packages at different versions. Yes it is possible, but I think that doing this with flatpaks and containers is much simpler.
This approach would result in more duplication on disk,
How?
If firefox needs foo-version-A and the system has foo-version-B then whether I deliver firefox in a flatpack or in, following your example, a big statically linked rpm results in the same duplication.
That is correct. You would, however, be able to benefit from flatpak's deduplication if you install another flatpak using the same libraries as Firefox, as well as during updates. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes:
If firefox needs foo-version-A and the system has foo-version-B then whether I deliver firefox in a flatpack or in, following your example, a big statically linked rpm results in the same duplication.
That is correct. You would be however able to benefit from flatpaks deduplication if you install another flatpak using the same libraries as Firefox uses as well as during updates.
I'm a bit late to the show - still, allow one question: Is the plan to supply a complete flatpak support base under openSUSE control? That is, all needed libs will be available as flatpaks made on OBS, and not supplied by some 3rd party outside of our control? This - IMO - is one of the dangerous attack vectors in widely-open package systems like flathub, pypi etc.
Peter Suetterlin <pit@astro.su.se> writes:
Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes:
If firefox needs foo-version-A and the system has foo-version-B then whether I deliver firefox in a flatpack or in, following your example, a big statically linked rpm results in the same duplication.
That is correct. You would be however able to benefit from flatpaks deduplication if you install another flatpak using the same libraries as Firefox uses as well as during updates.
I'm a bit late to the show - still, allow one question: Is the plan to supply a complete flatpak support base under openSUSE control? That is, all needed libs will be available as flatpaks made on OBS, and not supplied by some 3rd party out of control?
Everything that we build on OBS must be built from sources available in OBS. So yes, the flatpak support base will be under our control and you will not get binaries that were built somewhere outside of OBS, as long as you use the openSUSE flatpaks. We most certainly do _not_ want to weaken our supply chain story. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
Dan Čermák wrote:
Everything that we build on OBS must be build from sources available in OBS. So yes, the flatpak support base will be under our control and you will not get binaries that were built somewhere outside of OBS, as long as you use the openSUSE flatpaks.
We most certainly do _not_ want to weaken our supply chain story.
Thanks a lot Dan! I'd call myself a flatpak/container sceptic (like several others here). But with that background it's something I can come to terms with. I'll watch how it goes :) Pit
Hi, on 06.07.22 at 12:45 Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
So for 3, this currently means that packagers have more work, because they need to provide the needed Flatpak build configuration as well?
Not really. Currently a packager has to provide a spec file for every code stream and maintain those independently and make them build. With flatpaks in the mix, you'd have to maintain a single code stream and a flatpak build description.
I'm late in reading this long thread, but meanwhile I have read many times about popular examples like Firefox and Thunderbird. As the one maintaining Firefox and Thunderbird for openSUSE since their existence, I have to say that nobody asked me if flatpaks would make my life easier. Also, I'm not maintaining a spec file for every code stream. I have been using one spec file for all supported distributions (and unsupported ones for a while). Yes, it sometimes requires some ifs and elses, and sometimes it's a bit annoying to deal with ancient (or too modern) toolchains in distributions, but in this discussion it feels highly exaggerated. I'm not really opposed to having a flatpak option which works, but some arguments feel a bit weak. I absolutely like flatpaks for ISVs to make it easier for them to distribute useful (even non-free) software for Linux. I'm just not sure it's the solution for "everything". Wolfgang
On Wednesday 2022-07-06 11:48, Dan Čermák wrote:
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly giving you greater security benefits in comparison to traditional rpms.
But that is not inherent to the flatpaks themselves. As you say, flatpaks are but a different method of _distributing_ software. And so the critique becomes: stop distributing software twice. To make flatpaks (or, at this point: sandboxes) more widely accepted, generate them on the fly from an existing system, i.e. from {an rpmdb and loose files} rather than from a set of .rpm files.
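A toy sketch of what I mean, nothing more than a thought experiment: take the file list of an installed package straight from the rpmdb and stage it into a tree a sandbox could later be built from (the invocation at the bottom is just an example).

```python
import shutil
import subprocess
from pathlib import Path

def stage_package(package: str, dest: Path) -> None:
    """Copy the loose files owned by an installed rpm into a staging tree."""
    files = subprocess.run(
        ["rpm", "-ql", package], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    for name in files:
        src = Path(name)
        if src.is_file():
            target = dest / src.relative_to("/")
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)

# Example invocation:
# stage_package("MozillaFirefox", Path("/tmp/firefox-stage"))
```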
Jan Engelhardt <jengelh@inai.de> writes:
On Wednesday 2022-07-06 11:48, Dan Čermák wrote:
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly giving you greater security benefits in comparison to traditional rpms.
But that is not inherent to the flatpaks themselves. As you say, flatpaks are but a different method of _distributing_ software. And so the critique becomes: stop distributing software twice.
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
To make flatpaks (or, at this point: sandboxes) more widely accepted, generate them on the fly from an existing system, i.e. from {an rpmdb and loose files} rather than from a set of .rpm files.
Sure, that would be nice to have. But creating such a system is a lot of work which is really hard to justify if there is already a perfectly functioning and widely adopted system in place. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
On 7/6/22 20:20, Dan Čermák wrote:
Jan Engelhardt <jengelh@inai.de> writes:
On Wednesday 2022-07-06 11:48, Dan Čermák wrote:
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly giving you greater security benefits in comparison to traditional rpms.
But that is not inherent to the flatpaks themselves. As you say, flatpaks are but a different method of _distributing_ software. And so the critique becomes: stop distributing software twice.
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
This is simply not true; let me take an extreme example for fun. If you look at the package setserial you will see that we ship the exact same binary for all of SLE-15 and its service packs, and you'll also see the sources are identical for Tumbleweed (the package hasn't changed in 8 years).

Also, looking at a more relevant recent example of something that would be shipped as a flatpak under your model, the Terminology terminal emulator: for Leap 15.4 I just copied the Tumbleweed sources across and everything was fine, and I would have done the same the last time it was updated in 15.2; this equates to about 10 minutes of effort each time. But under this new model you're suggesting that as well as building an RPM for Tumbleweed I also need to build a flatpak? How much effort do you estimate this will take? Because you're trying to make it sound like as a packager this will be less effort for me, when it certainly sounds like more. These are just 2 simple examples; in other packages I maintain such as dbus and cmake there is always careful consideration of whether we actually need a new version for this stream or can continue sharing the old ones.

My final question is where do we want to draw the line here? I also maintain the conky system monitor and I'm not sure how that would work as a flatpak or if it makes sense. Similarly, the fish shell could be seen by some as a system thing, but others may just want to install it for their own users. So should things like interactive shells also come from flatpaks? Where do you suggest the line should be here?

-- Simon Lees (Simotek) http://simotek.net Emergency Update Team keybase.io/simotek SUSE Linux Adelaide Australia, UTC+10:30 GPG Fingerprint: 5B87 DB9D 88DC F606 E489 CEC5 0922 C246 02F0 014B
On 7/7/22 04:43, Simon Lees wrote:
On 7/6/22 20:20, Dan Čermák wrote:
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
This is simply not true, let me take an extreme example for fun. If you look at the package setserial you will see that we ship the exact same binary for all of SLE-15 and its service packs, you'll also see the sources are identical for tumblewed (the package hasn't changed in 8 years).
..
My final question is where do we want to draw the line here? I also maintain the conky system monitor i'm not sure how that would work as a flatpak and if it makes sense, similarly the fish shell could be seen by
Let me try.

So, flatpak is a tech built on top of OCI containers (at least it seems so in Fedora, which makes some sense). Containers have layers of file systems that are resolved at build time and composed of *binary* packages (RPMs, not OBS sources). This layering means that if you have 10 different applications built on top of SLE-15, these would only differ in the application layer and the rest should all be shared. In the most benign scenario, you have replicated what we have currently, with additional layering and indirection.

Now, if you look at the older distros like SLE-11, the latest Firefox is there too. But to get this latest Firefox there, the packager had to include a whole bunch of packages into Firefox's libexec paths (so, a private "namespace", for lack of a better word) to be able to build and run it. These packages are not part of SLE-11, but they had to be added to the package to keep it maintained. This means python3 and even nodejs.

Flatpaks address this situation. They allow us to migrate the dependencies of *applications* forward as needed and as permitted. It still *may* not allow you to mix a GNOME 29 applet with a GNOME 5 application -- this will have to be tested at a later point in time. But it no longer creates this mess that is forced upon distributions as they age just to keep different parts of them maintained.

- Adam
On Thursday, 7 July 2022 at 11:04 +0200, Adam Majer wrote:
On 7/7/22 04:43, Simon Lees wrote:
On 7/6/22 20:20, Dan Čermák wrote:
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
This is simply not true, let me take an extreme example for fun. If you look at the package setserial you will see that we ship the exact same binary for all of SLE-15 and its service packs, you'll also see the sources are identical for tumblewed (the package hasn't changed in 8 years).
..
My final question is where do we want to draw the line here? I also maintain the conky system monitor i'm not sure how that would work as a flatpak and if it makes sense, similarly the fish shell could be seen by
Let me try.
So, flatpaks is a tech built on top of OCI containers (at least it seems in Fedora, which makes some sense). Containers have layers of file systems that are resolved at built time and composed of *binary* packages (RPMs, not OBS sources). This layering means if you have 10 different applications that are built on top of SLE-15, these would only have the application layer different and the rest should all be shared.
Sorry, but you are wrong in describing what flatpak is. It is not built on top of OCI containers. OCI containers are just an optional way to distribute the flatpak content (not the default one, and it has some challenges, as listed in the comment section at https://hackweek.opensuse.org/21/projects/containerizing-microos-desktop-com... ). Switching to OCI containers for flatpak is IMHO a desirable goal, but it will need some work to be done on the flatpak side. -- Frederic CROZAT Enterprise Linux OS and Containers Architect SUSE
On 7/7/22 11:25, Frederic Crozat wrote:
Switching to OCI container for flatpak is IMHO a desirable goal but it will need some work to be done on flatpak side.
I stand corrected on this. I thought that OSTree storage in flatpaks could be replaced by OCI. - Adam
Hi, On 7/7/22 05:04, Adam Majer wrote:
On 7/7/22 04:43, Simon Lees wrote:
On 7/6/22 20:20, Dan Čermák wrote:
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
This is simply not true, let me take an extreme example for fun. If you look at the package setserial you will see that we ship the exact same binary for all of SLE-15 and its service packs, you'll also see the sources are identical for tumblewed (the package hasn't changed in 8 years).
..
My final question is where do we want to draw the line here? I also maintain the conky system monitor i'm not sure how that would work as a flatpak and if it makes sense, similarly the fish shell could be seen by
Let me try.
So, flatpaks is a tech built on top of OCI containers (at least it seems in Fedora, which makes some sense). Containers have layers of file systems that are resolved at built time and composed of *binary* packages (RPMs, not OBS sources). This layering means if you have 10 different applications that are built on top of SLE-15, these would only have the application layer different and the rest should all be shared.
In the most benign scenario, you have replicated what we have currently with additional layering and indirection.
Now, if you look at the older distros like SLE-11, there is also latest Firefox there. But to get this latest Firefox there, the packager had to include a whole bunch of packages into the Firefox's libexec paths (so, private "namespace", for the lack of better words) to be able to build and run it. These packages are not part of SLE-11, but they had to be added to the package to keep it maintained. This means python3 and even nodejs.
Flatpaks address this situation.
Superficially. Again, a flatpak is the delivery mechanism, and one still has to put it together. If I start at step 2 or 3, everything is easy.

All the dependencies that an app has and needs still need to be built, somewhere, somehow. If I have 4 different apps and they all want/need a different version of python-requests, then I need to build those 4 different versions to stick them into 4 different flatpaks. We have no solution to this: how do we build 4 different versions of the same package? multibuild? Different packages with the version part of the package name? Another solution?

If we build flatpaks from a pre-integrated distro solution, i.e. we do what we do today and make sure 4 apps all depend on the same version of python-requests, then flatpaks are a net addition of work: we have to do the porting, build all the respective rpms and then create the flatpak. And of course for the enterprise world the question is how support gets offered for the 4 versions of python-requests. And last but not least, which of the 4 versions is made available to enterprise developers for their own in-house applications?

Focusing on the delivery mechanism is just papering over the real problems that need to be solved, which are very fundamental and have hard questions that need to be answered. In the current model as much as possible is shared, i.e. we create an integrated whole and do a lot of work of porting things. In a model where things are isolated, the work shifts to maintaining multiple versions of the same thing. As already stated, it is a shifting of problems, not a solution for problems. We'll have new problems and will no longer care about the old problems. Well, of course there is the transition phase where we'll have to worry about both the new and the old problems.

Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU Distinguished Engineer LINUX Technical Team Lead Public Cloud rjschwei@suse.com IRC: robjo
Robert Schweikert <rjschwei@suse.com> writes:
Hi,
On 7/7/22 05:04, Adam Majer wrote:
On 7/7/22 04:43, Simon Lees wrote:
On 7/6/22 20:20, Dan Čermák wrote:
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
This is simply not true, let me take an extreme example for fun. If you look at the package setserial you will see that we ship the exact same binary for all of SLE-15 and its service packs, you'll also see the sources are identical for tumblewed (the package hasn't changed in 8 years).
..
My final question is where do we want to draw the line here? I also maintain the conky system monitor i'm not sure how that would work as a flatpak and if it makes sense, similarly the fish shell could be seen by
Let me try.
So, flatpaks is a tech built on top of OCI containers (at least it seems in Fedora, which makes some sense). Containers have layers of file systems that are resolved at built time and composed of *binary* packages (RPMs, not OBS sources). This layering means if you have 10 different applications that are built on top of SLE-15, these would only have the application layer different and the rest should all be shared.
In the most benign scenario, you have replicated what we have currently with additional layering and indirection.
Now, if you look at the older distros like SLE-11, there is also latest Firefox there. But to get this latest Firefox there, the packager had to include a whole bunch of packages into the Firefox's libexec paths (so, private "namespace", for the lack of better words) to be able to build and run it. These packages are not part of SLE-11, but they had to be added to the package to keep it maintained. This means python3 and even nodejs.
Flatpaks address this situation.
Superficially. Again a flatpak is the delivery mechanism, and one still has to put it together. If I start at step 2 or 3 everything is easy.
All the dependencies that an app has and needs still need to be build, somewhere, somehow. If I have 4 different apps and they all want/need a different version of python-requests then I need to build those 4 different versions to stick them into 4 different flatpaks. We have no solution to this, how do we build 4 different versions of the same package?
multibuild? different packages with the version part of the package name? other solution?
But we have a really great solution for this: projects in the build service! You can pick or _link packages that you require into a project on OBS, let them build, and then create a flatpak out of the combined rpms.
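Roughly like this (project and package names below are invented; osc linkpac itself is the real command):

```python
import subprocess

target_project = "home:example:flatpak-firefox"  # hypothetical OBS project

# Link the packages the flatpak needs into one project; OBS rebuilds them there
# and the resulting rpms are then combined into the flatpak.
for src_project, package in [
    ("openSUSE:Factory", "MozillaFirefox"),
    ("openSUSE:Factory", "mozilla-nss"),
]:
    subprocess.run(["osc", "linkpac", src_project, package, target_project], check=True)
```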
If we build flatpaks from a pre-integrated distro solution, i.e. we do what we do today and make sure 4 apps all depend on the same version of python-requests, then flatpaks are a net addition of work, we have to do the porting, build all the respective rpms and then create the flatpak.
Except that we can deliver the same flatpak for ALP, openSUSE ALP and Tumbleweed users without any modification. So imho there is a net work reduction.
And of course for the enterprise world the question is how does support get offered for the 4 versions of python-requests?
And last but not least, which of the 4 versions is made available to enterprise developers for their own in-house applications?
Focusing on the delivery mechanism is just papering over the real problems that need to be solved, which are very fundamental and have hard questions that need to be answered.
In the current model as much as possible is shared, i.e. we create an integrated whole and do a lot of work of porting things. In a model where things are isolated the work shifts to maintaining multiple versions of the same thing. As already stated it is a shifting of problems, not a solution for problems. We'll have new problems and will no longer care about the old problems. Well of course there is the transition phase where we'll have to worry about the new and the old problems.
This is not the magic silver bullet that will solve all our problems and we will certainly run into new problems, but it's imho a chance to re-imagine how we build the system and how we can be more flexible when it comes to software delivery. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
On 7/7/22 09:42, Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes: <snip>
Now, if you look at the older distros like SLE-11, there is also latest Firefox there. But to get this latest Firefox there, the packager had to include a whole bunch of packages into the Firefox's libexec paths (so, private "namespace", for the lack of better words) to be able to build and run it. These packages are not part of SLE-11, but they had to be added to the package to keep it maintained. This means python3 and even nodejs.
Flatpaks address this situation.
Superficially. Again a flatpak is the delivery mechanism, and one still has to put it together. If I start at step 2 or 3 everything is easy.
All the dependencies that an app has and needs still need to be build, somewhere, somehow. If I have 4 different apps and they all want/need a different version of python-requests then I need to build those 4 different versions to stick them into 4 different flatpaks. We have no solution to this, how do we build 4 different versions of the same package?
multibuild? different packages with the version part of the package name? other solution?
But we have a really great solution for this: projects in the build service!
You can pick or _link packages that you require into a project on OBS, let them build and would then create a flatpak out of the combined rpms.
A great solution if every maintainer of parts that are higher up the stack now also becomes responsible for maintaining their dependencies. As a maintainer of anything that depends on python-requests I have the responsibility that the package I maintain moves forward when the maintainers of d:l:p move python-requests to a new version. This puts us in the situation where work of one person or team triggers work by others, i.e. what we do today. This usually happens at times that are not convenient. In the new model being proposed, driven by a different delivery mechanism that encourages isolation I need to have my own copy of python-requests. Meaning I now assume responsibility for this copy of python-requests.
If we build flatpaks from a pre-integrated distro solution, i.e. we do what we do today and make sure 4 apps all depend on the same version of python-requests, then flatpaks are a net addition of work, we have to do the porting, build all the respective rpms and then create the flatpak.
Except that we can deliver the same flatpak for ALP, openSUSE ALP and Tumbleweed users without any modification. So imho there is a net work reduction.
I do not share that opinion as I think the maintenance of X versions of the same package equals or outweighs the gains by shipping the same bundle to multiple distros. And Wolfgang also pointed out that in his opinion the maintenance of a number of %if-else statements in spec files is being overblown.
And of course for the enterprise world the question is how does support get offered for the 4 versions of python-requests?
And last but not least, which of the 4 versions is made available to enterprise developers for their own in-house applications?
Focusing on the delivery mechanism is just papering over the real problems that need to be solved, which are very fundamental and have hard questions that need to be answered.
In the current model as much as possible is shared, i.e. we create an integrated whole and do a lot of work of porting things. In a model where things are isolated the work shifts to maintaining multiple versions of the same thing. As already stated it is a shifting of problems, not a solution for problems. We'll have new problems and will no longer care about the old problems. Well of course there is the transition phase where we'll have to worry about the new and the old problems.
This is not the magic silver bullet
Then it should not be presented that way, but at least we agree on that point :D .
that will solve all our problems and we will certainly run into new problems, but it's imho a chance to re-imagine how we build the system and how we can be more flexible when it comes to software delivery.
Which means we have to understand all the angles to make a reasonably well educated decision. And that should not be solely based on opinions but should consider data. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU Distinguished Engineer LINUX Technical Team Lead Public Cloud rjschwei@suse.com IRC: robjo
Robert Schweikert <rjschwei@suse.com> writes:
On 7/7/22 09:42, Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes: <snip>
Now, if you look at the older distros like SLE-11, there is also latest Firefox there. But to get this latest Firefox there, the packager had to include a whole bunch of packages into the Firefox's libexec paths (so, private "namespace", for the lack of better words) to be able to build and run it. These packages are not part of SLE-11, but they had to be added to the package to keep it maintained. This means python3 and even nodejs.
Flatpaks address this situation.
Superficially. Again a flatpak is the delivery mechanism, and one still has to put it together. If I start at step 2 or 3 everything is easy.
All the dependencies that an app has and needs still need to be build, somewhere, somehow. If I have 4 different apps and they all want/need a different version of python-requests then I need to build those 4 different versions to stick them into 4 different flatpaks. We have no solution to this, how do we build 4 different versions of the same package?
multibuild? different packages with the version part of the package name? other solution?
But we have a really great solution for this: projects in the build service!
You can pick or _link packages that you require into a project on OBS, let them build and would then create a flatpak out of the combined rpms.
A great solution if every maintainer of parts that are higher up the stack now also becomes responsible for maintaining their dependencies.
As a maintainer of anything that depends on python-requests I have the responsibility that the package I maintain moves forward when the maintainers of d:l:p move python-requests to a new version.
This puts us in the situation where work of one person or team triggers work by others, i.e. what we do today. This usually happens at times that are not convenient.
In the new model being proposed, driven by a different delivery mechanism that encourages isolation I need to have my own copy of python-requests. Meaning I now assume responsibility for this copy of python-requests.
And in certain situations that is desirable. If upstream has not yet moved to version x.y, but TW decided to move forward, then I can stay "behind". Yes, that is not sustainable in the long run, but for a more complex application it is not feasible for me to update it all alone. But Tumbleweed will simply roll forward and break my application unless I manage the update.
If we build flatpaks from a pre-integrated distro solution, i.e. we do what we do today and make sure 4 apps all depend on the same version of python-requests, then flatpaks are a net addition of work, we have to do the porting, build all the respective rpms and then create the flatpak.
Except that we can deliver the same flatpak for ALP, openSUSE ALP and Tumbleweed users without any modification. So imho there is a net work reduction.
I do not share that opinion as I think the maintenance of X versions of the same package equals or outweighs the gains by shipping the same bundle to multiple distros. And Wolfgang also pointed out that in his opinion the maintenance of a number of %if-else statements in spec files is being overblown.
Well, that is Wolfgang's experience. Mine is different.

I tried to maintain RStudio in Factory and Leap at the same time and that turned into a nightmare. RStudio would really only keep building on Tumbleweed, as every new release bumped the required boost version, which made it impossible to build RStudio on Leap. A second boost version would also not have helped here, unless we linked statically. And we cannot update boost in Leap, as it is frozen due to SLE...

And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are dropping support for Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.

Long story short, while Wolfgang can maintain his spec for all Leaps and Tumbleweed, I cannot. If I could ship vagrant as a container (which would be challenging due to virtualization being involved), then I would provide a net benefit to our users, because they *could* actually install and use it on Leap, whereas nowadays they cannot.

Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
On Friday 2022-07-08 11:53, Dan Čermák wrote:
I tried to maintain RStudio in Factory and Leap at the same time and that turned into a nightmare. RStudio would really only keep building on Tumbleweed as every new release bumped the required boost version, which made it impossible to build RStudio on Leap. A second boost version would also not have helped here, unless we linked statically. And we cannot update boost in Leap, as it is frozen due to SLE...
We have multiple ffmpeg, wxWidgets,... even gcc and clang. Just give the same treatment to boost. (Meaning, model it as extra SRPMs rather than updating the singular package.)
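As a rough illustration of what parallel-versioned packages look like in practice (the package names here are examples and may differ between Leap and Tumbleweed), this is the pattern already used for the compilers Jan mentions:

```sh
# Versioned compiler packages install side by side, each under its own name,
# so nothing already on the system has to move:
sudo zypper install gcc11 gcc12
gcc-11 --version
gcc-12 --version

# The same treatment for boost would mean an additional versioned source
# package whose binary packages carry the version in their name, rather
# than updating the single "boost" package in place.
```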
![](https://seccdn.libravatar.org/avatar/757205097a14d69edc12951b3375437b.jpg?s=120&d=mm&r=g)
Jan Engelhardt <jengelh@inai.de> writes:
On Friday 2022-07-08 11:53, Dan Čermák wrote:
I tried to maintain RStudio in Factory and Leap at the same time and that turned into a nightmare. RStudio would really only keep building on Tumbleweed as every new release bumped the required boost version, which made it impossible to build RStudio on Leap. A second boost version would also not have helped here, unless we linked statically. And we cannot update boost in Leap, as it is frozen due to SLE...
We have multiple ffmpeg, wxWidgets,... even gcc and clang. Just give the same treatment to boost. (Meaning, model it as extra SRPMs rather than updating the singular package.)
That wouldn't have helped, as I wrote: boost is linked dynamically, so a second version could result in clashes at runtime. And linking statically is disallowed. In the end it was too much work to keep trying. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 12:28:46 CEST schrieb Jan Engelhardt:
I tried to maintain RStudio in Factory and Leap at the same time and that turned into a nightmare. RStudio would really only keep building on Tumbleweed as every new release bumped the required boost version, which made it impossible to build RStudio on Leap. A second boost version would also not have helped here, unless we linked statically. And we cannot update boost in Leap, as it is frozen due to SLE...
We have multiple ffmpeg, wxWidgets,... even gcc and clang. Just give the same treatment to boost. (Meaning, model it as extra SRPMs rather than updating the singular package.)
I don't know if it always works that easily, but that's exactly how I would imagine/wish it for ruby, python, boost and other packages. But please no containers. Regards Eric
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems. The problem lies with the ancient packages; in the case of python3, even dead packages. Regards Eric
![](https://seccdn.libravatar.org/avatar/e6dc8afd12f42302ae7b5ea72e4dd686.jpg?s=120&d=mm&r=g)
Eric Schirra <ecsos@opensuse.org> writes:
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems. The problem lies with the ancient packages; in the case of python3, even dead packages.
Yes, but they exist for a reason: SLES promises strong stability guarantees, and updating the system python is therefore a no-go. With ALP we would like to reimagine the current model and provide users and developers with different ways to build and distribute software. E.g. to allow you to build an application for ALP based on a recent Python, while a more conservative user can keep ALP running for a long time without worrying about getting breaking updates. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
![](https://seccdn.libravatar.org/avatar/022a482927e713146ced675bb5399746.jpg?s=120&d=mm&r=g)
On 7/8/22 13:57, Dan Čermák wrote:
Eric Schirra <ecsos@opensuse.org> writes:
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems. The problem lies with the ancient packages; in the case of python3, even dead packages.
Yes, but they exist for a reason: SLES promises strong stability guarantees and updating the system python is due to that a nogo.
..for whatever weird definition of "stable"...
With ALP we would like to reimagine the current model and provide users and developers with different ways how they can build and distribute software. E.g. to also allow you to build an application for ALP based on a recent Python, while a more conservative user can keep ALP running for long times without worrying about getting breaking updates.
Well, it will lead to the same security issues as with unmaintained or strictly version-pinned pip-based installations. Some people have to do the maintenance work. And I also have strong doubts that shifting the problem is less work after all. Ciao, Michael.
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 13:57:47 CEST schrieb Dan Čermák:
Eric Schirra <ecsos@opensuse.org> writes:
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems. The problem lies with the ancient packages; in the case of python3, even dead packages.
Yes, but they exist for a reason: SLES promises strong stability guarantees and updating the system python is due to that a nogo.
Well, I think the alleged stability of SLES is overrated. (I work with both.)
With ALP we would like to reimagine the current model and provide users and developers with different ways how they can build and distribute software. E.g. to also allow you to build an application for ALP based on a recent Python, while a more conservative user can keep ALP running for long times without worrying about getting breaking updates.
I don't think the python problem is solved with containers either; it rather inflates the whole thing even more. It is not a problem of python itself. The real problem with python is the modules. Should there be a separate container for each module? How would that work? What would be easier about it? That seems like even more work, not less. Maybe I just don't understand it. Regards Eric
![](https://seccdn.libravatar.org/avatar/6d252d3b7622f05a8c6b6e241ce13257.jpg?s=120&d=mm&r=g)
Hello, Am 08.07.22 um 15:06 schrieb Eric Schirra:
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems. The problem lies with the ancient packages; in the case of python3, even dead packages.
Yes, but they exist for a reason: SLES promises strong stability guarantees and updating the system python is due to that a nogo.
Well, I think the alleged stability of SLES is overrated. (I work with both.)
This has nothing to do with your personal impression, but with what SUSE sells to customers. The stability is measured.
With ALP we would like to reimagine the current model and provide users and developers with different ways how they can build and distribute software. E.g. to also allow you to build an application for ALP based on a recent Python, while a more conservative user can keep ALP running for long times without worrying about getting breaking updates.
I think the problem concerning python is not solved with container either. It rather inflates the whole thing even more.
For developers and maintainers, sometimes maybe not. But for other maintainers or users it can solve it.
It is not the problem of python itself. The real problem with python are the modules. Should there be a separate container for each module? How should that work? What should be easier about it? That is then rather even more work. Maybe I just don't understand it.
No. You have multiple applications and services running on your hosts, but they have conflicting dependencies. You can untangle that if you compartmentalize them into different isolated settings, so each can work with its preferred version of its dependencies. That is the problem it solves. -- Dennis Knorr, dennis.knorr@suse.com SUSE Software Solutions Germany GmbH Frankenstraße 146, 90461 Nürnberg, Germany Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman (HRB 36809, AG Nürnberg)
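To make the conflicting-dependencies point concrete, here is a minimal sketch using Python virtual environments (the application names and version pins are made-up examples); containers and flatpaks apply the same per-application isolation to the whole stack rather than only to the Python modules:

```sh
# Two hypothetical applications need incompatible versions of the same module.
# Give each one its own isolated environment instead of one shared site-packages:
python3 -m venv ~/sandboxes/app-a
python3 -m venv ~/sandboxes/app-b

~/sandboxes/app-a/bin/pip install 'requests==2.25.1'   # app A is stuck on the old API
~/sandboxes/app-b/bin/pip install 'requests==2.28.1'   # app B needs the newer release

# Each interpreter only sees its own copy, so the versions never clash:
~/sandboxes/app-a/bin/python -c 'import requests; print(requests.__version__)'
~/sandboxes/app-b/bin/python -c 'import requests; print(requests.__version__)'
```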
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 15:17:55 CEST schrieb Dennis Knorr:
Yes, but they exist for a reason: SLES promises strong stability guarantees and updating the system python is due to that a nogo.
Well, I think the alleged stability of SLES is overrated. (I work with both.)
This does have nothing to do with your personal impression but what SUSE sells to customers. The stability is measured.
Whatever the stability is measured with, I don't see any difference. And then consider that you have to pay for SLES... But that is another topic.
It is not the problem of python itself. The real problem with python are the modules. Should there be a separate container for each module? How should that work? What should be easier about it? That is then rather even more work. Maybe I just don't understand it.
No. You have a multiple applications and services running on your hosts, but they have conflicting dependencies. You can untangle that if you compartmentalize them in different isolated settings and each can work with their preferred version of dependencies. That is the problem it solves.
But what is solved in the case of python with the container? If an application A needs 20 python modules, then I have to build those 20 modules at the right versions and include them in the flatpak for A. And for the next python application B the same game again, with 10 modules I already built for A and 10 other new modules, etc. So sorry, according to my understanding this is not less work but much more. Regards Eric
![](https://seccdn.libravatar.org/avatar/ed5b1491aa79201a8eaf93bf57193584.jpg?s=120&d=mm&r=g)
On 7/8/22 09:06, Eric Schirra wrote:
Am Freitag, 8. Juli 2022, 13:57:47 CEST schrieb Dan Čermák:
Eric Schirra <ecsos@opensuse.org> writes:
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems. The problem lies with the ancient packages; in the case of python3, even dead packages.
Yes, but they exist for a reason: SLES promises strong stability guarantees and updating the system python is due to that a nogo.
Well, I think the alleged stability of SLES is overrated. (I work with both.)
With ALP we would like to reimagine the current model and provide users and developers with different ways how they can build and distribute software. E.g. to also allow you to build an application for ALP based on a recent Python, while a more conservative user can keep ALP running for long times without worrying about getting breaking updates.
I think the problem concerning python is not solved with container either. It rather inflates the whole thing even more. It is not the problem of python itself.
Well, to a certain degree it is. Python does not support having multiple versions of a module installed for the same interpreter. Everything ends up in /usr/lib/$INTERPRETER_VERSION/site-packages/$MODULE_NAME as opposed to /usr/lib/$INTERPRETER_VERSION/site-packages/$MODULE_NAME_AND_VERSION. This forces every Python application that runs directly on the system to depend on the same module version. And this is certainly part of the problem.
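A small sketch of that point (the interpreter path and the module name are only examples): a given interpreter has exactly one site-packages entry per module, so a second version of the same module cannot be installed alongside the first, it would land in the very same directory:

```sh
# Where this interpreter resolves third-party modules from:
python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])'
# e.g. /usr/lib/python3.10/site-packages

# The directory is keyed by module name only, not by name and version:
ls /usr/lib/python3.10/site-packages/ | grep -i '^requests'

# Consequently there is exactly one answer per interpreter
# (assuming the module is installed at all):
python3 -c 'import requests; print(requests.__version__)'
```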
The real problem with python are the modules. Should there be a separate container for each module?
The current solution is containers that provide a certain stack and one then does the rest of the development in that container. This approach works for a certain target audience.
How should that work? What should be easier about it? That is then rather even more work. Maybe I just don't understand it.
There are a number of options:
* One could think of playing filesystem tricks, such as with overlayfs, so that each particular application only sees a certain layer. Kind of like containers.
* One can stick the whole stack into a container, similar to what bci (base container image) is doing. This concept can also be realized with a flatpak.
* Of course there is the TW approach: keep moving forward with upstream and integrate.
* Then there could be a strict separation between "system interpreter" and "user interpreter", where everything that is needed to manage the system is in its own path and everything else uses the "user interpreter", which moves forward at some interval, probably still slower than upstream but faster than it moves today.
I think all of this is on the table for discussion. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU Distinguished Engineer LINUX Technical Team Lead Public Cloud rjschwei@suse.com IRC: robjo
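As one concrete illustration of the container option above (the image name, tag and included tooling are assumptions, check registry.suse.com for what is actually published), a newer Python stack can be used without touching the system interpreter:

```sh
# Run a throwaway container with its own Python stack; the host's
# system python and its site-packages stay untouched:
podman run --rm registry.suse.com/bci/python:3.10 \
    python3 -c 'import sys; print(sys.version)'

# Dependencies installed inside the container do not leak onto the host either:
podman run --rm registry.suse.com/bci/python:3.10 \
    sh -c 'pip install requests && python3 -c "import requests; print(requests.__version__)"'
```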
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 15:54:15 CEST schrieb Robert Schweikert:
I think the problem concerning python is not solved with container either. It rather inflates the whole thing even more. It is not the problem of python itself.
Well to a certain degree it is. Python does not support having multiple versions of a module installed for the same interpreter. Everything ends up in /usr/lib/$INTERPRETER_VERSION/site-packages/$MODULE_NAME as opposed to /usr/lib/$INTERPRETER_VERSION/site-packages/$MODULE_NAME_AND_VERSION. This forces every Python application that runs directly on the system to have the same version dependency. And this is certainly part of the problem.
For me, that's not the problem. And it is not quite true either, because in Tumbleweed there are also several versions of a module; by multiple versions I don't mean the version of the module itself, but the Python version it is built for, currently 3.8, 3.9 and 3.10. I miss that in Leap: there are only modules for 3.6. It can be done, so why not also modules for 3.7, 3.8, 3.9 and 3.10? Then everyone could use the python version he wants and thus run several programs in their latest versions. Python 3.7 is usually a prerequisite by now.
The real problem with python are the modules. Should there be a separate container for each module?
The current solution is containers that provide a certain stack and one then does the rest of the development in that container. This approach works for a certain target audience.
How should that work? What should be easier about it? That is then rather even more work. Maybe I just don't understand it.
There are a number of options, one could think of playing filesystem tricks such as with overlayfs and that each particular application only sees a certain layer. Kind of like containers.
One can stick the whole stack into a container, similar to what bci (base container image) is doing. This concept can also be realized with a flatpak
Of course there is the TW approach, keep moving forward with upstream and integrate.
Then there could be a strict separation between "system interpreter" and "user interpreter" where everything that is needed to manage the system is in it's own path and everything else uses the "user interpreter" and that moves forward at some interval, probably still slower than upstream but faster than it moves today.
So I see absolutely no solution for the problem addressed, in this example python. And certainly not less work, quite the opposite: for me it turns out to be significantly more work. If I then want to reuse modules in another program, I have to package and build them again, and that for each program which uses the module. Over and over again. Regards Eric
![](https://seccdn.libravatar.org/avatar/e6dc8afd12f42302ae7b5ea72e4dd686.jpg?s=120&d=mm&r=g)
Eric Schirra <ecsos@opensuse.org> writes:
Am Freitag, 8. Juli 2022, 15:54:15 CEST schrieb Robert Schweikert:
I think the problem concerning python is not solved with container either. It rather inflates the whole thing even more. It is not the problem of python itself.
Well to a certain degree it is. Python does not support having multiple versions of a module installed for the same interpreter. Everything ends up in /usr/lib/$INTERPRETER_VERSION/site-packages/$MODULE_NAME as opposed to /usr/lib/$INTERPRETER_VERSION/site-packages/$MODULE_NAME_AND_VERSION. This forces every Python application that runs directly on the system to have the same version dependency. And this is certainly part of the problem.
For me, that's not the problem. And it is not quite true either, because in Tumbleweed there are also several versions of a module; by multiple versions I don't mean the version of the module itself, but the Python version it is built for, currently 3.8, 3.9 and 3.10. I miss that in Leap: there are only modules for 3.6. It can be done, so why not also modules for 3.7, 3.8, 3.9 and 3.10? Then everyone could use the python version he wants and thus run several programs in their latest versions. Python 3.7 is usually a prerequisite by now.
The problem is that someone has to put in the work to make this happen. You are more than welcome to submit and maintain all the python modules for python 3.7, 3.8, 3.9, 3.10 and soon 3.11. But that's a lot of effort especially if you try to adhere to the stability guarantees of Leap (i.e. **no** breaking changes). And with SLE it's even worse, because the maintenance periods are longer and the stability requirements are even stricter.
The real problem with python are the modules. Should there be a separate container for each module?
The current solution is containers that provide a certain stack and one then does the rest of the development in that container. This approach works for a certain target audience.
How should that work? What should be easier about it? That is then rather even more work. Maybe I just don't understand it.
There are a number of options, one could think of playing filesystem tricks such as with overlayfs and that each particular application only sees a certain layer. Kind of like containers.
One can stick the whole stack into a container, similar to what bci (base container image) is doing. This concept can also be realized with a flatpak
Of course there is the TW approach, keep moving forward with upstream and integrate.
Then there could be a strict separation between "system interpreter" and "user interpreter" where everything that is needed to manage the system is in it's own path and everything else uses the "user interpreter" and that moves forward at some interval, probably still slower than upstream but faster than it moves today.
So I see absolutely no solution for the problem addressed, in this example python. And certainly not less work, quite the opposite: for me it turns out to be significantly more work. If I then want to reuse modules in another program, I have to package and build them again, and that for each program which uses the module. Over and over again.
No, you wouldn't have to repackage the python modules yourself again and again. You could simply link them from Tumbleweed/Factory or a devel project into your project and build the flatpak from that. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
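For the linking part, a rough sketch with osc (the project and package names are placeholders, and the flatpak build configuration itself is left out, since those details in OBS may differ):

```sh
# Link the Factory packages you need into your own project, so they are
# rebuilt there without you having to re-package them yourself:
osc linkpac openSUSE:Factory python-requests home:myuser:my-flatpak-app
osc linkpac openSUSE:Factory python-urllib3  home:myuser:my-flatpak-app

# Your application package then lives in the same project and the flatpak
# is assembled from the combined rpms built there:
osc checkout home:myuser:my-flatpak-app my-application
```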
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Montag, 11. Juli 2022, 10:13:55 CEST schrieb Dan Čermák:
python. And certainly not less work. Quite the opposite. For me, it turns out to be significantly more work. Which, if I then want to reuse different modules in another program, I have to package and build again. And that for each program which uses this module. Over and over again.
No, you wouldn't have to repackage the python modules yourself again and again. You could simply link them from Tumbleweed/Factory or a devel project into your project and build the flatpak from that.
Okay. I'll give that a try when there's hopefully some documentation on it sometime. At the moment I'm having a bit of a hard time with it and it's all still a bit too theoretical for me.
![](https://seccdn.libravatar.org/avatar/c0cfd5b7246be604feb77f43a3c9626f.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 13:03:21 CEST schrieb Eric Schirra:
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems.
Actually, I was challenging this in https://lists.opensuse.org/archives/list/factory@lists.opensuse.org/thread/7KXDUYD3UKUSG3XDN6QEH7DP3UIMJK4K/ for the next Leap version. Cheers Axel
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 14:30:53 CEST schrieb Axel Braun:
Am Freitag, 8. Juli 2022, 13:03:21 CEST schrieb Eric Schirra:
Am Freitag, 8. Juli 2022, 11:53:23 CEST schrieb Dan Čermák:
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems.
Actually, I was challenging this in https://lists.opensuse.org/archives/list/factory@lists.opensuse.org/thread/ 7KXDUYD3UKUSG3XDN6QEH7DP3UIMJK4K/ for the net Leap version
The sentence structure is a bit strange. What are you trying to tell me? Regards Eric
![](https://seccdn.libravatar.org/avatar/c0cfd5b7246be604feb77f43a3c9626f.jpg?s=120&d=mm&r=g)
Am Freitag, 8. Juli 2022, 15:10:38 CEST schrieb Eric Schirra:
If packages (ruby, python3 etc.) are too old because of SLES, perhaps one should question exactly this approach instead of generating new processes, which in my opinion only create new problems.
Actually, I was challenging this in https://lists.opensuse.org/archives/list/factory@lists.opensuse.org/thread / 7KXDUYD3UKUSG3XDN6QEH7DP3UIMJK4K/ for the net Leap version
The sentence structure is a bit strange. What are you trying to tell me?
What structure is strange? The line break in the link? It points to 'the challenging of the approach' (not for ALP, but for Leap). Clear now? Axel
![](https://seccdn.libravatar.org/avatar/ed5b1491aa79201a8eaf93bf57193584.jpg?s=120&d=mm&r=g)
On 7/8/22 05:53, Dan Čermák wrote:
Robert Schweikert <rjschwei@suse.com> writes:
<snip>
Except that we can deliver the same flatpak for ALP, openSUSE ALP and Tumbleweed users without any modification. So imho there is a net work reduction.
I do not share that opinion as I think the maintenance of X versions of the same package equals or outweighs the gains by shipping the same bundle to multiple distros. And Wolfgang also pointed out that in his opinion the maintenance of a number of %if-else statements in spec files is being overblown.
Well, that is Wolfgang's experience. Mine is different. I tried to maintain RStudio in Factory and Leap at the same time and that turned into a nightmare. RStudio would really only keep building on Tumbleweed as every new release bumped the required boost version, which made it impossible to build RStudio on Leap. A second boost version would also not have helped here, unless we linked statically. And we cannot update boost in Leap, as it is frozen due to SLE...
And then I have vagrant. Vagrant stopped building for Leap a while ago, because our system Ruby (once again frozen thanks to SLE) is too old and the dependent gems are stopping to support Ruby 2.5. I could start patching stuff, but even vagrant's extensive test suite is not able to catch all the quirks which occur at runtime when interacting with various virtualization software.
Long story short, while Wolfgang can maintain his spec for all Leaps and Tumbleweed, I cannot. If I could ship vagrant as a container (which would be challenging due to virtualization being involved), then I would provide a net benefit to our users, because they *could* actually install and use it on Leap, whereas nowadays they cannot.
That, to a certain degree, pushes SLE-specific problems, which by the way we in the Public Cloud team often face as well, to areas where they may or may not belong. SLE is slow; you mention Ruby and boost as examples and I can easily add to that list. And in this case having tools that provide sufficient isolation, such that certain parts of the product can move independently from the base system, is certainly worth the cost to SUSE. In that respect I think we agree. But that is a SLE & SUSE problem, not a TW problem. The "it's really slow" problem is rearing its ugly head for Leap because SUSE was the primary driver behind the extremely tight coupling of Leap to SLE. This brings us back to the trade-offs that are necessary. Flatpak, containers (your favorite other isolation technique) provide benefits in a certain context. In another context they are mostly overhead. And this leads me back to what I stated previously: we have to untangle the delivery method (flatpak, container, rpm) from the larger conversation that provides the context for the delivery method. Isolation carries a certain cost and integration (our current model) carries a certain cost. I doubt that we could come up with any hard data that would show the cost of one model to be significantly less than the cost of the other. As such this is mostly guesswork, and anybody's guess is as good as the next person's. This is where the context comes into play. If we want to build something where integration is at a minimum, maybe a core distribution with only a couple hundred packages, then isolation techniques where we potentially "just" pull isolated units from some upstream are certainly the way to go. If integration work as we know it today mostly continues and we have an integrated whole, then the isolation stuff is mostly overhead. And yes, there are many deeper questions attached to this as well; as you already pointed out, one of the questions certainly is: what value do we provide if we touch it? Meaning, what value is added if we, as a community or company (SUSE), touch the code only to recast it into an rpm? This can easily lead me to writing a novel, as such I will stop here. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU Distinguished Engineer LINUX Technical Team Lead Public Cloud rjschwei@suse.com IRC: robjo
![](https://seccdn.libravatar.org/avatar/e6dc8afd12f42302ae7b5ea72e4dd686.jpg?s=120&d=mm&r=g)
Predrag Ivanović <predivan@mts.rs> writes:
On Fri, 08 Jul 2022 08:26:01 -0400 Robert Schweikert wrote:
This can easily lead me to writing a novel, as such I will stop here.
Blog post(s)? (hint, hint) :)
I wrote a blog post about the value add topic a while ago (not touching the packaging format): https://dancermak.name/value_add_of_enterprise_distribution_distributor_in_t... Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
![](https://seccdn.libravatar.org/avatar/dc218decb0acde2abf2367960fea5098.jpg?s=120&d=mm&r=g)
Am Donnerstag, 7. Juli 2022, 04:43:07 CEST schrieb Simon Lees:
On 7/6/22 20:20, Dan Čermák wrote:
Jan Engelhardt <jengelh@inai.de> writes:
On Wednesday 2022-07-06 11:48, Dan Čermák wrote:
1) What are the advantages of flatpak/container vs RPM?
Flatpaks support sandboxing when configured properly giving you greater security benefits in comparison to traditional rpms.
But that is not inherent to the flatpaks themselves. As you say, flatpaks are but a different method of _distributing_ software. And so the critique becomes: stop distributing software twice.
That's what will happen. Currently we distribute software for every code stream and with flatpaks we hope to reduce the burden on our maintainers and allow them to distribute it only once.
This is simply not true; let me take an extreme example for fun. If you look at the package setserial you will see that we ship the exact same binary for all of SLE-15 and its service packs, and you'll also see the sources are identical for Tumbleweed (the package hasn't changed in 8 years).
Also, looking at a more relevant recent example of something that would be shipped as a flatpak under your model, the Terminology terminal emulator: for Leap 15.4 I just copied the Tumbleweed sources across and everything was fine, and I would have done the same the last time it was updated in 15.2; this equates to about 10 minutes of effort each time. But under this new model you're suggesting that as well as building an RPM for Tumbleweed I also need to build a flatpak? How much effort do you estimate this will take? Because you're trying to make it sound like as a packager this will be less effort for me, when it certainly sounds like more.
These are just 2 simple examples; in other packages I maintain, such as dbus and cmake, there is always careful consideration of whether we actually need a new version for this stream or whether we can continue sharing the old ones.
There is nothing to add to this. My fullest agreement. Regards Eric
![](https://seccdn.libravatar.org/avatar/6efcef3d9748aadcb5e18e655e007079.jpg?s=120&d=mm&r=g)
On Wed, 06 Jul 2022 11:48:07 +0200, Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
6) Can configuration still be expected at the usual places ($HOME/.config/ : /etc : /usr/etc/)?
That depends on the flatpak, but generally flatpaks put their configuration into ~/.var/app/
How does this work? If different apps are built against different flatpak runtimes, which have different versions of Gnome, how does your Gnome configuration work for both versions? And isn't there just one Gnome version running? The two apps have to co-exist on the same desktop. I don't understand this. -- Robert Webb
![](https://seccdn.libravatar.org/avatar/e6dc8afd12f42302ae7b5ea72e4dd686.jpg?s=120&d=mm&r=g)
Robert Webb <webbdg@verizon.net> writes:
On Wed, 06 Jul 2022 11:48:07 +0200, Dan Čermák wrote:
Lars Vogdt <lars@linux-schulserver.de> writes:
6) Can configuration still be expected at the usual places ($HOME/.config/ : /etc : /usr/etc/)?
That depends on the flatpak, but generally flatpaks put their configuration into ~/.var/app/
How does this work? If different apps are built against different flatpak runtimes, which have different versions of Gnome, how does your Gnome configuration work for both versions?
Your configs are stored in a different folder for each flatpak.
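A short illustration of that layout (the application IDs are just examples): every flatpak gets its own directory tree under ~/.var/app/, which is also why a flatpak does not see your usual ~/.config files unless it is explicitly granted access:

```sh
# Each flatpak keeps its state under ~/.var/app/<application-id>/:
ls ~/.var/app/org.mozilla.firefox/
# cache/  config/  data/   (typical layout; contents vary per application)

# A second flatpak has its own, separate tree:
ls ~/.var/app/org.libreoffice.LibreOffice/config/
```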
And isn't there just one Gnome version running? The two apps have to co-exist on the same desktop. I don't understand this.
That really depends on what you define as GNOME. You could theoretically launch gnome shell at version 42 and run eog from GNOME 41, if the gnome shell were built against a newer version of GNOME. But this is already possible today, if you mess up your system really badly. Cheers, Dan -- Dan Čermák <dcermak@suse.com> Software Engineer Development tools SUSE Software Solutions Germany GmbH Frankenstrasse 146 90461 Nürnberg Germany (HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
![](https://seccdn.libravatar.org/avatar/a4139df10120ce151e457fd1faff018d.jpg?s=120&d=mm&r=g)
On 7/6/22 19:18, Dan Čermák wrote:
Hi Lars,
Lars Vogdt <lars@linux-schulserver.de> writes:
Am 5. Juli 2022 15:46:29 UTC schrieb "Dan Čermák" <dcermak@suse.de>:
tl;dr; Fear not, you will not be forced to download random flatpaks or containers from dockerhub/flathub. We (probably) just change the deliver method, but the packaging workflow will stay the same.
5) Will Yast-Software manage all these different kind of formats (container, flatpak, rpm) transparently for the enduser?
I don't know what the plans of the Yast team wrt flatpaks are, but GNOME Software center supports flatpaks and rpms seamlessly.
Does this mean that an acceptable solution to this problem is to tell all desktop users that they should use Gnome Software rather than zypper up to update their systems? If so, do we have plans to ensure that Gnome Software can run with a minimal set of dependencies? For example, practically every desktop user is going to have libgtk installed for something (probably also some part of the Qt stack), so requiring libgtk for desktop users to update their system isn't a huge issue. On the other hand, plenty of users in the community will be very concerned that using gnome-software would require them to also install all of gnome shell, so the wording here would need to be very careful. For a large number of the people I talk to when looking after some of our "lighter" desktops, RAM (which has been brought up in this thread) isn't the big issue. The main thing that motivates a lot of people to keep smaller lists of installed packages is slow / limited internet speeds and connections, which isn't really something we have to think about in Europe, the US or Australia anymore, but is a significant issue for a large number of our users in other parts of the world. -- Simon Lees (Simotek) http://simotek.net Emergency Update Team keybase.io/simotek SUSE Linux Adelaide Australia, UTC+10:30 GPG Fingerprint: 5B87 DB9D 88DC F606 E489 CEC5 0922 C246 02F0 014B
participants (16)
- Adam Majer
- Axel Braun
- Dan Čermák
- Dan Čermák
- Dennis Knorr
- Eric Schirra
- Frederic Crozat
- Jan Engelhardt
- Lars Vogdt
- Michael Ströder
- Peter Suetterlin
- Predrag Ivanović
- Robert Schweikert
- Robert Webb
- Simon Lees
- Wolfgang Rosenauer