YaST in a YaST-less system
The YaST Team has been recently playing with a fun concept. What about running YaST inside a container to manage a system without having YaST or any of its dependencies installed? Turns out it works and we plan to go further!

Learn more at https://yast.opensuse.org/blog/2022-06-13/yastless-yast

Cheers.

--
Ancor González Sosa
YaST Team at SUSE Linux GmbH
On Monday 2022-06-13 18:16, Ancor Gonzalez Sosa wrote:
The YaST Team has been recently playing with a fun concept. What about running YaST inside a container to manage a system without having YaST or any of its dependencies installed? Turns out it works and we plan to go further!
Learn more at https://yast.opensuse.org/blog/2022-06-13/yastless-yast [..]
That tool only depends on Podman (or alternatively Docker) [...]
# zypper in yast
The following 26 NEW packages are going to be installed:
  augeas augeas-lenses libyui-ncurses16 libyui16 ruby ruby3.1-rubygem-abstract_method
  ruby3.1-rubygem-cfa ruby3.1-rubygem-cheetah ruby3.1-rubygem-fast_gettext
  ruby3.1-rubygem-nokogiri ruby3.1-rubygem-ruby-augeas ruby3.1-rubygem-simpleidn
  ruby3.1-rubygem-unf ruby3.1-rubygem-unf_ext sysconfig sysconfig-netconfig
  wicked wicked-service yast2 yast2-core yast2-hardware-detection yast2-logs
  yast2-perl-bindings yast2-pkg-bindings yast2-ruby-bindings yast2-ycp-ui-bindings
26 new packages to install. Overall download size: 11.5 MiB. Already cached: 0 B.
After the operation, additional 27.6 MiB will be used.
^C

# zypper in podman
The following 9 NEW packages are going to be installed:
  catatonit cni cni-plugins conmon fuse-overlayfs libcontainers-common podman
  runc slirp4netns
9 new packages to install. Overall download size: 25.0 MiB. Already cached: 0 B.
After the operation, additional 113.9 MiB will be used.
^C

So instead of having to install yast2 (zypper tells me 11 MB), I now have to install podman (25 MB).
those new commands grab YaST and run it in a container that will be transparently used to administer the host system.
It's logically backwards that one is supposed to run Y2 in a container, only to have it go outside again. Containers are not supposed to affect the host system. So you have to widen the namespace to the point that it _is_ the host system, mounts included, at which point you could have just installed Y2.
On 13.06.2022 19:45, Jan Engelhardt wrote:
So instead of having to install yast2 (zypper tells me 11 MB), I now have to install podman (25 MB).
But if you already have podman but not yast ...
those new commands grab YaST and run it in a container that will be transparently used to administer the host system.
It's logically backwards that one is supposed to run Y2 in a container, only to have it go outside again. Containers are not supposed to affect the host system.
Ever heard of the toolbox container for SUSE MicroOS? What about the firewalld container? What about VPN containers? And this list can continue ... Containers can be used just for application delivery; they are not restricted to application isolation.
So you have to widen the namespace to the point that it _is_ the host system, mounts included, at which point you could have just installed Y2.
Maybe not often, but for a one-off task this could be quite handy. Or on a read-only root filesystem where you simply cannot install anything ...
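For illustration, the general pattern such host-management containers use (the MicroOS toolbox being the best-known example) looks roughly like the sketch below. The image name is purely illustrative, and the exact set of flags varies from tool to tool:

```shell
# Run a container that deliberately shares the host's namespaces so that
# tools inside it can administer the host.
# (registry.example.org/tools/admin-toolbox is a hypothetical image name.)
podman run --rm -it \
  --privileged \
  --net=host --pid=host --ipc=host \
  -v /:/media/root \
  registry.example.org/tools/admin-toolbox \
  /bin/bash

# Inside the container the host filesystem is reachable under /media/root,
# e.g. via:  chroot /media/root
```

This is exactly the "widen the namespace until it is the host" pattern discussed in this thread: the container provides the software stack, while the host provides the thing being managed.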
Dne 13. 06. 22 v 18:45 Jan Engelhardt napsal(a):
So instead of having to install yast2 (zypper tells me 11 MB), I now have to install podman (25 MB).
Of course, using containers is not free; it needs a container runtime. It also depends on what you already have installed on your system. If you already have Ruby (e.g. because you develop a Ruby on Rails app) then the effective YaST size would even go down. Similarly with Podman. But it seems that the future is in containerization of apps, so Podman (or a similar tool) will very likely be installed anyway.
It's logically backwards that one is supposed to run Y2 in a container, only to have it go outside again. Containers are not supposed to affect the host system.
It depends on what you want to do. Usually you want a container to work in an isolated environment. But running YaST in an isolated container (without host system interaction) is pointless. And there are already other tools which run in a container but affect the host system, e.g. Portainer (https://www.portainer.io/), a Docker manager running inside a Docker container that manages the host Docker instance. So this approach is not as unusual as it might seem at first sight.
So you have to widen the namespace to the point that it _is_ the host system, mounts included, at which point you could have just installed Y2.
Um, yes. But the point is that the YaST dependencies cannot collide with the system. SLES still ships with Ruby 2.5; in theory the YaST container could use the newer Ruby 3.1 and we could use some new Ruby features there. And that applies to any other library used.

Also the host system could contain even fewer packages. For example, you do not need zypper/libzypp in the host system; even rpm (!) itself is not required if you run the YaST package manager in a container. (I have tested this for real!)

Another advantage is that you can get rid of YaST with a simple "docker/podman rmi" command and be sure it won't break any system dependencies or uninstall something you actually need (like Ruby for your Ruby on Rails app).

Containerization has both pros and cons. But we think the pros will become more important in the future and the cons will be just minor issues or disappear completely...

--
Ladislav Slezák
YaST Developer
SUSE LINUX, s.r.o.
Corso IIa, Křižíkova 148/34, 18600 Praha 8
On 6/13/2022 12:24, Ladislav Slezák wrote:
Also the host system could contain even less packages, for example you do not need zypper/libzypp in the host system, even rpm (!) itself is not required if you run the YaST package manager in a container. (I have tested this in real!)
The system doesn't need packages anymore! Just a bunch of containers which...have all those packages. What problem is this specific "feature" trying to solve? (What's that RFC about indirection?) -- Jason Craig
On Tuesday, 14 June 2022 8:19:44 AM ACST Jason Craig wrote:
On 6/13/2022 12:24, Ladislav Slezák wrote:
Also the host system could contain even less packages, for example you do not need zypper/libzypp in the host system, even rpm (!) itself is not required if you run the YaST package manager in a container. (I have tested this in real!)
The system doesn't need packages anymore! Just a bunch of containers which...have all those packages. What problem is this specific "feature" trying to solve? (What's that RFC about indirection?)
-- Jason Craig
RFC1925 - Fundamental Truths of Networking

(6) It is easier to move a problem around (for example, by moving the problem to a different part of the overall network) than it is to solve it.

(6a) (Corollary). It is always possible to add another level of indirection.

:)

--
Rodney Baker
rodney.baker@iinet.net.au
Hello, On 2022-06-14 06:52, Rodney Baker wrote:
On Tuesday, 14 June 2022 8:19:44 AM ACST Jason Craig wrote:
On 6/13/2022 12:24, Ladislav Slezák wrote:
Also the host system could contain even less packages, for example you do not need zypper/libzypp in the host system, even rpm (!) itself is not required if you run the YaST package manager in a container. (I have tested this in real!)
The system doesn't need packages anymore! Just a bunch of containers which...have all those packages. What problem is this specific "feature" trying to solve? (What's that RFC about indirection?)
RFC1925 - Fundamental Truths of Networking
(6) It is easier to move a problem around (for example, by moving the problem to a different part of the overall network) than it is to solve it.
(6a) (Corollary). It is always possible to add another level of indirection.
RFC1925 item (5): "It is always possible to aglutenate [sic] multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea."

So containers intend to solve RFC1925 item (5) for the price of RFC1925 items (6) and (6a). And ALP intends to solve RFC1925 item (10): "One size never fits all."

Kind Regards
Johannes Meixner
--
SUSE Software Solutions Germany GmbH
Frankenstr. 146 - 90461 Nuernberg - Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman (HRB 36809, AG Nuernberg)
Le lundi 13 juin 2022 à 16:49 -0600, Jason Craig a écrit :
On 6/13/2022 12:24, Ladislav Slezák wrote:
Also the host system could contain even less packages, for example you do not need zypper/libzypp in the host system, even rpm (!) itself is not required if you run the YaST package manager in a container. (I have tested this in real!)
The system doesn't need packages anymore! Just a bunch of containers which...have all those packages. What problem is this specific "feature" trying to solve? (What's that RFC about indirection?)
This is part of the research work being done around ALP. -- Frederic CROZAT Enterprise Linux OS and Containers Architect SUSE
On Mon, 2022-06-13 at 16:49 -0600, Jason Craig wrote:
On 6/13/2022 12:24, Ladislav Slezák wrote:
Also the host system could contain even less packages, for example you do not need zypper/libzypp in the host system, even rpm (!) itself is not required if you run the YaST package manager in a container. (I have tested this in real!)
The system doesn't need packages anymore! Just a bunch of containers which...have all those packages. What problem is this specific "feature" trying to solve? (What's that RFC about indirection?)
Some problems this is trying to solve:

* Avoidance of the "one system version of X" issue. Traditional Linux design only really allows one system version of stacks like glibc, Python or Ruby, which everything needs to link to. This can mean stacks like YaST get held back by other important things we need to keep working, or vice versa, we need to risk breaking things people are using in order to move YaST forward. Containerising YaST enables us to have as few of those language stacks as possible in the base and lets us ensure we only ship YaST with a configuration we know YaST was built to work with.

* Avoidance of the "you must upgrade the whole system" issue. Related to the first, we can only really support your systems when you upgrade the whole system to the levels we intend, be that either the latest Tumbleweed snapshot or Leap with full patches. Partial updates leave the system in an unknown, unclear-if-it-will-work state. But a containerised YaST can be pinned to whatever version you like, and you can keep using that version independent of what's going on with the base system, and vice versa.

* Avoidance of the "everything is installed on the system as root" issue. RPMs, when installed, effectively execute as root, not only installing the binaries where we tell them to, but running whatever scripts we like to touch whatever we want on your system. As awesome as we are, there is always a chance of an RPM doing horrifically bad things to your host. However, installing the same RPM in a container and then delivering that means zero risk to your host: whatever ugly nonsense happens during the container build stays there, and you just get a nice clean container that runs with a degree of limited access to the host anyhow.

There are more issues that containerising everything solves, but those are the 3 I'm most passionate about and I think are most relevant for something like containerised YaST.
Regards, -- Richard Brown Linux Distribution Engineer - Future Technology Team SUSE Software Solutions Germany GmbH, Frankenstraße 146, D-90461 Nuremberg, Germany (HRB 36809, AG Nürnberg) Managing Directors/Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Martje Boudien Moerman
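The version-pinning argument above can be sketched with plain Podman commands. The registry path and tags here are illustrative placeholders, not the actual YaST image names:

```shell
# Pull and run one specific, pinned version of a containerized management
# tool. (Image path and tags are hypothetical examples.)
podman pull registry.example.org/tools/yast-mgmt:15.4
podman run --rm -it registry.example.org/tools/yast-mgmt:15.4

# The base system can be upgraded independently; moving the tool forward
# is a separate, explicit step:
podman pull registry.example.org/tools/yast-mgmt:latest
```

The design point is that the tag, not the state of the host's package database, decides which stack (Ruby version, libraries) the tool runs against.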
Am 14.06.22 um 10:55 schrieb Richard Brown:
Some problems this is trying to solve:
* Avoidance of the "one system version of X" issue.
In theory that sounds great, but we have seen this in practice for quite some time now. What it ends up like is that projects integrate some library in its current version at some point, then never update it. That's not hyperbole, I know of an up-to-date package (latest release in 2022) in Factory that bundles a library last updated in 2011. Because why not? (I'm not even being ironic, there is simply no reason to update anything anymore. It just costs time.)
Containerising YaST enables us to have as few of those language stacks as possible in the base and lets us ensure we only ship YaST with a configuration we know YaST was built to work with.
In my view this encourages bad habits: if you can't pin dependencies, you have to read the documentation and rely on documented behavior only. What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions (it could even break on recompilation) and perhaps doesn't even work reliably.

Compiling or running software against different stacks often uncovers real issues. In my area we regularly use multiple compilers precisely because they usually don't all have the same quirks.

If some software needs precisely defined dependencies, it's likely just very brittle, and the bugs it has may simply not materialize under some limited circumstances that maybe even a container can't reliably reproduce. (And even if it did, how would you know? Testing isn't exhaustive.)

Lastly, we do (via openQA) actually test the full stack, do we not? So it seems like this solves a problem that we have already solved differently.
* Avoidance of the "You must upgrade the whole system" issue.
I can't remember ever wanting not to do that. It also seems like we're replacing "you must" with "you cannot". If some package uses some very old library, then that's not going to change. Any features that you'd like to use that would require a newer version? Good luck, you have to build the entire dependency tree yourself.
* Avoidance of the "everything is installed on the system as root" issue. RPMs when installed effectively execute as root, not only installing the binaries where we tell them to, but running whatever scripts we like to touch whatever we want on your system.
Bugs can hide everywhere, but those scripts are generally so small compared to the software packages they help install that they're not my main source of worry.

There are certainly advantages to containers, and I fully understand why e.g. companies might want to use them to allow their officially supported software to run on different distributions. But containerizing the distribution itself? I don't see the point. Does the dependency tree of our 15,000 packages really allow for a neat division into independent containers, and how much duplication or proliferation of different versions would it entail? How do we handle plugins and interchangeable implementations? Allowing to mix and match seems fundamentally incompatible with the idea of monolithic, isolated, immutable containers, but is an essential feature of a distribution.

Containers tie together several unrelated notions that make sense in a great many circumstances, but certainly not all. "If all you have is a hammer, everything looks like a nail." Containers are our hammer now, and we'll be damned if not everything in this world can be nailed into one.

Aaron
On Wednesday 2022-06-15 02:51, Aaron Puchert wrote:
Am 14.06.22 um 10:55 schrieb Richard Brown:
Some problems this is trying to solve: Avoidance of the "one system version of X" issue.
[…] sounds great […does not work].
Containerising YaST enables us to have as few of those language stacks as possible in the base and lets us ensure we only ship YaST with a configuration we know YaST was built to work with.
[…]this encourages bad habits […] What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions […] Compiling or running software against different stacks often uncovers real issues. […] If some software needs precisely defined dependencies, it's likely just very brittle and the bugs […]
Reject agile development, return to normal hacking. The value proposition of Linux distributions used to be the integration of components with one another (usually homing in on "one system version of X"). Over the years, the notion has developed that distros are merely "repackaging efforts". Going "all-container-y" strengthens that idea, and the original value proposition is watered down.
* Avoidance of the "everything is installed on the system as root" issue. RPMs when installed effectively execute as root, not only installing the binaries where we tell them to, but running whatever scripts we like to touch whatever we want on your system.
How many post scriptlets are really needed? Of 11k %post, 4k are just for ldconfig alone -- the rest is likely custom shelling that's not immediately line-grepable but barely more than {ldconfig, update-alternatives, fc-cache, things like that} either. We ought to put a focus on using more file triggers.
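As an illustration of the file-trigger approach, the per-package ldconfig scriptlets can be collapsed into a single trigger owned by one package. The fragment below is a sketch using standard RPM file-trigger syntax (available since rpm 4.13), not an actual openSUSE packaging policy:

```spec
# Instead of every library package carrying its own scriptlet:
#   %post -p /sbin/ldconfig
# one package (e.g. glibc) declares a transaction file trigger that runs
# ldconfig once per transaction when any file under the library paths changes:
%transfiletriggerin -- /usr/lib64 /usr/lib
/sbin/ldconfig
```

This moves the scriptlet from 4k packages into one place, and it runs once per transaction instead of once per package.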
On Wed, Jun 15, 2022 at 09:11:34AM +0200, Jan Engelhardt wrote:
On Wednesday 2022-06-15 02:51, Aaron Puchert wrote:
Am 14.06.22 um 10:55 schrieb Richard Brown:
Some problems this is trying to solve: Avoidance of the "one system version of X" issue.
[…] sounds great […does not work].
Containerising YaST enables us to have as few of those language stacks as possible in the base and lets us ensure we only ship YaST with a configuration we know YaST was built to work with.
[…]this encourages bad habits […] What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions […] Compiling or running software against different stacks often uncovers real issues. […] If some software needs precisely defined dependencies, it's likely just very brittle and the bugs […]
Reject agile development, return to normal hacking.
Agile development is great sometimes for some things and bad at other times for other things. Wasn't Linux initially about choice?
* Avoidance of the "everything is installed on the system as root" issue. RPMs when installed effectively execute as root, not only installing the binaries where we tell them to, but running whatever scripts we like to touch whatever we want on your system.
How many post scriptlets are really needed? Of 11k %post, 4k are just for ldconfig alone -- the rest is likely custom shelling that's not immediately line-grepable but barely more than {ldconfig, update-alternatives, fc-cache, things like that} either. We ought to put a focus on using more file triggers.
Or maybe we should document rpm so people can use its features with confidence?

Wait, there isn't any actual rpm specification. Some random blog posts and guides on how to do something, sure. But a spec file grammar? The exact meaning of the standard rpm tags? Nope. And we aren't the upstream, so we can't define what is the API and what is just a property of the current implementation and subject to change. In fact, many things do change from version to version, and some of the changes are quite unexpected and break things.

Maybe if the very base of our distribution packaging is something that cannot be relied on, then switching to containers makes a lot of sense - our distribution has the brittleness that containers work around built inherently into its very base.

Thanks

Michal
On Wednesday 2022-06-15 11:14, Michal Suchánek wrote:
Wasn't Linux initially about choice?
If it ever was (see http://www.islinuxaboutchoice.com/ for *that* discussion), then it was certainly less so in the past. There are more packages shipped than ever before - and I don't mean that because distros count texlive as 6000 these days rather than just 1.
Or maybe we should document rpm so people can use its features with confidence?
We certainly try on wiki.opensuse.org. Now that you mention it, perhaps we should evaluate whether rpm-on-git's wiki could be enabled.
Hello, On 2022-06-15 14:26, Jan Engelhardt wrote:
On Wednesday 2022-06-15 11:14, Michal Suchánek wrote:
Wasn't Linux initially about choice?
If it ever was (see http://www.islinuxaboutchoice.com/ for *that* discussion)
https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy

Kind Regards
Johannes Meixner
On 6/15/22 02:51, Aaron Puchert wrote:
Am 14.06.22 um 10:55 schrieb Richard Brown:
[...]
Containerising YaST enables us to have as few of those language stacks as possible in the base and lets us ensure we only ship YaST with a configuration we know YaST was built to work with.
In my view this encourages bad habits: if you can't pin dependencies you have to read the documentation and rely on documented behavior only. What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions (could even break with recompilation) and perhaps doesn't even work reliably.
Compiling or running software against different stacks often uncovers real issues. In my area we regularly use multiple compilers precisely because they usually don't all have the same quirks.
If some software needs precisely defined dependencies, it's likely just very brittle and the bugs it has may just not materialize under some limited circumstances that maybe even a container can't reliably reproduce. (And even if, how would you know? Testing isn't exhaustive.)
Let me reiterate something: the containerized YaST is only a cherry on top of the regular YaST offering. We will still develop and ship YaST in its current form, fully integrated into the system.

The very same code-base of YaST currently runs on SLES 15 SP4, Leap 15.4 (both including Ruby 2.5) and Tumbleweed (Ruby 3.1). That code-base has been adapted to all the Ruby versions (and other libraries) that have passed through Tumbleweed since the 15.4 release of Leap (basically every single major version of Ruby from 2.5 to 3.1).

This is not a way to take shortcuts in YaST development, nor to reduce the list of scenarios in which we care about integration.

Cheers.

--
Ancor González Sosa
YaST Team at SUSE Software Solutions
Am 15.06.22 um 09:49 schrieb Ancor Gonzalez Sosa:
On 6/15/22 02:51, Aaron Puchert wrote:
Am 14.06.22 um 10:55 schrieb Richard Brown:
[...]
Containerising YaST enables us to have as few of those language stacks as possible in the base and lets us ensure we only ship YaST with a configuration we know YaST was built to work with.
In my view this encourages bad habits: if you can't pin dependencies you have to read the documentation and rely on documented behavior only. What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions (could even break with recompilation) and perhaps doesn't even work reliably.
Compiling or running software against different stacks often uncovers real issues. In my area we regularly use multiple compilers precisely because they usually don't all have the same quirks.
If some software needs precisely defined dependencies, it's likely just very brittle and the bugs it has may just not materialize under some limited circumstances that maybe even a container can't reliably reproduce. (And even if, how would you know? Testing isn't exhaustive.)
Let me reiterate something: the containerized YaST is only a cherry-on-top of the regular YaST offering. We will still develop and ship YaST in its current form, fully integrated into the system.
That's good to hear. I was mainly replying to the more general discussion this had turned into, but as long as the container world and the package world can peacefully coexist I don't see an issue with this. Aaron
Hello, On 2022-06-15 02:51, Aaron Puchert wrote:
In my view this encourages bad habits: if you can't pin dependencies you have to read the documentation and rely on documented behavior only. What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions (could even break with recompilation) and perhaps doesn't even work reliably.
Yes, this is how it is "out there in the wild". And it cannot be avoided in practice. Perhaps it cannot be avoided even in theory when free software development is allowed.

To rely on documented behavior, the behavior would have to be sufficiently documented. But often the behavior is not sufficiently documented, and it cannot be, because often function A in libA calls function B in libB that calls function C in libC, so the behavior of function A depends on the behavior of function B and function C and on the behavior of the kernel in its environment.

In theory the developer of a program that calls function A fully understands the behavior on all lower levels. In practice this is not possible with reasonable effort. So the developer of a program just calls function A and tests how his program behaves in his environment(s), and that's basically all he can do with reasonable effort.

Related to that: as an example, try to either output "Hello World!" or reasonably handle all possible issues in your program, i.e. letting the system terminate your program is not allowed; instead your program must cleanly exit on its own as far as possible. E.g. SIGKILL does what it does, but it is possible to handle even SIGABRT in some way. Cf. https://codegolf.stackexchange.com/questions/116207/hello-world-that-handles...

And now imagine how far such careful programming is actually implemented "out there in the wild". So in practice things fail in arbitrary ways, and isolation of separated parts from each other helps to mitigate bad outcomes.

Kind Regards
Johannes Meixner
Am 15.06.22 um 11:13 schrieb Johannes Meixner:
On 2022-06-15 02:51, Aaron Puchert wrote:
In my view this encourages bad habits: if you can't pin dependencies you have to read the documentation and rely on documented behavior only. What often happens (and is the reason for pinning) is that developers rely on undocumented behavior that breaks in future versions (could even break with recompilation) and perhaps doesn't even work reliably.
Yes, this is how it is "out there in the wild". And it cannot be avoided in practice. Perhaps it cannot be avoided even in theory when free software development is allowed.
Maybe I'm a bit spoiled, because the libraries that I work with are reasonably documented, and I will generally complain in reviews if something relies on undocumented behavior. We teach new hires early on to work with documentation and to write their own.

In any event, while all this can't be fully avoided, we usually build with GCC, Clang, Apple Clang and MSVC on three architectures and three operating systems (sadly no big endian anymore), and that catches quite a bit of "relying on undocumented behavior". Many packages on GitHub make full use of the available CI services and test on a wide variety of platforms and sometimes even against multiple dependency versions. Not coincidentally, such projects tend to have a good track record of producing high-quality software.
To rely on documented behavior the behavior would have to be sufficiently documented but often the behavior is not sufficiently documented and it cannot be sufficiently documented because often function A in libA calls function B in libB that calls function C in libC so the behavior of function A depends on the behavior of function B and function C and the behavior of the kernel in its environment. In theory the developer of a program that calls function A fully understands the behavior on all lower levels.

That is of course not feasible. You're absolutely right about those dependencies, and if you don't address this issue, pinning will only provide temporary relief. Of course we'll have a hard time if those behaviors are moving targets, so the idea [1] is that functions specify contracts (preconditions, postconditions), and then implementations are verified against those contracts under the assumption that all called functions satisfy their contracts. This localizes our reasoning to a single function, and if you do this for all functions, you've proven correctness of the entire program.

Even though full formal verification doesn't seem practical right now (though see [2]), these ideas have "proven" quite powerful, and every reasonably reliable piece of software that I've seen uses them in some way. (Sometimes contracts are "by convention", e.g. size() on a C++ container is generally assumed to return an integer value specifying the number of elements. Sometimes they aren't fully written down. But if you don't try to impose clear and concise contracts, you will soon end up with a mess, because the complexity will overwhelm your reasoning powers.)

As a side note, partial verification is becoming more common. Aside from Rust obviously, we are gaining some checks in C/C++ as well [3,4]. They all use annotations on functions that specify pre- and postconditions, then work exclusively within a single function.

In practice this is not possible with reasonable effort. So the developer of a program just calls function A and tests how his program behaves in his environment(s) and that's basically all he can do with reasonable effort.

This however is problematic whether we use containers or not. If you're just inferring behavior, you might be inferring it subtly wrong. Testing can prove your code wrong, but it can almost never prove it right. So the fact that you have tested some combination is not as meaningful as you might think. It might do what you've wrongfully inferred in some, maybe even most cases, but then in others it breaks down.

Pinning doesn't address the underlying issue. The underlying issue is not variability in implementation, but unclear contracts or contract violations. These are bugs, and variability in implementation can help you find them. Its absence can't reliably mitigate them. (Not generally at least, and if you can prove that it mitigates them, you can probably also just fix the bug...)
Related to that:
As an example try to either output "Hello World!" or reasonably handle all possible issues in your program i.e. let the system terminate your program is not allowed instead your program must cleanly exit on its own as far as possible, e.g. SIGKILL does what it does but it is possible to handle even SIGABRT in some way. Cf. https://codegolf.stackexchange.com/questions/116207/hello-world-that-handles...
You could do that, but why? If you can't meaningfully continue, SIGABRT isn't the worst idea.
And now imagine how far such careful programming is actually implemented "out there in the wild".
Fair enough, malfunction testing is a bit tricky. But we've been reasonably successful with simply running our existing unit test suite with instrumented allocation functions that fail after 1, 2, 3, ... allocations, repeating each test until it passes. This exercises basically all error handling paths that we have. It doesn't test for strong exception safety, but in our code basic exception safety is usually good enough.

But I feel that's off-topic. That error handling is not well-tested is a problem whether we use containers or not, since most errors have little to do with specific dependencies but with other circumstances of execution.

Aaron

[1] <https://en.wikipedia.org/wiki/Design_by_contract>
[2] <https://media.ccc.de/v/34c3-9105-coming_soon_machine-checked_mathematical_proofs_in_everyday_software_and_hardware_development>
[3] <https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>
[4] <https://clang.llvm.org/docs/AttributeReference.html#consumed-annotation-checking>
Hello, an addendum for clarification:

On 2022-06-16 02:35, Aaron Puchert wrote:
On 15.06.22 at 11:13, Johannes Meixner wrote: ...
function A in libA calls function B in libB that calls function C in libC, so the behavior of function A depends on the behavior of function B and function C and on the behavior of the kernel in its environment. ...

As an example, try to either output "Hello World!" or reasonably handle all possible issues in your program; i.e., letting the system terminate your program is not allowed, instead your program must cleanly exit on its own as far as possible (e.g. SIGKILL does what it does, but it is possible to handle even SIGABRT in some way). ...

You could do that, but why? If you can't meaningfully continue, SIGABRT isn't the worst idea.
This shows how things can go wrong (basically because of RFC 1925 item 6a, "indirection"). A programmer may think SIGABRT is OK in some cases, so he calls abort() when he cannot continue, e.g. in his function B. Now any caller of function B could get aborted in some cases, which means the whole caller process gets instantly terminated.

Guess what: we had that issue with a system library. Not so fun to see e.g. a system daemon process aborted all of a sudden in some unexpected case. Even if a programmer calls abort() only in main(), it would be a rather rude exit of his program that could be unexpected by callers of his program.

Therefore I meant that even a simple "Hello World!" output done in a fully clean way may have to handle even SIGABRT (who knows what functions might be called during output), as an extreme example that it is not possible in practice, with reasonable effort, to make programs that really work.

Kind Regards
Johannes Meixner
--
SUSE Software Solutions Germany GmbH, Frankenstr. 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman (HRB 36809, AG Nuernberg)
On 6/13/22 18:45, Jan Engelhardt wrote:
On Monday 2022-06-13 18:16, Ancor Gonzalez Sosa wrote:
# zypper in yast [...]
26 new packages to install. Overall download size: 11.5 MiB. Already cached: 0 B. After the operation, additional 27.6 MiB will be used.
# zypper in podman [...]
9 new packages to install. Overall download size: 25.0 MiB. Already cached: 0 B. After the operation, additional 113.9 MiB will be used.
So instead of having to install yast2 (zypper tells me 11 MB), I now have to install podman (25 MB).
The outcome of those two commands depends heavily on the system you execute them on. What you show is the result in a traditional openSUSE, in which software is delivered as packages directly installed in the system (the paradigm I personally use and love). But that's not the only reality we find out there nowadays.

You and I, who still enjoy the traditional model of Linux distributions, will still install YaST directly in the system. That's perfectly fine and will always be possible, of course. But the new containerized YaST broadens the usefulness of YaST to also cover other scenarios, in which installing RPM packages is only done as a last resort and containers are the preferred way to distribute and consume software.
those new commands grab YaST and run it in a container that will be transparently used to administer the host system.
It's logically backwards that one is supposed to run Y2 in a container, only to have it go outside again. Containers are not supposed to affect the host system.
Actually, privileged containers can affect the host system, and they have existed for years. Isolating the base system from the software running inside the containers is only one of the possible goals of containers. In some environments, containers are the main mechanism to distribute and execute software, since they represent a way to bundle several software pieces (traditionally RPM packages) in a self-contained format.

Now YaST is an option also in those environments, although I expect YaST to be used seldom there - basically to cover situations in which other tools fall short in functionality, or to ease the transition for administrators of traditional (open)SUSE distributions.

Cheers.
--
Ancor González Sosa
YaST Team at SUSE Software Solutions
participants (11)
- Aaron Puchert
- Ancor Gonzalez Sosa
- Andrei Borzenkov
- Frederic Crozat
- Jan Engelhardt
- Jason Craig
- Johannes Meixner
- Ladislav Slezák
- Michal Suchánek
- Richard Brown
- Rodney Baker