On 01/23/2018 08:46 AM, Hadrien Grasland wrote:
>> And, furthermore, the key point here again is the speed of change: gcc doesn't introduce breaking changes every six weeks, Rust does.
> I can certainly understand that fast-changing software can be difficult to deal with. I for one have never understood how the people packaging rolling-release distros manage to keep up so well with the rapid rate of kernel releases (and the periodic NVidia driver breakages that ensue), or with the rapid update frequency of anything GCC-related (where you basically have to rebuild everything every time).
>
> At the same time, one should not shoot the messenger. Fresh software with fast feature and bugfix turnaround is also a good thing from the end user's point of view, as long as the project can provide the quality assurance guarantees that should come with it. And on this front, Rust is much better equipped than many other projects, as you can read at https://brson.github.io/2017/07/10/how-rust-is-tested .
I am not shooting the messenger. I am criticizing the person who thought that rewriting a core package with a large number of reverse dependencies [1] in a language whose upstream can't even be bothered to run the testsuite on anything beyond x86 was a good idea. And, no, not having the resources for that is not a valid justification. If you don't have the resources, then either a) don't push your language for core packages, or b) ask projects like Debian, which do have a large test infrastructure with all kinds of architectures, for help.
>> I wouldn't consider a toolchain a normal piece of software. A toolchain is one of the basic building blocks of your whole distribution. It shouldn't change in crazy ways when you just perform a minor update.
> Tell that to the kernel maintainers the next time they break my video driver or send someone's production system into an infinite boot loop in what was supposed to be a security update. And yet, for some reason, we in the Linux world have never had much trouble building on top of that. In fact, I would argue that one of Tumbleweed's strengths is that it is the first Linux distribution I have used so far that provides concrete answers to this problem (via openQA and Btrfs snapshots) without forcing its users into software stagnation along the way.
If the kernel breaks, I can just switch to a different kernel at the boot prompt. For this very reason, Debian puts every minor kernel release into a separate package. Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
> Again, there are two sides to this story. Here, you are taking the side of someone who needs to keep a large production system alive, which I agree is very important and must be respected. At the same time, if you put yourself in a developer's shoes, it is also extremely frustrating to process bug reports or feature requests about problems which you resolved on the master branch months ago, and to be asked to keep alive old software releases which nobody really wants to be using anymore. Surely, there has to be a way to do better on both counts.
You need to understand that the people who are paying everyone's bills at the end of the month are the ones using the stable releases. In the other mail, you were saying that Mozilla is a starving organization; maybe you should try to make the connection between these two statements.
> For any software stabilization and testing process, there is a point of diminishing returns. No matter how much energy you expend on reviewing the package base, at some point you will still need to bite the bullet and push the thing to the users, fully aware that this is where the bulk of the bugs and security holes will be found, simply because users are much more numerous than testers.
I think you don't have the slightest clue how QA in enterprise distributions works, or how much QA and testing happens before Debian pushes a stable release. This isn't about biting a bullet and pushing something out untested; there is A LOT of testing behind it. This is why companies pay very good money for it.
> For this reason, I have grown increasingly skeptical of stable Linux distribution release processes over time. They have never been effective at killing the most annoying bugs for me (like broken hardware drivers), all the while forcing me into ancient software whose problems were fixed upstream months or even years ago. Their upgrade procedures are stressful, time-consuming and fragile.
The key point about stable distributions is not that they are bug-free; the key point is that the bugs and problems are well documented. A rapid release cycle will always bring new regressions.

Adrian
[1] https://people.debian.org/~glaubitz/librsvg.txt

--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org