On 22/01/2018 18:29, John Paul Adrian Glaubitz wrote:
On 01/22/2018 05:12 PM, Hadrien Grasland wrote:
The Rust compiler uses unstable internal interfaces, which are not exposed to code which builds on stable releases. The closest equivalent I can think of in the C/C++ world is the GCC/binutils duo: to build and use GCC, you need a matching release of binutils, which maps to a relatively narrow time window. Use too new or too old a release of binutils, and your GCC build will fail with weird assembler and linker errors. And conversely, like any C/C++ program, binutils itself has some compiler version requirements.
I'm pretty confident there is never a problem when binutils is too new, at least I haven't run into such a problem with my porting work within Debian.
I remember reading that too new a binutils could also be a problem back in the days when I was playing with OS development (which requires a custom cross-compiler configuration), but you are right that I never experienced it firsthand, nor have I heard about it in a long while. Maybe it's just an old meme fueled by a couple of backwards-incompatible binutils breakages that happened a long time ago...
And, furthermore, the key point here again is the speed of change: gcc doesn't introduce breaking changes every six weeks, Rust does.
I can certainly understand that fast-changing software can be difficult to deal with. I for one never understood how the people packaging rolling release distros manage to keep up so well with the rapid rate of kernel releases (and the periodic NVidia driver breakages that ensue), or with the rapid update frequency of anything GCC-related (where you basically have to rebuild everything every time).

At the same time, one should not shoot the messenger. Fresh software with a fast feature and bugfix turnaround is also a good thing from the end user's point of view, so long as the project can provide the quality assurance guarantees that have to come with it. And for this, Rust comes much better equipped than many other projects, as you can read at https://brson.github.io/2017/07/10/how-rust-is-tested .

Now, you claim that this is not enough, and that you have observed breakages. But when I asked you to provide details and evidence, you were not able to (and in fact ignored that question altogether). All you have actually explained in your previous e-mail is that you had issues bootstrapping the compiler using an older release of itself, a special case which the Rust team is well aware of and provides special support for through pre-built binaries. You have also briefly mentioned something about a disappearing keyword, without saying whether that keyword was part of a stable Rust release (which is where Rust's stability guarantees apply) or not. If you are not going to provide further details, I will have to assume that it wasn't.
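To make the stable/nightly distinction concrete, here is a minimal example of my own (just an illustration, not something from the thread): anything unstable has to be opted into with a `#![feature(...)]` attribute, and a stable rustc refuses to compile a crate that contains one, so code which builds on stable can only ever rely on the stabilized surface of the language.

    // Requires a nightly toolchain: a stable rustc rejects the whole crate with
    // error E0554 ("`#![feature]` may not be used on the stable release channel").
    #![feature(never_type)]

    fn main() {
        // Using the `!` (never) type in a regular type position is what the gate
        // above unlocks; on nightly this compiles, on stable the crate is rejected.
        let _impossible_error: Result<u32, !> = Ok(42);
        println!("built with a nightly toolchain");
    }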
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions and it blows my mind that so many people find that acceptable.
Adding features in a minor software release is considered okay in any modern software versioning scheme. It is only when existing features are changed or removed that compatibility is considered to be broken.
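To illustrate what that convention means in practice, here is a small sketch of my own using the third-party `semver` crate (just one implementation of the Semantic Versioning rules, nothing specific to the Rust toolchain); it assumes `semver` is declared as a dependency in Cargo.toml:

    use semver::{Version, VersionReq};

    fn main() {
        // A caret requirement such as "^1.2.0" accepts any later 1.x release:
        // minor releases may add features, but must not change or remove
        // existing ones, so code written against 1.2 keeps working.
        let req = VersionReq::parse("^1.2.0").unwrap();

        assert!(req.matches(&Version::parse("1.2.0").unwrap()));  // the release we started with
        assert!(req.matches(&Version::parse("1.9.3").unwrap()));  // later minor release: additions only
        assert!(!req.matches(&Version::parse("2.0.0").unwrap())); // major bump: breaking changes allowed

        println!("minor releases are additive, major releases may break");
    }

(That is also how Cargo interprets a plain `foo = "1.2"` dependency line, for what it's worth.)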
I wouldn't consider a toolchain a normal piece of software. A toolchain is one of the basic building blocks of your whole distribution. It shouldn't change in crazy ways when you just perform a minor update.
Tell that to the kernel maintainers next time they break my video driver or send someone's production system into an infinite boot loop in what was supposed to be a security update. And yet, for some reason, we in the Linux world never had much issue building on top of that. In fact, I would argue that one of Tumbleweed's strengths is that it is the first Linux distribution I have used so far that provides concrete answers to this problem (via OpenQA and Btrfs snapshots) without forcing its users into software stagnation along the way.

Compared to what major Linux infrastructure projects like the kernel, Mesa, or KDE will still periodically send people through, I would say that Rust did pretty well so far. It has managed to iteratively add many features on a rapid release cycle, without breaking the code of anyone who builds on stable releases, as evidenced by the team regularly re-building and re-testing the entire crates.io package library as part of their routine procedure (I sketch the basic idea below). You claim that you found holes in this procedure, but so far you have not provided evidence. And even if you had some, all that would mean is that your discovery would serve to improve the testing procedure.

I, for one, am very happy that some software projects are finally taking steps to provide alternatives to the breakage versus stagnation false dichotomy, which has been with us for way too long in the Linux world.
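As promised, here is what that kind of mass rebuild boils down to. This is a toy illustration of my own, nowhere near the real crates.io regression-testing infrastructure; it assumes rustup-managed `stable` and `beta` toolchains, and a couple of hypothetical local checkouts standing in for the package library:

    use std::path::Path;
    use std::process::Command;

    /// Try to build one project with a given rustup toolchain ("stable", "beta", ...).
    /// Returns true if `cargo build` succeeded.
    fn builds_with(project: &Path, toolchain: &str) -> bool {
        Command::new("cargo")
            .arg(format!("+{}", toolchain)) // rustup's toolchain selection syntax
            .arg("build")
            .current_dir(project)
            .status()
            .map(|status| status.success())
            .unwrap_or(false)
    }

    fn main() {
        // Hypothetical checkouts; the real procedure iterates over all of crates.io.
        let projects = ["./crates/foo", "./crates/bar"];

        for dir in &projects {
            let path = Path::new(dir);
            let on_stable = builds_with(path, "stable");
            let on_beta = builds_with(path, "beta");

            // A crate that builds with the current stable but not with the upcoming
            // release is exactly the kind of regression this catches before the
            // release ever reaches users.
            if on_stable && !on_beta {
                println!("REGRESSION: {} builds on stable but not on beta", dir);
            } else {
                println!("ok: {}", dir);
            }
        }
    }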
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
You must understand where they are coming from. Most Linux distributions consider it okay to ship software which lags 5+ years behind official upstream releases, which is not acceptable for a fast-moving software project like Rust (or indeed for any software project where new releases matter, such as hardware drivers, web browsers, and office suites). And some of the platforms that they target do not ship a standard package management mechanism at all. The rolling release users among us are sadly a minority here.
Well, your perspective would change if you were responsible for maintaining several hundred desktop machines with several hundred users. Installing a rolling release distribution in such setups would be a nightmare, because you would be busy all day long fixing all kinds of regressions.
And I'm not necessarily talking about regressions in the form of bugs. It can already be a regression if feature X behaves differently or extension Y doesn't work anymore.
It's really frustrating how many upstream projects are refusing to understand this. So many just say "Awww, just go ahead and update to the latest upstream version, no big deal. I've been running Arch on my single-user, single-machine setup for years without problems." It simply doesn't work that way in the enterprise world.
Again, there are two sides to this story. Here, you are taking the side of someone who needs to keep a large production system alive, which I agree is very important and must be respected. At the same time, if you put yourself in a developer's shoes, it is also extremely frustrating to process bug reports or feature requests about problems which you resolved on the master branch months ago, and to be asked to keep old software releases alive which no one really wants to be using anymore. Surely, there has to be a way to do better here on both counts.

I am glad to see that some software projects are taking steps to resolve this longstanding issue more cleanly, with better testing and deployment infrastructures that shrink the risk window and reduce the need for costly backports. Things like more extensive test suites, better continuous integration, containers and staged feature roll-out are all great news for the software world; they will ultimately help us leave more of the legacy baggage behind, and stop saying to people "Well, you *could* run Linux, open newer DOCX documents, or write C++17 code on that freshly bought laptop, but for that you will need to take some risks...".
Rust's distribution tools cater to the vast majority of users who are stuck with obsolete operating system packages and want to get modern work done nonetheless. To do this, they sometimes need to bypass the standard distribution package management scheme. But this need not concern you as a distribution maintainer, much like you need not be concerned about users who build and install more recent software releases from source: what users do with their machine is solely their business, so long as they don't come complain when their personal fiddling breaks the system.
It very much becomes a concern if a new version of application X requires an additional 250 packages to be updated. It becomes a nightmare from a security point of view. Who is going to review all these additional updated packages?
What's the point of all these fancy security features Rust has when you end up having 25 different versions of libfoo installed on your system?
You might as well then just stop installing security updates.
For any software stabilization and testing process, there is a point of diminishing returns. No matter how much energy you expend on reviewing the package base, at some point you will still need to bite the bullet and push the thing to the users, fully aware that this is where the bulk of the bugs and security holes will be found, just by virtue of users being much more numerous than testers.

For this reason, I've grown increasingly skeptical of stable Linux distribution release processes over time. They have never been effective at killing the most annoying bugs for me (like broken hardware drivers), all the while forcing me into ancient software whose problems were fixed upstream months or years ago. Their upgrade procedures are stressful, time-consuming and fragile. I am aware that there is a place for such elaborate release schemes, but personally I would rather see all that QA effort spent directly on the maintenance and continuous improvement of the relevant software projects, rather than on making late software later.

Cheers,
Hadrien

--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org