Hi John,

Just chipping in as a Rust developer; I wanted to clear up what I perceive as some misunderstandings.

On 22/01/2018 at 16:36, John Paul Adrian Glaubitz wrote:
On 01/22/2018 04:22 PM, Aleksa Sarai wrote:
I'm not sure we're understanding each other here -- my point was that the *only* Rust project which has this policy for compiling new versions is the Rust compiler. No other Rust project requires this. That's what I meant by "exception, not the rule". So I agree with what you wrote, but it doesn't have much to do with what I was trying to say, which is that the following quote ...
So, you are saying that it's guaranteed that only the Rust compiler will ever use particular code that will be deprecated in release N+1 or only available in release N-1?
The Rust compiler uses unstable internal interfaces, which are not exposed to code which builds on stable releases. The closest equivalent I can think of in the C/C++ world is the GCC/binutils duo: to build and use GCC, you need a matching release of binutils, which maps to a relatively narrow time window. Use too new or too old a release of binutils, and your GCC build will fail with weird assembler and linker errors. And conversely, like any C/C++ program, binutils itself has some compiler version requirements. This does not preclude GCC and binutils from providing stability guarantees for the programs which they accept to compile, but it is a concern that must be kept in mind when maintaining GCC and binutils packages. From this perspective, the Rust compiler's bootstrapping requirements are no different.
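To make this concrete, here is a rough sketch (mine, not from the thread) of what "unstable internal interfaces" means in practice: the compiler's internal crates sit behind a nightly-only feature gate, and a stable toolchain refuses to build any code that opts into it.

    // Illustration only: opting into the compiler's unstable internals.
    // A stable toolchain rejects this file with
    // error[E0554]: `#![feature]` may not be used on the stable release channel
    #![feature(rustc_private)]

    // Internal compiler crate, only resolvable on a matching nightly/dev
    // toolchain -- which is why rustc itself must be built with a recent compiler.
    extern crate syntax;

    fn main() {
        println!("this only builds on a matching nightly toolchain");
    }

Ordinary crates on crates.io never see these interfaces; only the compiler (and tools that deliberately link against its internals) depends on them.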
I did build-test it myself. I tried building Rust 1.22 with Rust 1.20, which failed with actual compiler errors, not just a warning that I have to use the proper version. And I think it's not at all unlikely that Rust project X will run into such a problem as well. What keeps Rust project X from using certain language features that were only recently added or removed?
Since Rust 1.0, users of stable versions of the Rust compiler enjoy a number of stability guarantees:

* Stable language features may only be added and deprecated, not removed, so code which builds on version N is guaranteed to build on version N+1.
* While feature removal and breaking changes to existing features are eventually planned, they will be done via an epoch mechanism similar to the one used by C and C++. Think about C 89/99/11 and C++ 98/11/14/17.

In short, the only thing that must be taken care of, from a distribution maintainer's perspective, is that an application must be compiled with a sufficiently recent stable Rust compiler. This is not a new concern (e.g. it has been an issue in the C++ world for a while). If, on the other hand, you find a package which does not build with a _newer_ release of the Rust compiler than was available at its release date, it is a bug, and you should report it to the Rust team.
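To illustrate the first guarantee with a minimal sketch (the function names are made up): deprecating an item only produces a compiler warning at the call site, never a hard error, so code that compiled on version N keeps compiling on version N+1.

    // Hypothetical library function kept around for compatibility.
    #[deprecated(since = "1.1.0", note = "use `new_api` instead")]
    pub fn old_api() -> u32 {
        42
    }

    pub fn new_api() -> u32 {
        42
    }

    fn main() {
        let a = old_api(); // warning: use of deprecated item `old_api`; still compiles
        let b = new_api();
        println!("{} {}", a, b);
    }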
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions and it blows my mind that so many people find that acceptable.
Adding features in a minor software release is considered okay in any modern software versioning scheme. It is only when existing features are changed or removed that compatibility is considered broken.
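As a concrete example (mine, not Adrian's): `break` with a value from a `loop` was stabilized in Rust 1.19, so the snippet below builds on 1.19 and every later release, while code that never uses the feature is entirely unaffected by its addition.

    // Uses a feature added in a minor release (loop-break-with-value, Rust 1.19).
    fn main() {
        let mut n = 0;
        let answer = loop {
            n += 1;
            if n * n > 40 {
                break n; // returns a value from the loop
            }
        };
        println!("{}", answer); // prints 7
    }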
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
You must understand where they are coming from. Most Linux distributions consider it okay to ship software which lags 5+ years behind official upstream releases, which is not acceptable for a fast-moving project like Rust (or indeed for any software project where new releases matter, such as hardware drivers, web browsers, and office suites). And some of the platforms that they target do not ship a standard package management mechanism at all. The rolling-release users among us are sadly the minority here. Rust's distribution tools cater to the vast majority of users who are stuck with obsolete operating system packages and want to get modern work done nonetheless. To do this, they sometimes need to bypass the standard distribution package management scheme. But this need not concern you as a distribution maintainer, much like you need not be concerned about users who build and install more recent software releases from source: what users do with their machine is solely their business, as long as they don't come complaining when their personal fiddling breaks the system.
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
... is not accurate for any project other than the Rust compiler (and the reason for the Rust compiler having this requirement is so that they can use new language features in the compiler itself, not because of stability issues with the language). Any other Rust project must be able to build on 1.x and 1.(x+1) with no changes (and the Rust compiler team tests this quite heavily).
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
Software which opts into nightly-only unstable Rust features should be considered unstable as well, and is not a good fit for distribution via normal Linux distribution package management schemes. It should thus be rejected from official Linux distribution repositories. Users who want to install and use such packages will be fine with manually building their own versions, and dealing with compiler breakages as they happen.

Cheers,
Hadrien