On 22/01/2018 at 18:16, John Paul Adrian Glaubitz wrote:
On 01/22/2018 05:38 PM, Hadrien Grasland wrote:
...that is, until you end up on a codebase which:
* Has -Werror hardcoded deep into its build system (see the first sketch below).
That doesn't make any sense. You can always override/amend CFLAGS/CXXFLAGS. There is no such thing as "deeply hardcoded".
* Relies on code which does not follow the C/++ standards and only ever compiled by accident (see the second sketch below).
Never seen that.
* Relies on undefined behaviour, or on some other code generation characteristic which GCC's optimizer considers an unimportant detail (more on this below).
Never seen that.
* Uses compiler version detection macros which have not been adapted to the new GCC release (see the third sketch below).
That would speak of a very poor build system. Yet, I don't think I have run into such a problem.
* Causes an ICE in that specific release of GCC, which was introduced by accident in what was supposed to be a simple bugfix.
Very rare. So far I have only seen such problems on less common architectures and it was always a breeze to get these things fixed with upstream.
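To make the first scenario concrete, here is a hypothetical sketch (file and function names invented) of code that built cleanly for years, under a build system that hardcodes -Wextra -Werror:

    /* parse.c -- hypothetical sketch of the first scenario.  This
     * compiled warning-free under "gcc -Wall -Wextra -Werror" for
     * years.  GCC 7 then added -Wimplicit-fallthrough (pulled in by
     * -Wextra), so the very same hardcoded flags suddenly turn the
     * intentional fall-through below into a hard build error. */
    #include <stdio.h>

    int classify(int c)
    {
        switch (c) {
        case 0:
            puts("zero");
            /* deliberately continues into the next case; GCC 7 now
             * demands an explicit annotation here before it stays
             * quiet under -Wextra */
        case 1:
            puts("zero or one");
            return 1;
        default:
            return 0;
        }
    }

The fix is trivial (a fallthrough marker comment or __attribute__((fallthrough))), but someone still has to notice, patch, and rebuild every affected package.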
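The second scenario is not hypothetical either; the best-known real-world instance is probably GCC 5 switching its default C dialect from gnu89 to gnu11, which silently changed what "extern inline" means. A sketch with an invented function name:

    /* util.h -- hypothetical header written against GCC's old gnu89
     * default, where "extern inline" meant "inline only, never emit
     * a standalone definition", so including it from many .c files
     * was fine.  Under the gnu11 default introduced by GCC 5, the
     * same tokens mean "emit an external definition in every
     * translation unit", and a project which had linked fine for
     * years suddenly fails with multiple-definition errors. */
    extern inline int clamp(int x, int lo, int hi)
    {
        return x < lo ? lo : (x > hi ? hi : x);
    }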
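And the fourth scenario tends to look like this kind of hypothetical configuration header, which quietly falls off a cliff the day GCC 8 ships:

    /* config.h -- hypothetical sketch of exact-match version
     * detection that nobody adapted to the next GCC release.  When
     * GCC 8 arrives, the #else branch kicks in and the whole build
     * stops, even though GCC 8 would have compiled the code just
     * fine. */
    #if defined(__GNUC__) && __GNUC__ == 6
    #  define COMPILER_SUPPORTED 1
    #elif defined(__GNUC__) && __GNUC__ == 7
    #  define COMPILER_SUPPORTED 1
    #else
    #  error "unsupported compiler, please add detection for it here"
    #endif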
It looks like you have enjoyed pretty well-written and unambitious C/++ code so far, then. Lucky you! Where I work, broken build systems, code, and compilers are a relatively common sight; I'd say we deal with them every other month or so, and that is with a package base much smaller than the repos of SuSE or Debian!
Relying on UB especially happens more often than one would think, and is basically the kernel of truth behind the old "-O3 breaks code" meme.
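For instance, here is a minimal hypothetical sketch of how that meme is born: the overflow check below invokes undefined behaviour, so GCC's optimizer may legally assume it is always false and delete it at higher optimization levels.

    /* overflow.c -- hypothetical sketch: signed integer overflow is
     * undefined behaviour in C, so GCC may assume that "x + 1 < x"
     * can never hold for signed x and fold the check to 0. */
    #include <limits.h>
    #include <stdio.h>

    static int increment_would_overflow(int x)
    {
        return x + 1 < x;   /* UB when x == INT_MAX */
    }

    int main(void)
    {
        /* With GCC this typically prints 1 at -O0 and 0 at -O2, so
         * the code "works" right up until the optimizer gets smarter
         * or the flags change. */
        printf("%d\n", increment_would_overflow(INT_MAX));
        return 0;
    }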
No matter which way one looks at it, compiler updates are unfortunately always a bit of a risky business from a Linux distribution maintainer's point of view.
Thanks, but I have helped with several gcc transitions in Debian, and I never saw anything there that comes close to the situation with Rust. The changes in gcc actually made sense to me and, as I said, I was always able to address them with either very simple patches or by just disabling a certain warning.
What about the fact that Rust only considers x86/x86_64 to be a tier 1 architecture?
In Mozilla's terminology, "tier 2" means "guaranteed to build", while "tier 1" adds "and all automated tests were run". The reason why one would want to only run the build is that running tests is much more trouble than building: you can build for any architecture from x86 using a cross-compiler, whereas you need real hardware of the target architecture in order to perform serious testing (emulators are usually too slow to be practical in intensive testing scenarios, and too "clean" to expose real hardware quirks).

Assuming you wanted to build yourself a cross-architecture test farm capable of withstanding the full traffic of Rust's high-volume CI system, what you would soon discover is that most hardware architectures do not address this need very well. It is trivial to find a hardware reseller who will build you a good x86-based rack at a fair price, whereas other architectures often do not provide hardware in a standard rack form factor at all, or only sell it at a crazy premium, as IBM does with Power. Moreover, embedded architectures often restrict themselves to cheaper and slower hardware which is not powerful enough for intensive continuous testing, meaning that you need to pile up tons of un-rackable junk before you get enough processing power for this kind of use case...

Add to this that keeping a highly heterogeneous hardware base running is very cumbersome, and that some of Rust's tier 2 targets do not even provide the required capabilities for running a test server (e.g. asmjs/wasm is too limited, Fuchsia is too immature, and iOS is too locked down), and hopefully you will get a fair picture of how much of an undertaking this all really is.

Now, this isn't to say that it cannot be done, of course, nor that it would not be very worthwhile. There are some awesome multi-architecture test beds out there, like Debian's package QA test bed or Microsoft's driver compatibility torture test farm, and I'm pretty sure Novell also has some cool stuff around for testing SuSE. But that level of QA sophistication may be a bit much to expect from a relatively small team inside of a money-starved nonprofit organization. If someone is ready to donate or lend Mozilla the required infrastructure, great; if not, I would not expect them to build it on their own...
We have just recently seen with Spectre and Meltdown how bad it is to merely focus on x86.
I think you may want to check the latest developments of the Meltdown/Spectre saga here. Meltdown, it turns out, goes beyond Intel processors (AMD remaining unaffected) and also hits some high-end ARM processors. And Spectre attacks have been demonstrated on pretty much every modern CPU that has a cache and speculative execution. This is not an x86-versus-the-rest-of-the-world thing: almost every popular high-performance CPU architecture has been shown to be vulnerable to these attacks in some way, and every high-performance CPU manufacturer now needs to reflect upon these events and figure out how to build a more secure product next time...

Cheers,
Hadrien