On 23/01/2018 at 18:11, John Paul Adrian Glaubitz wrote:
Plus, if you look at the number of forks that GNOME's decisions have triggered in the past (MATE, Cinnamon, Deepin etc), it clearly shows that this development model doesn't fly in the long-term.
I am not convinced by this argument. The open-source community loves to fork projects as soon as big changes happen (which, like any big change, are bound to cause some disagreement), and sometimes just does so out of the blue for the fun of it. In that sense, a fork does not say anything about the well-being of the project being forked, and does not usually have many long-term consequences for said project. To convince yourself of that, consider some historical examples:

* When the KDE3 -> KDE4 transition happened, the Trinity project forked KDE3 (which pretty much parallels MATE with GNOME 2). And now that KDE is putting X11 in maintenance mode, I am ready to bet that some X11 enthusiast is going to fork KDE again.
* Wayland has seen Canonical go Mir, and now NVidia are trying to cause a community split in there again.
* Before systemd became the norm, there were 3 popular init clones around in the Linux ecosystem.
* The BSD community dedicates most of its manpower to writing BSD-licensed equivalents of existing GPL-licensed code out there.

Forks and competing clones mean that there are people in a community who disagree about something, which is generally healthy even though it has the unfortunate effect of splitting manpower. It means that we avoid an Apple-like stagnating technical monoculture, for example. Sometimes forks have a good idea and prosper (like Cinnamon), or even replace their progenitor (like systemd); other times they fail to evolve beyond their original concept, stagnate, and end up dying from technical debt poisoning (like Unity). Nobody can tell in advance which one it will be, I would say.

On the other hand, I find your other point more interesting:
Also, I don't think that something like this will fly in the long term with Linux distributions:
glaubitz@suse-laptop:~/suse/openSUSE:Factory/librsvg/librsvg-2.42.0/rust/vendor> find . -name "*.rs" |wc -l
988
glaubitz@suse-laptop:~/suse/openSUSE:Factory/librsvg/librsvg-2.42.0/rust/vendor>
This undermines the work of security teams in Linux distributions. At least in Debian, including third-party libraries instead of using the versions available in the distribution tree is not allowed.
I'm very much surprised that this is apparently acceptable in openSUSE. If every Rust package is going to be like that in the future, I'll already send out my condolences to anyone working on distribution security teams.
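For reference, that vendor/ tree is what the `cargo vendor` tool produces: it copies every crate dependency into the source tree and, if I recall its output correctly, tells Cargo to build from those copies via a `.cargo/config` fragment along these lines (the directory name is whatever the project chose, here "vendor"):

```toml
# Redirect all crates.io dependencies to the in-tree copies.
[source.crates-io]
replace-with = "vendored-sources"

# The replacement source: crates unpacked under ./vendor.
[source.vendored-sources]
directory = "vendor"
```

From then on, builds resolve dependencies from vendor/ instead of downloading them from crates.io, which is exactly why a distribution security team ends up having to track those 988 copied .rs files.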
This point is actually not Rust-specific in my eyes. I see it as something which has been brewing for a while in the programming community, and I'm surprised that the issue has never arisen earlier.

The heart of the problem, as far as I see it, is that there is no perfect software distribution method. Two extreme models have been popular for a while: the Windows/OSX strategy of packaging every nontrivial dependency with the application, and the Linux/BSD strategy of building one giant software repository per software distribution. This dichotomy makes the life of every multi-platform project difficult, especially since both strategies ultimately have very serious drawbacks:

* The "package the world" approach not only causes oversized downloads, but makes backwards-compatible dependency updates (like, as you point out, security patches) unnecessarily slow to propagate through the ecosystem. Cross-application communication can also get fiendish in this approach, which does not encourage software to work together. And application installation, clean removal and update are also generally a mess.
* The centralized repo approach, on its side, is much more convenient for the end user, but for the developer it means that software must be packaged N times instead of once (in order to please each distribution's repo management customs), and it gets very messy any time a dependency pushes a backwards-incompatible update. From a security point of view, other issues arise because the centralized repo is a single point of failure, and third-party repos (which are often added out of necessity in order to get sufficiently recent software) are not held to the same quality and security standards as the main one.

Frustrated with this state of affairs, the community of almost every modern programming language has gone all "we can do better" and built its own library distribution and dependency management mechanism.
And so we got Maven for Java, PyPI and Conda for Python, Gems for Ruby, go get for Go, NPM for Javascript, Crates.io for Rust... and the list goes on. These distribution mechanisms vary widely in capabilities, but one recurring theme is that application developers want to have more control over their dependencies, and in particular to update them only after in-house testing. This results in a strange hybrid between the two historical approaches:

* There is ~one centralized repo per programming language, which in practice is generally less problematic than one per Linux distribution.
* Applications package their dependencies, and whoever performs the build gets to pick the dependency versions, like on Windows and OSX.

I am surprised that this is the first time these custom package management schemes get into a nontrivial conflict with the standard system package management scheme of a Linux distro. AFAIK, these things have been around for a long while, and even programming languages which encourage statically linking everything are not new (think Go). So hasn't anybody been thinking about this issue before?

Cheers,
Hadrien
-- 
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org