On Wednesday, November 20, 2019 11:26:41 PM CET William Brown wrote:
Neal, what do you think of the idea from Ludwig of having a system Rust compiler (more stable and conservative, used to build all the openSUSE rings), and an updated version of the compiler co-installable for developers?

> It's not a _terrible_ idea. The only problem with it is that Firefox forces Rust to move forward anyway. That's why RHEL doesn't do this right now, either.
This isn't viable - the moment I, a developer, take my project from development -> packaging, I'm now on an older compiler that my dependencies won't compile with.
I understand. This is the reason for having two versions of the compiler in Leap / SLE and one single version in TW. One version will be the system compiler, associated with the service pack release, or in the case of Leap, with the minor version number. This is the compiler used to build the distribution, and it will be updated in each SPx based on criteria that balance change and stability. Because Rust is very strong on backward compatibility, I think we can expect that in each SP / Leap release the system compiler will be updated to the latest version.

For developers, there will be a more up-to-date version available officially, one that moves at the same pace as upstream. This is the only one that developers will use, and a `zypper up` will take care of updating it in the distribution. But we cannot expect the system compiler to be updated at the same speed as upstream, because that would require that every six weeks a routine `zypper up` updates the compiler, and with it the user applications that are built with Rust. This goes against the stability and reproducibility guarantees that SLE and Leap offer to users. Imagine that I use the new compiler on my SLE system to build my Firefox, and I get a different binary than the one distributed with the system.
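To make the split concrete, here is a minimal sketch of what the two co-installable toolchains could look like from the user's side; the package names are hypothetical and only illustrate the idea:

```
# Hypothetical package names -- not actual openSUSE packages.
zypper in rust          # system compiler, pinned per SP / Leap minor release
zypper in rust-latest   # developer compiler, tracking upstream's 6-week cadence

# A routine update would move only the developer toolchain forward,
# leaving the system compiler (and everything built with it) untouched.
zypper up
```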
They have to be the same version, and they both have to move at the rate upstream Rust is moving, as many dependencies really do move at that cadence too. This is even more important when you remember that Rust doesn't distinguish security releases from feature ones, so an update to a dependency for security reasons may require a newer compiler version.
Yes, exactly. You will be using the compiler for developers.
The assumption here is that Rust needs to be slowed down. That's not the case at all. You can move fast _without_ breaking things. The problem is that today there are no Rust compiler and language developers who understand this and help factor that into the future development of the stack.
I think that, again, Rust has different targets (containers, static applications, continuous integration and release) than an OS packaging system (shared libs, C, patched backports). It's not so much that they don't understand as that they actively chose to follow a different path.
Which is why we are struggling. The mental model of how we did things with C-based programs doesn't apply. For years Red Hat / Fedora have tried to force Python to be packaged as if it were C, when it's not (which has left pip/PyPI in bad places) and caused the system Python to be missing many libraries that do exist. This is why the Python community exclusively recommends virtualenv, not the OS/system Python. We'll probably end up in the exact same place with Rust (cargo vs `zypper in crate-foo`). So we can either keep fighting against it, while people just use cargo + vendoring anyway, or we can accept that the ship has sailed and try to work with it instead.
No, there is a difference. In Python you need to install the library before it can be used, as it is an interpreted language, and in C you need to install the library because it is dynamically linked to the application. The proposal of packaging the crates is not about using `zypper in rust-regex`; it is about using `BuildRequires: rust-regex` during the creation of `kanidm`. No crate will usually be installed on the system, as the RPM only contains the source code. A change in `rust-regex` in OBS will trigger a rebuild of `kanidm`, as will an update of any crate that belongs to the transitive closure of the dependencies of your package. This avoids vendoring, and avoids the security implications related to it.

But during the daily development of kanidm, you are free to use cargo to install the crates in user space. During the packaging phase of kanidm, though, the latest version of each crate needs to be available in OBS before it can be compiled. So multiversioning is something that we need to address in OBS, with the same expectations that cargo has.
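For illustration, a minimal sketch of the relevant spec file fragment, assuming crate packages named in the `rust-<crate>` style used above (the real macro set would depend on what openSUSE adopts; Fedora generates richer output via rust2rpm):

```
# Illustrative fragment only; field values are placeholders.
Name:           kanidm
Version:        0.1.0
Release:        0
Summary:        Example of building against packaged crates
Source0:        kanidm-%{version}.tar.gz
BuildRequires:  rust
BuildRequires:  cargo
# The crate RPMs carry only source code; OBS pulls them in at build time and
# rebuilds kanidm whenever one of them (or anything in its transitive
# dependency closure) is updated. Nothing crate-related lands on end-user systems.
BuildRequires:  rust-regex
```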
From M. Matz:
That's a downside of the QA process, not a packaging issue anymore. As far as I know we are discussing packaging right now, aren't we?
William (rightly) wants to have something realistic for the distro. Packaging (as in merely creating and submitting packages easily) is only one part; it can't be seen in isolation. Whether you can submit a thousand packages in 10 minutes doesn't really matter if those thousand packages then don't land in Factory, or only land six months after submission. The whole thing needs to work on all levels.
So the more I'm reading into this thread the more I think that my conclusion here is:
* We should have rust/cargo's release cadence separate from the release cycles of a distro (i.e. Fedora modularity in a way, or TW style)
IMHO we should not mix the real problem of the review process with the pace of upgrades. If we need to upgrade a lot of packages, and we have the tools to do that, let's fix the review process. I am sure that this way we help more. I am in favor of moving fast, but again, there are reasons in SLE / Leap (build reproducibility, stability guarantees) that are not related to the review process problem.
* Packaging crates to rpms is not feasible or scalable, and we should not attempt it - cargo already has vendor and all the packages I've seen in obs today use vendored dependencies.
Packaging crates is feasible. It is not scalable because of the review process, not because we cannot update all the packages every day. We can package crates and use them in the spec file as a BuildRequires, as is expected (and as is implemented right now in Fedora). We can use vendoring now, until we fix those problems, but we need to aim for the better solution.
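For the interim, the vendoring workflow is already covered by stock cargo; roughly (a sketch using cargo's defaults, the vendor/ directory and .cargo/config):

```
# Download every dependency from Cargo.lock into ./vendor
cargo vendor > vendor.toml

# cargo vendor prints the [source] replacement stanza on stdout; it has to
# end up in .cargo/config (or .cargo/config.toml on newer cargo) so builds
# use the vendored sources.
mkdir -p .cargo
cat vendor.toml >> .cargo/config

# Ship the vendor/ tree with the sources and build without network access.
cargo build --release --offline
```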
* We should instead focus on packaging "edge" programs that use Rust, i.e. Firefox, ripgrep.
For TW, sure.
* Focus pkg QA efforts on the edge packages
* We could consider integration of tools like cargo-audit to scan for security issues in packages (similar to clamav in obs already) to help ensure vendored deps are "up to date".
Something like this is a good idea, like the feature that GitHub provides that points out to you that you have a requirement on a library with a known vulnerability. But doing only this integration is not fixing the vendoring issue.
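A sketch of what such a check could look like, run against a package's Cargo.lock with cargo-audit (how to wire it into OBS is the open part):

```
# cargo-audit compares Cargo.lock against the RustSec advisory database.
cargo install cargo-audit

# Run inside the unpacked (possibly vendored) sources; a non-zero exit code
# signals that some dependency has a known advisory, which is what an OBS
# check or bot would key on.
cargo audit
```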
* Improve the rpm spec docs related to rust/cargo to help make it easier for people to give us packaged programs.
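As a starting point for such docs, the cargo-related parts of a spec file typically boil down to something like the following sketch (ripgrep used as an example; flags and paths are assumptions to be adjusted):

```
%build
# --offline makes cargo use only packaged or vendored sources inside OBS.
cargo build --release --offline

%install
# ripgrep installs a single binary named rg.
install -D -m 0755 target/release/rg %{buildroot}%{_bindir}/rg

%check
cargo test --release --offline
```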