[opensuse-factory] Tumbleweed - Review of the week 2018/03
Dear Tumbleweed users and hackers,

A steady flow of snapshots is reaching the Tumbleweed users: this week, another 4 snapshots have been released to the users, some with a bit more bandwidth demand, some with issues that were not anticipated by the test suite (but easily resolved by the user, with fixes underway).

The week saw releases of the snapshots 0110, 0114, 0116 and 0117, with these changes:

* mpfr 4.0: as announced last week, this required an almost complete rebuild of the distro, as it is a deeply nested dependency. This resulted in snapshot 0110 being larger than average.
* Squid 4.0.22 (upgraded from 3.5.27)
* RPM 4.14. Caution for packagers: rpm is less forgiving of errors in spec files.
* Bind 9.11.2
* Mesa 17.3.2: in order to improve the distro build performance, Mesa was split into two parts to be built. Users that updated their system using "--no-recommends" did not get Mesa-dri auto-installed, possibly resulting in the graphical system not starting up. Simply install Mesa-dri manually for now (dependency chain fixes are underway).
* Linux kernel 4.14.13
* librsvg 2.42.0: rewritten in Rust
* OpenSSH 7.6p1
* KDE Applications 17.12.1
* The default firewall module picked for new installs is now firewalld
* Last, but not least: libstorage-ng has arrived

But Tumbleweed would not be rolling if nothing further were in the works already. The larger topics are:

* Glibc will completely drop sunrpc support (we have tirpc available, with IPv6 support)
* Linux kernel 4.14.14
* Change of the default Ruby version from 2.4 to 2.5
* KDE Frameworks 5.42.0
* KDE Plasma 5.12.0

The list of work in progress feels quite short compared to what was achieved last week, but keep in mind that some things have been in the works for quite a while; it was just coincidence that so many things turned out ready at the same time.

Cheers,
Dominique
I would also like to add that snapshot 0117 introduced the new btrfs default subvolume layout.

Any fresh installation of Tumbleweed using the default btrfs root filesystem will no longer have multiple subvolumes under /var (e.g. /var/lib/mysql, /var/cache, etc.) and will instead have a single unified /var subvolume.

This simplifies snapshots and rollbacks, prevents accidental data loss on rollback for any user data held in /var, and improves the performance of any databases or VM images held in /var, as all of /var now also has copy-on-write disabled by default. It's also particularly useful for openSUSE Kubic, which was struggling with the consequences of much of /var being read-only, as it was considered part of Kubic's read-only root filesystem.

Formerly important system data that was located in /var is now available in /usr. In particular, rpm's database has moved from /var/lib/rpmdb to /usr/lib/sysimage/rpm (with backwards-compatible symlinks in place), and /var/adm/fillup-templates should now be located in /usr/share/fillup-templates. If we've missed any important system data that is still in /var but needs to be contained in a system snapshot, please contact me urgently so we can address its relocation from /var ASAP. Packagers can expect rpmlint rules to prevent the storing of files in /var/adm/fillup-templates in one of Tumbleweed's snapshots next week.

We will not be automatically moving user data from the old structure to the new one, but I'm open to any suggestions on how to script it if anyone has any bright ideas.

-- 
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
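For readers wanting to check which layout their own installation uses, a quick sketch with standard btrfs tooling (nothing here is specific to the snapshot itself):

```shell
# List the btrfs subvolumes of the root filesystem. The old layout shows
# many separate entries below var/ (var/cache, var/lib/mysql, ...); the
# new layout has a single unified "var" subvolume.
btrfs subvolume list /

# Check whether copy-on-write is disabled on /var: lsattr prints a "C"
# flag for files/directories with the NoCoW attribute set.
lsattr -d /var
```

Both commands are read-only and need root only for the subvolume listing.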
I would also like to add that snapshot 0117 introduced the new btrfs default subvolume layout
Any fresh installation of Tumbleweed using the default btrfs root filesystem will no longer have multiple subvolumes under /var (eg /var/lib/mysql, /var/cache, etc) and instead have a single unified /var subvolume.
This simplifies snapshots and rollbacks, prevents accidental data loss on rollback for any user data held in /var, and improves the performance of any databases or VM images held in /var, as all of /var now also has copy-on-write disabled by default. It's also particularly useful for openSUSE Kubic, which was struggling with the consequences of much of /var being read-only, as it was considered part of Kubic's read-only root filesystem.
We will not be automatically moving user data from the old structure to the new one, but I'm open to any suggestions on how to script it if anyone has any bright ideas.
Richard, is there a documented manual way (wiki, github, ml) for those who will not do a reinstall, but would like to move to the new layout?

I'm a bit surprised about removing data checksumming on /var (nocow implies this). If you have a bit of spare time to point me to some material explaining the decision, I would be really interested. Thanks.

-- 
Bruno Friedmann
Ioda-Net Sàrl www.ioda-net.ch
Bareos Partner, openSUSE Member, fsfe fellowship
GPG KEY : D5C9B751C4653227
irc: tigerfoot
On Sun, Jan 21, Bruno Friedmann wrote:
We will not be automatically moving user data from the old structure to the new one, but I'm open to any suggestions on how to script it if anyone has any bright ideas.
Richard, is there a documented manual way (wiki, github, ml), for those who will not do a reinstall, but would like to move to the new layout ?
Richard asked for suggestions on how this could be done, so no, there is none.
I'm a bit surprised about removing data checksumming on /var (nocow implies this). If you have a bit of spare time to point me to some material explaining the decision, I would be really interested.
Please think about what the advantages of CoW are, what the disadvantages are, and which data is stored below /var. And look at how many subvolumes we had before with NoCoW below /var. Then it should be pretty obvious that performance is more important here than the very limited benefit of CRC32.

Thorsten

-- 
Thorsten Kukuk, Distinguished Engineer, Senior Architect SLES & CaaSP
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
GF: Felix Imendoerffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nuernberg)
On Sunday, 21 January 2018 at 11:46:00 CET, Thorsten Kukuk wrote:
On Sun, Jan 21, Bruno Friedmann wrote:
We will not be automatically moving user data from the old structure to the new one, but I'm open to any suggestions on how to script it if anyone has any bright ideas.
Richard, is there a documented manual way (wiki, github, ml), for those who will not do a reinstall, but would like to move to the new layout ?
Richard asked for suggestions on how this could be done, so no, there is none.
I'm a bit surprised about removing data checksumming on /var (nocow implies this). If you have a bit of spare time to point me to some material explaining the decision, I would be really interested.
Please think about what the advantages of CoW are, what the disadvantages are, and which data is stored below /var. And look at how many subvolumes we had before with NoCoW below /var. Then it should be pretty obvious that performance is more important here than the very limited benefit of CRC32.
Thorsten
Hi Thorsten, while I'm very new to using btrfs, I'm now a bit puzzled. If I have nocow on /var, it means that, for example, named will no longer be protected from data corruption, from my understanding. Or have I missed an important piece of information?
Hi all!

Bruno Friedmann-2 wrote
(...)
I would also like to add that snapshot 0117 introduced the new btrfs default subvolume layout (...) Richard, is there a documented manual way (wiki, github, ml), for those who will not do a reinstall, but would like to move to the new layout ? (...)
I am one of those having the old layout, with multiple subvolumes under /var *and* with no "@" volume.

My question is: if I want to reinstall to get the new btrfs layout (given I've not seen a thorough answer to the how-to query by Bruno), what would be the best way to avoid redoing a lot of work? What should I back up from the old boot partition (I guess /etc; anything more)? Is there a way to save a list of all the installed packages (those that are not part of a standard installation) and safely re-apply it once reinstalled? Any other considerations that come to mind?

Thank you in advance
Cris
On 29/01/2018 19:28, Cris70 wrote:
Hi all!
Bruno Friedmann-2 wrote
(...)
I would also like to add that snapshot 0117 introduced the new btrfs default subvolume layout (...) Richard, is there a documented manual way (wiki, github, ml), for those who will not do a reinstall, but would like to move to the new layout ? (...)
I am one of those having the old layout, with multiple subvolumes under /var *and* with no "@" volume.
My question is: if I want to reinstall to get the new btrfs layout (given I've not seen a thorough answer to the how-to guide query by Bruno), what would be the best way to avoid re-doing a lot of work? What should I back up from the old boot partition (I guess /etc, something more?)? Is there a way to save a list of all the installed packages (those that are not part of a standard installation) and safely re-apply it once reinstalled? Other considerations that come to mind?
It depends; usually /etc is enough.

You can save a list of installed packages with

# rpm -qa | sort > rpmlist.txt

Daniele.
On Monday, 2018-01-29 at 20:41 +0100, Daniele wrote:
On 29/01/2018 19:28, Cris70 wrote:
Hi all!
Bruno Friedmann-2 wrote
(...)
I would also like to add that snapshot 0117 introduced the new btrfs default subvolume layout (...) Richard, is there a documented manual way (wiki, github, ml), for those who will not do a reinstall, but would like to move to the new layout ? (...)
I am one of those having the old layout, with multiple subvolumes under /var *and* with no "@" volume.
My question is: if I want to reinstall to get the new btrfs layout (given I've not seen a thorough answer to the how-to guide query by Bruno), what would be the best way to avoid re-doing a lot of work? What should I back up from the old boot partition (I guess /etc, something more?)? Is there a way to save a list of all the installed packages (those that are not part of a standard installation) and safely re-apply it once reinstalled? Other considerations that come to mind?
It depends, usually /etc is enough. You can save a list of installed packages with # rpm -qa | sort > rpmlist.txt
It only works if you are not using any extra repos, because there is no way I know of to save the list of packages together with the repo each package came from.

-- 
Cheers,
Carlos E. R.
(from openSUSE 42.2 x86_64 "Malachite" at Telcontar)
* Carlos E. R. <robin.listas@telefonica.net> [01-29-18 17:34]:
On Monday, 2018-01-29 at 20:41 +0100, Daniele wrote:
On 29/01/2018 19:28, Cris70 wrote:
Hi all!
Bruno Friedmann-2 wrote
(...)
I would also like to add that snapshot 0117 introduced the new btrfs default subvolume layout (...) Richard, is there a documented manual way (wiki, github, ml), for those who will not do a reinstall, but would like to move to the new layout ? (...)
I am one of those having the old layout, with multiple subvolumes under /var *and* with no "@" volume.
My question is: if I want to reinstall to get the new btrfs layout (given I've not seen a thorough answer to the how-to guide query by Bruno), what would be the best way to avoid re-doing a lot of work? What should I back up from the old boot partition (I guess /etc, something more?)? Is there a way to save a list of all the installed packages (those that are not part of a standard installation) and safely re-apply it once reinstalled? Other considerations that come to mind?
It depends, usually /etc is enough. You can save a list of installed packages with # rpm -qa | sort > rpmlist.txt
It only works if you are not using any extra repos, because there is no way I know of to save the list of packages together with the repo each package came from.
but: zypper se -si > package.lst.txt will.

I cannot remember the last time I resorted to yast for package management.

-- 
(paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri
http://en.opensuse.org openSUSE Community Member facebook/ptilopteri
Registered Linux User #207535 @ http://linuxcounter.net
Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
On Monday, 2018-01-29 at 18:15 -0500, Patrick Shanahan wrote:
* Carlos E. R. <t> [01-29-18 17:34]:
On Monday, 2018-01-29 at 20:41 +0100, Daniele wrote:
On 29/01/2018 19:28, Cris70 wrote:
It depends, usually /etc is enough. You can save a list of installed packages with # rpm -qa | sort > rpmlist.txt
It only works if you are not using any extra repos, because there is no way I know for saving list of packages and from which repo is each package.
but: zypper se -si > package.lst.txt will
oh, right! :-)

Still, it cannot be fed back for reinstall, but I might be able to concoct something. Thanks.
* Carlos E. R. <robin.listas@telefonica.net> [01-29-18 22:49]:
On Monday, 2018-01-29 at 18:15 -0500, Patrick Shanahan wrote:
* Carlos E. R. <t> [01-29-18 17:34]:
On Monday, 2018-01-29 at 20:41 +0100, Daniele wrote:
On 29/01/2018 19:28, Cris70 wrote:
It depends, usually /etc is enough. You can save a list of installed packages with # rpm -qa | sort > rpmlist.txt
It only works if you are not using any extra repos, because there is no way I know for saving list of packages and from which repo is each package.
but: zypper se -si > package.lst.txt will
oh, right! :-)
Still, it can not be fed back for reinstall, but I might be able to concoct something. Thanks.
for i in list;do zypper in $i;done
On 01/30/2018 05:15 AM, Patrick Shanahan wrote:
for i in list;do zypper in $i;done
Note that zypper can install multiple packages at once, and in this case you really want to do that, as there is a lot of overhead involved in calling zypper (refreshing repos and all that).

Greetings, Stephan
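A minimal sketch of that advice, assuming a file rpmlist.txt with one package name per line (the filename follows the earlier rpm -qa suggestion in this thread): hand the whole list to a single zypper call instead of looping.

```shell
# One zypper invocation for the whole list; xargs only splits into
# several calls if the argument list would become too long.
xargs -a rpmlist.txt zypper --non-interactive install
```

Note that plain `rpm -qa` prints full name-version-release.arch strings; for reinstalling on a newer snapshot you would likely want bare names instead, e.g. `rpm -qa --qf '%{NAME}\n'`.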
On 2018-01-30 09:41, Stephan Kulow wrote:
On 01/30/2018 05:15 AM, Patrick Shanahan wrote:
for i in list;do zypper in $i;done
Note that zypper can install multiple packages at once and in this case you really want to do that as there is a lot of overhead involved in calling zypper (refreshing repos and all that)
Absolutely, I was going to say that :-)

So parse the list, and produce one list per repo, then tell zypper to install that list from that particular repo; repeat for each repo. I don't know if it is possible to do in a single zypper call?

-- 
Cheers / Saludos,
Carlos E. R.
(from 42.2 x86_64 "Malachite" at Telcontar)
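One way to sketch the per-repo idea. This assumes the column layout of "zypper se -si" output (fields separated by "|", package name in column 2, repository in the last column); verify against your own output before use.

```shell
# Build one "zypper in --repo" command per repository from the installed
# package list. Prints the commands for review instead of running them.
zypper se -si | awk -F'|' 'NR > 2 && NF >= 6 {
    gsub(/^ +| +$/, "", $2)        # trim the package name
    gsub(/^ +| +$/, "", $NF)       # trim the repository name
    pkgs[$NF] = pkgs[$NF] " " $2
} END {
    for (repo in pkgs)
        printf "zypper in --repo \"%s\"%s\n", repo, pkgs[repo]
}'
```

Once the printed commands look right, pipe the output to `sh` to execute them.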
On Mon, Jan 29, 2018 at 11:32 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:
It only works if you are not using any extra repos, because there is no way I know for saving list of packages and from which repo is each package.
I don't know either. But I know that kiwi makes a package list for all the installed RPMs in the image it has created. It is something like this:

libssh4|(none)|0.7.5|30.1|x86_64|obs://build.opensuse.org/utilities/openSUSE_Leap_42.3/dc245714e2e845fd36736fdbe56c7a5d-libssh
icewm-lite|(none)|1.3.12|5.6|x86_64|obs://build.opensuse.org/openSUSE:Leap:42.3/standard/535845b9ab9d6015ddaf3a24c68e5181-icewm
zypp-plugin-python|(none)|0.5|7.3|x86_64|obs://build.opensuse.org/openSUSE:Leap:42.3/standard/c380ddb7a254fcb8ca9c8b728765c032-zypp-plugin
libgcc_s1|(none)|7.2.1+r253435|3.2|x86_64|obs://build.opensuse.org/openSUSE:Maintenance:7522/openSUSE_Leap_42.3_Update/997f14d55d599b4c148fafaf3315b95f-gcc7.openSUSE_Leap_42.3_Update

Which is the rpm name, version, release, architecture, and repo. I wonder how that list is made...

-- 
Roger Oberholtzer
On 30.01.2018 09:32, Roger Oberholtzer wrote:
Which is the rpm name, version, release, architecture, and repo.
I wonder how that list is made...
man rpm
/queryformat

-- 
Stefan Seyfried

"For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." -- Richard Feynman
On Tue, Jan 30, 2018 at 12:27 PM, Stefan Seyfried <stefan.seyfried@googlemail.com> wrote:
On 30.01.2018 09:32, Roger Oberholtzer wrote:
Which is the rpm name, version, release, architecture, and repo.
I wonder how that list is made...
man rpm /queryformat
And tag for "repo" is ... ?
On Tue, Jan 30, 2018 at 11:56 AM, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
On Tue, Jan 30, 2018 at 12:27 PM, Stefan Seyfried <stefan.seyfried@googlemail.com> wrote:
On 30.01.2018 09:32, Roger Oberholtzer wrote:
Which is the rpm name, version, release, architecture, and repo.
I wonder how that list is made...
man rpm /queryformat
And tag for "repo" is ... ?
Try 'VENDOR'.

$ rpm -qa --qf '%{NAME} %{VENDOR}\n'

Robert

-- 
http://robert.muntea.nu/
On 30.01.2018 10:56, Andrei Borzenkov wrote:
On Tue, Jan 30, 2018 at 12:27 PM, Stefan Seyfried <stefan.seyfried@googlemail.com> wrote:
On 30.01.2018 09:32, Roger Oberholtzer wrote:
Which is the rpm name, version, release, architecture, and repo.
I wonder how that list is made...
man rpm /queryformat
And tag for "repo" is ... ?
There is none. But %{disturl} will give hints that can help you map to a local repo. Or just find your most useful header with

rpm -qa --queryformat "$(rpm --querytags|sed 's/^\(.*\)$/%{NAME} \1: %{\1}/')\n"
On Jan 30 2018, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
On Tue, Jan 30, 2018 at 12:27 PM, Stefan Seyfried <stefan.seyfried@googlemail.com> wrote:
On 30.01.2018 09:32, Roger Oberholtzer wrote:
Which is the rpm name, version, release, architecture, and repo.
I wonder how that list is made...
man rpm /queryformat
And tag for "repo" is ... ?
%{DISTURL}. See <https://github.com/openSUSE/kiwi/blob/master/modules/KIWIImageCreator.pm#L940>.

Andreas.

-- 
Andreas Schwab, SUSE Labs, schwab@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE 1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."
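Putting the pieces together, a sketch that reproduces kiwi's pipe-separated list directly with rpm's query format (same field order as the sample Roger posted; %{EPOCH} prints "(none)" when unset, which matches the sample):

```shell
# name|epoch|version|release|arch|disturl for every installed package
rpm -qa --qf '%{NAME}|%{EPOCH}|%{VERSION}|%{RELEASE}|%{ARCH}|%{DISTURL}\n' | sort
```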
On 2018-01-30 11:33, Andreas Schwab wrote:
On Jan 30 2018, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
On Tue, Jan 30, 2018 at 12:27 PM, Stefan Seyfried <stefan.seyfried@googlemail.com> wrote:
On 30.01.2018 09:32, Roger Oberholtzer wrote:
Which is the rpm name, version, release, architecture, and repo.
I wonder how that list is made...
man rpm /queryformat
And tag for "repo" is ... ?
%{DISTURL}.
Yes, but the syntax for zypper install uses the repo name or alias, not the URL.

I use this concoction myself:

rpm -q -a --queryformat "%{INSTALLTIME};%{INSTALLTIME:day}; \
%{BUILDTIME:day}; %{NAME};%{VERSION}-%-7{RELEASE};%{arch}; \
%{VENDOR};%{PACKAGER};%{DISTRIBUTION};%{DISTTAG}\n" \
| sort | cut --fields="2-" --delimiter=\; \
| tee rpmlist.csv | less -S

or

rpm -q -a --queryformat "%{INSTALLTIME}\t%{INSTALLTIME:day} \
%{BUILDTIME:day} %-30{NAME}\t%15{VERSION}-%-7{RELEASE}\t%{arch} \
%25{VENDOR}%25{PACKAGER} == %{DISTRIBUTION} %{DISTTAG}\n" \
| sort | cut --fields="2-" | tee rpmlist | less -S

The repo can sometimes be deduced.

-- 
Cheers / Saludos,
Carlos E. R.
(from 42.2 x86_64 "Malachite" at Telcontar)
* Dominique Leuenberger / DimStar <dimstar@opensuse.org> [01-20-18 07:19]:
Dear Tumbleweed users and hackers,
A steady flow of snapshots is reaching the Tumbleweed users: this week, another 4 snapshots have been released onto the users, some with a bit more bandwidth demand, some with issues that were not anticipated by the test suite (but easily resolved by the user, and fixes underway)
The week saw releases of the snapshot 0110, 0114, 0116 and 0117, with those changes: [...] * Default firewall module picked for new installs is now firewalld
when will SuSEfirewall2 be migrated to the new firewalld?

tks,
On 20/01/2018 10:51, Patrick Shanahan wrote:
* Dominique Leuenberger / DimStar <dimstar@opensuse.org> [01-20-18 07:19]:
Dear Tumbleweed users and hackers,
A steady flow of snapshots is reaching the Tumbleweed users: this week, another 4 snapshots have been released onto the users, some with a bit more bandwidth demand, some with issues that were not anticipated by the test suite (but easily resolved by the user, and fixes underway)
The week saw releases of the snapshot 0110, 0114, 0116 and 0117, with those changes: [...] * Default firewall module picked for new installs is now firewalld
when will SuSEfirewall2 be migrated to the new firewalld?
tks,
...And what happens to users who are relying on SuSEfirewall2 with custom rules and settings? Will the firewalld migration be mandatory/silent, or can it be decided by the user?

Thanks and regards,

-- 
Marco Calistri
Linux version: openSUSE Tumbleweed 20180116
Kernel: 4.14.14-1.geef6178-default - Cinnamon 3.6.7
On Saturday, 20 January 2018 at 15:06:47 EET, Marco Calistri wrote:
On 20/01/2018 10:51, Patrick Shanahan wrote:
when will SuSEfirewall2 be migrated to the new firewalld?
tks,
...And what happens to users which are relying on Susefirewall2 with custom rules and settings?
The firewalld migration is/will be mandatory/silent or could be decided by the user?
Thanks and regards,
IMHO, there is no sense in porting SuSEfirewall2 (a Perl front-end for iptables) to firewalld (another, Python, front-end for iptables). But making a GUI for YaST, something like yast-firewalld, looks fine.

-- 
Kind regards,
Mykola Krachkovsky
* Mykola Krachkovsky <w01dnick@gmail.com> [01-21-18 07:44]:
субота, 20 січня 2018 р. 15:06:47 EET Marco Calistri написано:
On 20/01/2018 10:51, Patrick Shanahan wrote:
when will SuSEfirewall2 be migrated to the new firewalld?
tks,
...And what happens to users which are relying on Susefirewall2 with custom rules and settings?
The firewalld migration is/will be mandatory/silent or could be decided by the user?
Thanks and regards,
IMHO, there is no sense to port SuSEfirewall2 (Perl front-end for iptables) to firewalld (another, Python, front-end for iptables). But making GUI for YaST, something like yast-firewalld, looks fine.
you are saying *yast-firewalld* would apply the same "custom" rules and settings?
On Sun, Jan 21, Patrick Shanahan wrote:
IMHO, there is no sense to port SuSEfirewall2 (Perl front-end for iptables) to firewalld (another, Python, front-end for iptables). But making GUI for YaST, something like yast-firewalld, looks fine.
you are saying *yast-firewalld* would apply the same "custom" rules and settings?
I doubt it. And to be honest, with something as security-relevant as this, I would never trust automatic conversion of custom rules from one piece of software to a completely different one.

Thorsten
On Sunday, 21 January 2018 at 15:05:23 EET, Patrick Shanahan wrote:
* Mykola Krachkovsky <w01dnick@gmail.com> [01-21-18 07:44]:
IMHO, there is no sense to port SuSEfirewall2 (Perl front-end for iptables) to firewalld (another, Python, front-end for iptables). But making GUI for YaST, something like yast-firewalld, looks fine.
you are saying *yast-firewalld* would apply the same "custom" rules and settings?
No, I mean it would be your choice to keep good ol' SuSEfirewall2 + yast-firewall, or to migrate to firewalld + a (hypothetical) yast-firewalld, using susefirewall2-to-firewalld or just recreating the custom rules manually.

-- 
Kind regards,
Mykola Krachkovsky
On 21/01/2018 10:42, Mykola Krachkovsky wrote:
On Saturday, 20 January 2018 at 15:06:47 EET, Marco Calistri wrote:
On 20/01/2018 10:51, Patrick Shanahan wrote:
when will SuSEfirewall2 be migrated to the new firewalld?
tks,
...And what happens to users which are relying on Susefirewall2 with custom rules and settings?
The firewalld migration is/will be mandatory/silent or could be decided by the user?
Thanks and regards,
IMHO, there is no sense to port SuSEfirewall2 (Perl front-end for iptables) to firewalld (another, Python, front-end for iptables). But making GUI for YaST, something like yast-firewalld, looks fine.
For the moment I've executed a "zypper al firewalld", since I want to keep using SuSEfirewall2.

Regards,

-- 
Marco Calistri
Hi,
* Default firewall module picked for new installs is now firewalld
when will SuSEfirewall2 be migrated to the new firewalld?
tks,
...And what happens to users which are relying on Susefirewall2 with custom rules and settings?
The firewalld migration is/will be mandatory/silent or could be decided by the user?
We're in the process of changing the default firewall from SuSEfirewall2 to firewalld for SLE-15 and openSUSE Leap 15. The YaST installer should now be able to enable/disable firewalld and open/close the ssh port for it. The YaST firewall module will try to start the firewall-config X application for configuring firewalld at the moment. There will be some time without a YaST curses GUI for firewalld. firewalld comes with the firewall-cmd command line tool for configuring it.

There will not be an automated migration path from an old SuSEfirewall2 configuration to a firewalld configuration. There is a package "susefirewall2-to-firewalld" which contains a utility for converting SuSEfirewall2 configurations to firewalld. It's only a supporting tool that tries to do the right thing, and it requires manual interaction and review of the resulting firewall rules.

SuSEfirewall2 can stay in Tumbleweed for the moment, but there are no plans to ship it as a legacy module in releases (at least not in SLE-15). SuSEfirewall2 and firewalld can live side by side, but the user needs to take care that only one of them is active at any time.

For users that extensively use SuSEfirewall2 with custom rules etc. I recommend carefully setting up new firewall rules using the firewalld command line or GUI utilities. firewalld allows passing raw iptables rules and also so-called "rich rules" (a simpler, firewalld-specific syntax). These can be used to add custom rules to firewalld that are not otherwise covered by firewalld features.

Regards

Matthias

-- 
Matthias Gerstner <matthias.gerstner@suse.de>
Dipl.-Wirtsch.-Inf. (FH), Security Engineer
https://www.suse.com/security
Telefon: +49 911 740 53 290
GPG Key ID: 0x14C405C971923553

SUSE Linux GmbH
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nuernberg)
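To make the firewall-cmd pointers concrete, a small sketch of the basic workflow (standard firewalld commands; the zone name, port and source network below are just example values):

```shell
# Inspect the current state
firewall-cmd --get-default-zone
firewall-cmd --list-all

# Open a service and a raw port permanently (written to the saved
# configuration, not yet active in the running firewall)
firewall-cmd --permanent --zone=public --add-service=ssh
firewall-cmd --permanent --zone=public --add-port=8080/tcp

# A "rich rule": allow traffic from a whole source network
firewall-cmd --permanent --zone=public \
    --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" accept'

# Activate the permanent configuration
firewall-cmd --reload
```

The split between --permanent and the running configuration is a deliberate firewalld design choice: you can stage a rule set and only apply it once it is complete.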
On 01/20/2018 01:16 PM, Dominique Leuenberger / DimStar wrote:
* librsvg 2.42.0: rewritten in Rust
This is actually a huge change: librsvg is a core library with a large number of reverse dependencies, now partially written in a programming language with limited architecture support.

Please note that Rust upstream still considers only x86_64/x86_32 to be tier 1 targets; all other architectures are merely tier 2 or less, which means the Rust compiler is not guaranteed to produce working code there. One update of the Rust compiler or librsvg may therefore cause librsvg, and potentially the many packages depending on it, to stop working.

Furthermore, this change limits the bootstrappability of openSUSE to the architectures supported by the Rust compiler. If, for example, the community decided to support RISC-V, it would not be possible, as the Rust compiler doesn't support that particular architecture at the moment, although I have heard that there are ongoing efforts.

Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20, and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.

Overall, I think the move to memory-safe languages is generally a good idea. However, I'm still a bit worried about the move to Rust, as I think the language isn't yet as stable as it should be for writing core libraries in it.

Adrian

--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 2018-01-20, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
This is actually a huge change as librsvg is a core library with a large number of reverse dependencies now partially written in a programming language with limited architecture support.
Yes, I agree this is a problem -- maybe we should keep around an old version of librsvg purely for "tier 2" architecture support?
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
This is not correct for the *stable* compiler, because they provide stability guarantees for it and they do regular "crater runs" (rebuilding every crate in the Rust ecosystem and checking for any new errors or warnings). I find it quite improbable that you hit this issue with the *stable* compiler (and if you did, it was a bug, and I hope you reported it). The *unstable* compiler (by its nature) doesn't provide any such guarantees.
Overall, I think the move to memory-safe languages is generally a good idea. However, I'm still a bit worried with the move to Rust as I think the language isn't yet as stable as it should be for writing core libraries in it.
My main concern with Rust at the moment is that their ecosystem is very similar to Node's. I was trying to write some simple tools in Rust for my own usage, and found that for simple things like email message parsing there are at least 5 libraries, all of which are incomplete or overly naive. Compare this to Python or Go, each of which has effectively one "common" library that is great and widely used because everyone contributes to the same thing.

A common argument is that the Rust ecosystem is young, but I think it's a symptom of their Node-like packaging design. There are some examples of a "one good library", such as nix or tokio, but those are the minority (from what I've seen). I hope this is something that will improve in the near future, because it makes it difficult to get started on a project (or maintain it).

--
Aleksa Sarai
Senior Software Engineer (Containers)
SUSE Linux GmbH
<https://www.cyphar.com/>
On 01/21/2018 02:18 AM, Aleksa Sarai wrote:
On 2018-01-20, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
This is actually a huge change as librsvg is a core library with a large number of reverse dependencies now partially written in a programming language with limited architecture support.
Yes, I agree this is a problem -- maybe we should keep around an old version of librsvg purely for "tier 2" architecture support?
Aren't arm64, ppc64el and s390x tier 1 architectures in SLE?
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
This is not correct for the *stable* compiler, because they provide stability guarantees for it and they do regular "crater runs" (rebuilding every crate in the Rust ecosystem and checking for any new errors or warnings). I find it quite improbable that you hit this issue with the *stable* compiler (and if you did, it was a bug, and I hope you reported it). The *unstable* compiler (by its nature) doesn't provide any such guarantees.
One example of this is the fact that you need exactly version N-1 to build version N of the Rust compiler. Using a slightly older version, or even version N itself, does not work; I tried that several times. This was extremely annoying when I was working on the sparc64 code in the Rust compiler.

Adrian
On 2018-01-21, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
This is actually a huge change as librsvg is a core library with a large number of reverse dependencies now partially written in a programming language with limited architecture support.
Yes, I agree this is a problem -- maybe we should keep around an old version of librsvg purely for "tier 2" architecture support?
Aren't arm64, ppc64el and s390x tier 1 architectures in SLE?
I meant "tier 2" from the PoV of Rust, not SLE.
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
This is not correct for the *stable* compiler, because they provide stability guarantees for it and they do regular "crater runs" (rebuilding every crate in the Rust ecosystem and checking for any new errors or warnings). I find it quite improbable that you hit this issue with the *stable* compiler (and if you did, it was a bug, and I hope you reported it). The *unstable* compiler (by its nature) doesn't provide any such guarantees.
One example of this is the fact that you need exactly version N-1 to build version N of the Rust compiler. Using a slightly older version, or even version N itself, does not work; I tried that several times.
This is an exception, not the rule, and is something that is solved by packaging (as it has been solved in openSUSE with the bootstrap packages). There are several other compilers that have this requirement (Go does for example -- though to be honest we ignore it when packaging for the most part). -- Aleksa Sarai Senior Software Engineer (Containers) SUSE Linux GmbH <https://www.cyphar.com/>
On 01/22/2018 02:09 AM, Aleksa Sarai wrote:
Aren't arm64, ppc64el and s390x tier 1 architectures in SLE?
I meant "tier 2" from the PoV of Rust, not SLE.
Yes, I know. It was more a rhetorical question to underline the problem. But it was probably unnecessary.
One example of this is the fact that you need exactly version N-1 to build version N of the Rust compiler. Using a slightly older version, or even version N itself, does not work; I tried that several times.
This is an exception, not the rule, and is something that is solved by packaging (as it has been solved in openSUSE with the bootstrap packages).
"@daym Have you tried building Rust with exactly one version before? Rust version 1.x only supports bootstrapping from version 1.(x-1), not 1.(x-2) or below, and also not 1.x or newer. And can you maybe paste an error somewhere?"
https://github.com/rust-lang/rust/issues/45593#issuecomment-340187339
You also realize this when you try building rustc from source. When you build 1.23, it downloads 1.22 and so on.

Furthermore, keeping Firefox up to date already puts requirements on the Rust version. I can't find the mailing list thread at the moment, but one of Debian's Rust maintainers who also works upstream at Mozilla has said that Rust always has to be updated as well when you want to update Firefox. This is very problematic for LTS distributions when they are shipping Firefox ESR, which is going to introduce a Rust dependency with the next ESR release, which will be version 60. So, if SLE wants to update to the next ESR release of Firefox, it will also have to include Rust in the same maintenance request.
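The exact N-1 requirement is visible in rustc's own build system; a sketch of what building a stable release looks like (version numbers are examples, and build details may differ between releases):

```sh
# Fetch the rustc sources for a given stable release
git clone https://github.com/rust-lang/rust
cd rust
git checkout 1.23.0

# x.py downloads the pinned stage0 bootstrap compiler
# (per the observation above: a 1.22 snapshot when building 1.23)
# before building the new compiler with it
./x.py build
```

Distribution packagers sidestep the download by pointing the build at a locally packaged previous rustc, which is what the openSUSE bootstrap packages mentioned in this thread do.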
There are several other compilers that have this requirement (Go does for example -- though to be honest we ignore it when packaging for the most part).
That's not true. golang-go can be built using gcc-go, which can be bootstrapped purely from C. In fact, the golang-go compiler is currently built using gcc-go in Debian. Upstream Go always ensures that golang-go can be built with gcc-go. Rust has mrustc for that, but mrustc doesn't support anything beyond x86 at the moment.

Adrian
On 2018-01-22, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
One example for this is the fact that you need exactly version N-1 to build version N of the Rust compiler. Using a slightly older version or even version N does not work. Tried that several times.
This is an exception, not the rule, and is something that is solved by packaging (as it has been solved in openSUSE with the bootstrap packages).
"@daym Have you tried building Rust with exactly one version before? Rust version 1.x only supports bootstrapping from version 1.(x-1), not 1.(x-2) or below, and also not 1.x or newer. And can you maybe paste an error somewhere?"
https://github.com/rust-lang/rust/issues/45593#issuecomment-340187339
You also realize this when you try building rustc from source. When you build 1.23, it downloads 1.22 and so on.
Furthermore, keeping Firefox up to date already puts requirements on the Rust version. I can't find the mailing list thread at the moment, but one of Debian's Rust maintainers who also works upstream at Mozilla has said that Rust always has to be updated as well when you want to update Firefox. This is very problematic for LTS distributions when they are shipping Firefox ESR which is going to introduce a Rust dependency with the next ESR release which will be version 60.
I'm not sure we're understanding each other here -- my point was that the *only* Rust project which has this policy for compiling new versions is the Rust compiler. No other Rust project requires this. That's what I meant by "exception, not the rule". So I agree with what you wrote, but it doesn't have much to do with what I was trying to say, which is that the following quote ...
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
... is not accurate for any project other than the Rust compiler (and the reason for the Rust compiler having this requirement is so that they can use new language features in the compiler itself, not because of stability issues with the language). Any other Rust project must be able to build on 1.x and 1.(x+1) with no changes (and the Rust compiler team tests this quite heavily).
So, if SLE wants to update to the next ESR release of Firefox, it will also have to include Rust in the same maintenance request.
We ship a new Go package with most new Docker releases as well (as they usually update the compiler they use every couple of releases, and in the past there were bugs where Docker would break if it was built with the wrong compiler version). This is not really a new thing.
There are several other compilers that have this requirement (Go does for example -- though to be honest we ignore it when packaging for the most part).
That's not true. golang-go can be built using gcc-go, which can be bootstrapped purely from C. In fact, the golang-go compiler is currently built using gcc-go in Debian.
Upstream Go always ensures that golang-go can be built with gcc-go. Rust has mrustc for that, but that one isn't supporting anything beyond x86 at the moment.
Hmmm, this is a policy that appears to have changed after the compiler rewrite in Go. I distinctly remember watching a talk where rsc described that they would require you to have version (n-1) of the compiler in order to build version n -- so that the compiler could take advantage of new language features -- and you would start the bootstrapping at go1.4. However, the documentation (and build tools) make no mention of this and just say that you can build everything with go1.4. That's a little bit odd.

But yes, I am aware that this is how we (and other distributions) build Go. I was just under the impression that we packaged it this way with go1.5 because "it just worked" and it removed the (n-1) bootstrapping problem. I stand corrected.

--
Aleksa Sarai
Senior Software Engineer (Containers)
SUSE Linux GmbH
<https://www.cyphar.com/>
On 01/22/2018 04:22 PM, Aleksa Sarai wrote:
I'm not sure we're understanding each other here -- my point was that the *only* Rust project which has this policy for compiling new versions is the Rust compiler. No other Rust project requires this. That's what I meant by "exception, not the rule". So I agree with what you wrote, but it doesn't have much to do with what I was trying to say, which is that the following quote ...
So, you are saying it's guaranteed that only the Rust compiler will ever use code that is deprecated in release N+1 or only available in release N-1?

I did test-build it myself. I tried building Rust 1.22 with Rust 1.20, which failed with actual compiler errors, not just a warning that I have to use the proper version. And I think it's not at all unlikely that some Rust project X will run into such a problem as well. What keeps Rust project X from using certain language features that were only recently added or removed?

The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions, and it blows my mind that so many people find that acceptable.

Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
... is not accurate for any project other than the Rust compiler (and the reason for the Rust compiler having this requirement is so that they can use new language features in the compiler itself, not because of stability issues with the language). Any other Rust project must be able to build on 1.x and 1.(x+1) with no changes (and the Rust compiler team tests this quite heavily).
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
So, if SLE wants to update to the next ESR release of Firefox, it will also have to include Rust in the same maintenance request.
We ship a new Go package with most new Docker releases as well (as they usually update the compiler they use every couple of releases, and in the past there were bugs where Docker would break if it was built with the wrong compiler version). This is not really a new thing.
I don't think you always need the latest version of Go to update Docker. I have worked with both codebases myself and never ran into this issue.
Upstream Go always ensures that golang-go can be built with gcc-go. Rust has mrustc for that, but that one isn't supporting anything beyond x86 at the moment.
Hmmm, this is a policy that appears to have changed after the compiler rewrite to Go. I distinctly remember watching a talk where rsc described that they would require you to have version (n-1) of the compiler in order to build version n -- so that the compiler could take advantage of new language features -- and you would start the bootstrapping at go1.4.
Huh? If it changed after the compiler rewrite in Go, wouldn't that mean that before that the compiler wasn't written in Go which means that you didn't have that problem in the first place?
However, the documentation (and build tools) make no mention of this and just say that you can build everything with go1.4. That's a little bit odd. But yes, I am aware that this is how we (and other distributions) build Go, I was just under the impression that we packaged it this way with go1.5 because "it just worked" and it removed the (n-1) bootstrapping problem.
I stand corrected.
Fair enough.

Adrian
On Mon, 2018-01-22 at 16:36 +0100, John Paul Adrian Glaubitz wrote:
On 01/22/2018 04:22 PM, Aleksa Sarai wrote:
I'm not sure we're understanding each other here -- my point was that the *only* Rust project which has this policy for compiling new versions is the Rust compiler. No other Rust project requires this. That's what I meant by "exception, not the rule". So I agree with what you wrote, but it doesn't have much to do with what I was trying to say, which is that the following quote ...
So, you are saying it's guaranteed that only the Rust compiler will ever use code that is deprecated in release N+1 or only available in release N-1?
I did test-build it myself. I tried building Rust 1.22 with Rust 1.20, which failed with actual compiler errors, not just a warning that I have to use the proper version. And I think it's not at all unlikely that some Rust project X will run into such a problem as well. What keeps Rust project X from using certain language features that were only recently added or removed?
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions, and it blows my mind that so many people find that acceptable.
Sorry, but I think there is a misunderstanding here (or maybe I am the one not getting your point).

Rust really is stable as a language. But the stability guarantee provided by Rust needs to be understood like this:

* I create a source S that compiles with Rust R and behaves like B.

Rust guarantees that S, when compiled with R+1, R+2 .. R+n, will compile, and the result will behave like B.

There is a process of deprecation, and there is a process of optional enablement of new features, and this can require the removal of some lines in future versions of the compiler, once the change is enabled by default.

What you are referring to is a different issue:

* I create a source S that compiles with Rust R and behaves like B, but I require that my new version S+1 will also compile with the old version R.

That is simply ... uhm ... like, no. No one can guarantee this. A new version of your code, if it uses specific features enabled in one version of Rust, will not compile with an old version of Rust that does not provide this feature at all.
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
I really do not get your point. Cargo is like make; it is not (and will never be) a full-featured package manager. It is true that there is an overlap: cargo can download from crates.io, but it is not the way to install anything into the system. That is the responsibility of the OS. It is only a helper for the developer during the development process.

Cargo is living code that is tied to the Rust version. There is a relation between both, so generally an update of one requires an update of the other. What is wrong with that?
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
That is a decision of the developer. I can agree that code that depends on Nightly is not a good candidate for packaging in the OS (like Rocket). But this does not make anything 'wrong' on the Rust side. It is the developer who decides that the unstable feature provided by nightly is OK for the project. Eventually this feature will land in stable (or not), and in any case it is the developer's responsibility to change the code to adjust to the new shape of this feature. But once it is there, this program (in this exact version) will compile with any future Rust.

You have the same problem with C++, but at a different speed. I can use GNU options, or C++17, in my new version of the project, which of course will not compile with gcc 4.6.
On 01/22/2018 05:06 PM, Alberto Planas Dominguez wrote:
Sorry, but I think there is a misunderstanding here (or maybe I am the one not getting your point).
Rust really is stable as a language. But the stability guarantee provided by Rust needs to be understood like this:
* I create a source S that compiles with Rust R and behaves like B.
Rust guarantees that S, when compiled with R+1, R+2 .. R+n, will compile, and the result will behave like B.
This is not my experience. I had sources that built with R-1 but not with R or R+1.
There is a process of deprecation, and there is a process of optional enablement of new features, and this can require the removal of some lines in future versions of the compiler, once the change is enabled by default.
Deprecation is fine when you do it gradually. It's too fast when you do it between minor releases.
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
I really do not get your point. Cargo is like make; it is not (and will never be) a full-featured package manager. It is true that there is an overlap: cargo can download from crates.io, but it is not the way to install anything into the system. That is the responsibility of the OS. It is only a helper for the developer during the development process.
Firefox makes extensive use of cargo's feature to download packages from crates.io. I don't know why you think it's not a package manager.
Cargo is living code that is tied to the Rust version. There is a relation between both, so generally an update of one requires an update of the other. What is wrong with that?
I didn't say anything about that, did I?
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
That is a decision of the developer. I can agree that code that depends on Nightly is not a good candidate for packaging in the OS (like Rocket). But this does not make anything 'wrong' on the Rust side. It is the developer who decides that the unstable feature provided by nightly is OK for the project. Eventually this feature will land in stable (or not), and in any case it is the developer's responsibility to change the code to adjust to the new shape of this feature. But once it is there, this program (in this exact version) will compile with any future Rust.
You can always say it's the developer's fault. However, that doesn't really help you when you're a distribution maintainer: you're still in the situation that you cannot upgrade package X without updating the compiler or breaking package Y.
You have the same problem with C++, but at a different speed. I can use GNU options, or C++17, in my new version of the project, which of course will not compile with gcc 4.6.
Different speed is a huge understatement, a seriously huge understatement. C++ and Fortran are much more careful when making such changes, in order not to break existing code bases, simply because you cannot expect all your users to constantly update their codebase just because they are updating their toolchain.

And you didn't even address the problem that Rust upstream effectively doesn't care about anything besides x86/x86_64.

Adrian
On Mon, 2018-01-22 at 18:08 +0100, John Paul Adrian Glaubitz wrote:
On 01/22/2018 05:06 PM, Alberto Planas Dominguez wrote:
* I create a source S that compiles with Rust R and behaves like B.
Rust guarantees that S, when compiled with R+1, R+2 .. R+n, will compile, and the result will behave like B.
This is not my experience. I had sources that built with R-1 but not with R or R+1.
In that case a bug report needs to be submitted. After 1.0, this is not supposed to happen. Is this happening in a public project? (Sorry if the name of this project was announced before, I joined late.)

As a matter of fact, there have been some changes that were not backward compatible, but those fixed bugs that affected the soundness of the compiler, and the affected usage was not very common. In any case, I am not aware of any project that was hit by those changes.
Firefox makes extensive use of cargo's feature to download packages from crate.io. I don't know why you think it's not a package manager.
For example, you cannot uninstall packages with cargo. Also, it was never recommended for installing anything into the system. It is more similar to pip, gem, maven, or go get: they can be used to fetch code that is needed during the development phase. But it is also like `make`, because it orchestrates the compilation.
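For comparison, the manifest cargo consumes looks like this; a minimal sketch with a hypothetical crate name (serde is a real crates.io dependency, used here only as an example):

```toml
# Cargo.toml -- a per-project build manifest, closer to a Makefile
# than to a system package database
[package]
name = "example-tool"     # hypothetical project name
version = "0.1.0"

[dependencies]
# Resolved and downloaded from crates.io at build time into the
# developer's workspace; nothing is installed into the running system
serde = "1.0"
```

`cargo build` compiles this into the project's target/ directory, which is why distributions still package the resulting binaries themselves rather than shipping crates directly.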
Cargo is a living code, that is attached to the Rust version. There is a relation between both, so generaly an update of one require the update of the other. What is wrong with that?
I didn't say anything about that, did I?
Right. No, you didn't. My intention was to extrapolate from this example, together with the observation that Rust N needs Rust N-1 to compile and does not work with N-2. I didn't make my point clear, so let's forget about cargo as an example.
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
That is a decision of the developer. I can agree that code that depends on Nightly is not a good candidate for packaging in the OS (like Rocket). But this does not make anything 'wrong' on the Rust side. It is the developer who decides that the unstable feature provided by nightly is OK for the project. Eventually this feature will land in stable (or not), and in any case it is the developer's responsibility to change the code to adjust to the new shape of this feature. But once it is there, this program (in this exact version) will compile with any future Rust.
You can always say it's the developer's fault. However, that doesn't really help you in this case when you're a distribution maintainer, you're still in the situation that you cannot upgrade package X without updating the compiler or breaking package Y.
I understand, but your argument is not fair. If a developer uses unstable features for version X, this code will compile with only a very narrow window of compilers, and there is no guarantee that a feature that lives in nightly will ever reach beta or stable. This makes the software, by definition, unsuitable for packaging. But if a package uses a stable version of Rust, the package will still compile when you update the Rust version in OBS.
You have the same problem with C++, but at a different speed. I can use GNU options, or C++17, in my new version of the project, which of course will not compile with gcc 4.6.
Different speed is a huge understatement, a seriously huge understatement.
Well: C++03, C++11, C++14, C++17. You are right, but it is not fair to compare a language that is 35 years old (from 1983) with one that is from 2015.
C++ or Fortran are much more careful when making such changes in order to not break existing code bases.
And so is Rust. Changes that can affect the backward-compatibility guarantee are evaluated against crates.io. This produces data about the impact of the change.
Simply because you cannot expect all your users to constantly update their codebase just because they are updating their toolchain.
I can see that it is a problem that, if I want the latest version of exa or ripgrep and this version uses features included in 1.24, I will need to update the compiler. But this update is not expected to break any codebase that uses stable Rust. This is a side effect of a young ecosystem, and it will fade out eventually. Also, the epochs RFC will help here. But this doesn't mean that Rust is constantly breaking your code.
And you didn't even address the problem that Rust upstream effectively doesn't care about anything besides x86/x86_64.
Sure. Clearly FF is not running on ARM on Android. I think that you have another misunderstanding of the Tier 2 concept here [1].

[1] https://forge.rust-lang.org/platform-support.html
On 01/22/2018 07:02 PM, Alberto Planas Dominguez wrote:
As a matter of fact, there have been some changes that were not backward compatible, but those fixed bugs that affected the soundness of the compiler, and the affected usage was not very common. In any case, I am not aware of any project that was hit by those changes.
No, I have been there, done that. It's currently too frustrating.
Different speed is a huge understatement, a seriously huge understatement.
Well: C++03, C++11, C++14, C++17. You are right, but it is not fair to compare a language that is 35 years old (from 1983) with one that is from 2015.
And that's why I think that fundamental packages like librsvg shouldn't be ported to a language from 2015 that is still subject to rapid change. We are not talking about some random leaf package here. I don't know how to run this check in openSUSE, but look at the packages whose build dependencies would become uninstallable on most architectures in Debian if we were to upgrade librsvg to the Rust version:
You don't think this is a problem?
And you didn't even address the problem that Rust upstream effectively doesn't care about anything besides x86/x86_64.
Sure. Clearly FF is not running on ARM on Android. I think that you have another missunderstanding on Tier 2 concept here [1]
[1] https://forge.rust-lang.org/platform-support.html

Thanks for trying to paint me as an uneducated person. But just in case you didn't read the link you posted:
Tier 2 platforms can be thought of as “guaranteed to build”. **Automated tests are not run so it’s not guaranteed to produce a working build**, but platforms often work to quite a good degree and patches are always welcome! Specifically, these platforms are required to have each of the following:
Official binary releases are provided for the platform. **Automated building is set up, but may not be running tests.** Landing changes to the rust-lang/rust repository’s master branch is gated on platforms building. For some platforms only the standard library is compiled, but for others rustc and cargo are too.
I don't think we would accept gcc not passing its testsuite on any of the platforms openSUSE supports. Apparently, it's acceptable for Rust. And before you want to accuse me of more incompetence: in Debian, I'm a porter for most of the unofficial architectures. I think I can say I have some experience in this field.

PS: I didn't know that SLE only supports x86_64 and ARM :).

Adrian
On Tue, 2018-01-23 at 10:37 +0100, John Paul Adrian Glaubitz wrote:
On 01/22/2018 07:02 PM, Alberto Planas Dominguez wrote:
As a matter of fact there are some changes that were not backward compatible, but those were bugs that affected the soundness of the compiler, and the usage was not very common. In any case, I am not aware of any project that was hit by those changes.
No, I have been there, done that. It's currently too frustrating.
In that case this is a different issue. Can I assume from the context that it is librsvg that breaks from one stable version to another? We are diverging from the main topic. Your assertion is that Rust is willing to break code that targets stable Rust in almost every release, and I think that this is a misunderstanding about the expectations that are fair to hold against the compiler. That is all.
I don't know how to run this check in openSUSE, but look at the packages whose build dependencies would become uninstallable on most architectures in Debian if we were to upgrade librsvg to the Rust version:
`zypper info` can help here.
You don't think this is a problem?
And you didn't even address the problem that Rust upstream effectively doesn't care about anything besides x86/x86_64.
Sure. Clearly FF is not running on ARM on Android. I think that you have another missunderstanding on Tier 2 concept here [1]
Thanks for trying to paint me as an uneducated person. But just in case you didn't read the link you posted:
Sorry if I gave that impression; I didn't choose the right words. What I am trying to say is that tier 2 is not exactly "doesn't care about anything besides x86/x86_64". It simply reflects that some automatic tests are not run for each commit to the compiler, but the platform is expected to work. And in the case of Android and ARM it is clearly working.
And before you want to accuse me of more incompetence: In Debian, I'm a porter for most of the unofficial architectures.
Did I? Sorry if I made that impression!
On 01/23/2018 11:26 AM, Alberto Planas Dominguez wrote:
In that case this is a different issue. Can I assume from the context that it is librsvg that breaks from one stable version to another?
We are diverging from the main topic. Your assertion is that Rust is willing to break code that targets stable Rust in almost every release, and I think that this is a misunderstanding about the expectations that are fair to hold against the compiler. That is all.
No, we are not. Two main points are still valid:

1) Rust is not as stable as it should be for core packages.
2) Rust doesn't consider architectures which SUSE considers supported as supported.

This is definitely a problem.
I don't know how to run this check in openSUSE, but look at the packages whose build dependencies would become uninstallable on most architectures in Debian if we were to upgrade librsvg to the Rust version:
`zypper info` can help here.
I don't think you can use zypper to determine the build dependencies for OBS, can you? The list I obtained for Debian came through the build system, not anything local on my disk.
https://wiki.debian.org/ftpmaster_Removals#Reverse_Dependencies
Sorry if I gave that impression; I didn't choose the right words. What I am trying to say is that tier 2 is not exactly "doesn't care about anything besides x86/x86_64". It simply reflects that some automatic tests are not run for each commit to the compiler, but the platform is expected to work. And in the case of Android and ARM it is clearly working.
And you are missing the point. Working now doesn't mean it's not going to break with the next version. Again, a compiler which is not verified to pass its testsuite should not be used for production code, in my experience. I have seen too many things break on the buildds in Debian when the testsuites for gcc were disabled. Not having Rust upstream build the compiler natively and run the testsuite on architectures that SUSE considers to be supported is a problem in my experience.

Adrian
On Tue, 2018-01-23 at 13:12 +0100, John Paul Adrian Glaubitz wrote:
On 01/23/2018 11:26 AM, Alberto Planas Dominguez wrote:
We are diverging from the main topic. Your assertion is that Rust is willing to break code that targets stable Rust in almost every release, and I think that this is a misunderstanding about the expectations that are fair to hold against the compiler. That is all.
No, we are not.
Two main points are still valid:
1) Rust is not as stable as it should be for core packages.
But this is an affirmation that needs some data. As explained before, Rust stable guarantees that a package that compiles with one version will compile with the next. I am still waiting for an example where this is not true, and I am not able to see the build failure of librsvg in OBS.
2) Rust doesn't consider architectures which SUSE considers supported as supported. This is definitely a problem.
I can see a problem with s390x: it is tier 2, and I am sure that there are bugs there. But it is different for ARM: there is an effort to move it to tier 1.
https://wiki.debian.org/ftpmaster_Removals#Reverse_Dependencies

Sorry if I gave that impression; I didn't choose the right words. What I am trying to say is that tier 2 is not exactly "doesn't care about anything besides x86/x86_64". It simply reflects that some automatic tests are not run for each commit to the compiler, but the platform is expected to work. And in the case of Android and ARM it is clearly working.
And you are missing the point. Working now doesn't mean it's not going to break for the next version.
I am not. My point is that the affirmation "doesn't care about anything besides x86/x86_64" is not true. There are tests running for tier 2 architectures, but the automatic tests are not executed all the time. This indicates a lack of resources and documentation, but also an effort to change the situation. But we can agree that the expectations for a compiler in tier 2 are not the same as for one in tier 1.
Again, a compiler which is not verified to pass its testsuite should not be used for production code in my experience. I have seen too many things break on the buildds in Debian when the testsuites for gcc were disabled.
We agree on this.
Not having Rust upstream build the compiler natively and run the testsuite on architectures that SUSE considers to be supported is a problem in my experience.
And on this too.
On Tue, 2018-01-23 at 13:49 +0100, Alberto Planas Dominguez wrote:
On Tue, 2018-01-23 at 13:12 +0100, John Paul Adrian Glaubitz wrote:
On 01/23/2018 11:26 AM, Alberto Planas Dominguez wrote:
We are diverging from the main topic. Your assertion is that Rust is willing to break code that targets stable Rust in almost every release, and I think that this is a misunderstanding about the expectations that are fair to hold against the compiler. That is all.
No, we are not.
Two main points are still valid:
1) Rust is not as stable as it should be for core packages.
But this is an affirmation that needs some data. As explained before, Rust stable guarantees that a package that compiles with one version will compile with the next. I am still waiting for an example where this is not true, and I am not able to see the build failure of librsvg in OBS.
I did give one example of this: https://lists.opensuse.org/opensuse-factory/2018-01/msg00488.html

Granted, it's the only example I know of from openSUSE Tumbleweed.

Cheers, Dominique
On Tue, 2018-01-23 at 13:55 +0100, Dominique Leuenberger / DimStar wrote:
On Tue, 2018-01-23 at 13:49 +0100, Alberto Planas Dominguez wrote:
On Tue, 2018-01-23 at 13:12 +0100, John Paul Adrian Glaubitz wrote:
On 01/23/2018 11:26 AM, Alberto Planas Dominguez wrote:
We are diverging from the main topic. Your assertion is that Rust is willing to break code that targets stable Rust in almost every release, and I think that this is a misunderstanding about the expectations that are fair to hold against the compiler. That is all.
No, we are not.
Two main points are still valid:
1) Rust is not as stable as it should be for core packages.
But this is an affirmation that needs some data. As explained before, Rust stable guarantees that a package that compiles with one version will compile with the next. I am still waiting for an example where this is not true, and I am not able to see the build failure of librsvg in OBS.
I did give one example of this: https://lists.opensuse.org/opensuse-factory/2018-01/msg00488.html

Granted, it's the only example I know of from openSUSE Tumbleweed.
Oh, thanks! But this one is easy to explain: https://blog.rust-lang.org/2018/01/04/Rust-1.23.html

This change is about emitting a warning. Probably FF is compiling with `deny(warnings)` somewhere. And actually this is a good example of the stability guarantees that Rust is expected to provide. `std::ascii::AsciiExt` is a trait that is not used anymore; instead of dropping it from the standard library, they provide an empty trait. Because of this, a warning is generated: the program still compiles, and behaves the same as when it was compiled with an older version. The issue here is that FF treats warnings as errors.
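The interaction described above can be sketched in a few lines of Rust. This is an illustrative example, not Firefox's actual code: `old_api` is a hypothetical stand-in for a deprecated standard-library item like `std::ascii::AsciiExt`, while the `deprecated` and `allow` attributes are real Rust.

```rust
// Sketch: how a deprecation interacts with warnings-as-errors.
// `old_api` is a hypothetical stand-in for a deprecated std item.

#[deprecated(note = "use `new_api` instead")]
fn old_api(c: char) -> char {
    c.to_ascii_uppercase()
}

fn new_api(c: char) -> char {
    c.to_ascii_uppercase()
}

fn main() {
    // By default, calling `old_api` only produces a compile-time
    // *warning*: the program still builds and behaves as before.
    // Under `#![deny(warnings)]` (as Firefox apparently builds),
    // that same warning becomes a hard error and the build breaks.
    #[allow(deprecated)]
    let a = old_api('x');
    assert_eq!(a, new_api('x'));
    println!("{}", a);
}
```

Compiling this as-is succeeds (the `allow` suppresses the deprecation warning); removing the `allow` and adding `#![deny(warnings)]` at the top of the crate turns the call into a compile error, which matches what happened to Firefox on Rust 1.23.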
On 01/23/2018 01:49 PM, Alberto Planas Dominguez wrote:
2) Rust doesn't consider architectures which SUSE considers supported as supported. This is definitely a problem.
I can see a problem with s390x: it is tier 2, and I am sure that there are bugs there. But it is different for ARM: there is an effort to move it to tier 1.
Great, then there is only MIPS*, PPC*, and S390 left. Not even talking about things like RISC-V or SPARC.
And you are missing the point. Working now doesn't mean it's not going to break for the next version.
I am not. My point is that the affirmation "doesn't care about anything besides x86/x86_64" is not true. There are tests running for tier 2 architectures, but the automatic tests are not executed all the time. This indicates a lack of resources and documentation, but also an effort to change the situation.
But we can agree that the expectations for a compiler in tier 2 are not the same as for one in tier 1.
I just tried building Rust 1.23 on Debian unstable (MIPS) today, still fails. Both for native and cross-builds. Tried a cross-build for PPC32 yesterday, also failed. Both architectures are Tier 2 and I didn't even manage to get it to build, not even talking about running the testsuite.
Again, a compiler which is not verified to pass its testsuite should not be used for production code in my experience. I have seen too many things break on the buildds in Debian when the testsuites for gcc were disabled.
We agree on this.
Not having Rust upstream build the compiler natively and run the testsuite on architectures that SUSE considers to be supported is a problem in my experience.
In this too.
Then why do you think it's ok to have librsvg rustified at this point?

Adrian
On Tue, 23 Jan 2018 18:16:04 +0100, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
On 01/23/2018 01:49 PM, Alberto Planas Dominguez wrote:
2) Rust doesn't consider architectures which SUSE considers supported as supported. This is definitely a problem.
I can see a problem with s390x: it is tier 2, and I am sure that there are bugs there. But it is different for ARM: there is an effort to move it to tier 1.
Great, then there is only MIPS*, PPC*, and S390 left. Not even talking about things like RISC-V or SPARC.
and PA-RISC/PA-RISC2 (HP-UX). Things like this have caused me nightmares when porting open-source stuff to HP-UX.

-- H.Merijn Brand http://tux.nl Perl Monger http://amsterdam.pm.org/ using perl5.00307 .. 5.27 porting perl5 on HP-UX, AIX, and openSUSE http://mirrors.develooper.com/hpux/ http://www.test-smoke.org/ http://qa.perl.org http://www.goldmark.org/jeff/stupid-disclaimers/
Le 23/01/2018 à 18:16, John Paul Adrian Glaubitz a écrit :
I just tried building Rust 1.23 on Debian unstable (MIPS) today, still fails. Both for native and cross-builds. Tried a cross-build for PPC32 yesterday, also failed. Both architectures are Tier 2 and I didn't even manage to get it to build, not even talking about running the testsuite.
The fact that the Rust team managed to build rustc 1.23 packages for these architectures (along with many of the others that you mention) while you didn't manage to suggests that rustc compatibility isn't the problem here: https://www.rust-lang.org/fr-FR/other-installers.html

I did not find a build log at https://buildd.debian.org/status/package.php?p=rustc&suite=sid , so I assume you did this in a private session. Can you send me a build log so that I can have a look, just in case something interesting stands out to my eyes? Hopefully it's just a minor system configuration issue...

Cheers, Hadrien
On 01/23/2018 10:37 AM, John Paul Adrian Glaubitz wrote:
Tier 2 platforms can be thought of as “guaranteed to build”. **Automated tests are not run so it’s not guaranteed to produce a working build**, but platforms often work to quite a good degree and patches are always welcome! Specifically, these platforms are required to have each of the following:
Official binary releases are provided for the platform. **Automated building is set up, but may not be running tests.** Landing changes to the rust-lang/rust repository’s master branch is gated on platforms building. For some platforms only the standard library is compiled, but for others rustc and cargo are too.
Oh, and to add to that: In Debian, Firefox is now breaking more often on platforms which are not x86/x86_64 as you can see here with armhf currently broken:
https://buildd.debian.org/status/package.php?p=firefox&suite=sid
But I really don't know what people expect from a compiler whose testsuite is only ever run on x86/x86_64 by upstream - unlike virtually any other compiler project on the planet. Heck, even Free Pascal runs tests on all platforms - natively. I know that because I am providing those guys with hardware for testing.

Adrian
Le 23/01/2018 à 11:33, John Paul Adrian Glaubitz a écrit :
On 01/23/2018 10:37 AM, John Paul Adrian Glaubitz wrote:
Tier 2 platforms can be thought of as “guaranteed to build”. **Automated tests are not run so it’s not guaranteed to produce a working build**, but platforms often work to quite a good degree and patches are always welcome! Specifically, these platforms are required to have each of the following:
Official binary releases are provided for the platform. **Automated building is set up, but may not be running tests.** Landing changes to the rust-lang/rust repository’s master branch is gated on platforms building. For some platforms only the standard library is compiled, but for others rustc and cargo are too.
Oh, and to add to that: In Debian, Firefox is now breaking more often on platforms which are not x86/x86_64 as you can see here with armhf currently broken:
https://buildd.debian.org/status/package.php?p=firefox&suite=sid
But I really don't know what people expect from a compiler whose testsuite is only ever run on x86/x86_64 by upstream - unlike virtually any other compiler project on the planet. Heck, even Free Pascal runs tests on all platforms - natively. I know that because I am providing those guys hardware for testing.
Adrian
Then maybe your time would be better spent working together with the Rust team to ensure that they can and do run their tests on armhf as well?

Hadrien
Le 23/01/2018 à 11:42, Hadrien Grasland a écrit :
Le 23/01/2018 à 11:33, John Paul Adrian Glaubitz a écrit :
On 01/23/2018 10:37 AM, John Paul Adrian Glaubitz wrote:
Tier 2 platforms can be thought of as “guaranteed to build”. **Automated tests are not run so it’s not guaranteed to produce a working build**, but platforms often work to quite a good degree and patches are always welcome! Specifically, these platforms are required to have each of the following:
Official binary releases are provided for the platform. **Automated building is set up, but may not be running tests.** Landing changes to the rust-lang/rust repository’s master branch is gated on platforms building. For some platforms only the standard library is compiled, but for others rustc and cargo are too.
Oh, and to add to that: In Debian, Firefox is now breaking more often on platforms which are not x86/x86_64 as you can see here with armhf currently broken:
https://buildd.debian.org/status/package.php?p=firefox&suite=sid
But I really don't know what people expect from a compiler whose testsuite is only ever run on x86/x86_64 by upstream - unlike virtually any other compiler project on the planet. Heck, even Free Pascal runs tests on all platforms - natively. I know that because I am providing those guys hardware for testing.
Adrian
Then maybe your time would be better spent working together with the Rust team to ensure that they can and do run their tests on armhf as well?
Hadrien
PS: By the way, if you go and have a look at the build log for the firefox package on armhf that you mention, you will find that the problem is that the build machine ran out of RAM ("terminate called after throwing an instance of 'std::bad_alloc'"). This has nothing to do with Rust, it is a common problem with large C++ projects too. You may want to beef up the corresponding build node or to reduce the amount of concurrent build processes.
On 01/23/2018 11:56 AM, Hadrien Grasland wrote:
PS: By the way, if you go and have a look at the build log for the firefox package on armhf that you mention, you will find that the problem is that the build machine ran out of RAM ("terminate called after throwing an instance of 'std::bad_alloc'"). This has nothing to do with Rust, it is a common problem with large C++ projects too. You may want to beef up the corresponding build node or to reduce the amount of concurrent build processes.
This was just an example. I didn't check why the build failed *this* time. Again, I am a build engineer at Debian for many architectures, and tons of packages have gone through my hands. You can just take my word here, ok?

Thanks, Adrian
On 01/23/2018 11:42 AM, Hadrien Grasland wrote:
Then maybe your time would be better spent working together with the Rust team to ensure that they can and do run their tests on armhf as well?
Am I the person who is trying to push Rust everywhere?

Rust upstream wants Rust to succeed as a systems programming language. It's therefore their responsibility to make sure the compiler works properly on the supported targets. If they cannot achieve that, they should stop trying to claim how superior Rust is over other languages and stop trying to push it as a systems programming language.

I am already doing lots of upstream work in various projects. But this isn't my main job, so you can't expect me to do the homework of the Rust developers.

Adrian
Le 23/01/2018 à 13:15, John Paul Adrian Glaubitz a écrit :
On 01/23/2018 11:42 AM, Hadrien Grasland wrote:
Then maybe your time would be better spent working together with the Rust team to ensure that they can and do run their tests on armhf as well?
Am I the person who is trying to push Rust everywhere?
Rust upstream wants Rust to succeed as a systems programming language. It's therefore their responsibility to make sure the compiler works properly on the supported targets.
If they cannot achieve that, they should stop trying to claim how superior Rust is over other languages and stop trying to push it as a systems programming language.
I am already doing lots of upstream work in various projects. But this isn't my main job, so you can't expect me to do the homework of the Rust developers.
Adrian
I appreciate your point of view. Indeed, improving upstream testing should not be your main job as a distribution maintainer. That being said, there is a reason why Debian lends testing hardware to projects like the FreePascal compiler: while it is probably not the right thing to do, it is both the nice and the pragmatic thing to do.

Given Rust's immense productivity benefits in the system programming space with respect to C and C++, people are bound to use it. Much like they use bleeding-edge C++, Node.js, custom package managers like Maven and PyPI, and all other kinds of programming environments which make the life of distribution maintainers hard. It is inescapable, in the sense that you cannot simply wish the language and its implementation away from your radar, rant about every Rust-based package that you see pass by, and be done with it. What happens to librsvg today is happening to gstreamer and other GNOME projects tomorrow, and at some point you will need to face the issue anyhow.

Given that, the next best thing to do after obliterating the perceived nuisance is to work on making the product better so that it fits your needs. This is what distribution maintainers have always done: if the project is not quick enough, write the patch yourself, and send it to upstream with a friendly note. Backport fixes from newer versions into the version that you distribute. Help the developer test on new architectures that the distribution supports. Ultimately, these kinds of actions benefit the distribution too.

Which is why I am saying that working with the Rust community might be more productive, and ultimately beneficial to you, than merely complaining about its current state.

Hadrien
On Tue, 2018-01-23 at 13:40 +0100, Hadrien Grasland wrote:
Which is why I am saying that working with the Rust community might be more productive, and ultimately beneficial to you, than merely complaining about its current state.
I really think that this is the best conclusion that we can have for this thread.
On 01/23/2018 01:56 PM, Alberto Planas Dominguez wrote:
Which is why I am saying that working with the Rust community might be more productive, and ultimately beneficial to you, than merely complaining about its current state.
I really think that this is the best conclusion that we can have for this thread.
I have:
https://github.com/rust-lang/rust/commits?author=glaubitz https://github.com/rust-lang/rust/search?q=glaubitz&type=Issues&utf8=%E2%9C%93 https://github.com/mozilla/gecko-dev/commits?author=glaubitz
I am not sure why you think I didn't do that already.

Adrian
On 01/23/2018 01:40 PM, Hadrien Grasland wrote:
Given Rust's immense productivity benefits in the system programming space with respect to C and C++, people are bound to use it. Much like they use bleeding-edge C++, Node.js, custom package managers like Maven and PyPI, and all other kinds of programming environments which make the life of distribution maintainers hard. It is inescapable, in the sense that you cannot simply wish the language and its implementation away from your radar, rant about every Rust-based package that you see pass by, and be done with it. What happens to librsvg today is happening to gstreamer and other GNOME projects tomorrow, and at some point you will need to face the issue anyhow.
Well, I have seen in the past what happened when GNOME upstream was ignoring downstream complaints. There was a bug in gvfs-metadata that GNOME upstream insisted didn't exist, and they just kept closing the bug report in the RedHat bug tracker. At some point, a "strategic customer" ran into the problem as well. And, all of a sudden, GNOME upstream admitted their mistake and fixed the bug. So, I wouldn't see GNOME as a prime example of how software in the open source community should work. Plus, if you look at the number of forks that GNOME's decisions have triggered in the past (MATE, Cinnamon, Deepin etc.), it clearly shows that this development model doesn't fly in the long term.

Also, I don't think that something like this will fly in the long term with Linux distributions:

glaubitz@suse-laptop:~/suse/openSUSE:Factory/librsvg/librsvg-2.42.0/rust/vendor> find . -name "*.rs" | wc -l
988

This undermines the work of security teams in Linux distributions. At least in Debian, including third-party libraries instead of using the versions available in the distribution tree is not allowed. I'm very much surprised that this is apparently acceptable in openSUSE. If every Rust package is going to be like that in the future, I'll send out my condolences in advance to anyone working on distribution security teams.
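The auditing burden behind the numbers above can be sketched with a few shell commands. This is a self-contained toy: it builds a fake three-crate vendor tree under /tmp instead of touching a real librsvg checkout (whose vendored crates live under rust/vendor/), and the crate names are illustrative.

```shell
# Build a mock vendored-source tree, one subdirectory per crate,
# mimicking the layout that `cargo vendor` produces.
for crate in libc glib cairo; do
  mkdir -p /tmp/vendor-demo/vendor/"$crate"
  printf '[package]\nname = "%s"\nversion = "0.1.0"\n' "$crate" \
    > /tmp/vendor-demo/vendor/"$crate"/Cargo.toml
done

# A security team tracking CVEs would need the name and version of
# every bundled crate; each vendored crate carries its own Cargo.toml.
grep -h '^name\|^version' /tmp/vendor-demo/vendor/*/Cargo.toml
```

Pointed at a real rustified package's vendor directory, the same `grep` lists every bundled dependency that would otherwise be a separately tracked distribution package.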
Given that, the next best thing to do after obliterating the perceived nuisance is to work on making the product better so that it fits your needs. This is what distribution maintainers have always done: if the project is not quick enough, write the patch yourself, and send it to upstream with a friendly note. Backport fixes from newer versions into the version that you distribute. Help the developer test on new architectures that the distribution supports. Ultimately, these kind of actions benefit the distribution too.
I think you are ignoring the fact that open source projects can also be forked, and if GNOME/Freedesktop upstream is forcing Rust onto its users before it's ready for prime time, you will see more forks happen. Debian (and therefore Ubuntu) is not going to adopt the rustified version of librsvg anytime soon. So, don't count your chickens before they are hatched :-).

PS: I just tried building Rust 1.23 on MIPS and PowerPC32 today; it still fails.

Adrian
Le 23/01/2018 à 18:11, John Paul Adrian Glaubitz a écrit :
Plus, if you look at the number of forks that GNOME's decisions have triggered in the past (MATE, Cinnamon, Deepin etc), it clearly shows that this development model doesn't fly in the long-term.
I am not convinced by this argument. The open-source community loves to fork projects as soon as big changes happen (which, like any big change, are bound to cause some disagreement), and sometimes just does so out of the blue for the fun of it. In that sense, a fork does not say anything about the well-being of the project which is being forked, and does not usually have a lot of long-term consequences for said project. To convince yourself of that, consider historical examples:

* When the KDE3 -> KDE4 transition happened, the Trinity project forked KDE3 (which pretty much parallels MATE with GNOME 2). And now that KDE is putting X11 in maintenance mode, I am ready to bet that some X11 enthusiast is going to fork KDE again.
* Wayland has seen Canonical go Mir, and now NVidia are trying to cause a community split there again.
* Before systemd became the norm, there were 3 popular init clones around in the Linux ecosystem.
* The BSD community dedicates most of its manpower to writing BSD-licensed equivalents of existing GPL-licensed code out there.

Forks and competing clones mean that there are people in a community who disagree about something, which is generally healthy even though it has the unfortunate effect of splitting manpower. It means that we avoid an Apple-like stagnating technical monoculture, for example. Sometimes forks have a good idea and prosper (like Cinnamon), or even replace their progenitor (like systemd); other times they fail to evolve beyond their original concept, stagnate, and end up dying from technical debt poisoning (like Unity). Nobody can tell which one it will be in advance, I would say.

On the other hand, I find your other point more interesting:
Also, I don't think that something like this will fly on the longterm with Linux distributions:
glaubitz@suse-laptop:~/suse/openSUSE:Factory/librsvg/librsvg-2.42.0/rust/vendor> find . -name "*.rs" | wc -l
988
This undermines the work of security teams in Linux distributions. At least in Debian, including third-party libraries instead of using the versions available in the distribution tree is not allowed.
I'm very much surprised that this is apparently acceptable in openSUSE. If every Rust package is going to be like that in the future, I'll send out my condolences in advance to anyone working on distribution security teams.
This point is actually not Rust-specific in my eyes. I see it as something which has been brewing for a while in the programming community, and I'm surprised that the issue has never arisen earlier.

The heart of the problem, as far as I see it, is that there is no perfect software distribution method. Two extreme models have been popular for a while: the Windows/OSX strategy of packaging every nontrivial dependency with the application, and the Linux/BSD strategy of building one giant software repository per software distribution. This dichotomy makes the life of every multi-platform project difficult, especially as both strategies ultimately have very serious drawbacks:

* The "package the world" approach not only causes oversized downloads, but makes backwards-compatible dependency updates (like, as you point out, security patches) unnecessarily slow to propagate through the ecosystem. Cross-application communication can also get fiendish in this approach, which does not encourage software to work together. And application installation, clean removal, and updates are also generally a mess.
* The centralized repo approach, on its side, is much more convenient for the end user, but for the developer it means that software must be packaged N times instead of once (in order to please each distribution's repo management customs), and it gets very messy any time a dependency pushes a backwards-incompatible update. From a security point of view, other issues arise because the centralized repo is a single point of failure, and third-party repos (which are often added out of necessity in order to get sufficiently recent software) are not held to the same quality and security standards as the main one.

Frustrated with this state of affairs, the community of almost every modern programming language has gone all "we can do better" and built its own library distribution and dependency management mechanism.
And so we got Maven for Java, PyPI and Conda for Python, Gems for Ruby, go get for Go, NPM for Javascript, Crates.io for Rust... and the list goes on. These distribution mechanisms vary widely in capabilities, but one recurring theme is that application developers want to have more control over their dependencies, and in particular to update them only after in-house testing. This results in a strange hybrid between the two historical approaches:

* There is ~one centralized repo per programming language, which in practice is generally less problematic than one per Linux distribution.

* Applications package their dependencies, and whoever performs the build gets to pick the dependency versions, like on Windows and OSX.

I am surprised that this is the first time these custom package management schemes have gotten into a nontrivial conflict with the standard system package management scheme of a Linux distro. AFAIK, these things have been around for a long while, and even programming languages which encourage statically linking everything are not new (think Go). So hasn't anybody been thinking about this issue before? Cheers, Hadrien -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 2018-01-23 20:54, Hadrien Grasland wrote:
I am surprised that this is the first time these custom package management schemes have gotten into a nontrivial conflict with the standard system package management scheme of a Linux distro. AFAIK, these things have been around for a long while, and even programming languages which encourage statically linking everything are not new (think Go). So hasn't anybody been thinking about this issue before?
we have hit these issues for years with Maven and nodejs/npm craziness, which is why an amazingly low number of these are properly packaged in OBS, even though people would want things like etherpad, jenkins, hadoop and more. In some sense I feel this is hurting the spirit of open source software, because more users and developers end up using binary blobs without being able to build things from source. E.g. when we build jenkins packages by taking the upstream .war file as binary input, we cannot even apply simple patches to problems we find. OTOH perl, python, ruby and some other ecosystems seem to often have been considerate enough not to break backward compatibility, so that things like gem2rpm and equivalents worked well enough to map their concept of packages to ours. Maybe it is also because their tools are designed to primarily ship sources around. And maybe they also better avoid cyclic dependencies. Ciao Bernhard M.
On Jan 23 2018, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
I don't know how to run this check in openSUSE, but look at the packages whose build dependencies would become uninstallable on most architectures in Debian if we were to upgrade librsvg to the Rust version:
These are the direct dependencies:

    osc whatdependson openSUSE:Factory librsvg standard x86_64

Andreas. -- Andreas Schwab, SUSE Labs, schwab@suse.de GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE 1748 E4D4 88E3 0EEA B9D7 "And now for something completely different."
On 01/23/2018 11:46 AM, Andreas Schwab wrote:
I don't know how to run this check in openSUSE, but look at the packages whose build dependencies would become uninstallable on most architectures in Debian if we were to upgrade librsvg to the Rust version:
These are the direct dependencies:

    osc whatdependson openSUSE:Factory librsvg standard x86_64
Awesome, thank you. That's what I like about these discussions, they will almost always result in learning something new :). Adrian
Hi John, Just chipping in as a Rust developer, and wanted to clear up what I perceive as some misunderstandings: On 22/01/2018 at 16:36, John Paul Adrian Glaubitz wrote:
On 01/22/2018 04:22 PM, Aleksa Sarai wrote:
I'm not sure we're understanding each other here -- my point was that the *only* Rust project which has this policy for compiling new versions is the Rust compiler. No other Rust project requires this. That's what I meant by "exception, not the rule". So I agree with what you wrote, but it doesn't have much to do with what I was trying to say, which is that the following quote ...
So, you say it's guaranteed that only the Rust compiler will only ever use particular code that will be deprecated in release N+1 or only available in release N-1?
The Rust compiler uses unstable internal interfaces, which are not exposed to code which builds on stable releases. The closest equivalent which I can think of in the C/C++ world is the GCC/binutils duo: to build and use GCC, you need a matching release of binutils, which maps to a relatively narrow time window. Use too new or too old a release of binutils, and your GCC build will fail with weird assembler and linker errors. And conversely, like any C/C++ program, binutils itself has some compiler version requirements. This does not preclude GCC and binutils from providing stability guarantees on the programs which they accept to compile, but it is a concern that must be kept in mind when maintaining GCC and binutils packages. From this perspective, the Rust compiler's bootstrapping requirements are no different.
I did build test it myself. I tried building Rust 1.22 with Rust 1.20 which failed with actual compiler errors, not just a warning that I have to use the proper version. And I think it's absolutely not unlikely that Rust project X will run into such a problem as well. What keeps Rust project X from using certain language features that were only recently added or removed?
Since Rust 1.0, users of stable versions of the Rust compiler enjoy a number of stability guarantees:

* Stable language features may only be added and deprecated, not removed, so code which builds on version N is guaranteed to build on version N+1.

* While feature removal and breaking changes to existing features are eventually planned, they will be done via an epoch mechanism similar to the one used by C and C++. Think about C 89/99/11 and C++ 98/11/14/17.

In short, the only thing that must be taken care of, from a distribution maintainer's perspective, is that an application must be compiled with a sufficiently recent stable Rust compiler. This is not a new concern (e.g. it has been an issue in the C++ world for a while). If, on the other hand, you find a package which does not build with a _newer_ release of the Rust compiler than was available at its release date, that is a bug, and you should report it to the Rust team.
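The "sufficiently recent stable compiler" requirement can be checked mechanically. A minimal sketch of such a check, assuming `rustc` is in PATH; the helper name and the 1.20 version floor are made-up examples, not any real package's metadata:

```rust
// Sketch: reject compilers older than an assumed minimum stable version.
// The 1.20 floor below is a hypothetical example.
use std::process::Command;

/// Returns the (major, minor) version of the `rustc` found in PATH,
/// parsed from output like "rustc 1.75.0 (hash date)".
fn rustc_version() -> (u32, u32) {
    let out = Command::new("rustc")
        .arg("--version")
        .output()
        .expect("failed to run rustc");
    let text = String::from_utf8_lossy(&out.stdout);
    let ver = text
        .split_whitespace()
        .nth(1)
        .expect("unexpected `rustc --version` output");
    let mut parts = ver.split('.');
    let major = parts.next().unwrap().parse().unwrap();
    let minor = parts.next().unwrap().parse().unwrap();
    (major, minor)
}

fn main() {
    let (major, minor) = rustc_version();
    // Hypothetical minimum: the package is known to build with >= 1.20.
    assert!(
        (major, minor) >= (1, 20),
        "rustc {}.{} is too old for this package",
        major,
        minor
    );
    println!("rustc {}.{} is recent enough", major, minor);
}
```

In real packaging this floor would live in the spec file as a versioned BuildRequires rather than in code, but the comparison logic is the same.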
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions and it blows my mind that so many people find that acceptable.
Adding features in a minor software release is considered okay in any modern software versioning scheme. It is only when existing features are changed or removed that compatibility is considered to be broken.
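To make that distinction concrete, here is a minimal sketch of what an additive (semver-minor) change looks like; `libfoo` and both functions are made-up names, not a real crate:

```rust
// Hypothetical library crate "libfoo", version 1.1.0.
// Version 1.0.0 only provided `greet`; 1.1.0 adds `greet_loudly`.
mod libfoo {
    pub fn greet(name: &str) -> String {
        format!("Hello, {}!", name)
    }

    // New in 1.1.0: purely additive, so callers written against
    // 1.0.0 keep compiling unchanged.
    pub fn greet_loudly(name: &str) -> String {
        greet(name).to_uppercase()
    }
}

fn main() {
    // A caller written against 1.0.0 still works as before:
    assert_eq!(libfoo::greet("world"), "Hello, world!");
    // New callers may opt into the added API:
    assert_eq!(libfoo::greet_loudly("world"), "HELLO, WORLD!");
    println!("ok");
}
```

Compatibility would only be considered broken if `greet` were removed or its signature changed, which is exactly what the stability guarantees rule out within a major version.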
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
You must understand where they are coming from. Most Linux distributions consider it okay to ship software which lags 5+ years behind official upstream releases, which is not acceptable for a fast-moving software project like Rust (or even to any software project where new releases matter, such as hardware drivers, web browsers, and office suites). And some of the platforms that they target do not ship a standard package management mechanism at all. The rolling release users among us are sadly the minority here. Rust's distribution tools cater to the vast majority of users who are stuck with obsolete operating system packages and want to get modern work done nonetheless. To do this, they sometimes need to bypass the standard distribution package management scheme. But this need not concern you as a distribution maintainer, much like you need not be concerned about users who build and install more recent software releases from source: what users do with their machine is solely their business, so long as they don't come complain when their personal fiddling breaks the system.
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
... is not accurate for any project other than the Rust compiler (and the reason for the Rust compiler having this requirement is so that they can use new language features in the compiler itself, not because of stability issues with the language). Any other Rust project must be able to build on 1.x and 1.(x+1) with no changes (and the Rust compiler team tests this quite heavily).
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
Software which opts into nightly-only unstable Rust features should be considered unstable as well, and is not a good fit for distribution via normal Linux distribution package management schemes. It should thus be rejected from official Linux distribution repositories. Users who want to install and use such packages will be fine with manually building their own versions, and dealing with compiler breakages as they happen. Cheers, Hadrien
On 01/22/2018 05:12 PM, Hadrien Grasland wrote:
The Rust compiler uses unstable internal interfaces, which are not exposed to code which builds on stable releases. The closest equivalent which I can think of in the C/C++ world is the GCC/binutils duo: to build and use GCC, you need a matching release of binutils, which maps to a relatively narrow time window. Use too new or too old a release of binutils, and your GCC build will fail with weird assembler and linker errors. And conversely, like any C/C++ program, binutils itself has some compiler version requirements.
I'm pretty confident there is never a problem when binutils is too new, at least I haven't run into such a problem with my porting work within Debian. And, furthermore, the key point here again is the speed of change: gcc doesn't introduce breaking changes every six weeks, Rust does.
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions and it blows my mind that so many people find that acceptable.
Adding features in a minor software release is considered okay in any modern software versioning scheme. It is only when existing features are changed or removed that compatibility is considered to be broken.
I wouldn't consider a toolchain a normal piece of software. A toolchain is one of the basic building blocks of your whole distribution. It shouldn't change in crazy ways when you just perform a minor update.
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
You must understand where they are coming from. Most Linux distributions consider it okay to ship software which lags 5+ years behind official upstream releases, which is not acceptable for a fast-moving software project like Rust (or even to any software project where new releases matter, such as hardware drivers, web browsers, and office suites). And some of the platforms that they target do not ship a standard package management mechanism at all. The rolling release users among us are sadly the minority here.
Well, your perspective would change if you were responsible for maintaining several hundred desktop machines with several hundred users. Installing a rolling release distribution in such setups would be a nightmare, because you would be busy all day long fixing all kinds of regressions. And I'm not necessarily talking about regressions in the form of bugs. It can already be a regression if feature X behaves differently or extension Y doesn't work anymore. It's really frustrating how many upstream projects refuse to understand this. So many just say "Awww, just go ahead and update to the latest upstream version, no big deal. I've been running Arch on my single-user, single-machine setup for years without problems." It simply doesn't work that way in the enterprise world.
Rust's distribution tools cater to the vast majority of users who are stuck with obsolete operating system packages and want to get modern work done nonetheless. To do this, they sometimes need to bypass the standard distribution package management scheme. But this need not concern you as a distribution maintainer, much like you need not be concerned about users who build and install more recent software releases from source: what users do with their machine is solely their business, so long as they don't come complain when their personal fiddling breaks the system.
It very much becomes a concern if a new version of application X requires an additional 250 packages to be updated. It becomes a nightmare from a security point of view. Who is going to review all these additional updated packages? What's the point of all these fancy security features Rust has when you end up having 25 different versions of libfoo installed on your system? You might as well then just stop installing security updates. Adrian
On 22/01/2018 at 18:29, John Paul Adrian Glaubitz wrote:
On 01/22/2018 05:12 PM, Hadrien Grasland wrote:
The Rust compiler uses unstable internal interfaces, which are not exposed to code which builds on stable releases. The closest equivalent which I can think of in the C/C++ world is the GCC/binutils duo: to build and use GCC, you need a matching release of binutils, which maps to a relatively narrow time window. Use too new or too old a release of binutils, and your GCC build will fail with weird assembler and linker errors. And conversely, like any C/C++ program, binutils itself has some compiler version requirements.
I'm pretty confident there is never a problem when binutils is too new, at least I haven't run into such a problem with my porting work within Debian.
I remember reading that too new a binutils could also be a problem back in the days when I was playing with OS development (which requires a custom cross-compiler configuration), but you are right that I never experienced it firsthand, nor have I heard about it in a long while. Maybe it's just an old meme fueled by a couple of backwards-incompatible binutils breakages that happened a long time ago...
And, furthermore, the key point here again is the speed of change: gcc doesn't introduce breaking changes every six weeks, Rust does.
I can certainly understand that fast-changing software can be difficult to deal with. I for one never understood how the people packaging rolling release distros manage to keep up so well with the rapid rate of kernel releases (and the periodic NVidia driver breakages that ensue), or with the rapid update frequency of anything GCC-related (where you basically have to rebuild everything every time). At the same time, one should not shoot the messenger. Fresh software with fast feature and bugfix turnaround is also a good thing from the end user's point of view, so long as the project can provide the quality assurance guarantees that come with it. And for this, Rust comes much better equipped than many other projects, as you can read up on https://brson.github.io/2017/07/10/how-rust-is-tested . Now, you claim that this is not enough, and that you have observed breakages. But when I requested that you provide details and evidence, you were not able to (and in fact ignored that question altogether). All you have precisely explained in your previous e-mail is that you had issues bootstrapping the compiler using an older release of itself, a special case which the Rust team is well aware of, and provides special support for through pre-built binaries. You have also briefly mentioned something about a disappearing keyword, without mentioning whether that keyword was part of a stable Rust release (which is where Rust's stability guarantees apply) or not. If you are not going to provide further details, I will have to assume that it wasn't.
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions and it blows my mind that so many people find that acceptable.
Adding features in a minor software release is considered okay in any modern software versioning scheme. It is only when existing features are changed or removed that compatibility is considered to be broken.
I wouldn't consider a toolchain a normal piece of software. A toolchain is one of the basic building blocks of your whole distribution. It shouldn't change in crazy ways when you just perform a minor update.
Tell that to the kernel maintainers next time they break my video driver or send someone's production system into an infinite bootloop in what was supposed to be a security update. And yet, for some reason, we in the Linux world never had much issue building on top of that. In fact, I would argue that one of Tumbleweed's strengths is that it is the first Linux distribution I have used so far which provides concrete answers to this problem (via OpenQA and Btrfs snapshots) without forcing its users into software stagnation along the way. Compared to what major Linux infrastructure projects like the kernel, Mesa, or KDE will still periodically send people through, I would say that Rust has done pretty well so far. It has managed to iteratively add many features on a rapid release cycle, without breaking the code of anyone who builds on stable releases, as evidenced by the team regularly re-building and re-testing the entire crates.io package library as part of their routine procedure. You claim that you found holes in this procedure, but so far you have not provided evidence. And even if you had some, all that would mean is that your discovery would serve to improve the testing procedure and make it better. I, for one, am very happy that some software projects are finally taking steps to provide alternatives to the breakage-versus-stagnation false dichotomy that has been with us for way too long in the Linux world.
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
You must understand where they are coming from. Most Linux distributions consider it okay to ship software which lags 5+ years behind official upstream releases, which is not acceptable for a fast-moving software project like Rust (or even to any software project where new releases matter, such as hardware drivers, web browsers, and office suites). And some of the platforms that they target do not ship a standard package management mechanism at all. The rolling release users among us are sadly the minority here.
Well, your perspective would change if you were responsible for maintaining several hundred desktop machines with several hundred users. Installing a rolling release distribution in such setups would be a nightmare, because you would be busy all day long fixing all kinds of regressions.
And I'm not necessarily talking about regressions in the form of bugs. It can already be a regression if feature X behaves differently or extension Y doesn't work anymore.
It's really frustrating how many upstream projects are refusing to understand this. So many just say "Awww, just go ahead and update to the latest upstream version, no big deal. I've been running Arch on my single-user, single-machine setup for years without problems." It simply doesn't work that way in the enterprise world.
Again, there are two sides to this story. Here, you are taking the side of someone who needs to keep a large production system alive, which I agree is very important and must be respected. At the same time, if you put yourself in a developer's shoes, it is also extremely frustrating to process bug reports or feature requests about problems which you resolved on the master branch months ago, and to be requested to keep alive old software releases which no one even really wants to be using anymore. Surely, there has to be a way to do better here on both counts. I am glad to see that some software projects are taking steps to resolve this longstanding issue more cleanly. To see better testing and deployment infrastructures which shrink the risk window and reduce the need for costly backports. Things like more extensive test suites, better continuous integration, containers and staged feature roll-out are all great news for the software world, which will ultimately all help us leave more of the legacy baggage behind, and stop saying to people "Well, you *could* run Linux, open newer DOCX documents, or write C++17 code on that freshly bought laptop, but for that you will need to take some risks...".
Rust's distribution tools cater to the vast majority of users who are stuck with obsolete operating system packages and want to get modern work done nonetheless. To do this, they sometimes need to bypass the standard distribution package management scheme. But this need not concern you as a distribution maintainer, much like you need not be concerned about users who build and install more recent software releases from source: what users do with their machine is solely their business, so long as they don't come complain when their personal fiddling breaks the system.
It very much becomes a concern if a new version of application X requires an additional 250 packages to be updated. It becomes a nightmare from a security point of view. Who is going to review all these additional updated packages?
What's the point of all these fancy security features Rust has when you end up having 25 different versions of libfoo installed on your system?
You might as well then just stop installing security updates.
For any software stabilization and testing process, there is a point of diminishing returns. No matter how much energy you expend reviewing the package base, at some point you will still need to bite the bullet and push the thing to the users, fully aware that this is where the bulk of the bugs and security holes will be found, just by virtue of users being much more numerous than testers. For this reason, I've grown increasingly skeptical of stable Linux distribution release processes over time. They have never been effective at killing the most annoying bugs for me (like broken hardware drivers), all the while forcing me into ancient software whose problems were fixed upstream months to years ago. Their upgrade procedures are stressful, time-consuming and fragile. I am aware that there is a place for such elaborate release schemes, but personally I would rather see all that QA effort expended directly on the maintenance and continuous improvement of the relevant software projects, rather than on making late software later. Cheers, Hadrien
On 01/23/2018 08:46 AM, Hadrien Grasland wrote:
And, furthermore, the key point here again is the speed of change: gcc doesn't introduce breaking changes every six weeks, Rust does.
I can certainly understand that fast-changing software can be difficult to deal with. I for one never understood how the people packaging rolling release distros manage to keep up so well with the rapid rate of kernel releases (and the periodical NVidia driver breakages that ensue), or with the rapid update frequency of anything GCC-related (where you basically have to rebuild everything every time).
At the same time, one should not shoot the messenger. Fresh software with fast feature and bugfix turnaround is also a good thing from the end user point of view, so long as the project can provide the quality assurance guarantees that come with that. And for this, Rust comes much better equipped than many other projects, as you can read up on https://brson.github.io/2017/07/10/how-rust-is-tested .
I am not shooting the messenger. I am criticizing whoever thought that writing a core package with a large number of reverse dependencies [1] in a language whose upstream can't even be bothered to run the testsuite on anything beyond x86 is a good idea. And no, not having the resources for that is not the right justification. If you don't have the resources, either a) don't try to push your language for core packages, or b) ask projects like Debian, which do have a large test infrastructure with all kinds of architectures, for help.
I wouldn't consider a toolchain a normal piece of software. A toolchain is one of the basic building blocks of your whole distribution. It shouldn't change in crazy ways when you just perform a minor update.
Tell that to the kernel maintainers next time they break my video driver or send someone's production system into an infinite bootloop in what was supposed to be a security update. And yet, for some reason, we in the Linux world never had much issue building on top of that. In fact, I would argue that one of Tumbleweed's strengths is that it is the first Linux distribution I have used so far which provides concrete answers to this problem (via OpenQA and Btrfs snapshots) without forcing its users into software stagnation along the way.
If the kernel breaks, I can just switch to a different kernel at the boot prompt. For this very reason, Debian puts every minor kernel release into a separate package. Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
Again, there are two sides to this story. Here, you are taking the side of someone who needs to keep a large production system alive, which I agree is very important and must be respected. At the same time, if you put yourselves in a developer's shoes, it is also extremely frustrating to process bug reports or feature requests about problems which you resolved on the master branch months ago, and to be requested to keep alive old software releases which no one even really wants to be using anymore. Surely, there has to be a way to do better here on both accounts.
You need to understand that the people who are paying everyone's bills at the end of the month are the ones who are using the stable releases. In the other mail, you were saying that Mozilla is a starving organization; maybe you should try to make the connection between these two statements.
For any software stabilization and testing process, there is a point of diminishing returns. No matter how much energy you expend reviewing the package base, at some point you will still need to bite the bullet and push the thing to the users, fully aware that this is where the bulk of the bugs and security holes will be found, just by virtue of users being much more numerous than testers.
I think you don't have the slightest clue how QA in enterprise distributions works, or how much QA and testing there is before Debian pushes a stable release. This isn't about biting a bullet and pushing something out untested; there is A LOT of testing behind it. This is why companies pay very good money for it.
For this reason, I've grown increasingly skeptical of stable Linux distribution release processes over time. They have never been effective at killing the most annoying bugs for me (like broken hardware drivers), all the while forcing me into ancient software whose problems were fixed upstream months to years ago. Their upgrade procedures are stressful, time-consuming and fragile.
The key point about stable distributions is not that they are bug-free, the key point is that the bugs and problems are well documented. A rapid release will always bring new regressions. Adrian
[1] https://people.debian.org/~glaubitz/librsvg.txt
On 2018-01-23, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
Except when they do -- in which case we chalk it up as a mistake, Linus gets angry at someone, and we all move along with our day. We don't suddenly start shouting that "Linux is unstable!". -- Aleksa Sarai Senior Software Engineer (Containers) SUSE Linux GmbH <https://www.cyphar.com/>
On Tuesday 2018-01-23 12:18, Aleksa Sarai wrote:
On 2018-01-23, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
Except when they do -- in which case we chalk it up as a mistake, Linus gets angry at someone, and we all move along with our day. We don't suddenly start shouting that "Linux is unstable!".
But the thing is - why do we always need to get angry first before problems (perceived or real) like rsvg get fixed, whatever the fix may be?
On Tuesday, 23 January 2018 12:18 Aleksa Sarai wrote:
On 2018-01-23, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
Except when they do -- in which case we chalk it up as a mistake, Linus gets angry at someone, and we all move along with our day. We don't suddenly start shouting that "Linux is unstable!".
Not exactly. Linus rarely gets (really) angry just because someone made a mistake and "broke userspace". But when that someone - or someone else - refuses to fix it and starts to argue that it's actually the right thing to do, that's when you can expect one of his famous rants. And this difference is important. If you break users' usecase by accident, it's a mistake, and we all make mistakes. You shouldn't make serious ones too often, sure, but nobody is going to rip your head off for that. But you shouldn't insist that it's perfectly fine and refuse to fix it. I'm not sure if you meant it that way, but "chalk it up as a mistake ... and move on" sounds as if you forgot the "we fix it" part, which is quite important. Michal Kubeček
On 01/23/2018 12:18 PM, Aleksa Sarai wrote:
On 2018-01-23, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
Except when they do -- in which case we chalk it up as a mistake, Linus gets angry at someone, and we all move along with our day. We don't suddenly start shouting that "Linux is unstable!".
Can you refer to a link where the Linux kernel broke userland the last time and the patch was actually merged? Thanks, Adrian
On Tuesday, 23 January 2018 13:27 John Paul Adrian Glaubitz wrote:
On 01/23/2018 12:18 PM, Aleksa Sarai wrote:
On 2018-01-23, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
Furthermore, the distribution kernels don't bring such breaking changes, plus the upstream kernel also NEVER breaks any userland.
Except when they do -- in which case we chalk it up as a mistake, Linus gets angry at someone, and we all move along with our day. We don't suddenly start shouting that "Linux is unstable!".
Can you refer to a link where the Linux kernel broke userland the last time and the patch was actually merged?
I don't have an example at hand but there are cases when the breakage wasn't found until the commit reached mainline (maybe even a release). But it would be hard to find an example when it wasn't reverted after the problem was found. Michal Kubeček
On 2018-01-23 13:55, Michal Kubecek wrote:
But it would be hard to find an example when it wasn't reverted after the problem was found.
Things like regressions in the vanilla kernel that lasted for several months (introduced in 4.13, fixed in 4.15-rc8): https://bugzilla.opensuse.org/show_bug.cgi?id=1075613

And regressions introduced by backports to old stable Leap kernels: https://bugzilla.opensuse.org/show_bug.cgi?id=1063570

Yes, we could downgrade to older kernels in both cases, but then you don't have the latest security fixes, which is not what you want either for systems you actually care about.
On 23.01.2018 08:46, Hadrien Grasland wrote:
Tell that to the kernel maintainers next time they will break my video driver
If they break it, they fix it. Usually fast. Unless you insist on using drivers of questionable legal status.
or send someone's production system in an infinite bootloop in what was supposed to be a security update.
Well, the current (mainly) Intel snafu is something you can hardly blame the kernel guys for. Apart from that, I have been running Kernel:HEAD kernels (which means: always the latest after -rc2 or so) for almost 10 years without such problems.
And yet, for some reason, we in the Linux world never had much issue building on top of that.
Actually, for the kernel, there is a very strong commitment to "never break userspace" with an update. If you find some userspace application that does no longer work after a kernel update, then Linus will make sure that the change is reverted. No matter what. Just one of the countless examples on lkml: https://lkml.org/lkml/2012/12/23/75 Well, unless the application has been using really *documented unstable* interfaces.
Compared to what major Linux infrastructure projects like the kernel, Mesa, or KDE will still periodically send people through
Please provide an example for the Linux kernel of what they "sent you through". And no, the illegal NVidia driver breaking does not count. Complain to NVidia about that. -- Stefan Seyfried "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." -- Richard Feynman
On 2018-01-22, John Paul Adrian Glaubitz <adrian.glaubitz@suse.com> wrote:
I'm not sure we're understanding each other here -- my point was that the *only* Rust project which has this policy for compiling new versions is the Rust compiler. No other Rust project requires this. That's what I meant by "exception, not the rule". So I agree with what you wrote, but it doesn't have much to do with what I was trying to say, which is that the following quote ...
So, you say it's guaranteed that only the Rust compiler will only ever use particular code that will be deprecated in release N+1 or only available in release N-1?
I'll be honest that I am not sure of the reason why the Rust compiler is special in this sense. However, from the discussions I've had with folks from the Rust community, it's related to the fact that the standard library is "special" (in the same way that Go's "runtime" and "internal" libraries are quite special) and it should not be possible for a crate (that is built on stable) to have the same properties as the Rust compiler.
I did build test it myself. I tried building Rust 1.22 with Rust 1.20 which failed with actual compiler errors, not just a warning that I have to use the proper version. And I think it's absolutely not unlikely that Rust project X will run into such a problem as well. What keeps Rust project X from using certain language features that were only recently added or removed?
The problem with Rust is simply the lack of stabilization. It's absolutely insane that they think it's ok to break compatibility in minor versions and it blows my mind that so many people find that acceptable.
As has been discussed in the rest of this thread, Rust has a very specific stability guarantee, which is effectively the following: "Any Rust crate which builds on a stable version of Rust today is guaranteed to build on a future stable version of Rust tomorrow. If it does not, it's a bug."

You have given two examples where this was not true:

* The Rust compiler, which as we've discussed is not a "Rust crate".
* A Firefox version which was broken by the removal of some keyword. From what you describe this sounds like a bug, and a violation of their stability guarantee. Mistakes happen -- though this one seems too large not to have been mentioned anywhere else -- and I hope that you submitted a bug report, because I'm sure they would have resolved the problem.

You mention that Go does a lot of testing to avoid regressions; *so does the Rust community*. They do a "crater run" (a rebuild and unit test of all crates on crates.io) on a regular cadence during development, when large features are being considered for merging, and for gating releases. If they find an issue, they either fix it in the compiler (if it's a regression) or they go to the project itself and submit a patch (if it's actually a bug in the project). This is one of the things that is mentioned in literally every talk about the Rust development process.

If they really did break Firefox, then I would like to see a post-mortem for why their regular crater runs didn't pick it up. But the right thing to do in this situation would have been to report a bug, rather than to conclude that Rust is unstable and move on without telling anyone.
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
I agree that their attitude toward distributions is a problem (for us obviously, but also for users IMO). But this is also an attitude that is shared by a very large number of languages and projects these days. I cannot help but think that the root cause of this problem is that we have not done a good job (as distribution developers in general) of convincing people of the benefits of having a distribution that has a global view of packaging of a system.
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
... is not accurate for any project other than the Rust compiler (and the reason for the Rust compiler having this requirement is so that they can use new language features in the compiler itself, not because of stability issues with the language). Any other Rust project must be able to build on 1.x and 1.(x+1) with no changes (and the Rust compiler team tests this quite heavily).
What keeps project X from using certain features of Rust? I have seen projects which would only build with Rust Nightly.
I think us trying to ship projects that use Rust Nightly would be absolute madness (I also think it's mad to depend on Rust Nightly features for a "stable" project, but that's a separate topic). However, we can avoid the nightly problem by not shipping things that depend on nightly (neither Firefox nor the library that spawned this discussion depend on Nightly). In any case, Rust Nightly != Rust Stable. The "certain features" you refer to are unstable features that cannot be used if you compile with the stable compiler.
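To make the stable-versus-nightly distinction concrete, here is a minimal sketch (the function name is invented for illustration): feature-gate attributes such as `#![feature(...)]` are a hard compile error on any stable rustc, which is exactly how stable Rust keeps unstable features out of "stable" projects.

```rust
// The attribute below would only be accepted by a *nightly* toolchain;
// a stable rustc rejects any `#![feature(...)]` attribute outright:
//
//     #![feature(test)]
//
// Plain stable Rust, by contrast, builds on every 1.x compiler from
// the version that introduced the features it uses onward:
fn greet(name: &str) -> String {
    format!("Hello, {}!", name) // `format!` has been stable since 1.0
}

fn main() {
    println!("{}", greet("Tumbleweed"));
}
```

A crate that sticks to code like the above falls under the stability guarantee discussed earlier; one that uncomments the feature gate pins itself to nightly toolchains.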
So, if SLE wants to update to the next ESR release of Firefox, it will also have to include Rust in the same maintenance request.
We ship a new Go package with most new Docker releases as well (as they usually update the compiler they use every couple of releases, and in the past there were bugs where Docker would break if it was built with the wrong compiler version). This is not really a new thing.
I don't think you always need the latest version of Go for updating Docker. I have worked with both codebases myself and never ran into this issue.
Not always, but there have been cases (we're talking ~2 years ago) where a Go upgrade has broken Docker in subtle ways. This was before runc was a separate project, and they were still using "os/exec" for parts of container setup (which obviously proved to be a horrible idea, and now we have the whole nsexec magic that I'm tasked with maintaining upstream). These days most new Docker releases use a new Go version, and so we bundle a Go update "just in case" (and usually they do actually use newer language features).
Upstream Go always ensures that golang-go can be built with gcc-go. Rust has mrustc for that, but that one isn't supporting anything beyond x86 at the moment.
Hmmm, this is a policy that appears to have changed after the compiler rewrite to Go. I distinctly remember watching a talk where rsc described that they would require you to have version (n-1) of the compiler in order to build version n -- so that the compiler could take advantage of new language features -- and you would start the bootstrapping at go1.4.
Huh? If it changed after the compiler rewrite in Go, wouldn't that mean that before that the compiler wasn't written in Go which means that you didn't have that problem in the first place?
I meant that the policy they *planned* to have was the whole (n-1) bootstrap thing, and the *policy* appears to have changed after I looked into it. If you'd like to see what I mean, you can Google rsc's talks about the Go compiler rewrite from a few years ago. -- Aleksa Sarai Senior Software Engineer (Containers) SUSE Linux GmbH <https://www.cyphar.com/>
On 01/23/2018 12:14 PM, Aleksa Sarai wrote:
You mention that Go does a lot of testing to avoid regressions, *so does the Rust community*. They do a "crater run" (rebuild and unit-test of all crates on crates.io) on a regular cadence during development, when large features are being considered for merging, and for gating releases.
They test on x86_64/x86 *only*. Is openSUSE x86_64/x86 only?
If they find an issue they either fix it in the compiler (if it's a regression) or they go to the project itself and submit a patch (if it's actually a bug in the project). This is one of the things that is mentioned in literally every talk about the Rust development process.
Cool. Can you fix Rust on mips* and powerpc32, please? I have been trying to bootstrap it on these targets on and off for several months now. Rust upstream is super fast and busy fixing the compiler on non-x86 platforms. NOT.
Rust upstream lives in a universe where they think that distributions are an outdated concept. This is why they are shipping their own package manager and consider such breaking changes in minor releases acceptable.
I agree that their attitude toward distributions is a problem (for us obviously, but also for users IMO). But this is also an attitude that is shared by a very large number of languages and projects these days.
The other projects aren't trying to mess with core packages, Rust does. If NodeJS or Go blow up in your face, you aren't breaking half your distribution. That's a HUGE difference.
I cannot help but think that the root cause of this problem is that we have not done a good job (as distribution developers in general) of convincing people of the benefits of having a distribution that has a global view of packaging of a system.
I'm pretty sure we have done an excellent job at that. Otherwise we wouldn't have customers who are giving us money for it. You are making the mistake of assuming that people running bleeding edge software have a large relevance for what we do. They don't.
I think us trying to ship projects that use Rust Nightly would be absolute madness (I also think it's mad to depend on Rust Nightly features for a "stable" project, but that's a separate topic). However, we can avoid the nightly problem by not shipping things that depend on nightly (neither Firefox nor the library that spawned this discussion depend on Nightly).
I consider this absolute madness:

glaubitz@suse-laptop:~> osc whatdependson openSUSE:Factory librsvg standard x86_64 | wc -l
314
glaubitz@suse-laptop:~>

And this doesn't even include transitive dependencies.
I don't think you always need the latest version of Go for updating Docker. I have worked with both codebases myself and never ran into this issue.
Not always, but there have been cases (we're talking ~2 years ago) where a Go upgrade has broken Docker in subtle ways. This was before runc was a separate project, and they were still using "os/exec" for parts of container setup (which obviously proved to be a horrible idea, and now we have the whole nsexec magic that I'm tasked with maintaining upstream).
The difference is that Go isn't trying to push itself as a systems programming language meant to replace core components of a Linux distribution. If Go breaks, it doesn't potentially affect the whole distribution. In the case of Rust, when replacing something such as librsvg or coreutils with a Rust version, it does. Adrian
Le mardi 23 janvier 2018 à 13:26 +0100, John Paul Adrian Glaubitz a écrit :
On 01/23/2018 12:14 PM, Aleksa Sarai wrote:
You mention that Go does a lot of testing to avoid regressions, *so does the Rust community*. They do a "crater run" (rebuild and unit-test of all crates on crates.io) on a regular cadence during development, when large features are being considered for merging, and for gating releases.
They test on x86_64/x86 *only*.
Is openSUSE x86_64/x86 only?
If they find an issue they either fix it in the compiler (if it's a regression) or they go to the project itself and submit a patch (if it's actually a bug in the project). This is one of the things that is mentioned in literally every talk about the Rust development process.
Cool. Can you fix Rust on mips* and powerpc32, please? I have been trying to bootstrap it on these targets on and off for several months now.
Reminder: this is the *openSUSE Factory* mailing list. You are really off topic here. Please move this discussion elsewhere.
I consider this absolute madness:
glaubitz@suse-laptop:~> osc whatdependson openSUSE:Factory librsvg standard x86_64 | wc -l 314 glaubitz@suse-laptop:~>
How about you discuss this with librsvg upstream? Hint: I wouldn't be surprised if Federico follows this mailing list. -- Frederic Crozat Enterprise Desktop Release Manager SUSE
On 01/23/2018 01:46 PM, Frederic Crozat wrote:
I consider this absolute madness:
glaubitz@suse-laptop:~> osc whatdependson openSUSE:Factory librsvg standard x86_64 | wc -l 314 glaubitz@suse-laptop:~>
How about you discuss with librsvg upstream ?
Hint: I wouldn't be surprised if Federico follows this mailing list..
I tried. Answer: INVALID, WONTFIX [1]. :-) Adrian
[1] https://bugzilla.gnome.org/show_bug.cgi?id=777171
On Tue, 2018-01-23 at 02:22 +1100, Aleksa Sarai wrote:
Finally, one problem with Rust that I ran into when working on the Rust compiler code itself is the high volatility of the language and the code. Things that used to build fine with Rust 1.19 would break with 1.20 and so on. From a distribution maintainer's point of view, this can be very tedious and annoying.
... is not accurate for any project other than the Rust compiler (and the reason for the Rust compiler having this requirement is so that they can use new language features in the compiler itself, not because of stability issues with the language). Any other Rust project must be able to build on 1.x and 1.(x+1) with no changes (and the Rust compiler team tests this quite heavily).
At least for the upgrade to Rust 1.23, we needed to hunt down a patch for Firefox, because FF 57 did not build otherwise: https://build.opensuse.org/package/view_file/openSUSE:Factory/MozillaFirefox/mozilla-rust-1.23.patch?expand=1

With Rust 1.22, FF 57 built 'just fine' - so this just as a counter-example. But then, new warnings/errors in newer compiler versions is not unique to Rust: gcc does that too :) (less common in 'minor version releases' though)

Cheers Dominique
On 01/22/2018 04:47 PM, Dominique Leuenberger / DimStar wrote:
But then, new warnings/errors in newer compiler versions is not unique to rust: gcc does that too :) (less common in 'minor version releases' though)
Exactly. The difference is that gcc brings such changes in major releases, not in a new minor release after six weeks. And in the case of gcc it's usually only a new warning that was introduced that you could even turn off in the worst case. The problem with Rust that I ran into was that a certain keyword was no longer supported. So, they actually changed the language spec - in a minor release :O. In any case, Ubuntu, Fedora and Debian are still using the non-Rust version of librsvg. So far, only Arch and openSUSE seem to be using the Rust version. Adrian
Hi Adrian, Le 22/01/2018 à 16:54, John Paul Adrian Glaubitz a écrit :
On 01/22/2018 04:47 PM, Dominique Leuenberger / DimStar wrote:
But then, new warnings/errors in newer compiler versions is not unique to rust: gcc does that too :) (less common in 'minor version releases' though)
Exactly. The difference is that gcc brings such changes in major releases, not in a new minor release after six weeks. And in the case of gcc it's usually only a new warning that was introduced that you could even turn off in the worst case.
...that is, until you end up on a codebase which:

* Has -Werror hardcoded deep into its build system.
* Relies on code which does not follow the C/++ standard to get compiled by accident.
* Relies on undefined behaviour, or another codegen characteristic that is considered an unimportant detail by GCC's optimizer.
* Uses compiler version detection macros which have not been adapted to the new GCC release.
* Causes an ICE in that specific release of GCC, which was introduced by accident in what was supposed to be a simple bugfix.

Relying on UB especially happens more often than one would think, and is basically the kernel of truth behind the old "-O3 breaks code" meme. No matter which way one looks at it, compiler updates are unfortunately always a bit of a risky business from a Linux distribution maintainer's point of view.
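The undefined-behaviour point deserves spelling out with the classic example (a minimal sketch; the function names are invented for illustration). Signed integer overflow is UB in C, so an optimizer is free to assume `x + 1 > x` always holds and delete the check entirely; code like this can "work" for years and then silently change behaviour after a compiler update, with no change to the source at all.

```c
#include <limits.h>

/* UB-reliant overflow check: because signed overflow is undefined
 * behaviour, an optimizing compiler is permitted to fold this whole
 * function to a constant 0 - or not, depending on compiler version
 * and optimization level. Exactly the kind of thing that breaks on
 * a compiler update. */
int will_increment_overflow(int x) {
    return x + 1 < x;   /* undefined when x == INT_MAX */
}

/* The well-defined way to express the same check: compare against the
 * limit *before* doing the arithmetic. */
int will_increment_overflow_safe(int x) {
    return x == INT_MAX;
}
```

For inputs below INT_MAX both functions are well-defined and agree; only the first one hands the optimizer a licence to surprise you.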
The problem with Rust that I ran into was that a certain keyword was no longer supported. So, they actually changed the language spec - in a minor release :O.
If that happened to a stable Rust feature, again, this is a major bug, and you should report it. Which keyword was that? Cheers, Hadrien -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 01/22/2018 05:38 PM, Hadrien Grasland wrote:
...that is, until you end up on a codebase which:
* Has -Werror hardcoded deep into its build system.
That doesn't make any sense. You can always override/amend CFLAGS/CXXFLAGS. There is no such thing as "deeply hardcoded".
* Relies on code which does not follow the C/++ standard to get compiled by accident.
Never seen that.
* Relies on undefined behaviour, or another codegen characteristic that is considered an unimportant detail by GCC's optimizer.
Never seen that.
* Uses compiler version detection macros which have not been adapted to the new GCC release.
That would speak of a very poor build system. Yet, I don't think I have run into such a problem.
* Causes an ICE in that specific release of GCC, which was introduced by accident in what was supposed to be a simple bugfix.
Very rare. So far I have only seen such problems on less common architectures and it was always a breeze to get these things fixed with upstream.
Relying on UB especially happens more often than one would think, and is basically the kernel of truth behind the old "-O3 breaks code" meme.
No matter which way one looks at it, compiler updates are unfortunately always a bit of a risky business from a Linux distribution maintainer's point of view.
Thanks, but I have helped with several gcc transitions in Debian. I never saw anything there that came even close to the situation with Rust. The changes in gcc actually made sense to me; as I said, I was always able to address them with either very simple patches or by just disabling a certain warning.

What about the fact that Rust only considers x86/x86_64 to be a tier 1 architecture? We have just recently seen with Spectre and Meltdown how bad it is to merely focus on x86.

Adrian
Le 22/01/2018 à 18:16, John Paul Adrian Glaubitz a écrit :
On 01/22/2018 05:38 PM, Hadrien Grasland wrote:
...that is, until you end up on a codebase which:
* Has -Werror hardcoded deep into its build system.
That doesn't make any sense. You can always override/amend CFLAGS/CXXFLAGS. There is no such thing as "deeply hardcoded".
* Relies on code which does not follow the C/++ standard to get compiled by accident.
Never seen that.
* Relies on undefined behaviour, or another codegen characteristic that is considered an unimportant detail by GCC's optimizer.
Never seen that.
* Uses compiler version detection macros which have not been adapted to the new GCC release.
That would speak of a very poor build system. Yet, I don't think I have run into such a problem.
* Causes an ICE in that specific release of GCC, which was introduced by accident in what was supposed to be a simple bugfix.
Very rare. So far I have only seen such problems on less common architectures and it was always a breeze to get these things fixed with upstream.
It looks like you have enjoyed pretty well-written and unambitious C/++ code so far, then. Lucky you! Where I work, broken build systems, code and compilers are a relatively common sight, I'd say we deal with them every other month or so, and that is with a package base that is much smaller than the repos of SuSE or Debian!
Relying on UB especially happens more often than one would think, and is basically the kernel of truth behind the old "-O3 breaks code" meme.
No matter which way one looks at it, compiler updates are unfortunately always a bit of a risky business from a Linux distribution maintainer's point of view.
Thanks, but I have helped with several gcc transitions in Debian. I never saw anything there as close as with Rust. The changes in gcc actually made sense to me, as I said, I was always able to address them with either very simple patches or by just disabling a certain warning.
What about the fact that Rust only considers x86/x86_64 to be a tier 1 architecture?
In Mozilla's terminology, "tier 2" means "guaranteed to build" and "tier 1" means "and in addition, all automated tests were run". The reason why you would want to only run the build is that running tests is much more trouble than building: you can build for any architecture from x86 using a cross-compiler, whereas you need real hardware on the target architecture in order to perform serious testing (as emulators are usually too slow to be practical in intensive testing scenarios, and too "clean" to expose real hardware quirks).

Assuming you wanted to build yourself a cross-architecture test farm, capable of withstanding the full traffic of Rust's high-volume CI system, what you would soon discover is that most hardware architectures do not address this need very well. It is trivial to find a hardware reseller who will build you a good x86-based rack at a fair price, whereas other architectures often do not provide hardware in a standard rack form factor at all, or only sell hardware at a crazy premium like IBM does with Power. Moreover, embedded architectures often restrict themselves to cheaper and slower hardware which is not powerful enough for intensive continuous testing, meaning that you need to pile up tons of un-rackable junk before you get enough processing power for this kind of use case.

Add to this that keeping a highly heterogeneous hardware base running is very cumbersome, and that some of Rust's tier 2 targets do not even provide the required capabilities for running a test server (e.g. asmjs/wasm is too limited, Fuchsia is too immature, and iOS is too locked down), and hopefully you will get a fair picture of how much of an undertaking this all really is.

Now, this isn't to say that it cannot be done, of course. Nor that it would not be very worthwhile.
There are some awesome multi-architecture test beds out there, like Debian's package QA test bed or Microsoft's driver compatibility torture test farm, and I'm pretty sure Novell also has some cool stuff around for testing SuSE too. But I think that level of QA sophistication may be a bit much to expect from a relatively small team inside of a money-starved nonprofit organization. If someone is ready to donate or lend Mozilla the required infrastructure, great, but if not, I would not expect them to build it on their own...
We have just recently seen with Spectre and Meltdown how bad it is to merely focus on x86.
I think you may want to check the latest developments of the Meltdown/Spectre saga here. Meltdown, it turns out, goes beyond Intel processors (AMD remaining unaffected) and also hits some high-end ARM processors. And Spectre attacks have been demonstrated on pretty much every modern CPU which has a cache and speculative execution features.

It is not an x86-versus-the-rest-of-the-world thing: almost every popular high-performance CPU architecture has been demonstrated to be vulnerable to these attacks in some way, and all high-performance CPU manufacturers now need to reflect upon these events and figure out how to build a more secure product next time.

Cheers, Hadrien
On Monday 2018-01-22 21:30, Hadrien Grasland wrote:
Le 22/01/2018 à 18:16, John Paul Adrian Glaubitz a écrit :
* Causes an ICE in that specific release of GCC, which was introduced by accident in what was supposed to be a simple bugfix.
Very rare. So far I have only seen such problems on less common architectures and it was always a breeze to get these things fixed with upstream.
It looks like you have enjoyed pretty well-written and unambitious C/++ code so far, then. Lucky you! Where I work, broken build systems, code and compilers are a relatively common sight, I'd say we deal with them every other month or so, and that is with a package base that is much smaller than the repos of SuSE or Debian!
Are you working in academia/science? There's quite a bit of bad code coming out of that field. But, a distribution is comprised of less than 20% (number pulled out of thin air) packages from obs://science, so indeed we don't experience as many WPM[1] as you may be ;-) [1] http://www.osnews.com/story/19266/WTFs_m
Le 22/01/2018 à 23:28, Jan Engelhardt a écrit :
On Monday 2018-01-22 21:30, Hadrien Grasland wrote:
Le 22/01/2018 à 18:16, John Paul Adrian Glaubitz a écrit :
* Causes an ICE in that specific release of GCC, which was introduced by accident in what was supposed to be a simple bugfix.

Very rare. So far I have only seen such problems on less common architectures and it was always a breeze to get these things fixed with upstream.

It looks like you have enjoyed pretty well-written and unambitious C/++ code so far, then. Lucky you! Where I work, broken build systems, code and compilers are a relatively common sight, I'd say we deal with them every other month or so, and that is with a package base that is much smaller than the repos of SuSE or Debian!

Are you working in academia/science? There's quite a bit of bad code coming out of that field. But, a distribution is comprised of less than 20% (number pulled out of thin air) packages from obs://science, so indeed we don't experience as many WPM[1] as you may be ;-)
You're right, it's probably that :) My largest package management experience comes from HEP experiments, where we try to run quite poorly written and bleeding edge (C++14/17) software on very... ahem... *stable* RedHat releases, and it doesn't end very well. We basically end up maintaining everything but the kernel and libc ourselves in order to get sufficiently recent compilers and libs.
On 01/23/2018 07:08 AM, Hadrien Grasland wrote:
You're right, it's probably that :) My largest package management experience comes from HEP experiments, where we try to run quite poorly written and bleeding edge (C++14/17) software on very... ahem... *stable* RedHat releases, and it doesn't end very well. We basically end up maintaining everything but the kernel and libc ourselves in order to get sufficiently recent compilers and libs.
And that's not the background experience you should base your statements regarding distribution maintenance on. I am a physicist myself and I have worked with scientific code. Lots of that stuff causes heart attacks when reading through the code or build system. Using such packages as a reference to argue that C/C++ is not a stable language and ecosystem is dishonest, to say the least.

C/C++, on the other hand, is still much better supported across the open source ecosystem. The Linux kernel supports more than 30 architectures, all of them supported by gcc as well. And this portability was one of the key features why Linux is so successful today. If we start porting many core packages to Rust, we are severely limiting the portability of Linux, which is a very bad decision, in my opinion.

Linux has effectively zero relevance on the desktop, but it's the dominating platform in embedded systems. Pushing a language which has no or limited support for the majority of embedded architectures (x86 is basically non-existent in this market) is not a very good idea.

And who tells me that the "starving organization" Mozilla is going to support Linux in the future? The vast majority of Firefox users are on Windows. Mozilla supporting Linux is not very viable from an economic point of view, so I would not be surprised if they dropped Linux support for this very reason.

To quote long-time Linux kernel developer Geert Uytterhoeven: "There's lots of Linux beyond ia32"

Adrian
On 01/22/2018 09:30 PM, Hadrien Grasland wrote:
In Mozilla's terminology, "tier 2" means "guaranteed to build" and "tier 1" means "and in addition, all automated tests were run". The reason why you would want to only run the build is that running tests is much more trouble than building, because you can build for any architecture from x86 using a cross-compiler, whereas you need real hardware on the target architecture in order to perform serious testing (as emulators are usually too slow to be practical in intensive testing scenarios, and too "clean" to expose real hardware quirks).
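To make the build-versus-test asymmetry concrete, here is a sketch of what "tier 2" coverage looks like in practice (the target triple is just an example; exact commands depend on your toolchain setup):

```shell
# Cross-building is cheap: any x86 machine can produce aarch64 binaries
# once the standard library for the target is installed.
rustup target add aarch64-unknown-linux-gnu
cargo build --release --target aarch64-unknown-linux-gnu

# Running the test suite is not: the resulting binaries are for a
# different CPU, so `cargo test --target aarch64-unknown-linux-gnu`
# needs real (or emulated) aarch64 hardware to execute them.
```

This is why "guaranteed to build" can be automated from a single x86 CI farm, while "all automated tests were run" requires native hardware for every architecture.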
See, and that is the difference with downstream distributions: they use real hardware to build **and** run the test suite. In Debian, for example, cross-building is not allowed for the supported release architectures, for very good reasons. What you are saying is not a justification, it is merely an explanation. Go upstream, for instance, runs the builds and test suites natively on all supported platforms, and does not accept ports for which this criterion cannot be met: https://build.golang.org/ Any supported target which is unable to pass the test suite for a longer period is dropped from Go. They take testing and integration much more seriously than Rust does.
Assuming you wanted to build yourself a cross-architecture test farm, capable of withstanding the full traffic of Rust's high-volume CI system, what you would soon discover is that most hardware architectures do not address this need very well. It is trivial to find a hardware reseller who will build you a good x86-based rack at a fair price, whereas other architectures often do not provide hardware in a standard rack form factor at all, or only sell hardware at a crazy premium like IBM does with Power. Moreover, embedded architectures also often restrict themselves to cheaper and slower hardware which is not powerful enough for intensive continuous testing, meaning that you need to pile up tons of un-rackable junk before you get enough processing power for this kind of use case...
We have that in Debian. I'm not sure why you are trying to educate me here.
Add to this that keeping a highly heterogeneous hardware base running is very cumbersome, and that some of Rust's tier 2 architectures do not even provide the required capabilities for running a test server (e.g. asmjs/wasm is too limited, Fuchsia is too immature, and iOS is too locked down), and hopefully you will get a fair picture of how much of an undertaking this all really is.
Again, look at this:
https://build.golang.org/# https://jenkins.debian.net/view/rebootstrap/ https://buildd.debian.org/status/package.php?p=gcc-7&suite=sid
You are talking to someone who is working as a build engineer at Debian.
Now, this isn't to say that it cannot be done, of course, nor that it would not be very worthwhile. There are some awesome multi-architecture test beds out there, like Debian's package QA test bed or Microsoft's driver compatibility torture test farm, and I'm pretty sure Novell also has some cool stuff around for testing SuSE. But I think that level of QA sophistication may be a bit much to expect from a relatively small team inside a money-starved nonprofit organization. If someone is ready to donate or lend Mozilla the required infrastructure, great, but if not, I would not expect them to build it on their own...
Maybe Mozilla should understand at some point then that they're not Google.
We have just recently seen with Spectre and Meltdown how bad it is to merely focus on x86.
I think you may want to check the latest developments of the Meltdown/Spectre saga here. Meltdown, it turns out, goes beyond Intel processors (AMD remaining unaffected) and also hits some high-end ARM processors. And Spectre attacks have been demonstrated on pretty much every modern CPU that has a cache and speculative execution features. It is not an x86-versus-the-rest-of-the-world thing: almost every popular high-performance CPU architecture has been shown to be vulnerable to these attacks in some way, and all high-performance CPU manufacturers now need to reflect upon these events and figure out how to build more secure products next time...
https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre...
PS: I have patches in both Rust upstream and Mozilla upstream (around 30 patches), in case you again try to paint me as someone uneducated on the subject.

Adrian
participants (26)
- Alberto Planas Dominguez
- Aleksa Sarai
- Andreas Schwab
- Andrei Borzenkov
- Bernhard M. Wiedemann
- Bruno Friedmann
- Carlos E. R.
- Cris70
- Daniele
- Dominique Leuenberger / DimStar
- Frederic Crozat
- H.Merijn Brand
- Hadrien Grasland
- Jan Engelhardt
- John Paul Adrian Glaubitz
- Marco Calistri
- Matthias Gerstner
- Michal Kubecek
- Mykola Krachkovsky
- Patrick Shanahan
- Richard Brown
- Robert Munteanu
- Roger Oberholtzer
- Stefan Seyfried
- Stephan Kulow
- Thorsten Kukuk