[opensuse-factory] O Factory - Where art Thou?
Hi

It's Thanksgiving today and we had a great harvest - "Bottle" rocks. But today is also a good time to plant new seeds for even better openSUSE releases. Factory has a healthy growth, but we have to make sure it's growing in the right direction and so the openSUSE team at SUSE had an on-and-off discussion basically since 12.3 on how to improve things.

But first let me give you some background that you might not be aware of: 10.3 had 3334 packages, 11.1 3746, 3605 for 11.2, 3807 for 11.3, 4784 in 12.1, 5710 in 12.2, 6246 in 12.3, 6678 in 13.1, 6800 right now in Factory. If you need a picture, look at http://s.kulow.org/packages

Integrating these to make a good distribution is real work. And one of my favourite songs (in that context) goes:

No one said it would be easy
But no one said it'd be this hard
No one said it would be easy
No one thought we'd come this far

In that song Sheryl Crow sings "It's just a question of eliminating obstacles", so what did we in the openSUSE Team do to help? We focused on getting a grip on testing by improving openQA (http://s.kulow.org/openqa-blog), but we soon found out that it was not good enough to test Factory ISOs. Factory is broken often enough not to produce ISOs at all, ISOs can't be installed, and once these problems are sorted out, we found in openQA very basic things to be broken - but it was too late to protect Factory users from running into them.

One thing I tried was to set up "rings" to help ease the very painful staging projects (with 6800 packages, every staging project as we use them is a monster). That experiment has shown rings to be a worthy way to check, but they won't work as I thought with the OBS as it is. We need to think bigger. So we tried to come up with an idea on how to improve the Factory development process that includes a more clever way to utilize staging projects and openQA.

As this development process is a bit hard to explain in email, Alberto and Ancor prepared an interactive diagram:

https://progress.opensuse.org/workflow/factory-proposal.html

We basically want to put the pressure on the submitting packager not the user. Using factory should be safe, for this we want to revive a thing that has been lost on the way: Bernhard's factory-tested project.

And we want to open another submission path: from staging projects. Consider a situation where recent updates to GNOME and automake 'clash', causing problems when they're installed together. Right now we throw both into Factory, breaking both for our Factory users until we solve the issues. Instead, we think working on these issues in a separate 'staging project' could be the solution. None of us knows how *exactly* it will look because we need to get a conversation with the OBS team going. But the basic idea is:

- GNOME:Factory stays the devel project of things that relate to GNOME updates
- devel:tools stays the devel project of things that relate to automake updates
- on automake updates, we open up a new devel project that stages GNOME packages for the new automake. In there, GNOME devs and automake experts work together to fix them and updates in there are submitted either back to GNOME:Factory and are integrated right into Factory. Or if that's not possible, the automake update is "grouped" with various GNOME package updates and these updates end in Factory together.

There are several problems with the current "everything through devel project" approach we need to solve. Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.

We have more ideas, but we can only achieve that if we get help, so let me finish with another favourite of mine:

What would you think if I sang out of tune?
Would you stand up and walk out on me?
Lend me your ears and I'll sing you a song
And I'll try not to sing out of key
Oh, I get by with a little help from my friends

Greetings, Stephan

--
Ignorance is when you don't know anything and somebody finds it out.
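[To make the "grouping" idea above a bit more concrete, here is a minimal sketch of the decision logic it implies. It is purely illustrative and not part of any existing tooling; names such as SubmitRequest, Staging, route and flush are made up for the example, and the real rules would live in the review tools and in OBS.]

# Illustrative sketch: risky submit requests are grouped into a staging
# project and only accepted into Factory together, once the group builds
# and passes openQA. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SubmitRequest:
    package: str
    risky: bool          # e.g. automake, glibc, systemd updates
    group: str = ""      # packages that must land together share a group name

@dataclass
class Staging:
    name: str
    requests: list = field(default_factory=list)
    builds: bool = False
    openqa_ok: bool = False

def route(request, stagings, factory):
    """Either accept a request directly or park it in a staging project."""
    if not request.risky and not request.group:
        factory.append(request.package)        # the fast path: straight to Factory
        return
    key = request.group or request.package
    staging = stagings.setdefault(key, Staging(key))
    staging.requests.append(request)

def flush(stagings, factory):
    """Accept whole groups once their staging project is green."""
    for staging in list(stagings.values()):
        if staging.builds and staging.openqa_ok:
            factory.extend(r.package for r in staging.requests)
            del stagings[staging.name]

# Example: an automake update grouped with the GNOME packages it breaks.
factory, stagings = [], {}
route(SubmitRequest("chromium", risky=False), stagings, factory)
route(SubmitRequest("automake", risky=True, group="automake-1.14"), stagings, factory)
route(SubmitRequest("gnome-shell", risky=True, group="automake-1.14"), stagings, factory)
stagings["automake-1.14"].builds = True
stagings["automake-1.14"].openqa_ok = True
flush(stagings, factory)
print(factory)  # ['chromium', 'automake', 'gnome-shell']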
Hi Coolo, good article and interesting idea. I think I also commented on the original mail about rings. I am not sure it is the right direction to go, because there is still too much human interaction. I think that in the optimal case all the work should be done by machines and humans should only solve exceptions or special cases. Of course it depends on whether we have enough build power to leave all the work to machines.

My idea is much simpler:

1) for every package submission to Factory, create its own COW copy of Factory and try to build the package and all of its dependencies there (quick for a leaf package, slow for a core package)
2f) if this fails, reject the submission, then create a staging project from the copy and fix all problems related to the failure
2o) if everything is fine, create an ISO, test it in openQA and continue to step 3
3) once all automatic tests pass and everything requiring a manual check is done, repeat the automatic tests right before the real submit to minimize race conditions

This way Factory should always pass openQA, so it has at least basic quality.

For the problem of related commits I propose a simple solution: express the dependency in Requires and have a bot that checks whether the Factory copy can satisfy the dependencies; if not, it searches all submit requests for a new enough version and merges the submissions together. This way you have proper dependencies and need no manual human intervention. (A rough sketch of this flow is below, after the quoted mail.)

To handle so many packages in Factory, I think it is important to have a way to automate as much as possible. (I think that some of my ideas require writing code or modifying the OBS code, but for the future that is more important than introducing more human interaction.) An example is YaST, which has almost 1M lines of source code; without automation we cannot keep it up to date and find problems quickly enough. So we work quite hard on making things work without our interaction, and we only take action when a problem is found.

Josef

On Thu, 28 Nov 2013 14:49:32 +0100 Stephan Kulow <coolo@suse.de> wrote:
Hi
It's Thanksgiving today and we had a great harvest - "Bottle" rocks. But today is also a good time to plant new seeds for even better openSUSE releases. Factory has a healthy growth, but we have to make sure it's growing in the right direction and so the openSUSE team at SUSE had an on-and-off discussion basically since 12.3 on how to improve things.
But first let me give you some background, that you might not be aware of: 10.3 had 3334 packages, 11.1 3746, 3605 for 11.2, 3807 for 11.3, 4784 in 12.1, 5710 in 12.2, 6246 in 12.3, 6678 in 13.1, 6800 right now in Factory. If you need a picture, look at http://s.kulow.org/packages
Integrating these to make a good distribution is real work. And one of my favourite songs (in that context) goes:
No one said it would be easy But no one said it'd be this hard No one said it would be easy No one thought we'd come this far
In that song Sheryl Crow sings "It's just a question of eliminating obstacles", so what did we in the openSUSE Team do to help? We focused on getting a grip on testing by improving openQA (http://s.kulow.org/openqa-blog), but we soon found out that it was not good enough to test Factory ISOs. Factory is broken often enough not to produce ISOs at all, ISOs can't be installed, and once these problems are sorted out, we found in openQA very basic things to be broken - but it was too late to protect Factory users from running into them.
One thing I tried was to set up "rings" to help ease the very painful staging projects (with 6800 packages, every staging project as we use them is a monster). That experiment has shown rings to be a worthy way to check, but they won't work as I thought with the OBS as it is. We need to think bigger. So we tried to come up with an idea on how to improve the Factory development process that includes a more clever way to utilize staging projects and openQA.
As this development process is a bit hard to explain in email, Alberto and Ancor prepared an interactive diagram:
https://progress.opensuse.org/workflow/factory-proposal.html
We basically want to put the pressure on the submitting packager not the user. Using factory should be safe, for this we want to revive a thing that has been lost on the way: Bernhard's factory-tested project.
And we want to open another submission path: from staging projects. Consider a situation where recent updates to GNOME and automake 'clash', causing problems when they're installed together. Right now we throw both into Factory, breaking both for our Factory users until we solve the issues. Instead, we think working on these issues in a separate 'staging project' could be the solution. None of us knows how *exactly* it will look because we need to get a conversation with the OBS team going. But the basic idea is:
- GNOME:Factory stays the devel project of things that relate to GNOME updates
- devel:tools stays the devel project of things that relate to automake updates
- on automake updates, we open up a new devel project that stages GNOME packages for the new automake. In there, GNOME devs and automake experts work together to fix them and updates in there are submitted either back to GNOME:Factory and are integrated right into Factory. Or if that's not possible, the automake update is "grouped" with various GNOME package updates and these updates end in Factory together.
There are several problems with the current "everything through devel project" approach we need to solve. Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
We have more ideas, but we can only achieve that if we get help, so let me finish with another favourite of mine
What would you think if I sang out of tune? Would you stand up and walk out on me? Lend me your ears and I'll sing you a song And I'll try not to sing out of key Oh, I get by with a little help from my friends
Greetings, Stephan
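[The per-submission flow Josef outlines above (steps 1, 2f, 2o and 3, plus the dependency-merging bot) could look roughly like the following sketch. It is only a model of the idea, not real OBS or openQA code; build_in_copy, run_openqa and the request fields are hypothetical stand-ins, and the string version comparison is a simplification.]

# Hedged sketch of the proposed bot: every submission is built in its own
# copy of Factory, tested, and related submit requests are merged.
# None of these functions talk to the real OBS/openQA; they are placeholders.
from dataclasses import dataclass, field

@dataclass
class Submission:
    package: str
    requires: dict = field(default_factory=dict)   # e.g. {"automake": "1.14"}

def merge_related(sub, pending, factory_versions):
    """If Factory cannot satisfy a Requires, pull in the pending submission
    that provides a new enough version (the 'related commits' bot)."""
    group = [sub]
    for dep, version in sub.requires.items():
        if factory_versions.get(dep, "") < version:   # naive version compare
            match = next((p for p in pending if p.package == dep), None)
            if match:
                group.append(match)
    return group

def handle(sub, pending, factory_versions, build_in_copy, run_openqa):
    group = merge_related(sub, pending, factory_versions)
    if not build_in_copy(group):                       # step 1 / 2f
        return "rejected: create a staging project from the copy and fix the failures"
    if not run_openqa(group):                          # step 2o
        return "rejected: openQA failures"
    if not (build_in_copy(group) and run_openqa(group)):   # step 3: re-check before submit
        return "rejected: race with other submissions, retry"
    return "accepted: " + ", ".join(s.package for s in group)

# Toy run with build and openQA stubbed out as always-green.
pending = [Submission("automake")]
sub = Submission("gnome-shell", {"automake": "1.14"})
print(handle(sub, pending, {"automake": "1.13"},
             build_in_copy=lambda group: True, run_openqa=lambda group: True))
# accepted: gnome-shell, automake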
Quoting Josef Reidinger <jreidinger@suse.cz>:
Hi Coolo, good article and interesting idea. I think I also commented on the original mail about rings. I am not sure it is the right direction to go, because there is still too much human interaction. I think that in the optimal case all the work should be done by machines and humans should only solve exceptions or special cases. Of course it depends on whether we have enough build power to leave all the work to machines.
My idea is much simpler:
1) for every package submission to Factory, create its own COW copy of Factory and try to build the package and all of its dependencies there (quick for a leaf package, slow for a core package)
2f) if this fails, reject the submission, then create a staging project from the copy and fix all problems related to the failure
2o) if everything is fine, create an ISO, test it in openQA and continue to step 3
Very similar to what we are proposing so far!

But this stage can be avoided IMHO. We do not need ISOs to test GCC, kernel or systemd. The ISO is needed to test the installation process: YaST, KIWI or whatever tool is used in the medium.
On Thu, 28 Nov 2013 16:24:09 +0100 Alberto Planas Dominguez <aplanas@suse.de> wrote:
Very similar to what we are proposing so far!
But this stage can be avoided IMHO. We do not need ISOs to test GCC, kernel or systemd. The ISO is needed to test the installation process: YaST, KIWI or whatever tool is used in the medium.
Sorry, but I completely disagree. Maybe it is because I am in the YaST team, but you would be surprised how many YaST "bugs" are caused by problems in the underlying layers. YaST starts a bunch of various scripts, tries to manage services in systemd and does a lot of similar things. So if systemd is broken, or even if it changes behavior incompatibly, YaST starts failing without any changes in the YaST code. And in fact this is often the source of broken YaST code: nobody told us that something changed in the system, so we did not adapt the YaST code to the change.

Josef
On 28.11.2013 16:53, Josef Reidinger wrote:
Sorry, but I completely disagree. Maybe it is because I am in the YaST team, but you would be surprised how many YaST "bugs" are caused by problems in the underlying layers. YaST starts a bunch of various scripts, tries to manage services in systemd and does a lot of similar things. So if systemd is broken, or even if it changes behavior incompatibly, YaST starts failing without any changes in the YaST code. And in fact this is often the source of broken YaST code: nobody told us that something changed in the system, so we did not adapt the YaST code to the change.
Yeah, I agree Alberto's examples are wrong. But there are updates that are indeed hard to break yast with, e.g. chromium updates. We don't need ISOs for *that* :)

Greetings, Stephan
On Thu, 28 Nov 2013 16:56:24 +0100 Stephan Kulow <coolo@suse.de> wrote:
Yeah, I agree Alberto's examples are wrong. But there are updates that are indeed hard to break yast with, e.g. chromium updates. We don't need ISOs for *that* :)
Greetings, Stephan
Yes, an ISO is not needed. But it would be nice if we also had an automatic test for it (of course I expect we would start by covering the important packages). Even a simple test - install it, run it, check whether some window appears and try to show the opensuse.org page - can be enough.

I agree that an ISO is not needed for stuff that does not affect installation. Still, I think it should be covered. In the past I played with something similar, and having a cloud for this task plus a midnight snapshot of the latest Factory makes the task much easier. You simply say that you want a machine with the latest snapshot, run some tests there (like updating a package, starting it, etc.) and then kill the machine. For non-trivial testing including a browser, synchronization of repositories and creating RPM repository metadata, it took around one hour, which is still an acceptable time for me.

Josef
Josef Reidinger - 17:17 28.11.13 wrote:
Yes, an ISO is not needed. But it would be nice if we also had an automatic test for it (of course I expect we would start by covering the important packages). Even a simple test - install it, run it, check whether some window appears and try to show the opensuse.org page - can be enough.
Well, we have automatic test for this - openQA ;-)
I agree that an ISO is not needed for stuff that does not affect installation. Still, I think it should be covered. In the past I played with something similar, and having a cloud for this task plus a midnight snapshot of the latest Factory makes the task much easier. You simply say that you want a machine with the latest snapshot, run some tests there (like updating a package, starting it, etc.) and then kill the machine. For non-trivial testing including a browser, synchronization of repositories and creating RPM repository metadata, it took around one hour, which is still an acceptable time for me.
We hope to use openQA for something like this as well in the future...

--
Michal HRUSECKY          SUSE LINUX, s.r.o.
openSUSE Team            Lihovarska 1060/12
PGP 0xFED656F6           19000 Praha 9
mhrusecky[at]suse.cz     Czech Republic
http://michal.hrusecky.net  http://www.suse.cz
On Thu, Nov 28, Josef Reidinger wrote:
1) for every package submission to Factory, create its own COW copy of Factory and try to build the package and all of its dependencies there (quick for a leaf package, slow for a core package)
This cannot work because even today a new SR is not built against the target project. Instead, the result of the origin project is used as a reference for whether the new package at least builds.

Perhaps it should be verified how many resources it would take in practice if at least the SR is built just against the target. And maybe the result of that build-it-once-more could be used right away when the SR is accepted?

Olaf
Stephan, Thanks a lot for sharing your thoughts with us. Quoting Stephan Kulow <coolo@suse.de>:
Hi
But first let me give you some background, that you might not be aware of: 10.3 had 3334 packages, 11.1 3746, 3605 for 11.2, 3807 for 11.3, 4784 in 12.1, 5710 in 12.2, 6246 in 12.3, 6678 in 13.1, 6800 right now in Factory. If you need a picture, look at http://s.kulow.org/packages
impressive... And we're talking source packages here, not binary packages.
https://progress.opensuse.org/workflow/factory-proposal.html
We basically want to put the pressure on the submitting packager not the user. Using factory should be safe, for this we want to revive a thing that has been lost on the way: Bernhard's factory-tested project.
That looks interesting.. with two, for me, undefined 'dark processes' with huge impact:
- Needs staging?
- Needs QA?
Are there already ideas on how to implement/formulate this decision tree?
And we want to open another submission path: from staging projects. Consider a situation where recent updates to GNOME and automake 'clash', causing problems when they're installed together. Right now we throw both into Factory, breaking both for our Factory users until we solve the issues. Instead, we think working on these issues in a separate 'staging project' could be the solution. None of us knows how *exactly* it will look because we need to get a conversation with the OBS team going. But the basic idea is:
ok, I think THAT partially answers above question with a 'no'
- GNOME:Factory stays the devel project of things that relate to GNOME updates
- devel:tools stays the devel project of things that relate to automake updates
- on automake updates, we open up a new devel project that stages GNOME packages for the new automake. In there, GNOME devs and automake experts work together to fix them and updates in there are submitted either back to GNOME:Factory and are integrated right into Factory. Or if that's not possible, the automake update is "grouped" with various GNOME package updates and these updates end in Factory together.
In some cases, those breakages are easily identified.. in others, they are not.

Staying with the GNOME examples (where I happen to also have some good insight): once G:F is submitted to Factory, it is known to work on top of what Factory is 'at that moment'. There are very few submits 'auto forwarded' (the team decides that on a case-by-case basis.. usually stable dot releases are considered safe to forward). But as you say, an 'incoming automake' at the same time would of course 'change' what the underlying Factory was, potentially breaking it (automake upgrades are known for that and well in our minds). So, in this case, I'd consider 'automake' to be the critical component asking for staging.

But what about other cases, where it's less obvious? Like GLIB 2.32 (back in the days) when the 'include' style was changed to 'force' devs to do what upstream originally intended: only #include <glib.h>, no other headers. Inside G:Factory this was obviously sorted out prior to submission to Factory. And I believe we worked well enough in identifying the problem of this, knowing that other stuff relies heavily on glib, triggering a staging project (where I fixed a bunch of packages myself). What, though, if we had not identified this 'subtle' change as critical? Who would be responsible for 'detecting' such things? (ok, let's not do a RACI for that.. at the latest with 'A' we will fail to assign this to anybody).

So far it's in 'trying' to detect those cases early enough, Stephan does a great job in this, his endless experience is a great asset for him being able to do so.

Also, we need to be able to define an exit trigger from a staging project: when do we consider it 'ready'. This does not forcibly mean 100% 'builds' against the offending package (imho), as this could potentially block a kernel from entering due to some low-hanging fruit just not having support for that kernel.. critical for the users of that module, sure, but critical enough to block a kernel update?

(just to show, this is not black/white magic.. and describing this in a way that it's clear and not 'subject to the mood of Stephan' is difficult, but should be a goal to strive for).

Dominique
On Thursday, 28 November 2013, 14:32:54, Dominique Leuenberger a.k.a. Dimstar wrote:

Hello Dominique,
https://progress.opensuse.org/workflow/factory-proposal.html
We basically want to put the pressure on the submitting packager not the user. Using factory should be safe, for this we want to revive a thing that has been lost on the way: Bernhard's factory-tested project.
That looks interesting.. with two, for me, undefined 'dark processes' with huge impact:
- Needs staging?
- Needs QA?
Yep, they are dark now because we plan to send the detailed parts as soon as we have something to show to the community, or as we start working on them. It for sure won't be an on/off switch; rather, we will slowly adapt the tools and then enable them as we roll along and see that the features are ready. :)
Are there already ideas on how to implement/formulate this decision tree?
There are some really, really rough ideas which are not good for posting right now; rather, allow us to think about them internally, and when we start working on them, as I said above, it will all be sent here for discussion and optimization. Basically, right now the decisions for both of those are made by coolo or me, with mixed results. So we will formalize it and give the power to the technical review team, who see the influx of new stuff in there, hopefully with better results. And that leaves us free to actually improve the tooling around it rather than just work with it :)
... ok, I think THAT partially answers above question with a 'no'
Eeexactly, see above :P

*snip*
So far it's in 'trying' to detect those cases early enough, Stephan does a great job in this, his endless experience is a great asset for him being able to do so.
Also, we need to be able to define an exit trigger from a staging project: when do we consider it 'ready'. This does not forcibly mean 100% 'builds' against the offending package (imho), as this could potentially block a kernel from entering due to some low-hanging fruit just not having support for that kernel.. critical for the users of that module, sure, but critical enough to block a kernel update?
(just to show, this is not black/white magic.. and describing this in a way that it's clear and not 'subject to the mood of Stephan' is difficult, but should be a goal to strive for).
Yes, I wholeheartedly agree that currently it is just magic we do there.

One of the options on the table was "let's force staging projects on everything", but sadly that is too much for the OBS and we would just be boiling it with all the rebuilds. In the end we went with the above way, which is technically possible, and we can learn and improve it along the way with regard to detecting culprits.

Basically, the review team will get updated SR reviewing tools with the possibility to group, review, and mark any submission for QA/staging, with some possibility to edit the severity/f**kupability of the package so they can make the right decision. But even though I am for sure interested in talking about it now, let's wait till we work on that area. Have no fear: one of the goals of this is to at worst keep the workload on reviewers the same, and preferably reduce it, which again will be expanded on later :)

HTH
Tom
On 28.11.2013 16:01, Tomáš Chvátal wrote:
One of the options on the table was "let's force staging projects on everything", but sadly that is too much for the OBS and we would just be boiling it with all the rebuilds. In the end we went with the above way, which is technically possible, and we can learn and improve it along the way with regard to detecting culprits.
Basically, the review team will get updated SR reviewing tools with the possibility to group, review, and mark any submission for QA/staging, with some possibility to edit the severity/f**kupability of the package so they can make the right decision.
But even though I am for sure interested in talking about it now, let's wait till we
As I said: we have to talk to the OBS team about possible solutions for the "too much on the OBS" problem. Perhaps our assumption is wrong and we can do indeed what Josef suggests: create a staging project for every submission. Then the whole discussion on who reviews and decides what will be wasted time. But for now we assume certain limits on the resources we have and that means more humans required :)

Greetings, Stephan
On Thursday, 28 November 2013, 16:16:32, Stephan Kulow wrote:
As I said: we have to talk to the OBS team about possible solutions for the "too much on the OBS" problem. Perhaps our assumption is wrong and we can do indeed what Josef suggests: create a staging project for every submission. Then the whole discussion on who reviews and decides what will be wasted time.
Indeed, that would be the best solution. But really, let's see when we get to that point. For now I agree it is pointless to discuss it in more depth.

Tom
On Thursday, 28 November 2013, 16:16:32, Stephan Kulow wrote:
As I said: we have to talk to the OBS team about possible solutions for the "too much on the OBS" problem. Perhaps our assumption is wrong and we can do indeed what Josef suggests: create a staging project for every submission. Then the whole discussion on who reviews and decides what will be wasted time.
we can try at least. At least as long as we do not publish it and our priority handling ensures that they do not block all other people from using OBS.

--
Adrian Schroeter
email: adrian@suse.de
SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg)
Maxfeldstraße 5, 90409 Nürnberg, Germany
On Thu, 28 Nov 2013 16:22:41 +0100 Adrian Schröter <adrian@suse.de> wrote:
we can try at least. At least as long as we do not publish it and our priority handling ensures that they do not block all other people from using OBS.
I think that the only tricky part of my suggestion is the COW, because right now if I create a new repo as a copy of another one, everything gets rebuilt; but I want to just copy what is there and modify only one package, which can possibly trigger other builds, but not a rebuild of the whole 6k packages if it is not needed.

Josef
On Thursday, 28 November 2013, 16:31:38, Josef Reidinger wrote:
I think that the only tricky part of my suggestion is the COW, because right now if I create a new repo as a copy of another one, everything gets rebuilt; but I want to just copy what is there and modify only one package, which can possibly trigger other builds, but not a rebuild of the whole 6k packages if it is not needed.
we have the trigger="localdep" setting. That means OBS will only rebuild packages which depend on that one.

However, in the long run this only tells us whether something compiles. It does not say whether it actually works. So, with splitting things up, we also need much better automated QA, since no one will be able to test all the staging projects anymore.

--
Adrian Schroeter
email: adrian@suse.de
SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg)
Maxfeldstraße 5, 90409 Nürnberg, Germany
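[As a toy illustration of what "rebuild only packages which depend on that one" means - not of how the OBS scheduler actually implements it - the rebuild set can be thought of as the reverse-dependency closure of the changed package. The dependency data below is invented example data.]

# Toy model of a localdep-style rebuild: given build dependencies, rebuild
# only the packages that (transitively) depend on the changed one, instead
# of the whole project.
from collections import deque

build_requires = {
    "gnome-shell": {"glib2", "automake"},
    "glib2": {"automake"},
    "nautilus": {"glib2"},
    "chromium": set(),            # does not depend on automake at all
}

def rebuild_set(changed, build_requires):
    # invert the edges: who build-requires whom
    dependents = {}
    for pkg, deps in build_requires.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(pkg)
    todo, seen = deque([changed]), set()
    while todo:
        pkg = todo.popleft()
        for rdep in dependents.get(pkg, ()):
            if rdep not in seen:
                seen.add(rdep)
                todo.append(rdep)
    return seen

print(sorted(rebuild_set("automake", build_requires)))
# ['glib2', 'gnome-shell', 'nautilus'] -- chromium is left alone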
On Thu, 28 Nov 2013 16:46:35 +0100 Adrian Schröter <adrian@suse.de> wrote:
we have the trigger="localdep" setting. That means OBS will only rebuild packages which depend on that one.
However, in the long run this only tells us whether something compiles. It does not say whether it actually works. So, with splitting things up, we also need much better automated QA, since no one will be able to test all the staging projects anymore.
That is the reason why my proposal uses openQA. Of course we can have more sets of tests and run only a subset if any package affecting it changed. I hope that in the future we can focus more on automatic testing of results than on manual review of source code changes. <corporate>Because results matter</corporate> :) I think that even simply trying to start the software and verifying basic behavior can save us from a bunch of packaging and integration problems. Of course it cannot detect advanced problems, but I think that is not the goal of Factory.

Josef
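[A sketch of the "run only a subset of tests" idea could be as simple as a mapping from test sets to the packages they exercise. Everything here - suite names and package lists - is invented for illustration and is not existing openQA configuration.]

# Illustrative only: pick which test sets to run based on the packages
# changed in a submission. The mapping is made-up example data.
test_sets = {
    "installation": {"yast2", "libzypp", "systemd", "kiwi"},
    "gnome_desktop": {"gnome-shell", "gdm", "glib2"},
    "server_basics": {"apache2", "postgresql", "systemd"},
}

def tests_for(changed_packages):
    changed = set(changed_packages)
    return sorted(name for name, pkgs in test_sets.items() if pkgs & changed)

print(tests_for(["systemd"]))      # ['installation', 'server_basics']
print(tests_for(["chromium"]))     # [] -- nothing ISO-related needs to run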
Hello,

On Thursday, 28 November 2013, Josef Reidinger wrote:
I hope that in the future we can focus more on automatic testing of results than on manual review of source code changes. <corporate>Because results matter</corporate> :)
I have to disagree ;-)

I'm fine with adding automatic testing, but the manual code review should stay in place. A good example would be code like

  test "$USER" == 'jreidinger' && rm -fr ~jreidinger

If something like this is added, a human reviewer will easily notice it. However, the automatic test (which is running as a different user and therefore skips the rm command) will tell you:

  Test succeeded - everything is fine!

Results matter, right? ;-)

Another example where reading the code is the better choice are changes in rarely used code paths - it happened to me more than once that I found interesting[tm] (and buggy) code when reading the sources of various programs. Sometimes those code sections were "hidden" from most users, for example in an error handling section or in code that only 1% of the users use. It might be possible to write tests for the "only used by 1% of the users" code, but testing all error handling (and reproducing the errors to trigger this code path) is probably harder than proofreading the source code ;-)

Regards,

Christian Boltz
--
I've already burnt my fingers with upstream patches ... now let's see whats happen and let's wait that my fingers will cool down. [Werner Fink in https://bugzilla.novell.com/show_bug.cgi?id=752422]
On Thu, 28 Nov 2013 21:07:14 +0100 Christian Boltz <opensuse@cboltz.de> wrote:
Hello,
On Thursday, 28 November 2013, Josef Reidinger wrote:
I hope that in the future we can focus more on automatic testing of results than on manual review of source code changes. <corporate>Because results matter</corporate> :)
I have to disagree ;-)
I'm fine with adding automatic testing, but the manual code review should stay in place.
The tricky part is that for manual code review you need time, and with an increasing number of packages it is not possible unless manpower is increased. I agree with you that code review is really important, and for example in YaST we see a really good impact on code quality, but my experience is that for product quality automatic testing is more important.
A good example would be code like test "$USER" == 'jreidinger' && rm -fr ~jreidinger
If something like this is added, a human reviewer will easily notice it. However, the automatic test (which is running as a different user and therefore skips the rm command) will tell you:
Test succeeded - everything is fine!
Results matter, right? ;-)
Well, what you show is malicious code, and if I wanted to get it into Factory, I think I could add it even if nobody noticed it (OK, I expect a 95% chance of it going unnoticed). See the most famous /usr removal code - https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/commit/a047be85247755... - and that was not intentional malicious code; so if you hide it in some variable and spread it over a long shell script, then it is really hard to find.
Another example where reading the code is the better choice are changes in rarely used code paths - it happened to me more than once that I found interesting[tm] (and buggy) code when reading the sources of various programs. Sometimes those code sections were "hidden" from most users, for example in an error handling section or in code that only 1% of the users use.
I agree, but that is a bug in the software. Do you have an idea how much time a detailed code review of all the software in Factory would take? Even the diffs for projects like KDE or GNOME are really HUGE! I made statistics for YaST, and YaST itself contains almost 1,000,000 lines of source code; it is similar for the kernel. I think that we should only review our own patches and leave the main software to upstream. So if there are no patches of ours, or a patch is removed, then I think no manual review is needed (checking the checksum of the tarball should be automatic).
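[The "checking the checksum of the tarball should be automatic" part is easy to automate. A minimal sketch follows; the file name and checksum in the usage comment are placeholders, not real values.]

# Minimal sketch: verify that a source tarball matches a recorded sha256 sum,
# so unchanged upstream tarballs need no manual re-review.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def tarball_unchanged(path, recorded_sum):
    return sha256_of(path) == recorded_sum

# Example usage (hypothetical file and checksum):
# print(tarball_unchanged("yast2-3.1.0.tar.bz2", "ab34..."))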
It might be possible to write tests for the "only used by 1% of the users" code, but testing all error handling (and reproducing the errors to trigger this code path) is probably harder than proofreading the source code ;-)
Yes, I agree that 100% test coverage is utopia. But in the same way, I don't expect that we catch all errors in Factory. I think the goal is to have a stable Factory that at least ensures an application can still run and perform its basic functionality; for corner cases and specific conditions it is fine with me if it contains a bug that we fix after a user finds it, or that upstream fixes - and in the perfect case upstream creates a test case for it :)
Regards,
Christian Boltz
Thanks for your reply, it contains some interesting ideas that make me think more about my own solution.

Josef
On 29.11.2013 10:24, Josef Reidinger wrote:
The tricky part is that for manual code review you need time, and with an increasing number of packages it is not possible unless manpower is increased. I agree with you that code review is really important, and for example in YaST we see a really good impact on code quality, but my experience is that for product quality automatic testing is more important.
We thought about that too and it's indeed a limiting factor, but IMO having reviews is so valuable that I wouldn't want to throw it away just because it's hard.

Right now reviews have no priority at all, so the more packages we have the easier it is to get lost. And there is no support for sharing work in reviews; the webui interface to do reviews is suboptimal too (it's dominated by the diff and the discussion is very much hidden).

On the other hand we do have enough experienced packagers who can do reviews, but at the moment being a reviewer or not is black & white. So if you volunteer to review because you know how to package perl packages well, you end up being part of the same team as the one reviewing kernel submissions. So possibly we should have the reviews done by more specific groups instead of one big group?

That would leave us close to the Signed-Off tags kernel patches bear, but the OBS has no support for marking patches as reviewed by $Josef. Yet another idea how to improve things.

Greetings, Stephan
On 29.11.2013 11:12, Stephan Kulow wrote:
We thought about that too and it's indeed a limiting factor, but IMO having reviews is so valuable that I wouldn't want to throw it away just because it's hard.
Right now reviews have no priority at all, so the more packages we have the easier it is to get lost.
I agree. It's also quite a task to keep up with recent development and decisions.
And there is no support for sharing work in reviews; the webui interface to do reviews is suboptimal too (it's dominated by the diff and the discussion is very much hidden).
The request view could have (ajax-powered) tabs, where the first one only shows the discussion, the most important facts (like the review header) and the history (in a meaningful way). I would also put the build results and rpmlint issues on the first tab. The second tab would be the diff in all its beauty. With build results and rpmlint issues moved to the first tab, the diff can take up the full width and people need to scroll less. Form controls (accept/decline/add-review) would have to be shown as a separate bento box (OBS code insider) down below, as it is today. Github users will find this vaguely familiar.
On the other hand we do have enough experienced packagers who can do reviews, but at the moment being a reviewer or not is black & white. So if you volunteer to review because you know how to package perl packages well, you end up being part of the same team as the one reviewing kernel submissions. So possibly we should have the reviews done by more specific groups instead of one big group?
So far, the review team has been a bunch of generalists, most of whom have a rather heavy footprint in the distro. This helped to keep up a general level of quality. However, quality reviews take a considerable amount of time - time which most of us volunteers can't always afford.

Dedicated topic groups are a neat idea, but I consider this the (sole) responsibility of the devel project. Instead of just accepting all random <strike>crap</strike>hot stuff into their devel project, maintainers should discuss and weigh the pros and cons. This happens to different degrees in different projects. However, hoping that the review team will just catch everything is definitely the wrong mood. But my estimation is that ~75% of devel project maintainers do it that way (maybe not knowingly). So rather than "fixing" the review team, we should fix the devel project maintainers' mindset. Rob and I started this a while ago under the "maintainer model cleanup" moniker.

Ultimately (and due to their lack of special knowledge), the review team members have to trust the devel project maintainers to do the right thing. Reviewers can only catch general issues or stuff that went in unnoticed. So every time I do reviews, I try to check opensuse-factory/-packaging for news (like tirp/krb5 ATM). But even though I try to do as well as I can, I won't dig through all the new code, all upstream bug trackers and ML discussions. I simply trust the devel project maintainer that this happened already. Otherwise, I would have to spend ~1 month per kernel submit request ;-)

That's why reviews emphasize the packaging side of things. That's really where upstream devs pay little attention. Therefore we have countless policies that distill years of experience, middle grounds and common agreement. Of course they aren't perfect and are sometimes outdated. But this is an important part of the "quality" of our distro. Otherwise we would just ship upstream tarballs.
That would leave us close to the Signed-Off tags kernel patches bear, but the OBS has no support for marking patches as reviewed by $Josef. Yet another idea how to improve things.
This partly works today. Everybody is free to leave his comment on each Factory submit request. But I agree, the integration and tooling has room for improvement :-)
On Thursday, 28 November 2013, at 16:01 +0100, Tomáš Chvátal wrote:
On Thursday, 28 November 2013, 14:32:54, Dominique Leuenberger a.k.a. Dimstar wrote:
(just to show, this is not black/white magic.. and describing this in a way that it's clear and not 'subject to the mood of Stephan' is difficult, but should be a goal to strive for).
Yes, I wholeheartedly agree that currently it is just magic we do there.
For the record, I don't think we need to have a perfect solution at the very beginning. It's just a matter of having the review team learn by experience what might break and have that take the long way instead of the short way.

Once we learn about the breakage patterns (and I'm pretty sure we know quite a few of them ;-)), and we use the proposal for them, we'll see less breakage and we can identify the next batch of breakage patterns.

(+ it's always possible to revert in Factory if a breakage is found)

Vincent

--
Happy people are not in a hurry.
On 29.11.2013 09:34, Vincent Untz wrote:
On Thursday, 28 November 2013, at 16:01 +0100, Tomáš Chvátal wrote:
On Thursday, 28 November 2013, 14:32:54, Dominique Leuenberger a.k.a. Dimstar wrote:
(just to show, this is not black/white magic.. and describing this in a way that it's clear and not 'subject to the mood of Stephan' is difficult, but should be a goal to strive for).
Yes, I wholeheartedly agree that currently it is just magic we do there.
For the record, I don't think we need to have a perfect solution at the very beginning. It's just a matter of having the review team learn by experience what might break and have that take the long way instead of the short way.
Once we learn about the breakage patterns (and I'm pretty sure we know quite a few of them ;-)), and we use the proposal for them, we'll see less breakage and we can identify the next batch of breakage patterns.
I don't think it should be the responsibility of the review team to check that. The review team doesn't integrate everything into a distro; that's the Factory maintainers team. Before we implemented reviews in the OBS, the reviewers had to decide when to accept which package set at what point in time. But this task (rightfully) moved to the Factory maintainers' set of responsibilities.

While I'm particularly cautious when I see glibc/autotools/systemd/... submissions, my (and any other reviewer's) main concern is not what it may break. It's whether the submission itself, on its own, is correct. Whether the submission should go through a staging project or not is to be discussed between Factory maintainers and the responsible devel project maintainers. But this isn't black and white either, and just asking people usually works best.
(+ it's always possible to revert in Factory if a breakage is found)
An often forgotten option!
On Thu, 28 Nov 2013, Stephan Kulow wrote:
Back in the old times some people inside SUSE thought of following the Debian testing way. The road block then was the inability to track bugs filed against a package.

To recap how Debian testing works - a package update gets pushed to "unstable", then, if no bugs against it appear for $X time, it gets automatically pushed to "testing" (if all dependencies it has at this point can be satisfied there - mind that Debian has a lot more versioned dependencies).

We still don't have a "package" field in bugzilla, so copying this scheme 1:1 doesn't work. But we don't have to re-invent the wheel, no?

The closest thing to Debian testing we have is openSUSE Tumbleweed, though it's a very manual process there. But isn't the proposal to have a Tumbleweed for Factory?

Thanks,
Richard.

--
Richard Biener <rguenther@suse.de>
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer
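[For reference, the migration rule Richard describes can be modelled very roughly like this. It is a simplified sketch: the real Debian migration logic ("britney") is considerably more involved, and the age threshold, bug check and example data below are placeholders.]

# Simplified model of Debian-style unstable -> testing migration: a package
# migrates once it has aged without open (release-critical) bugs and its
# dependencies are already satisfiable in testing.
from dataclasses import dataclass, field

@dataclass
class Upload:
    package: str
    days_in_unstable: int
    open_bugs: int
    depends: set = field(default_factory=set)

def can_migrate(upload, testing_packages, min_age_days=10):
    return (upload.days_in_unstable >= min_age_days
            and upload.open_bugs == 0
            and upload.depends <= testing_packages)

testing = {"glib2", "gcc"}
print(can_migrate(Upload("gnome-shell", 12, 0, {"glib2"}), testing))   # True
print(can_migrate(Upload("gnome-shell", 12, 1, {"glib2"}), testing))   # False: open bug
print(can_migrate(Upload("mutter", 12, 0, {"wayland"}), testing))      # False: dep missing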
Richard Biener - 15:45 28.11.13 wrote:
On Thu, 28 Nov 2013, Stephan Kulow wrote:
<snip>
Greetings, Stephan
Back in the old times some people inside SUSE thought of following the Debian testing way. The road block then was the inability to track bugs filed against a package. To recap how Debian testing works - a package update gets pushed to "unstable", then, if no bugs against it appear for $X time, it gets automatically pushed to "testing" (if all dependencies it has at this point can be satisfied there - mind that Debian has a lot more versioned dependencies). We still don't have a "package" field in bugzilla, so copying this scheme 1:1 doesn't work.
But we don't have to re-invent the wheel, no?
The closest thing to Debian testing we have is openSUSE Tumbleweed though it's a very manual process there.
But isn't the proposal to have a Tumbleweed for Factory?
Don't really get the last question, but let me share my opinions. Tumbleweed is great, but it is a workaround for a Factory that is not stable enough. If Factory had been stable enough for everyday use (with only a few minor hiccups), we wouldn't have needed Tumbleweed, IMHO. -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, 28 Nov 2013, Michal Hrusecky wrote:
Richard Biener - 15:45 28.11.13 wrote:
On Thu, 28 Nov 2013, Stephan Kulow wrote:
<snip>
Greetings, Stephan
Back in the old times some people inside SUSE thought of following the Debian testing way. The road block then was the inability to track bugs filed against a package. To recap how Debian testing works - a package update gets pushed to "unstable", then, if no bugs against it appear for $X time, it gets automatically pushed to "testing" (if all dependencies it has at this point can be satisfied there - mind that Debian has a lot more versioned dependencies). We still don't have a "package" field in bugzilla, so copying this scheme 1:1 doesn't work.
But we don't have to re-invent the wheel, no?
The closest thing to Debian testing we have is openSUSE Tumbleweed though it's a very manual process there.
But isn't the proposal to have a Tumbleweed for Factory?
Don't really get the last question, but let me share my opinions. Tumbleweed is great, but it is a workaround for a Factory that is not stable enough. If Factory had been stable enough for everyday use (with only a few minor hiccups), we wouldn't have needed Tumbleweed, IMHO.
IMHO we would have an "auto"-Factory which simply aggregates from all devel projects, for example, and a "Factory" which is a Tumbleweed of that. But wait ... it already works that way. I don't think there will be a good solution without "splitting" Factory. What you then call Factory is up to you. With the "Tumbleweed for Factory" I was simply proposing that in addition to Factory as we have it today we have a "stable" Factory that is operated in "Tumbleweed" mode - picking up changes from Factory in a more controlled manner (aka when they work). Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 11/28/2013 09:56 AM, Michal Hrusecky wrote:
Richard Biener - 15:45 28.11.13 wrote:
On Thu, 28 Nov 2013, Stephan Kulow wrote:
<snip>
But we don't have to re-invent the wheel, no?
The closest thing to Debian testing we have is openSUSE Tumbleweed though it's a very manual process there.
But isn't the proposal to have a Tumbleweed for Factory?
Don't really get the last question, but let me share my opinions. Tumbleweed is great, but it is a workaround for a Factory that is not stable enough. If Factory had been stable enough for everyday use (with only a few minor hiccups), we wouldn't have needed Tumbleweed, IMHO.
If Factory is stable enough for everyday use one could argue we should be able to release anytime. Thus the discussion about changing the release cycle would become moot. Yes, yes, I know that even if Factory is stable enough, i.e. releasable at any time, there would be additional testing, marketing etc. for a "release proper". However, with a usable Factory the testing effort for the final release should be drastically reduced, and thus the burden significantly lower. Additionally, one could pose the question of whether the exceptional quality we have achieved with 12.3 and 13.1 is necessary to begin with. I am not saying we should release a crappy distro. Everything is a compromise. The questions we should possibly ask are: - Does the improved quality in 12.3 and 13.1 over previous releases (11.4, 12.1, 12.2) justify the additional effort? - Does this improved quality benefit our goal of increasing the user base? - Does the improved quality help us gain more contributors? Basically, are we getting enough bang for the buck? Yes, it was nice to upgrade a bunch of my machines to 13.1 with RC1 and RC2, but despite the exceptional effort in testing we still missed a good chunk of stuff. I had two upgrades of machines that were still running grub fail miserably (there is one bug filed). The point here is that testing utopia is not achievable, we all know this; even 100% test coverage does not catch all bugs, and even the exceptional test effort produced for 12.3 and 13.1 let some pretty annoying stuff slip through. Therefore the increased testing burden for 12.3 and 13.1 may not be justifiable for future releases, especially if we are aiming for a "more usable Factory" model. Thus the release cycle discussion may just be an unnecessary side show. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 7:22 1.12.13 wrote:
...
If Factory is stable enough for everyday use one could argue we should be able to release anytime. Thus the discussion about changing the release cycle would become moot.
Well, even if we stabilize Factory well enough for geeks to use it every day, it doesn't necessarily mean that Joe's grandma will be happy with it. There is the installer thing and first-time setup. There is the "where did my menu go, it was right here yesterday", "my icon changed", "everything is broken". And stable enough for everyday use for geeks might not be stable enough for grandma - aka this configuration changed, you need to edit this and that, we renamed your interfaces, software X no longer provides feature Y, ... -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 28.11.2013 15:45, Richard Biener wrote:
Back in the old times some people inside SUSE thought of following the Debian testing way. The road block then was the inability to track bugs filed against a package. To recap how Debian testing works - a package update gets pushed to "unstable", then, if no bugs against it appear for $X time, it gets automatically pushed to "testing" (if all dependencies it has at this point can be satisfied there - mind that Debian has a lot more versioned dependencies). We still don't have a "package" field in bugzilla, so copying this scheme 1:1 doesn't work.
You of all people should know best that a bug-free gcc version is *not* ready to be integrated into Factory, so I don't see this as a good approach.
But we don't have to re-invent the wheel, no?
The closest thing to Debian testing we have is openSUSE Tumbleweed though it's a very manual process there.
But isn't the proposal to have a Tumbleweed for Factory?
You either misunderstand what Tumbleweed does or what Factory's role is in general. Tumbleweed is an add-on repo for a stable release that has several up-to-date components that are updated frequently, while Factory is a rolling distribution integrating new technology all the time. So a "Tumbleweed for Factory" does not make sense in my world. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Stephan Kulow wrote:
There are several problems with the current "everything through devel project" approach we need to solve. Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
Whenever discussions like this come up, I wonder if pieces from the Mozilla process that I'm so immersed in might apply. In this case, what we're doing with Nightly might have some merit (though of course it's a different scale and not 1:1 anyhow): What Mozilla has that is similar to Factory is the mozilla-central repository, it's the "spinal column" of development where all patches come together and go on to integrated testing. We have a few integration repositories now where the actual patches land and automated tests are run against that set of code; let's compare them to the devel repos that feed into Factory. Anything landing there that breaks any automated tests (unit tests, perf tests, etc.) is "backed out", i.e. reverted, so we get back to a clean state. So only a state of patches that was tested against the whole rest of the product can "stick" and be merged into the main mozilla-central repo. And if anything breaks with the merge, I guess the whole merge will be backed out (due to the set of code, this rarely happens, though - I'd guess that would happen more often with Factory and devel projects). Also, we generate builds the whole time and our automation is rigged to create Nightly builds (I guess that would match the Factory ISOs here) only from the most recent state that passed tests successfully. I'd guess here this would mean that Foo:devel would only get pulled into Factory if it compiles successfully against the current Factory (can we know beforehand if any dependencies on it compile successfully as well?) and gets reverted if it actually does break things when being pulled in (is that possible?). And Bar:devel will only be able to get into Factory when the Foo:devel "merge" has been cleared either way. That said, I guess merges with a lot of dependencies and breakages there probably will need all the dependency updates to be staged and tested together before they can be merged all together, in a similar way to what you described, possibly. This is just from my POV as someone who has been immersed in the Mozilla process for a long time, but I have never done distro stuff, and I see how 1) Mozilla's release engineering / CI system is probably unmatched in scale (even though OBS is awesome) and 2) a distro has a completely different scale in terms of the amount of code and build times involved. Robert Kaiser -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 28.11.2013 16:04, Robert Kaiser wrote:
Stephan Kulow schrieb:
There are several problems with the current "everything through devel project" approach we need to solve. Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
Whenever discussions like this come up, I wonder if pieces from the Mozilla process that I'm so immersed in might apply.
Hi Robert, I'm having a bit of a problem following your thoughts, do you have some URLs describing that process?
<snip>
That said, I guess merges with a lot of dependencies and breakages there probably will need all the dependency updates to be staged and tested together before they can be merged all together, in a similar way to what you described, possibly.
The merging of things is at the moment exactly our problem. A gcc update is not a patch you can take in and out, so you have to be *really* careful and try somewhere else. But from what I understand, our goal is not that far away.
Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Stephan Kulow wrote:
On 28.11.2013 16:04, Robert Kaiser wrote:
Whenever discussions like this come up, I wonder if pieces from the Mozilla process that I'm so immersed in might apply.
I'm having a bit of a problem following your thoughts, do you have some URLs describing that process?
There's a lot of documentation about all kinds of process at Mozilla on developer.mozilla.org but I couldn't find a document that describes very well what I wanted to point to here. That's why I tried to describe things in my message here. Which parts are hard for you to understand from my explanation?
The merging of things is at the moment exactly our problem. A gcc update is not a patch you can take in and out, so you have to be *really* careful and try somewhere else. But from what I understand, our goal is not that far away.
I think that what would fit my description best would be to run compilation and tests for any dependencies of any package (or set of packages) that is to be pulled into Factory, and only make this "merge" effective when those succeed. I guess for many packages that would hopefully be a small set that needs to be run; for things like gcc it could be huge, of course. The question is then if the infrastructure is able to deal with that load - otherwise, I guess some compromise is needed. Robert Kaiser -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
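(A rough sketch of the gate described here, in Python. The reverse-dependency lookup and the build/test calls are stand-ins for whatever OBS and openQA would actually provide, so treat all the names as placeholders rather than as a description of existing tooling.)

    def merge_into_factory(staged, factory, reverse_deps, builds_ok, tests_ok):
        """Make a staged set of package updates effective in Factory only if
        everything that depends on them still builds and the tests still pass.

        staged       -- dict: package name -> new sources (the grouped update)
        factory      -- dict: package name -> current Factory sources
        reverse_deps -- callable: set of names -> packages that build against them
        builds_ok    -- callable: (package, trial package set) -> bool
        tests_ok     -- callable: trial package set -> bool
        """
        trial = dict(factory, **staged)          # Factory plus the staged update
        affected = reverse_deps(set(staged))     # tiny for a leaf package, huge for gcc
        if not all(builds_ok(pkg, trial) for pkg in affected):
            return False                         # the "merge" never becomes visible to users
        if not tests_ok(trial):
            return False
        factory.update(staged)                   # only now does Factory actually change
        return True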
On 29.11.2013 22:42, Robert Kaiser wrote:
The merging of things is at the moment exactly our problem. A gcc update is not a patch you can take in and out, so you have to be *really* careful and try somewhere else. But from what I understand, our goal is not that far away.
I think that what would fit my description best would be to run compilation and tests for any dependencies of any package (or set of packages) that is to be pulled into Factory, and only make this "merge" effective when those succeed. I guess for many packages that would hopefully be a small set that needs to be run; for things like gcc it could be huge, of course. The question is then if the infrastructure is able to deal with that load - otherwise, I guess some compromise is needed.
This is basically what Josef describes too. We'll have to try it. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thursday, 28 November 2013, 14:49:32, Stephan Kulow wrote: ...
We basically want to put the pressure on the submitting packager not the user. Using factory should be safe, for this we want to revive a thing that has been lost on the way: Bernhard's factory-tested project.
While I would re-phrase this, I 100% support it. I think we always wanted to have an always-usable Factory, and in total this helps all packagers. The devel projects are still a good idea, but I agree 100% that they are not enough to validate a new submission. Great that you picked up the make-factory-always-usable approach :) My hope is that we can develop some kind of objective QA measurements in the future for this. E.g. the X server is not only compiling, but even starting :) Or a bluez update does not break the tools for one of the desktops, as is even the case in the released openSUSE:13.1 version.
And we want to open another submission path: from staging projects. Consider a situation where recent updates to GNOME and automake 'clash', causing problems when they're installed together. Right now we throw both into Factory, breaking both for our Factory users until we solve the issues. Instead, we think working on these issues in a separate 'staging project' could be the solution. None of us knows how *exactly* it will look alike because we need to get a conversation with the OBS team going. But the basic idea is:
- GNOME:Factory stays the devel project of things that relate to GNOME updates - devel:tools stays the devel project of things that relate to automake updates - on automake updates, we open up a new devel project that stages GNOME packages for the new automake. In there, GNOME devs and automake experts work together to fix them and updates in there are submitted either back to GNOME:Factory and are integrated right into Factory. Or if that's not possible, the automake update is "grouped" with various GNOME package updates and these updates end in Factory together.
There are several problems with the current "everything through devel project" approach we need to solve.
I just want to point out that this is a process limitation of Factory. The OBS source link system was actually designed so that you could submit from any project and the devel project would get it as well. With current .changes files it will most likely end up in a broken state that needs manual merging. This could be improved, though, but it is not a big problem if this happens anyway. Or, as an alternative solution, you just put source links pointing to the devel projects into the staging projects. Afterwards developers need to fix the sources in their devel project, but monitor the results in the staging project as well. Submissions can then be created from either the devel projects or from the staging project. It will not lead to merge problems with this setup. -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
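(To illustrate that second option: a staging project could be populated with plain source links back to the devel projects, e.g. via "osc linkpac". The project and package names below are invented, and whether the real workflow would drive this through osc, through the API, or through something else entirely is exactly the open question in this thread.)

    import subprocess

    STAGING = "openSUSE:Factory:Staging:automake"   # hypothetical staging project name

    # Packages kept as source links pointing at their devel projects, so the
    # devel projects remain the place where the actual fixes happen.
    links = [
        ("devel:tools", "automake"),
        ("GNOME:Factory", "glib2"),
        ("GNOME:Factory", "gnome-shell"),
    ]

    for devel_project, package in links:
        # "osc linkpac <source prj> <pkg> <target prj>" creates a _link in the
        # staging project; builds there follow the devel project automatically.
        subprocess.run(["osc", "linkpac", devel_project, package, STAGING],
                       check=True)

    # Once the staging project builds and passes QA, the submissions to Factory
    # can be created from either side, e.g.:
    #   osc submitrequest devel:tools automake openSUSE:Factory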
Quoting Adrian Schröter <adrian@suse.de>:
My hope is that we can develop some kind of objective QA measurements in the future for this. E.g. the X server is not only compiling, but even starting :) Or a bluez update does not break the tools for one of the desktops, as is even the case in the released openSUSE:13.1 version.
Please, stop using this example! This was CONSCIOUSLY decided, HERE on the Factory mailing lists. Dominique -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thursday, 28 November 2013, 16:18:57, Dominique Leuenberger a.k.a. Dimstar wrote:
Quoting Adrian Schröter <adrian@suse.de>:
My hope is that we can develop some kind of objective QA measurements in the future for this. E.g. the X server is not only compiling, but even starting :) Or a bluez update does not break the tools for one of the desktops, as is even the case in the released openSUSE:13.1 version.
Please, stop using this example! This was CONSCIOUSLY decided, HERE on the Factory mailing lists.
Well, I do not want to spoil this discussion with that, but it comes to the point where we need to define our quality criteria up front and follow them. And such criteria must be objective and should not be driven by current technical problems if we want to have user acceptance. bye adrian PS: seriously, can you imagine Android shipping broken sound to its users? No, it would just not ship, because it does not meet the QA criteria. That is why I really recommend to everybody of my friends to use Android instead of classic GNU/Linux these days. And together with the new PA breakages I have to use two command lines every few minutes, just because it loses connections while it worked before. Even worse, I do not care to create bug reports anymore (well, I did for bluez), because it is fighting against windmills. But with a working QA system, I would actually provide test cases in such situations, because I would believe that my work is not void tomorrow. Yes, long way to go... -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Quoting Adrian Schröter <adrian@suse.de>:
On Thursday, 28 November 2013, 16:18:57, Dominique Leuenberger a.k.a. Dimstar wrote:
Quoting Adrian Schröter <adrian@suse.de>:
My hope is that we can develop some kind of objective QA measurements in the future for this. E.g. the X server is not only compiling, but even starting :) Or a bluez update does not break the tools for one of the desktops, as is even the case in the released openSUSE:13.1 version.
Please, stop using this example! This was CONSCIOUSLY decided, HERE on the Factory mailing lists.
Well, I do not want to spoil this discussion with that, but it comes to the point where we need to define our quality criteria up front and follow them.
And such criteria must be objective and should not be driven by current technical problems if we want to have user acceptance.
I didn't say it was necessarily the *right* decision... I only stated it was a conscious one. And as such it was defeated by all those automatics. Dominique -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thursday 28 November 2013 16:33:19 Dominique Leuenberger a.k.a. Dimstar wrote:
Quoting Adrian Schröter <adrian@suse.de>:
On Thursday, 28 November 2013, 16:18:57, Dominique Leuenberger a.k.a. Dimstar wrote:
Quoting Adrian Schröter <adrian@suse.de>:
My hope is that we can develop some kind of objective QA measurements in the future for this. E.g. the X server is not only compiling, but even starting :) Or a bluez update does not break the tools for one of the desktops, as is even the case in the released openSUSE:13.1 version.
Please, stop using this example! This was CONSCIOUSLY decided, HERE on the Factory mailing lists.
Well, I do not want to spoil this discussion with that, but it comes to the point where we need to define our quality criteria up front and follow them.
And such criteria must be objective and should not be driven by current technical problems if we want to have user acceptance.
I didn't say it was necessarily the *right* decision... I only stated it was a conscious one. And as such it was defeated by all those automatics.
Yeap, and while it might have been painful, it is woefully off-topic here, so let's move on indeed ;-) These threads are already growing big and hairy as-is... /J
Dominique
Stephan Kulow <coolo@suse.de> writes:
Integrating these to make a good distribution is real work. And one of my favourite songs (in that context) goes:
No one said it would be easy But no one said it'd be this hard No one said it would be easy No one thought we'd come this far
:D
https://progress.opensuse.org/workflow/factory-proposal.html
Cool diagram, extremely helpful to convey the idea!
We basically want to put the pressure on the submitting packager not the user.
yay :) !
There are several problems with the current "everything through devel project" approach we need to solve. Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
This will be a major improvement to what we did so far :) ! There is one thing that I'm missing, but that probably would need other changes on obs than using it differently and adding submit request groups. The build server is what it says: a *build* server. We use it as an integration server, quite successfully, and it comes close, but it is not explicitely targeted at integration. So it is missing a few features to better support integration: * Tools like git support 'merge' tracking of changes in branches back to mainline and from progress in mainline back to the branch. This then also allows to bisect regressions to the integration issue. * Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them. * There is one thing in which the current proposal will reduce the problem for openSUSE:Factory. Integration means integration into some baseline, to create a new, improved baseline, that should be stable at all times. You want to catch problems early. And for that you contain changes until you are confident they are good for wider release. That's what the current proposal does address. What it will not solve is that you need the very same containment when you target several baselines with the same update, the other thing obs is used for, besides Factory and openSUSE. There, too you want to contain changes until they are 'good enough' to be available for the 'next ring' of things, until you approve them for the whole target baseline. Our current projects simply scope package builds based on what is in the project, not on what is affected in the baseline, let alone what is already tested. You need to 'know' and 'pull in' 'the right packages' to fully cover all dependent packages. And when you use 'links', your project may break because the baseline changes then 'push' into your project. So your project, unless it is accepted as Factory project, will continue to break at random times. What I like with the current proposal is that it is setting a great course for openSUSE Factory, and we need to move towards a more stable factory, a smoother flow. However how will this help dependent multi-target projects (like gnome, or kde or databases or d:lang:*) to likewise be stable at all^wmost times? It looks to me like the flow that is proposed here continues to break projects that build for both factory and other released distributions. S. -- Susanne Oberhauser SUSE LINUX Products GmbH +49-911-74053-574 Maxfeldstraße 5 Processes and Infrastructure 90409 Nürnberg GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 28.11.2013 19:32, Susanne Oberhauser-Hirschoff wrote:
However how will this help dependent multi-target projects (like gnome, or kde or databases or d:lang:*) to likewise be stable at all^wmost times?
It looks to me like the flow that is proposed here continues to break projects that build for both factory and other released distributions.
Hi Susanne, Do you know how I solve a rubik's cube? Layer by layer! Not because it's faster, not because it's easier, but because I know how to solve it that way. Does that solution work for chess puzzles too? No, it doesn't. That's why I don't solve chess puzzles! Multi-target projects are the chess puzzles in above - in case that wasn't clear yet ;) Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Stephan Kulow <coolo@suse.de> writes:
On 28.11.2013 19:32, Susanne Oberhauser-Hirschoff wrote:
However how will this help dependent multi-target projects (like gnome, or kde or databases or d:lang:*) to likewise be stable at all^wmost times?
It looks to me like the flow that is proposed here continues to break projects that build for both factory and other released distributions.
Hi Susanne,
Do you know how I solve a rubik's cube? Layer by layer! Not because it's faster, not because it's easier, but because I know how to solve it that way.
Does that solution work for chess puzzles too? No, it doesn't. That's why I don't solve chess puzzles!
Hi Stephan, thanks for the nice metaphors --- I did both, rubik's cube and chess, extensively. And even in chess, you do one move at a time. That said, you have answered my question, thank you. S. -- Susanne Oberhauser SUSE LINUX Products GmbH +49-911-74053-574 Maxfeldstraße 5 Processes and Infrastructure 90409 Nürnberg GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Susanne Oberhauser-Hirschoff - 18:32 28.11.13 wrote:
...
There is one thing that I'm missing, but that probably would need other changes on obs than using it differently and adding submit request groups.
The build server is what it says: a *build* server.
We use it as an integration server, quite successfully, and it comes close, but it is not explicitely targeted at integration.
So it is missing a few features to better support integration:
* Tools like git support 'merge' tracking of changes in branches back to mainline and from progress in mainline back to the branch. This then also allows to bisect regressions to the integration issue.
Well, this would be a feature request for the OBS team. I think it was discussed many times in the past already - that if, during the initial OBS design, git had been used for the version control system instead of reinventing the wheel, things would be simpler now... But it's hard to revert that, although some have tried over the years...
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
So you would like to see better integration between openQA and OBS?
...
However how will this help dependent multi-target projects (like gnome, or kde or databases or d:lang:*) to likewise be stable at all^wmost times?
It looks to me like the flow that is proposed here continues to break projects that build for both factory and other released distributions.
Few comments: * It will not break stable versions * Changes done in a devel project should be sent to Factory anyway ** If changes are in Factory, the base is not going to change to break everything without a fix provided Sooo, the solution is simple: if you don't want your stuff getting broken because nobody knows what you have, submit your changes to Factory and don't keep them to yourself :-) Devel projects are packages on the way to Factory anyway ;-) -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Michal Hrusecky <mhrusecky@suse.cz> writes:
Susanne Oberhauser-Hirschoff - 18:32 28.11.13 wrote:
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
So you would like to see better integration between openQA and OBS?
openQA imnsho is just another flavour of build (as in rpm, kiwi, deb, openQA) I'm talking about 1. support _manual_ steps in the flow: hardware tests in the community are manual, not automatic. bluez audio comes to mind. 2. clearly distinguish test from build: a failing test may still mean an ok package, just only 'gold', not 'platinum'
However how will this help dependent multi-target projects (like gnome, or kde or databases or d:lang:*) to likewise be stable at all^wmost times?
It looks to me like the flow that is proposed here continues to break projects that build for both factory and other released distributions.
Few comments:
* It will not break stable versions * Changes done in a devel project should be sent to Factory anyway ** If changes are in Factory, the base is not going to change to break everything without a fix provided
Sooo, the solution is simple: if you don't want your stuff getting broken because nobody knows what you have, submit your changes to Factory and don't keep them to yourself :-) Devel projects are packages on the way to Factory anyway ;-)
Like an upstream project at random times would push their stuff into my working branch and break it "because I'll merge it upstream later anyhow" ;-) sigh. There was barely a way to leapfrog svn and cvs on the way from rcs to git. It even needed some proprietary things before and in between. Maybe this is also true for the way from autobuild to a distributed integration server. S. -- Susanne Oberhauser SUSE LINUX Products GmbH +49-911-74053-574 Maxfeldstraße 5 Processes and Infrastructure 90409 Nürnberg GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Susanne Oberhauser-Hirschoff - 21:06 28.11.13 wrote:
Michal Hrusecky <mhrusecky@suse.cz> writes:
Susanne Oberhauser-Hirschoff - 18:32 28.11.13 wrote:
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
So you would like to see better integration between openQA and OBS?
openQA imnsho is just another flavour of build (as in rpm, kiwi, deb, openQA)
Not now, but probably could be integrated that way.
I'm talking about
1. support _manual_ steps in the flow: hardware tests in the community are manual, not automatic. bluez audio comes to mind.
Hmmm, looking at the chart, we really do have "automatic" pretty much everywhere. Maybe we should weaken that so it doesn't look like manual testing is impossible. But manual testing is slow, and if we want to get packages through fast enough and with limited resources, everything should be automated as much as possible. And most of the manual testing happens once it gets at least to the Factory integration part.
2. clearly distinguish test from build: a failing test may still mean an ok package, just only 'gold', not 'platinum'
Currently it is. It's an independent service. That's also why there is a QA team in the workflow (teams can overlap in the real world) - to interpret/fix the results.
However how will this help dependent multi-target projects (like gnome, or kde or databases or d:lang:*) to likewise be stable at all^wmost times?
It looks to me like the flow that is proposed here continues to break projects that build for both factory and other released distributions.
Few comments:
* It will not break stable versions * Changes done in a devel project should be sent to Factory anyway ** If changes are in Factory, the base is not going to change to break everything without a fix provided
Sooo, the solution is simple: if you don't want your stuff getting broken because nobody knows what you have, submit your changes to Factory and don't keep them to yourself :-) Devel projects are packages on the way to Factory anyway ;-)
Like an upstream project at random times would push their stuff into my working branch and break it "because I'll merge it upstream later anyhow" ;-)
All changes to Factory would still have to go through a devel project, so nobody can break your stuff. There might be updates of some (build-)required packages, but in that case it shouldn't break your previous version, and what better time is there to learn about a new incompatibility than when you are updating the package anyway? -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Friday, 29 November 2013, 00:12:40, Michal Hrusecky wrote:
Susanne Oberhauser-Hirschoff - 21:06 28.11.13 wrote:
Michal Hrusecky <mhrusecky@suse.cz> writes:
Susanne Oberhauser-Hirschoff - 18:32 28.11.13 wrote:
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
So you would like to see better integration between openQA and OBS?
openQA imnsho is just another flavour of build (as in rpm, kiwi, deb, openQA)
Not now, but probably could be integrated that way.
yes, but it would be just one little part of automated QA. Also, it could be kept standalone, but scheduled by and reporting back to OBS in a transparent way. Anyway, the entire QA stuff would actually be multiple huge projects. Yes, we should approach them, but the lower-hanging fruit is IMHO the wanted workflow changes regarding the staging projects. They can also be used as input for all kinds of QA systems. So, from what I hear and read we need to decide: * Do we always want to enforce staging projects for all submissions? => That means we need some support to set this up automatically as part of the workflow? * How should it work? E.g. a staging project openSUSE:Factory:Stage:$NUMBER_OR_STRING 1) contains a number of unversioned links to the devel package? + at submission time to Factory no merge conflicts will happen - the stage project may not finish, because people in the devel package are not aware of it. 2) Or "accepting" a submit request, maybe with multiple actions, into a stage project. Keep working there, maybe directly together with the original submitter, and transfer changes from there to Factory with another request? + devel package changes can happen independently + we have a similar model already with the maintenance incidents. Just that we would not move binaries, only sources. And we would not need the .openSUSE_X.y extensions for package names. - it introduces a temporary third version of the package sources, which may lead to more merge work. Do you see another model? -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
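(For comparison with the link-based sketch earlier in the thread, the second model might look roughly like this. The staging project name is made up, and the commands are only meant to show where the temporary third copy of the sources would live - not to claim this is how the workflow will actually be implemented.)

    import subprocess

    STAGING = "openSUSE:Factory:Stage:automake-1.14"   # hypothetical $NUMBER_OR_STRING name

    def osc(*args):
        subprocess.run(("osc",) + args, check=True)

    # Model 2: "accept" the submission into the staging project by branching the
    # submitted sources there, creating a temporary third copy of the package
    # next to the devel project and Factory.
    osc("branch", "devel:tools", "automake", STAGING)

    # The original submitter and the staging crew now work directly on
    # STAGING/automake, while the devel project can move on independently.

    # When the staging project builds and passes QA, the result is forwarded to
    # Factory with another request; merging those changes back into devel:tools
    # is the extra work this model introduces.
    osc("submitrequest", STAGING, "automake", "openSUSE:Factory")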
On Thu, Nov 28, 2013 at 07:55:20PM +0100, Michal Hrusecky wrote:
Well, this would be a feature request for the OBS team. I think it was discussed many times in the past already - that if, during the initial OBS design, git had been used for the version control system instead of reinventing the wheel, things would be simpler now...
During initial obs design git did not exist. Both git and OBS were started in 2005. And we didn't reinvent the wheel but simply reused the old autobuild srcrep code. M. -- Michael Schroeder mls@suse.de SUSE LINUX Products GmbH, GF Jeff Hawn, HRB 16746 AG Nuernberg main(_){while(_=~getchar())putchar(~_-1/(~(_|32)/13*2-11)*13);} -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
"Susanne Oberhauser-Hirschoff" <froh@suse.com> writes:
Michael Schroeder 11/29/13 9:45 AM
the old autobuild srcrep code.
source repository that is. no insults here.
He didn't write srcrap :-) Andreas. -- Andreas Schwab, SUSE Labs, schwab@suse.de GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE 1748 E4D4 88E3 0EEA B9D7 "And now for something completely different." -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 11/28/2013 01:55 PM, Michal Hrusecky wrote:
Susanne Oberhauser-Hirschoff - 18:32 28.11.13 wrote:
<snip>
Few comments:
* It will not break stable versions * Changes done in a devel project should be sent to Factory anyway ** If changes are in Factory, the base is not going to change to break everything without a fix provided
Sooo, the solution is simple: if you don't want your stuff getting broken because nobody knows what you have, submit your changes to Factory and don't keep them to yourself :-) Devel projects are packages on the way to Factory anyway ;-)
Well, that is not necessarily true, and maybe that needs to change. If we had all of the devel:lang:* packages in Factory we'd be way over the 6800 mark. What happened over time is that packages that people do not care to maintain in Factory have migrated to Devel: projects as an indicator of "good enough", i.e. a step above a Home: project. So maybe we need to get back to the original intent: things that are in a devel project are on their way to Factory or are already in, with some time limits. Following this train of thought would leave a lot of packages "homeless", and thus we'd have to create yet another layer, which I am not certain would actually be productive and helpful. Anyway, the point is that we are far away from the original intent of devel projects and we need to recognize this fact. Later, Robert
-- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, 28 Nov 2013 18:32:56 +0000 Susanne Oberhauser-Hirschoff <froh@suse.com> wrote:
Stephan Kulow <coolo@suse.de> writes:
The build server is what it says: a *build* server.
We use it as an integration server, quite successfully, and it comes close, but it is not explicitely targeted at integration.
So it is missing a few features to better support integration:
* Tools like git support 'merge' tracking of changes in branches back to mainline and from progress in mainline back to the branch. This then also allows to bisect regressions to the integration issue.
+1, I really miss some features from git, and easy merging is one of them.
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can notify a CI like https://travis-ci.org/ or a code quality meter like https://codeclimate.com/, which in response sets a status for such a request, so you immediately see if the request passes tests or if the quality of the code goes up or down (I think it would be really useful for e.g. rpmlint warnings; right now I don't see if a submit request increases or decreases the number of warnings). Now it is partially done in the BS itself with its own check whether the rpm builds. And if it is generic, then we can easily add different services in the future, like attaching a security scanner to a project and checking if a new version introduces a new security warning. I prefer a component system over a monolithic application.
S.
Josef -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
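(OBS had no such hooks at the time, so purely as an illustration of the component idea above: the external check service could be as small as the following stdlib-only sketch. The payload field names are invented, and reporting the verdict back as a review state on the request is deliberately left out.)

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def count_rpmlint_warnings(project, package):
        return 0          # placeholder for whatever the external checker would really do

    class HookReceiver(BaseHTTPRequestHandler):
        """Toy endpoint a build service could POST to when a submit request appears."""

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length))      # field names are made up
            warnings = count_rpmlint_warnings(event["source_project"], event["package"])
            verdict = {"request": event["request_id"],
                       "state": "accepted" if warnings == 0 else "declined",
                       "comment": "%d rpmlint warnings" % warnings}
            body = json.dumps(verdict).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), HookReceiver).serve_forever()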
On Friday, 29 November 2013, 10:36:38, Josef Reidinger wrote:
On Thu, 28 Nov 2013 18:32:56 +0000 Susanne Oberhauser-Hirschoff <froh@suse.com> wrote:
Stephan Kulow <coolo@suse.de> writes:
The build server is what it says: a *build* server.
We use it as an integration server, quite successfully, and it comes close, but it is not explicitely targeted at integration.
So it is missing a few features to better support integration:
* Tools like git support 'merge' tracking of changes in branches back to mainline and from progress in mainline back to the branch. This then also allows to bisect regressions to the integration issue.
+1, I really miss some features from git, and easy merging is one of them.
a git backend is there, but not finished. git cannot replace our source server entirely due to various issues, but we could also offer to store sources in a git repo on the server.
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can notify a CI like https://travis-ci.org/
http://openbuildservice.org/2013/11/22/Source-Update-Via_Token/ ?
or a code quality meter like https://codeclimate.com/, which in response sets a status for such a request, so you immediately see if the request passes tests or if the quality of the code goes up or down (I think it would be really useful for e.g. rpmlint warnings; right now I don't see if a submit request increases or decreases the number of warnings). Now it is partially done in the BS itself with its own check whether the rpm builds. And if it is generic, then we can easily add different services in the future, like attaching a security scanner to a project and checking if a new version introduces a new security warning. I prefer a component system over a monolithic application.
Yes, my idea here is to also offer such a trigger mechanism, as described in the blog URL above, to handle review states in requests. That way an external tool can be used more easily, without the need to store user credentials on that system. Would this be a solution? -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
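(A sketch of the reporting side, assuming the request API's changereviewstate command as I understand it works today: an external checker flips its review on a submit request once its run is done. The group name and credentials are placeholders; the point of the token proposal above would be to replace the stored password with a narrow, per-purpose token, and whether the proposed /trigger route would look anything like this is exactly what is being discussed.)

    import requests   # third-party HTTP library, used here only for brevity

    OBS_API = "https://api.opensuse.org"

    def set_review_state(request_id, state, comment, auth):
        """Accept or decline our review on a submit request - roughly what the
        polling factory-auto style bots do after their checks have finished."""
        resp = requests.post(
            "%s/request/%s" % (OBS_API, request_id),
            params={"cmd": "changereviewstate",
                    "newstate": state,            # e.g. "accepted" or "declined"
                    "by_group": "factory-auto",   # whichever review the bot owns
                    "comment": comment},
            auth=auth,                            # today: user/password; proposed: a token
        )
        resp.raise_for_status()

    # set_review_state(123456, "accepted", "repo-checker: all packages build", ("bot", "secret"))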
On 29.11.2013 11:26, Adrian Schröter wrote:
* Integration means testing, and testing may be a gate/decision point whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order doing them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can notify a CI like https://travis-ci.org/
http://openbuildservice.org/2013/11/22/Source-Update-Via_Token/
?
I knew you would jump on it, but hooks for GitHub are something different from hooks like GitHub's ;) Our solution for that at the moment is automatic reviewers that poll for new reviews to do and then queue something. So if you create a new SR, factory-auto, legal-auto and repo-checker run on it and do their CI. We can extend that heavily - but none of that will be "Source update" ;) Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Friday, 29 November 2013, 11:43:54, Stephan Kulow wrote:
On 29.11.2013 11:26, Adrian Schröter wrote:
* Integration means testing, and testing may be a gate/decision point for whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order in which to do them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can hand it to a CI service like https://travis-ci.org/
http://openbuildservice.org/2013/11/22/Source-Update-Via_Token/
?
I knew you would jump on it but hooks for github are something else than hooks like github ;)
When we also offer this for reviews, these review hooks can run anywhere. We can also discuss running them within OBS, but I think other possibilities, like rpmlint or source services, are the better approach then.
Our solution for that at the moment is automatic reviewers that poll for new reviews to do and then queue something. So if you do a new SR, factory-auto, legal-auto and repo-checker run on it and do their CI. We can extend that heavily - but none of that will be "Source update" ;)
right, but the /trigger route could also be used to do other defined tasks. For example like changing the review state. That was what I proposed here. -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Fri, 29 Nov 2013 11:49:24 +0100 Adrian Schröter <adrian@suse.de> wrote:
On Friday, 29 November 2013, 11:43:54, Stephan Kulow wrote:
On 29.11.2013 11:26, Adrian Schröter wrote:
* Integration means testing, and testing may be a gate/decision point for whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order in which to do them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can hand it to a CI service like https://travis-ci.org/
http://openbuildservice.org/2013/11/22/Source-Update-Via_Token/
?
I knew you would jump on it but hooks for github are something else than hooks like github ;)
When we also offer this for reviews, these review hooks can run anywhere.
We can also discuss running them within OBS, but I think other possibilities, like rpmlint or source services, are the better approach then.
I mean it the other way around - a submit request on the BS -> call hooks -> get back a result, so something like factory_auto and the others. I don't know how easy or hard it is to add such reviewers, if I want to use it outside of Factory for my own project. From the reviews at http://en.opensuse.org/openSUSE:Build_Service_Concept_Review it looks like you need to create a fake user for it.
Our solution for that at the moment is automatic reviewers that poll for new reviews to do and then queue something. So if you do a new SR, factory-auto, legal-auto and repo-checker run on it and do their CI. We can extend that heavily - but none of that will be "Source update" ;)
right, but the /trigger route could also be used to do other defined tasks. For example like changing the review state. That was what I proposed here.
So if I understand correctly how reviews work, then if we have e.g. a security scanner, when it detects a new issue it must open a review for someone to manually check the newly detected issue, right? -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 29.11.2013 12:34, Josef Reidinger wrote:
So if I understand correctly how reviews work, then if we have e.g. a security scanner, when it detects a new issue it must open a review for someone to manually check the newly detected issue, right?
Yes, that would be the model we use for legal-auto -> legal-team. legal-auto either declines (very rare), accepts or forwards to legal-team with detected issues. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Friday, 29 November 2013, 12:34:59, Josef Reidinger wrote:
On Fri, 29 Nov 2013 11:49:24 +0100 Adrian Schröter <adrian@suse.de> wrote:
On Friday, 29 November 2013, 11:43:54, Stephan Kulow wrote:
On 29.11.2013 11:26, Adrian Schröter wrote:
* Integration means testing, and testing may be a gate/decision point for whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order in which to do them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can hand it to a CI service like https://travis-ci.org/
http://openbuildservice.org/2013/11/22/Source-Update-Via_Token/
?
I knew you would jump on it but hooks for github are something else than hooks like github ;)
When we also offer this for reviews, these review hooks can run anywhere.
We can also discuss running them within OBS, but I think other possibilities, like rpmlint or source services, are the better approach then.
I mean it the other way around - a submit request on the BS -> call hooks -> get back a result, so something like factory_auto and the others. I don't know how easy or hard it is to add such reviewers, if I want to use it outside of Factory for my own project.
You can define a default reviewer in any project or package. That means a new request for this project/package gets this reviewer by default. When your external
From the reviews at http://en.opensuse.org/openSUSE:Build_Service_Concept_Review it looks like you need to create a fake user for it.
Or a group. This group would define which users can act on behalf of the group. We could also add the token/trigger system there.
Our solution for that at the moment is automatic reviewers that poll for new reviews to do and then queue something. So if you do a new SR, factory-auto, legal-auto and repo-checker run on it and do their CI. We can extend that heavily - but none of that will be "Source update" ;)
right, but the /trigger route could also be used to do other defined tasks. For example like changing the review state. That was what I proposed here.
So if I understand correctly how reviews work, then if we have e.g. a security scanner, when it detects a new issue it must open a review for someone to manually check the newly detected issue, right?
That is one way to do it. -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
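As a side note on the default reviewer idea mentioned earlier in this mail: as far as can be told, such a default reviewer is just a role entry in the project meta. A sketch with made-up project and group names follows; the exact meta layout should be checked against the OBS documentation, and note that setting the meta this way replaces the existing one.

    #!/usr/bin/env python3
    # Sketch: give a project a default review group, so that every new
    # request against it gets an open review for that group.
    import subprocess
    import tempfile

    PROJECT = "home:example:myproject"           # made-up project name
    META = """\
    <project name="%s">
      <title>Example project with a default review group</title>
      <description/>
      <group groupid="my-auto-checker" role="reviewer"/>
    </project>
    """ % PROJECT

    with tempfile.NamedTemporaryFile("w", suffix=".xml") as f:
        f.write(META)
        f.flush()
        subprocess.run(["osc", "meta", "prj", PROJECT, "-F", f.name], check=True)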
On 29.11.2013 11:26, Adrian Schröter wrote:
On Friday, 29 November 2013, 10:36:38, Josef Reidinger wrote:
On Thu, 28 Nov 2013 18:32:56 +0000 Susanne Oberhauser-Hirschoff <froh@suse.com> wrote:
Stephan Kulow <coolo@suse.de> writes:
The build server is what it says: a *build* server.
We use it as an integration server, quite successfully, and it comes close, but it is not explicitly targeted at integration.
So it is missing a few features to better support integration:
* Tools like git support 'merge' tracking of changes in branches back to mainline and from progress in mainline back to the branch. This then also allows one to bisect regressions down to the integration issue.
+1, I really miss some features from git and easy merging is one of them.
a git backend is there, but not finished. git cannot replace our source server entirely due to various issues, but we could also offer to store sources in a git repo on the server.
* Integration means testing, and testing may be a gate/decision point for whether further builds make sense at all (think rings). This tracking of test status is not in the tool. And tests should gate further work based on test status. And tests, automatic or manual, have a smart and a stupid order in which to do them.
What would be really nice here is to have hooks in the BS like GitHub has. If a new pull request is created, a hook can hand it to a CI service like https://travis-ci.org/
http://openbuildservice.org/2013/11/22/Source-Update-Via_Token/
?
or a code quality meter like https://codeclimate.com/ which in response sets a status for such a request, so you immediately see if the request passes the tests or if the quality of the code goes up or down (I think it would be really useful for e.g. rpmlint warnings; right now I don't see if a submit request increases or decreases the number of warnings). Now it is partially done in the BS itself with its own check whether the rpm builds. And if it is generic, then we can easily add different services in the future, like attaching a security scanner to a project and checking if a new version has a new security warning. I prefer a component system over a monolithic application.
Yes, my idea here is to offer also such a trigger mechanism as described in the blog URL above to handle review states in requests.
So, an external tool can be used more easily without the need to store user credentials on that system.
Would this be a solution?
It would be good if OBS could trigger webhooks (that is, send out an HTTP GET request to one or more predefined URLs with some JSON details). This would be useful for many events that were signalled through hermes, e.g.
* publish of a repo
* SR new or changed state
* build finished
* source checkin
* ...
You could even rewrite hermes as a receiver of such webhooks. Ciao Bernhard M. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
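A toy receiver for the kind of webhook Bernhard describes could be as small as the sketch below. The JSON payload layout is invented for illustration (OBS does not send such webhooks today), and the event is handled here as a POST with a JSON body, which is how most webhook implementations work.

    #!/usr/bin/env python3
    # Toy webhook receiver for the events listed above; a hermes-like
    # consumer could look roughly like this.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ObsEventHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length) or b"{}")
            # e.g. {"type": "repo_published", "project": "openSUSE:Factory",
            #       "repository": "standard"}  -- field names are made up
            print("got event:", event.get("type"), event)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ObsEventHandler).serve_forever()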
On 11/28/2013 08:49 AM, Stephan Kulow wrote:
Hi
It's Thanksgiving today and we had a great harvest - "Bottle" rocks. But today is also a good time to plant new seeds for even better openSUSE releases. Factory has a healthy growth, but we have to make sure it's growing in the right direction and so the openSUSE team at SUSE had an on-and-off discussion basically since 12.3 on how to improve things.
But first let me give you some background, that you might not be aware of: 10.3 had 3334 packages, 11.1 3746, 3605 for 11.2, 3807 for 11.3, 4784 in 12.1, 5710 in 12.2, 6246 in 12.3, 6678 in 13.1, 6800 right now in Factory. If you need a picture, look at http://s.kulow.org/packages
Integrating these to make a good distribution is real work. And one of my favourite songs (in that context) goes:
No one said it would be easy But no one said it'd be this hard No one said it would be easy No one thought we'd come this far
In that song Sheryl Crow sings "It's just a question of eliminating obstacles", so what did we in the openSUSE Team do to help? We focused on getting a grip on testing by improving openqa (http://s.kulow.org/openqa-blog), but we soon found out that it was not good enough to test Factory ISOs. Factory is broken often enough not to produce ISOs at all, ISOs can't be installed and once these problems are sorted out, we found in openqa very basic things to be broken, but it was too late to protect factory users to run into them.
One thing I tried was to setup "rings" to help easing the very painful staging projects (with 6800 packages, every staging project as we use them is a monster). That experiment has shown rings to be worthy way to check, but they won't work as I thought with the OBS as it is. We need to think bigger. So we tried to come up with an idea on how to improve the factory development process that includes a more clever way to utilize staging projects and openQA.
As this development process is a bit hard to explain in email, Alberto and Ancor prepared an interactive diagram:
https://progress.opensuse.org/workflow/factory-proposal.html
We basically want to put the pressure on the submitting packager not the user. Using factory should be safe, for this we want to revive a thing that has been lost on the way: Bernhard's factory-tested project.
I have two primary concerns, one with the "put pressure on the submitter" and one with the "staging approach".
1.) Putting pressure on the submitter
In principle this is probably a good idea, however in practice this is very difficult to follow through. When submission A breaks package B and the submitter of A is now responsible for fixing the problem in B, then the submitter
- must be notified that B broke (not even easy in a staging architecture)
- must have some knowledge of B and the code inside of B, or acquire that knowledge
Putting pressure on the submitter is a good concept to avoid "dump and run" scenarios, i.e. put your code in and everyone has to fix the fallout. However, stated as such, the submitter can very easily feel overwhelmed and left alone, and thus the submission may never take place. What I think we need is a process/environment that holds the submitter sufficiently responsible to avoid "dump and run" while at the same time providing enough support such that the submitter does not feel left alone and overwhelmed. In a staging model I have no idea how to get there.
2.) The staging approach
I can only speak from experience and thus this might sound a little lame, sorry. I have seen two implementations of the staging model in action at companies that produce large software suites. In both cases I consider the approach a failure. The problem in both cases is that the number of staging trees/branches/projects has an ever increasing slope, thus consuming ever more manpower to manage the ever increasing number of staging projects, while the original problem of "how do we deal with unknown adverse interactions between updates" remains unresolved. The "solution" to this problem taken in one case was to have intermediate staging trees where "known risky updates" were tested together. Yes, staging trees upon staging trees. But this only solves the problem superficially as the target tree will move ahead and thus the staging tree by definition is always out of date. Unless the target tree is frozen until a particular staging tree is merged. Anyway, it is a maze that potentially requires a lot of people.
The other problem with the staging model is that the "potentially risky interactions" knowledge is an implicit set of interactions that the staging tree managers happen to know. This is not expressed anywhere and thus makes it difficult for other people to learn. We have this problem today and from my point of view this will not be resolved with more staging trees.
The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
What do I have to offer other than concerns? There is the component model that I had proposed a while back. The component model may merge well with the idea of rings, that's something that could be explored. Anyway, I do not necessarily need to revive that discussion, we can of course if people are interested.
My main point was to express my concerns about the, from my point of view, two primary changes in the model:
- more pressure on submitters
- more staging trees
Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Hello, On Sunday, 1 December 2013, Robert Schweikert wrote:
1.) Putting pressure on the submitter
Putting pressure on the submitter is a good concept to avoid "dump and run" scenarios, i.e. put your code in and everyone has to fix the fallout. However, stated as such, the submitter can very easily feel overwhelmed and left alone, and thus the submission may never take place. What I think we need is a process/environment that holds the submitter sufficiently responsible to avoid "dump and run" while at the same time providing enough support such that the submitter does not feel left alone and overwhelmed.
While "the submitter fixes everything he breaks" would be ideal, I'd define the goal as: The submitter has to coordinate fixing everything he breaks. This can mean: - the submitter fixes it - the submitter works with the maintainers of the broken packages to get them fixed - the submitter asks for help on the factory ML We should also have a rule saying (assuming it didn't happen in a staging project): If the fixes don't happen in a reasonable time, revert the commit that caused the breakage. I know "reasonable time" is vague, but we'll probably need it that way because it depends on the number of broken packages, time until the release etc.
In a staging model I have no idea how to get there.
That's easy - the package can move from staging to factory after the fallout is fixed ;-)
2.) The staging approach
staging trees upon staging trees. But this only solves the problem superficially as the target tree will move ahead and thus the staging tree by definition is always out of date. Unless the target tree is frozen until a particular staging tree is merged.
The staging tree should link all packages from factory (except the changed package), so it can't be outdated. It just needs to be rebuilt. We'll see if we have enough build power ;-) (For speedup, copying the factory binaries to the staging project and only rebuilding the updated package and its dependencies might save some time.)
The other problem with the staging model is that the "potentially risky interactions" knowledge is an implicit set of interactions that the staging tree managers happen to know. This is not expressed anywhere and thus makes it difficult for other people to learn. We have this problem today and from my point of view this will not be resolved with more staging trees.
Except for "staging tree for everything" - but I'm quite sure we'll overload OBS if we create a staging project for every typo fix in a README file ;-) So yes, knowing what needs a staging tree is something you have to learn over time. That's not nice, but I don't see another realistic option. Of course if a package enters factory without staging and breaks something, this is a clear sign that a staging tree would be useful ;-)
There is the component model that I had proposed a while back. The component model may merge well with the idea of rings, that's something that could be explored.
I'm somehow afraid that the component/ring model is difficult to implement because of cross-dependencies - which makes the split into several components difficult. Regards, Christian Boltz --
Feel free to close this bug as accidently ;-) fixed... Nothing happens here by accident ;-)... [> Christian Boltz and Stephan Binner, https://bugzilla.novell.com/show_bug.cgi?id=433239]
-- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
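To make the staging idea from this mail a bit more concrete, below is a sketch of how such a throwaway staging project could be set up: a project link against openSUSE:Factory plus a link of just the package under test, so only that package and whatever depends on it needs rebuilding while everything else comes from Factory. Project and package names are examples and the meta is simplified; the details would need to be worked out with the OBS team.

    #!/usr/bin/env python3
    # Sketch: create a staging project that links to openSUSE:Factory and
    # pull in only the package under test.
    import subprocess
    import tempfile

    STAGING = "openSUSE:Factory:Staging:automake"   # example name
    META = """\
    <project name="%s">
      <title>Staging for the automake update</title>
      <description/>
      <link project="openSUSE:Factory"/>
      <repository name="standard">
        <path project="openSUSE:Factory" repository="standard"/>
        <arch>x86_64</arch>
      </repository>
    </project>
    """ % STAGING

    with tempfile.NamedTemporaryFile("w", suffix=".xml") as f:
        f.write(META)
        f.flush()
        subprocess.run(["osc", "meta", "prj", STAGING, "-F", f.name], check=True)

    # link the package under test from its devel project into staging
    subprocess.run(["osc", "linkpac", "devel:tools", "automake", STAGING],
                   check=True)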
On Sun, 1 Dec 2013, Christian Boltz wrote:
Hello,
On Sunday, 1 December 2013, Robert Schweikert wrote:
1.) Putting pressure on the submitter
Putting pressure on the submitter is a good concept to avoid "dump and run" scenarios, i.e. put your code in and everyone has to fix the fallout. However, stated as such, the submitter can very easily feel overwhelmed and left alone, and thus the submission may never take place. What I think we need is a process/environment that holds the submitter sufficiently responsible to avoid "dump and run" while at the same time providing enough support such that the submitter does not feel left alone and overwhelmed.
While "the submitter fixes everything he breaks" would be ideal, I'd define the goal as:
The submitter has to coordinate fixing everything he breaks.
This can mean: - the submitter fixes it - the submitter works with the maintainers of the broken packages to get them fixed - the submitter asks for help on the factory ML
We should also have a rule saying (assuming it didn't happen in a staging project):
If the fixes don't happen in a reasonable time, revert the commit that caused the breakage.
I know "reasonable time" is vague, but we'll probably need it that way because it depends on the number of broken packages, time until the release etc.
In a staging model I have no idea how to get there.
That's easy - the package can move from staging to factory after the fallout is fixed ;-)
2.) The staging approach
staging trees upon staging trees. But this only solves the problem superficially as the target tree will move ahead and thus the staging tree by definition is always out of date. Unless the target tree is frozen until a particular staging tree is merged.
The staging tree should link all packages from factory (except the changed package), so it can't be outdated. It just needs to be rebuilt.
We'll see if we have enough build power ;-) (For speedup, copying the factory binaries to the staging project and only rebuilding the updated package and its dependencies might save some time.)
This seems to concentrate fully on build-time fallout. While that's certainly part of the quality of Factory, a failed build cannot break my running system, because I cannot install a failed-to-build package. Yes, with enough build power we can re-build the world for every tiny change. But what does it really mean if the tiny change causes something to no longer build? It means that our dependencies are too weak (foo requires bar-devel instead of bar-devel = 10.2) or incomplete? Or that the now failing packages are simply broken? That said, how does ensuring everything builds enhance the user experience when you have Factory installed? With the Debian 'testing' distribution approach you scale QA by making people who use 'unstable' (aka Factory) do the testing and file bugs, which block packages from migrating from 'unstable' to 'testing' unless they are fixed. So to throw in another name (than the apparently misleading Tumbleweed), 'testing' is a rolling release for 'unstable'. Do we want a rolling-released Factory? Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 02.12.2013 11:39, Richard Biener wrote:
With the Debian 'testing' distribution approach you scale QA by making people who use 'unstable' (aka Factory) do the testing and file bugs, which block packages from migrating from 'unstable' to 'testing' unless they are fixed. So to throw in another name (than the apparently misleading Tumbleweed), 'testing' is a rolling release for 'unstable'. Do we want a rolling-released Factory? Our 'testing' is the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
Debian unstable's user base might justify such a split - given that Debian stable is really not for developers, from what I've been told. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Mon, 2013-12-02 at 11:50 +0100, Stephan Kulow wrote:
Our 'testing' is the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
That is exactly how I prefer to operate when testing/maintaining GNOME.
Using GNOME:Factory (the gnome devel repo) with Factory gives me a way of testing all of the 'incoming' GNOME changes.
If something in our GNOME devel repo fouls up, I'm a simple 'zypper removerepo' and 'zypper dup' away from getting my machine back to 'usable' so I can get working on figuring out what broke in GNOME.
The only reason I don't use this approach 24/7, 365 days a year is that with our current approach, Factory isn't 'stable' enough. Or to put it another way, there's about as much chance (if not more) every time I 'zypper up' my machine that recent updates to Factory are going to break my system, as the recent updates to our GNOME devel project..
I think the suggestions so far take us huge strides towards making Factory usable in exactly the kind of way *I* need it.. which is why I'm so excited by them. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Quoting Richard Brown <RBrownCCB@opensuse.org>:
On Mon, 2013-12-02 at 11:50 +0100, Stephan Kulow wrote:
Our 'testing' is the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
That is exactly how I prefer to operate when testing/maintaining GNOME
Using GNOME:Factory (the gnome devel repo) with Factory gives me a way of testing all of the 'incoming' GNOME changes.
If something in our GNOME devel repo fouls up, I'm a simple 'zypper removerepo' and 'zypper dup' away from getting my machine back to 'usable' so I can get working on figuring out what broke in GNOME
Just some food for thought here:
* if the goal is an always 'stable' Factory, I'm not sure at what time it is now best to submit GNOME:Factory to openSUSE:Factory.
There are two conflicting approaches:
* Release Often / Release early (approach used so far)
* Release stable (new ?)
Stable indicates no GNOME 3.11.x, as those are 'expected' to have bugs and introduce changes mid-way. So in order to satisfy the '2nd' entry, I'd argue that no submits should be done until at least RC1 of GNOME (in this case). This is different to what was done so far, where the GNOME Team made sure it 'works', but was a bit more 'loose' about forwarding with minor issues here and there (knowing that Factory was a dev / integration project and some minor instabilities were acceptable). Dominique -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Mon, 2013-12-02 at 11:53 +0000, Dominique Leuenberger a.k.a. Dimstar wrote:
Quoting Richard Brown <RBrownCCB@opensuse.org>:
On Mon, 2013-12-02 at 11:50 +0100, Stephan Kulow wrote:
Our 'testing' is the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
That is exactly how I prefer to operate when testing/maintaining GNOME
Using GNOME:Factory (the gnome devel repo) with Factory gives me a way of testing all of the 'incoming' GNOME changes.
If something in our GNOME devel repo fouls up, I'm a simple 'zypper removerepo' and 'zypper dup' away from getting my machine back to 'usable' so I can get working on figuring out what broke in GNOME
Just some food for thought here: * if the goal is an always 'stable' Factory, I'm not sure at what time it is now best to submit GNOME:Factory to openSUSE:Factory.
There are two conflicting approaches: * Release Often / Release early (approach used so far) * Release stable (new ?)
Stable indicates no GNOME 3.11.x, as those are 'expected' to have bugs and introduce changes mid-way. So in order to satisfy the '2nd' entry, I'd argue that no submits should be done until at least RC1 of GNOME (in this case). This is different to what was done so far, where the GNOME Team made sure it 'works', but was a bit more 'loose' about forwarding with minor issues here and there (knowing that Factory was a dev / integration project and some minor instabilities were acceptable).
Dominique
A good point.. for the purposes of "New Factory" I would suggest we define stable/acceptable in Factory as either
A) 'this has been tested (ideally by both human and automated means) and is confirmed by those tests to be functioning'
or
B) 'this has been tested (ideally by both human and automated means), confirmed to be working, and is felt to be 'stable' by the maintainers of that project'
I personally prefer B) as it encourages maintainers to make a judgement call based on their knowledge of the upstream project/packages, like the one you describe. I can also think of other recent cases (ATI drivers) where one of our maintainers has made a call that an upstream 'beta' package was good enough for our users.
With either A or B, we'd still be able to strive for "Release Often / Release early" while also making sure we only "Release working". -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Richard Brown wrote:
A) 'this has been tested (ideally by both human and automated means) and is confirmed by those tests to be functioning'
or
B) 'this has been tested (ideally by both human and automated means), confirmed to be working, and is felt to be 'stable' by the maintainers of that project'
I disagree on B because Factory ought to be the development distro, not a "fully stable" distro. There can be bugs, that's OK, as long as it's in a state where it's well enough usable for day-to-day operation and any significant bugs are worked on and fixed fast. So, I'd replace "stable" with "well-usable in daily operation" in your definition of B and then I feel it matches what Factory should be. What I would add in though is that it should not apply to "this" in terms of the particular package only, but "the result of this change", which should include what it does to all dependencies that are in Factory (in most cases, this will not make a huge difference, but in cases like gcc it can be huge). Robert Kaiser -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Quoting Robert Kaiser <KaiRo@kairo.at>:
Richard Brown wrote:
A) 'this has been tested (ideally by both human and automated means) and is confirmed by those tests to be functioning'
or
B) 'this has been tested (ideally by both human and automated means), confirmed to be working, and is felt to be 'stable' by the maintainers of that project'
I disagree on B because Factory ought to be the development distro, not a "fully stable" distro. There can be bugs, that's OK, as long as it's in a state where it's well enough usable for day-to-day operation and any significant bugs are worked on and fixed fast.
So, I'd replace "stable" with "well-usable in daily operation" in your definition of B and then I feel it matches what Factory should be. What I would add in though is that is should not apply to "this" in terms of the particular package only, but "the result of this change", which should include what it does to all dependencies that are in Factory (in most cases, this will not make a huge difference, but in cases like gcc it can be huge).
Valid definition... and something that can be agreed. But it will still mean, I will not submit GNOME 3.11.2 to Factory; there are surely bugs (that's why upstream does not release it as a stable branch) and I don't know how fast any of the given bugs can be fixed... and I'm not willing to chase down 250 git commits until the next snapshot release comes (probably in 4 - 6 weeks). So, as it stands, I will keep on waiting for 3.12.0 to become available, at which point I probably can't submit it due to none of the integration work being done / prepared (the upgrade will be too large for all other packages to absorb it... and the GNOME Team will not have time to run after everything, as we'd want 3.12.1 in there soon too). So far, Factory was an 'integration' project, with the aim to be usable.. now it shall be a usable project, with the aim to integrate new stuff. A small, but subtle difference. And the ultimate target to have it stabilized was the release. Dominique -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 02.12.2013 15:08, Dominique Leuenberger a.k.a. Dimstar wrote:
Valid definition... and something that can be agreed.
But it will still mean, I will not submit GNOME 3.11.2 to Factory; there are surely bugs (that's why upstream does not release it as a stable branch) and I don't know how fast any of the given bugs can be fixed... and I'm not willing to chase down 250 git commits until the next snapshot release comes (probably in 4 - 6 weeks).
So, as it stands, I will keep on waiting for 3.12.0 to become available, at which point I probably can't submit it due to none of the integration work being done / prepared (the upgrade will be too large for all other packages to absorb it... and the GNOME Team will not have time to run after everything, as we'd want 3.12.1 in there soon too).
So far, Factory was an 'integration' project, with the aim to be usable.. now it shall be a usable project, with the aim to integrate new stuff. A small, but subtle difference. And the ultimate target to have it stabilized was the release.
Yeah, there are several ideas floating around to get you back that always-broken project. Like Richard's suggestion to aggregate all devel projects somewhere to get the experimental repo. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 09:15 AM, Stephan Kulow wrote:
On 02.12.2013 15:08, Dominique Leuenberger a.k.a. Dimstar wrote:
Valid definition... and something that can be agreed.
But it will still mean, I will not submit GNOME 3.11.2 to Factory; there are surely bugs (that's why upstream does not release it as a stable branch) and I don't know how fast any of the given bugs can be fixed... and I'm not willing to chase down 250 git commits until the next snapshot release comes (probably in 4 - 6 weeks).
So, as it stands, I will keep on waiting for 3.12.0 to become available, at which point I probably can't submit it due to none of the integration work being done / prepared (the upgrade will be too large for all other packages to absorb it... and the GNOME Team will not have time to run after everything, as we'd want 3.12.1 in there soon too).
So far, Factory was an 'integration' project, with the aim to be usable.. now it shall be a usable project, with the aim to integrate new stuff. A small, but subtle difference. And the ultimate target to have it stabilized was the release.
Yeah, there are several ideas floating around to get you back that always-broken project.
I don't think people are looking for "always broken", but more for "mostly functioning a good chunk of the time". The problem will be, if we have such a project, we will need to find (a) new shepherd(s) for that project. Now everyone depends mostly on coolo, and that will no longer work in the future. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Monday, 2 December 2013, 15:15:19, Stephan Kulow wrote:
On 02.12.2013 15:08, Dominique Leuenberger a.k.a. Dimstar wrote:
Valid definition... and something that can be agreed.
But it will still mean, I will not submit GNOME 3.11.2 to Factory; there are surely bugs (that's why upstream does not release it as a stable branch) and I don't know how fast any of the given bugs can be fixed... and I'm not willing to chase down 250 git commits until the next snapshot release comes (probably in 4 - 6 weeks).
So, as it stands, I will keep on waiting for 3.12.0 to become available, at which point I probably can't submit it due to none of the integration work being done / prepared (the upgrade will be too large for all other packages to absorb it... and the GNOME Team will not have time to run after everything, as we'd want 3.12.1 in there soon too).
So far, Factory was an 'integration' project, with the aim to be usable.. now it shall be a usable project, with the aim to integrate new stuff. A small, but subtle difference. And the ultimate target to have it stabilized was the release.
Yeah, there are several ideas floating around to get you back that always-broken project. Like Richard's suggestion to aggregate all devel projects somewhere to get the experimental repo.
Since we currently have only a limited number of devel projects, this could easily be done by project linking of these projects. The risk will be that it never finishes building because of permanent changes though. So it might never get published.... But we could try whether it works. -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 09:08 AM, Dominique Leuenberger a.k.a. Dimstar wrote:
Quoting Robert Kaiser <KaiRo@kairo.at>:
Richard Brown wrote:
A) 'this has been tested (ideally by both human and automated means) and is confirmed by those tests to be functioning'
or
B) 'this has been tested (ideally by both human and automated means), confirmed to be working, and is felt to be 'stable' by the maintainers of that project'
I disagree on B because Factory ought to be the development distro, not a "fully stable" distro. There can be bugs, that's OK, as long as it's in a state where it's well enough usable for day-to-day operation and any significant bugs are worked on and fixed fast.
So, I'd replace "stable" with "well-usable in daily operation" in your definition of B and then I feel it matches what Factory should be. What I would add in though is that is should not apply to "this" in terms of the particular package only, but "the result of this change", which should include what it does to all dependencies that are in Factory (in most cases, this will not make a huge difference, but in cases like gcc it can be huge).
Valid definition... and something that can be agreed.
But it will still mean, I will not submit GNOME 3.11.2 to Factory; there are surely bugs (that's why upstream does not release it as a stable branch) and I don't know how fast any of the given bugs can be fixed... and I'm not willing to chase down 250 git commits until the next snapshot release comes (probably in 4 - 6 weeks).
So, as it stands, I will keep on waiting for 3.12.0 to become available, at which point I probably can't submit it due to none of the integration work being done / prepared (the upgrade will be too large for all other packages to absorb it... and the GNOME Team will not have time to run after everything, as we'd want 3.12.1 in there soon too).
So far, Factory was an 'integration' project, with the aim to be usable.. now it shall be a usable project, with the aim to integrate new stuff. A small, but subtle difference. And the ultimate target to have it stabilized was the release.
I think GNOME is a good example but the same problem applies to many other devel projects. I also think that neither A nor B as proposed by Richard is a solution to the problem described by Dominique. The change in wording from "stable" to "usable" would definitely be necessary, but this is also a very slippery slope. A bug that breaks functionality in a given way may not affect one person and thus the system is "usable", while it will leave another person completely stuck, thus the system is "not usable". Therefore, even the "usable" definition appears to open up a can of worms. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Quoting Robert Schweikert <rjschwei@suse.com>:
I think GNOME is a good example but the same problem applies to many other devel projects.
Of course this applies to many other areas.. I mainly use GNOME as I have the most knowledge in this area and can best assess how badly it behaves / breaks other stuff. Any project that has development branches and stable branches will have to be handled similarly.
I also think that neither A nor B as proposed by Richard is a solution to the problem described by Dominique.
The change in wording from "stable" to "usable" would definitely be necessary, but this is also a very slippery slope. A bug that breaks functionality in a given way may not affect one person and thus the system is "usable", while it will leave another person completely stuck, thus the system is "not usable". Therefore, even the "usable" definition appears to open up a can of worms.
Agree... after all, there is always a number of bugs being reported after a different userbase starts using it. So despite group '1' not having had issues and considering it 'usable' (or even stable), group '2' starts working on it and experiences crashes, hangs and general failures.. Dominique -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 06:25 AM, Richard Brown wrote:
On Mon, 2013-12-02 at 11:50 +0100, Stephan Kulow wrote:
Our 'testing' is the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
That is exactly how I prefer to operate when testing/maintaining GNOME
Using GNOME:Factory (the gnome devel repo) with Factory gives me a way of testing all of the 'incoming' GNOME changes.
If something in our GNOME devel repo fouls up, I'm a simple 'zypper removerepo' and 'zypper dup' away from getting my machine back to 'usable' so I can get working on figuring out what broke in GNOME
The only reason I don't use this approach 24/7, 365 days a year is that with our current approach, Factory isn't 'stable' enough. Or to put it another way, there's about as much chance (if not more) every time I 'zypper up' my machine that recent updates to Factory are going to break my system, as the recent updates to our GNOME devel project..
I think the suggestions so far take us huge strides towards making Factory usable in exactly the kind of way *I* need it.. which is why I'm so excited by them.
Well I HATE rebooting my machine, sitting on top of Factory would imply kernel updates on a more regular basis and that doesn't work for me. Rebooting 2 or 3 times during the 8 month release cycle is more than enough for me. If we do not chase the kernel in the new Factory model, but only update the Factory kernel every 3 months or so when there is a new upstream release then I am OK with it, but if we have RC releases in Factory and I would end up having to reboot every two weeks as the kernel release cycle is approaching a new release that is just not going to work for me. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 9:41 2.12.13 wrote:
On 12/02/2013 06:25 AM, Richard Brown wrote:
On Mon, 2013-12-02 at 11:50 +0100, Stephan Kulow wrote:
Our 'testing' is the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
That is exactly how I prefer to operate when testing/maintaining GNOME
Using GNOME:Factory (the gnome devel repo) with Factory gives me a way of testing all of the 'incoming' GNOME changes.
If something in our GNOME devel repo fouls up, I'm a simple 'zypper removerepo' and 'zypper dup' away from getting my machine back to 'usable' so I can get working on figuring out what broke in GNOME
The only reason I don't use this approach 24/7, 365 days a year is that with our current approach, Factory isn't 'stable' enough. Or to put it another way, there's about as much chance (if not more) every time I 'zypper up' my machine that recent updates to Factory are going to break my system, as the recent updates to our GNOME devel project..
I think the suggestions so far take us huge strides towards making Factory usable in exactly the kind of way *I* need it.. which is why I'm so excited by them.
Well I HATE rebooting my machine, sitting on top of Factory would imply kernel updates on a more regular basis and that doesn't work for me. Rebooting 2 or 3 times during the 8 month release cycle is more than enough for me.
If we do not chase the kernel in the new Factory model, but only update the Factory kernel every 3 months or so when there is a new upstream release then I am OK with it, but if we have RC releases in Factory and I would end up having to reboot every two weeks as the kernel release cycle is approaching a new release that is just not going to work for me.
You don't have to. Zypper supports multiversion at least for kernels very well. You can keep your current kernel, install the latest ones and reboot when you feel like it ;-) -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
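For reference, the zypp.conf settings that make this work look roughly like the excerpt below; the values shown are the usual openSUSE defaults, but the file on your system (/etc/zypp/zypp.conf) is authoritative.

    # /etc/zypp/zypp.conf (excerpt)
    # keep multiple kernels installed side by side instead of replacing them
    multiversion = provides:multiversion(kernel)
    # which installed kernels to keep when old ones are purged
    multiversion.kernels = latest,latest-1,running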
On 02.12.2013 15:41, Robert Schweikert wrote:
Well I HATE rebooting my machine, sitting on top of Factory would imply kernel updates on a more regular basis and that doesn't work for me. Rebooting 2 or 3 times during the 8 month release cycle is more than enough for me.
If we do not chase the kernel in the new Factory model, but only update the Factory kernel every 3 months or so when there is a new upstream release then I am OK with it, but if we have RC releases in Factory and I would end up having to reboot every two weeks as the kernel release cycle is approaching a new release that is just not going to work for me.
The fun part about a rolling release is that you can decide yourself when to jump on it. There is no reason to update daily. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 10:17 AM, Stephan Kulow wrote:
On 02.12.2013 15:41, Robert Schweikert wrote:
Well I HATE rebooting my machine, sitting on top of Factory would imply kernel updates on a more regular basis and that doesn't work for me. Rebooting 2 or 3 times during the 8 month release cycle is more than enough for me.
If we do not chase the kernel in the new Factory model, but only update the Factory kernel every 3 months or so when there is a new upstream release then I am OK with it, but if we have RC releases in Factory and I would end up having to reboot every two weeks as the kernel release cycle is approaching a new release that is just not going to work for me.
The fun part about a rolling release is that you can decide yourself when to jump on it. There is no reason to update daily.
Theoretically yes, and I guess that'll be part of pushing Factory toward a "more usable all the time" direction. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Monday 02 December 2013 16:17:26 Stephan Kulow wrote:
On 02.12.2013 15:41, Robert Schweikert wrote:
Well I HATE rebooting my machine, sitting on top of Factory would imply kernel updates on a more regular basis and that doesn't work for me. Rebooting 2 or 3 times during the 8 month release cycle is more than enough for me.
If we do not chase the kernel in the new Factory model, but only update the Factory kernel every 3 months or so when there is a new upstream release then I am OK with it, but if we have RC releases in Factory and I would end up having to reboot every two weeks as the kernel release cycle is approaching a new release that is just not going to work for me.
The fun part about a rolling release is that you can decide yourself when to jump on it. There is no reason to update daily.
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
2) Asking your colleagues or community friends for their recent experience often helps to discover such things. When I ask people, I either get "bah, X is broken again" or "nope, things have been running smooth lately". Of course there's a certain chance that things will go unnoticed, but the ugly stuff will.
3) As coolo said, there's no need to update every morning, once every two weeks is more than enough.
4) Lastly, if zypper tells you some dependencies can't be satisfied, think twice if you want to update. Of course that depends on what is broken and how important it is to you.
What we can do though is to make this kind of information more accessible. I dunno if the amount of bugs filed against Factory per week would be a good indicator, but it could be tried. Add the count of failing packages and some more and maybe we can come up with some kind of "Factory Health Report". This could be published somewhere, maybe even sent to the factory ML every Monday. Just an idea...
-----
So I am running Factory as of yesterday. I don't experience any issues besides plasma-networkmanager being unable to load. NetworkManager works, so all green. -- With kind regards, Sascha Peilicke SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nuernberg, Germany GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer HRB 16746 (AG Nürnberg)
Sascha Peilicke - 10:02 3.12.13 wrote:
...
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
2) Asking your colleagues or community friends for their recent experience often helps to discover such things. When I ask people, I either get "bah, X is broken again" or "nope, things have been running smooth lately". Of course there's a certain chance that things will go unnoticed, but the ugly stuff will.
Sounds like a lot of work just to make sure that it is safe to update, which overall means that Factory is not stable enough to be trusted, which I think should be changed (not in the perception part but in reality)
3) As coolo said, there's no need to update every morning, once every two weeks is more than enough.
When I used to run Gentoo I used to update every other day, but sometimes I skipped longer periods of time because of vacation or something. Both should be possible.
4) Lastly, if zypper tells you some dependencies can't be satisfied, think twice if you want to update. Of course that depends on what is broken and how important it is to you.
This should be something that we can fix automatically.
What we can do though is to make this kind of information more accessible. I dunno if the amount of bugs filed against Factory per week would be a good indicator, but it could be tried. Add the count of failing packages and some more and maybe we can come up with some kind of "Factory Health Report". This could be published somewhere, maybe even sent to the factory ML every Monday. Just an idea...
Well, we should strive for no failing packages :-) But having some automatic metrics on how calm factory is right now sounds like a good idea :-) -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
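As a starting point for such a metric, something like the sketch below could count failed builds in openSUSE:Factory via the OBS result API. It assumes a configured osc client; the repository and architecture names, and the idea of boiling everything down to one number, are illustrative only.

    #!/usr/bin/env python3
    # Rough "Factory health" indicator: count build results per status code
    # in openSUSE:Factory.
    import subprocess
    import xml.etree.ElementTree as ET

    def factory_build_status(repo="standard", arch="x86_64"):
        xml = subprocess.check_output(
            ["osc", "api",
             "/build/openSUSE:Factory/_result?repository=%s&arch=%s" % (repo, arch)])
        counts = {}
        for status in ET.fromstring(xml).iter("status"):
            code = status.get("code")
            counts[code] = counts.get(code, 0) + 1
        return counts

    if __name__ == "__main__":
        counts = factory_build_status()
        failed = counts.get("failed", 0) + counts.get("unresolvable", 0)
        print("failed/unresolvable: %d of %d packages" % (failed, sum(counts.values())))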
On Tuesday 03 December 2013 10:20:13 Michal Hrusecky wrote:
Sascha Peilicke - 10:02 3.12.13 wrote:
...
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
2) Asking your colleagues or community friends for their recent experience often helps to discover such things. When I ask people, I either get "bah, X is broken again" or "nope, things have been running smooth lately". Of course there's a certain chance that things will go unnoticed, but the ugly stuff will.
Sounds like a lot of work just to make sure that it is safe to update,
That's where the Factory Health Report could help.
which overall means that Factory is not stable enough to be trusted, which I think should be changed (not in the perception part but in reality)
It depends on the definition of "stable enough". But I assume we'll have a shared one once we draw conclusions out of this big thread.
3) As coolo said, there's no need to update every morning, once every two weeks is more than enough.
When I used to run Gentoo I used to update every other day, but sometimes I skipped longer periods of times because of vacation or something. Both should be possible.
4) Lastly, if zypper tells you some dependencies can't be satisfied, think twice if you want to update. Of course that depends on what is broken and how important it is to you.
This should be something that we can fix automatically.
What we can do though is to make this kind of information more accessible. I dunno if the amount of bugs filed against Factory per week would be a good indicator, but it could be tried. Add the count of failing packages and some more and maybe we can come up with some kind of "Factory Health Report". This could be published somewhere, maybe even sent to the factory ML every Monday. Just an idea...
Well, we should strive for no failing packages :-) But having some automatic metrics on how calm factory is right now sounds like a good idea :-)
-- With kind regards, Sascha Peilicke SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nuernberg, Germany GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer HRB 16746 (AG Nürnberg)
Sascha Peilicke wrote:
On Monday 02 December 2013 16:17:26 Stephan Kulow wrote:
The fun part about a rolling release is that you can decide yourself when to jump on it. There is no reason to update daily.
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
Coming back from vacation and sick leave I just experience again how much it sucks to find relevant information in old email threads :-) Maybe package meta data could be made to carry some kind of good/bad score. If a certain build of a package is rated down by people, zypper could display a warning. Maybe the solver could even try to come up with a solution that does not upgrade to the faulty build. cu Ludwig -- (o_ Ludwig Nussel //\ V_/_ http://www.suse.de/ SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, Dec 05, 2013 at 03:03:04PM +0100, Ludwig Nussel wrote:
Coming back from vacation and sick leave I just experience again how much it sucks to find relevant information in old email threads :-) Maybe package meta data could be made to carry some kind of good/bad score. If a certain build of a package is rated down by people, zypper could display a warning. Maybe the solver could even try to come up with a solution that does not upgrade to the faulty build.
Sorry, I'm very much against the idea of the solver choosing packages by some number the user does not see. In other words: I can implement that, but then I'll assign all bug reports about strange solver results to you. Cheers, Michael. -- Michael Schroeder mls@suse.de SUSE LINUX Products GmbH, GF Jeff Hawn, HRB 16746 AG Nuernberg main(_){while(_=~getchar())putchar(~_-1/(~(_|32)/13*2-11)*13);} -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, 5 Dec 2013, Ludwig Nussel wrote:
Sascha Peilicke wrote:
On Monday 02 December 2013 16:17:26 Stephan Kulow wrote:
The fun part about rolling release you can decide yourself when to jump on it. There is no reason to update daily.
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
Coming back from vacation and sick leave I just experience again how much it sucks to find relevant information in old email threads :-) Maybe package meta data could be made to carry some kind of good/bad score. If a certain build of a package is rated down by people, zypper could display a warning. Maybe the solver could even try to come up with a solution that does not upgrade to the faulty build.
If we have package metadata amendable by users, can we at least record (open) bug reports there? OTOH bug reports should eventually be visible to the maintainer so he can do the association as well. Then you could implement
zypper show-bugs-I-get-when-dup
Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imend"orffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, 5 Dec 2013, Richard Biener wrote:
On Thu, 5 Dec 2013, Ludwig Nussel wrote:
Sascha Peilicke wrote:
On Monday 02 December 2013 16:17:26 Stephan Kulow wrote:
The fun part about rolling release you can decide yourself when to jump on it. There is no reason to update daily.
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
Coming back from vacation and sick leave I just experience again how much it sucks to find relevant information in old email threads :-) Maybe package meta data could be made to carry some kind of good/bad score. If a certain build of a package is rated down by people, zypper could display a warning. Maybe the solver could even try to come up with a solution that does not upgrade to the faulty build.
If we have package metadata amendable by users can we at least record (open) bugreports there? OTOH bugreports should be eventually visible to the maintainer so he can do the association as well.
Then you could implement
zypper show-bugs-I-get-when-dup
New bugs, of course. And I'd rather restrict metadata to contain (important) regressions. You can then pin the packages you don't want to regress and the solver should be able to update the rest as far as possible. Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imend"orffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
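A small sketch of the idea behind "zypper show-bugs-I-get-when-dup", combined with the pinning suggestion above. The per-build open-regression metadata is hypothetical (nothing like it exists today); "zypper addlock" is an existing zypper command for pinning a package.

    # Given the builds a "zypper dup" would install and a (hypothetical) map
    # from package build to open regression bugs, list what you would buy
    # into and suggest pinning the affected packages.

    def bugs_i_get_when_dup(planned_updates, open_regressions):
        # planned_updates: builds the dup would install
        # open_regressions: hypothetical metadata, build -> list of bug refs
        return {build: open_regressions[build]
                for build in planned_updates if open_regressions.get(build)}

    if __name__ == "__main__":
        # All bug numbers below are invented for illustration.
        open_regressions = {"udev-208-3.1": ["bnc#000001"], "gcc48-4.8.1-5.1": []}
        hits = bugs_i_get_when_dup(["udev-208-3.1", "gcc48-4.8.1-5.1"],
                                   open_regressions)
        for build, bugs in sorted(hits.items()):
            print("%s: %s" % (build, ", ".join(bugs)))
            name = build.rsplit("-", 2)[0]
            # zypper addlock pins a package so a dup keeps the installed version.
            print("  to keep the last good version: zypper addlock %s" % name)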
Am Donnerstag, 5. Dezember 2013, 16:01:23 schrieb Richard Biener:
On Thu, 5 Dec 2013, Ludwig Nussel wrote:
Sascha Peilicke wrote:
On Monday 02 December 2013 16:17:26 Stephan Kulow wrote:
The fun part about rolling release you can decide yourself when to jump on it. There is no reason to update daily.
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
Coming back from vacation and sick leave I just experience again how much it sucks to find relevant information in old email threads :-) Maybe package meta data could be made to carry some kind of good/bad score. If a certain build of a package is rated down by people, zypper could display a warning. Maybe the solver could even try to come up with a solution that does not upgrade to the faulty build.
If we have package metadata amendable by users can we at least record (open) bugreports there? OTOH bugreports should be eventually visible to the maintainer so he can do the association as well.
Then you could implement
zypper show-bugs-I-get-when-dup
Richard.
Argh, that means we need to re-generate the meta data on each change in bugzilla? You infrastructure murderer! -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, 5 Dec 2013, Adrian Schr?ter wrote:
Am Donnerstag, 5. Dezember 2013, 16:01:23 schrieb Richard Biener:
On Thu, 5 Dec 2013, Ludwig Nussel wrote:
Sascha Peilicke wrote:
On Monday 02 December 2013 16:17:26 Stephan Kulow wrote:
The fun part about rolling release you can decide yourself when to jump on it. There is no reason to update daily.
Quite frankly, I consider this still the best option to have a stable Factory. It's brain-dead simple and may just require a little social interaction:
0) If you are not subscribed to opensuse-factory@ (and probably packaging / buildservice), you should not run Factory.
1) So check opensuse-factory@ for issues regularly. If there was a mail two weeks ago about udev (random choice of mine) being heavily broken and no fix was pushed meanwhile, it may be a good hint to not update today :-)
Coming back from vacation and sick leave I just experience again how much it sucks to find relevant information in old email threads :-) Maybe package meta data could be made to carry some kind of good/bad score. If a certain build of a package is rated down by people, zypper could display a warning. Maybe the solver could even try to come up with a solution that does not upgrade to the faulty build.
If we have package metadata amendable by users can we at least record (open) bugreports there? OTOH bugreports should be eventually visible to the maintainer so he can do the association as well.
Then you could implement
zypper show-bugs-I-get-when-dup
Richard.
Argh, that means we need to re-generate the meta data on each change in bugzilla?
You infrastructure murder!
Well, as it's probably not possible to automate this via bugzilla I'd rather add a

  osc addbug prj pkg

but yes, you probably need to re-generate the meta data. Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imend"orffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Quoting Richard Biener <rguenther@suse.de>:
Well, as it's probably not possible to automate this via bugzilla I'd rather add a
osc addbug prj pkg
There must of course be a way to mark them solved... In fact we'd want them removed on checkin, so osc rq accept should have the logic that bugs that are mentioned in the .changes should be removed again... Dominique -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am Donnerstag, 5. Dezember 2013, 15:20:02 schrieb Dominique Leuenberger a.k.a. Dimstar:
Quoting Richard Biener <rguenther@suse.de>:
Well, as it's probably not possible to automate this via bugzilla I'd rather add a
osc addbug prj pkg
There must of course be a way to mark them solved..
No, we already track bugs in package sources. We parse the .changes files; e.g. check

  osc api /source/openSUSE:Tools/build?view=issues

These are of course only bugs which are already handled to some degree, because they are mentioned in the sources.
In fact we'd want it removed on checkin.. so osc rq accept should have the logic that bugs, that are mentioned in the .changes, should be removed again...
We do track bugs which we want to solve, in _patchinfo files. They usually exist in maintenance projects to define what needs to be fixed. But this does not depend on concrete packages, because often enough it is not clear in which package the fix must be applied. You usually know that only after you have fixed it :) -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
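For reference, a small sketch of pulling that issue list out of OBS from a script. The "osc api ... ?view=issues" call is the one quoted above; the exact XML layout it returns is assumed here, so the parsing is kept deliberately defensive.

    import subprocess
    import xml.etree.ElementTree as ET

    def issues_for(project, package):
        # Query the issue view for a package; adjust the parsing against a
        # live OBS instance, the element layout below is an assumption.
        xml = subprocess.check_output(
            ["osc", "api", "/source/%s/%s?view=issues" % (project, package)])
        root = ET.fromstring(xml)
        refs = []
        for issue in root.iter("issue"):
            tracker = issue.findtext("tracker") or issue.get("tracker") or "?"
            name = issue.findtext("name") or issue.get("name") or "?"
            refs.append("%s#%s" % (tracker, name))
        return refs

    if __name__ == "__main__":
        for ref in issues_for("openSUSE:Tools", "build"):
            print(ref)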
On Mon, 2 Dec 2013, Stephan Kulow wrote:
On 02.12.2013 11:39, Richard Biener wrote:
With the Debian 'testing' distribution approach you scale QA by making people using 'unstable' (aka Factory) do testing and file bugs, which block packages from migrating from 'unstable' to 'testing' unless they are fixed. So to throw in another name (than the apparently misleading Tumbleweed), 'testing' is a rolling release for 'unstable'. Do we want a rolling-released Factory?
Our 'testing' are the devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough do.
Probably. I thought of devel projects as 'experimental', because the devel projects are where development happens, not where candidates for Factory reside, no? Which immediately makes me think that we can automate this by immediately accepting submitrequests to Factory into a single Factory:staging project (without "review"). Or that it would make sense to provide an aggregate of all devel projects to be able to easily test the future distribution status? [for example Base:build is more like QA for devel projects as it links to devel project sources rather than factory]
Debian unstable's user base might justify such a split - given that Debian stable is really not for developers, from what I've been told.
That's true - even the just released stable has a very old subversion version ... stable is really stable, after all ;) They'd better call it "mature". Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imend"orffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 08:32 AM, Richard Biener wrote:
On Mon, 2 Dec 2013, Stephan Kulow wrote:
On 02.12.2013 11:39, Richard Biener wrote:
With the Debian 'testing' distribution approach you scale QA by making people using 'unstable' (aka Factory) do testing and file bugs which blocks packages from migrating from 'unstable' to 'testing' unless they are fixed. So to throw in another name (than the appearantly misleading Tumbleweed), 'testing' is a rolling release for 'unstable'. Do we want a rolling-released-Factory?
Our testing are devel projects. I bet some use devel:gcc to test gcc and report bugs before you send it to Factory. But not enough.
Probably. I thought of devel projects as 'experimental', because the devel projects are where development happens, not where candidates for Factory reside, no?
Well, yes and no. As Michal, I think, pointed out yesterday, and I am paraphrasing, "devel projects are for packages on their way to factory." But, while this may have been the original intent, in practice devel projects are being used for packages on their way to factory as well as packages people would like to have visible in a devel project but do not want to maintain in the "distribution proper." Thus devel projects today serve at least 2 purposes.
Which immediately makes me think that we can automate this by immediately accepting submitrequests to Factory into a single Factory:staging project (without "review").
Every problem in computer science can be solved by a level of indirection ;) Thus factory:staging turns into what factory is today. Or one can go the other way, as has also been discussed, and say factory stays as it is and we use factory:tested as the "rolling factory release". The latter may be more amenable to the picture people already have in their heads. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am Sonntag, 1. Dezember 2013, 19:04:11 schrieb Christian Boltz:
Hello,
Am Sonntag, 1. Dezember 2013 schrieb Robert Schweikert: ...
The other problem with the staging model is that the "potentially risky interactions" knowledge is an implicit set of interactions that the staging tree managers happen to know. This is not expressed anywhere and thus makes it difficult for other people to learn. We have this problem today and from my point of view this will not be resolved with more staging trees.
Except for "staging tree for everything" - but I'm quite sure we'll overload OBS if we create a staging project for every typo fix in a README file ;-)
I do not see the need for this actually. It is perfectly fine to run the staging approach, but replace more than one package in it. Maybe we should have a staging project for collecting such small changes. But we should release them altogether to Factory after validating it with one rebuild (plus possible QA checks if we have some in future). -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 02.12.2013 13:14, Adrian Schröter wrote:
I do not see the need for this actually. It is perfectly fine to run the staging approach, but replace more then one package in it.
We may should have a staging project for collecting such small changes. But we should release them altogether to Factory after validating it with one rebuild (plus possible QA checks if we have some in future).
That sounds like a very sensible thing to do. I wonder how to manage it though. Any ideas? Let's assume we have a limited set of staging projects, we'd need to put a change into one of them "where it fits best". Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 07:38 AM, Stephan Kulow wrote:
On 02.12.2013 13:14, Adrian Schröter wrote:
I do not see the need for this actually. It is perfectly fine to run the staging approach, but replace more then one package in it.
We may should have a staging project for collecting such small changes. But we should release them altogether to Factory after validating it with one rebuild (plus possible QA checks if we have some in future).
That sounds like a very sensible thing to do. I wonder how to manage it though. Any ideas?
Let's assume we have a limited set of staging projects, we'd need to put a change into one of them "where it fits best".
One option may be to collect changes for a certain period of time and then run a "final" or "master" build of that staging project. If successful it moves to factory. For example there could be a staging project for GNOME, KDE, other DEs. One could collect all the scripting language stuff (Perl, Python, Go, PHP, ...) into one staging project as they are reasonably disconnected and people will not step on each other's feet. The problem with that is that if Perl is broken, changes for Python cannot migrate to factory either. Then it becomes a people management problem and in the end is the root cause for the "ever growing number of staging trees" problem I mentioned. Also the time factor needs to be debated, as a contributor can only get changes into the "distribution proper" based on the cadence of the staging tree he/she works in. Whatever the solution, I think the following basic premises apply:

- People interested in contributing to the distribution, i.e. having packages in factory, want to see their changes in factory as soon as possible after submission

- People are not very tolerant about having their changes blocked by apparently unrelated "broken" changes. Not very tolerant means that people lose interest quickly when they cannot get their changes into factory because something out of their area of interest is broken.

Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am Montag, 2. Dezember 2013, 13:38:52 schrieb Stephan Kulow:
On 02.12.2013 13:14, Adrian Schröter wrote:
I do not see the need for this actually. It is perfectly fine to run the staging approach, but replace more then one package in it.
We may should have a staging project for collecting such small changes. But we should release them altogether to Factory after validating it with one rebuild (plus possible QA checks if we have some in future).
That sounds like a very sensible thing to do. I wonder how to manage it though. Any ideas?
we could start with creating a staging project for large submissions like an entire KDE update. And then have some collection projects, where we collect everything which did not fit somewhere else for, let's say, 2 weeks. After 2 weeks we open the next staging project and continue to fix the former one until it settles. What do you think? -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am 04.12.2013 18:34, schrieb Adrian Schröter:
Am Montag, 2. Dezember 2013, 13:38:52 schrieb Stephan Kulow:
On 02.12.2013 13:14, Adrian Schröter wrote:
I do not see the need for this actually. It is perfectly fine to run the staging approach, but replace more then one package in it.
We may should have a staging project for collecting such small changes. But we should release them altogether to Factory after validating it with one rebuild (plus possible QA checks if we have some in future).
That sounds like a very sensible thing to do. I wonder how to manage it though. Any ideas?
we could start with creating a staging project for large submissions like entire KDE update.
And then have some collection projects, where we collect everything, which did not fit somewhere else for, let's say 2 weeks. After 2 weeks we do open the next staging project and continue to fix the former one until it settles.
What do you think?
I think that there is a lot to fix and enhance before we can handle it and discuss. E.g. I have no way to see if openSUSE:Factory:Staging:NoFam already contains changes or if it's pure. IMO we basically need to find out what problems are common to all models and start hacking. Once we're there, we can test different scenarios. The models will only be as good as the tools to support them. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Dne St 4. prosince 2013 19:58:00, Stephan Kulow napsal(a):
Am 04.12.2013 18:34, schrieb Adrian Schröter:
So, as Michal and I played with this for a week or so before we got other crazy stuff to do, here is what we prepped.
we could start with creating a staging project for large submissions like entire KDE update.
That actually is not needed as the KDE submission is quite self-contained. But on the other hand the GNOME stuff often needs multiple passes in repo-checker to get in, so it might not hurt to start there. :)
And then have some collection projects, where we collect everything, which did not fit somewhere else for, let's say 2 weeks. After 2 weeks we do open the next staging project and continue to fix the former one until it settles.
What do you think?
I think that there is a lot to fix and enhance before we can handle it and discuss. E.g. I have no way to see if openSUSE:Factory:Staging:NoFam already contains changes or if its pure
We worked on this in the factory-auto repo [1]. The verification is actually half done; we wrote a staging plugin for osc and this is mostly the only part that is done and working - checking whether the staging project contains only links without any local modifications. What needs to be added is checking if the changes are all part of the GR#, or actually already merged in Factory if that is the case. For now the checker just validates that all the changes are back in devel projects. The script now can create a staging (might need adjustment to be a no-publish repo and maybe other tweaks to reduce the load), check if it is clean (no changes directly in there) and of course delete the staging, but without verification for now, so it can in fact wipe almost anything even if you don't want it to. There are also completely empty functions to push to Factory, e.g. mark the group request as having passed staging if that is the case. And a function to submit the stuff back to devel projects from the changes within the staging (that one is easy to do but we didn't work on it yet as it was lower on the testing list).
IMO we basically need to find out what problems are common to all models and start hacking. Once we're there, we can test different scenario. The models will only be as good as the tools to support them.
The problems we faced there were more on the tooling side, which we can overcome by writing the API where needed (which I personally can't do even if I try my best; I wrote like 20 lines tops in Ruby). So the issues we faced:

- If you create loads of stagings the wait-on-result was quite longish (TM). Might be solved by not publishing it or creating them as a "frozen" set that can be refreshed when needed.

- No way to update the staging for now. Our plan was to use the groups in a way where you could call update, and it would check the grouping request that is in the staging and refresh it based on new submissions in there if needed (could also be a hook on the group request so that each time the GR is updated the staging is refreshed, which would probably be much better than a command on the client side).

- Addition is not written; you have to do the linkpac yourself in the repo now. Might be better to allow it in two ways: adding directly to the group request (with osc group add) or via staging where you can do (osc staging add SR#), which would do the same; add the SR# to the staging group ID and update the project.

If you look at the repo there is a .dot file which has the "workflow" as we thought it out, and it is heavily dependent on the grouping requests, at least how we pictured it. So we need to adjust the groupings to be able to accept them and properly get their state and the status of all subrequests within the group. Cheers Tom + Michal [1] https://github.com/coolo/factory-auto/blob/master/osc-staging.py
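An independent, simplified sketch of the cleanliness check described above (the real implementation is in osc-staging.py [1]): treat a staging project as clean if every package in it is a plain link, i.e. its unexpanded sources consist of just a _link file. That heuristic, and the example project name, are simplifications.

    import subprocess
    import xml.etree.ElementTree as ET

    def _api(path):
        # Thin wrapper around "osc api <path>" returning parsed XML.
        return ET.fromstring(subprocess.check_output(["osc", "api", path]))

    def locally_modified_packages(staging_project):
        # A staging project is treated as "clean" here if every package in it
        # is a pure link: its unexpanded source listing contains just _link.
        packages = [entry.get("name")
                    for entry in _api("/source/%s" % staging_project).iter("entry")]
        dirty = []
        for pkg in packages:
            files = [entry.get("name")
                     for entry in _api("/source/%s/%s?expand=0"
                                       % (staging_project, pkg)).iter("entry")]
            if files != ["_link"]:
                dirty.append(pkg)
        return dirty

    if __name__ == "__main__":
        # The project name is just the example used earlier in the thread.
        for pkg in locally_modified_packages("openSUSE:Factory:Staging:NoFam"):
            print("not a plain link (or locally modified): %s" % pkg)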
On 05.12.2013 10:38, Tomáš Chvátal wrote:
The verification is actually half done, we wrote staging plugin for osc and this is mostly the only part that is done and working - checking whether staging project contains only links without any local modifications.
What needs to be added is checking if the changes are all part of the GR# or actually already merged in factory if it is the fact. As now the checker just validates that all the changes are back in devel projects.
So we basically need a new project type along with a new request type in OBS. Along with maintenance_incident we need staging.
The script now can create staging (might need adjustment to be the no-publish repo and maybe other tweaks to reduce the load) and check if it is clean (no changes directly in there) and of course delete the staging, but without verification for now so it can in fact wipe almost anything even if you don't want it to.
There are also completely empty functions to push to factory, eg mark the grouprequest as passed over staging if it is the fact. And function to submit the stuff back to devel projects from the changes within the staging (that one is easy to do but we didn't work there yet as it was lower on the testing list).
IMO we basically need to find out what problems are common to all models and start hacking. Once we're there, we can test different scenario. The models will only be as good as the tools to support them.
The problems we faced there were more on the tooling side which we can overcome by writting the api where needed (which I personaly can't do even if I try best, I wrote like 20 lines top in Ruby).
20 lines might be enough in some cases ;)
So the issues we faced:
If you create loads of stagings the wait-on-result was quite longish TM. MIght be solved by not publishing it or creating them as "frozen" set that can be refreshed when needed.
Hmm, as I just implemented mail notifications, I think one that is missing is actually "project finished building" - a result you wait for. 20 lines might do ;)
No way how to update the staging for now. Our plan was to use the groups in a way where you could call update, and it would check the grouping request that is in the staging and refresh it based on new submissions in there if needed (could be also hook on group request that each time the GR is updated the staging is refreshed, would be probably much better than command on the client side).
Addition is not written, you have to do the linkpac yourself in the repo now. Might be better to allow it in two ways, adding directly to group request (with osc group add) or via staging where you can do (osc staging add SR#) which would do the same; add SR# to the staging groupID and update the project.
If you look on the repo there is .dot file which has "workflow" as thought us and it is heavily dependant on the grouping requests at least how we pictured it. So we need to adjust the groupings to be able to accept them, get properly their state and status of all subrequests within the group.
This needs to be handled in the webui properly, I agree. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 8:32 1.12.13 wrote:
...
What do I have to offer other than concerns?
There is the component model that I had proposed a while back. The component model may merge well with the idea of rings, that's something that could be explored.
I agree with Christian that the component model will just mean more mess than good if we really split stuff up. See how messy devel projects are. -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am Sonntag, 1. Dezember 2013, 08:32:47 schrieb Robert Schweikert:
On 11/28/2013 08:49 AM, Stephan Kulow wrote: ... The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
It depends how you run it. If you have large enough staging projects, I think we can build them entirely and merge them into Factory. Afterwards we need to wait for the other staging projects, that's right. But it is a question how many, and therefore how large, staging projects we have. -- Adrian Schroeter email: adrian@suse.de SUSE LINUX GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstraße 5 90409 Nürnberg Germany -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 07:16 AM, Adrian Schröter wrote:
Am Sonntag, 1. Dezember 2013, 08:32:47 schrieb Robert Schweikert:
On 11/28/2013 08:49 AM, Stephan Kulow wrote: ... The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
It depends how you run it. If you have large enough Staging projects, I think we can build them entirely and merge in factory. Afterwards we to wait for the other staging projects that right.
But it is a question how many and therefore how large staging projects we have.
Yes, but larger staging projects imply unrelated things having to wait for each other.

Let's use a simple example. Let's say we have a staging tree that has Perl and Python in it. Developer A works on Perl stuff and developer B works on Python stuff. Developer A upgrades Perl to a new version and things break, developer B upgrades Python and everything works smoothly. Or the other way around, I am not being biased. Having both of these in the same staging tree implies that developer B cannot get the stuff into factory because something unrelated is busted in the staging tree. Thus, developer B will be annoyed. Yes, ideally everything would work before pushing things into the staging tree, but let's just assume we do not live in an ideal world. The solution to this problem is to split the staging tree, thus we end up with 2, one for Perl changes and one for Python changes. In this case developer B no longer cares if stuff in Perl is broken because the Python staging tree moves forward and gets merged into factory as expected.

This is the root cause for the "ever growing" staging tree problem. Even if we set out with, let's say, one staging tree per devel project one will have the A-has-to-wait-on-B problem. Look at devel:filesystems. A good chunk of the stuff there is somewhat unrelated and if you clump it all into one staging project you end up with the problem outlined in the "Perl-Python" example above.

In general, this is not a technical problem. With enough build power one can theoretically have a staging project for everything and push build times toward an acceptably small time limit. The problem we face is a people problem: the number of people available to chaperone the staging projects, and the number of people continuing to remain interested when their stuff gets delayed because of some unrelated problem on a relatively frequent basis.

Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 11:42 2.12.13 wrote:
On 12/02/2013 07:16 AM, Adrian Schröter wrote:
Am Sonntag, 1. Dezember 2013, 08:32:47 schrieb Robert Schweikert:
On 11/28/2013 08:49 AM, Stephan Kulow wrote: ... The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
It depends how you run it. If you have large enough Staging projects, I think we can build them entirely and merge in factory. Afterwards we to wait for the other staging projects that right.
But it is a question how many and therefore how large staging projects we have.
Yes, but larger staging projects imply unrelated things having to wait for each other.
Well, unrelated things shouldn't go into the same staging project...
Lets use a simple example. Lets say we have a staging tree that has Perl and Python in it. Developer A works on Perl stuff and developer B works on Python stuff.
Shouldn't happen unless Python for some strange reason depends on Perl and vice versa :-D
...
This is the root cause for the "ever growing" staging tree problem. Even if we set out with lets say one staging tree per devel project one will have the A has to wait on B problem. Look at devel:filesystems. A good chunk of the stuff there is somewhat unrelated and if you clump it all into one staging project you end up with the problem outlined in the "Perl-Python" example above.
We should not set up a staging project per devel project (a devel project is kind of a limited staging project already) but we should set up a staging project for every problematic package. -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/02/2013 02:19 PM, Michal Hrusecky wrote:
Robert Schweikert - 11:42 2.12.13 wrote:
On 12/02/2013 07:16 AM, Adrian Schröter wrote:
Am Sonntag, 1. Dezember 2013, 08:32:47 schrieb Robert Schweikert:
On 11/28/2013 08:49 AM, Stephan Kulow wrote: ... The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
It depends how you run it. If you have large enough Staging projects, I think we can build them entirely and merge in factory. Afterwards we to wait for the other staging projects that right.
But it is a question how many and therefore how large staging projects we have.
Yes, but larger staging projects imply unrelated things having to wait for each other.
Well, unrelated things shouldn't go into same staging project...
Lets use a simple example. Lets say we have a staging tree that has Perl and Python in it. Developer A works on Perl stuff and developer B works on Python stuff.
Shouldn't happen unless Python for some strange reason depend on perl and vice versa :-D
So you are saying Perl and Python would not be in the same staging project, fair enough. Use a different example and think about the basic problem I was trying to illustrate. As soon as more than one package is in a staging project you have the problem I was trying to describe.
...
This is the root cause for the "ever growing" staging tree problem. Even if we set out with lets say one staging tree per devel project one will have the A has to wait on B problem. Look at devel:filesystems. A good chunk of the stuff there is somewhat unrelated and if you clump it all into one staging project you end up with the problem outlined in the "Perl-Python" example above.
We should not set up Staging project per devel project (devel project is kind of limited staging project already) but we should setup staging project for every problematic package.
Every package is potentially problematic to some extent. If we rule out that we are going to have 6k+ staging projects you end up with the basic problem outlined in the example: Developer A ends up in a staging project with developer B and developer A's changes cannot progress because developer B's stuff breaks something. There is no way around this unless every package/developer gets their own staging project. That is not realistic and creates a completely different nightmare of coordination. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 16:44 2.12.13 wrote:
...
Developer A ends up in a staging project with developer B and developer A's changes cannot progress because developer B's stuff breaks something.
So, they would have to talk to each other and come up with a solution, or neither of them will get their stuff in.
There is no way around this unless every package/developer gets their own staging project. That is not realistic and creates a completely different nightmare of coordination.
And wouldn't help on top of that. -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am 02.12.2013 22:44, schrieb Robert Schweikert:
So you are saying Perl and Python would not be in the same staging project, fair enough. Use a different example and think about the basic problem I was trying to illustrate. As soon as more than one package is in a staging project you have the problem I was trying to describe.
You're creating the same problem we've been having in factory for a very long time. It's not a nice problem, but surely not a complete no-go situation. We're quite experienced with it ;) Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/03/2013 01:32 AM, Stephan Kulow wrote:
Am 02.12.2013 22:44, schrieb Robert Schweikert:
So you are saying Perl and Python would not be in the same staging project, fair enough. Use a different example and think about the basic problem I was trying to illustrate. As soon as more than one package is in a staging project you have the problem I was trying to describe.
You're creating the same problem we're having in factory for a very long time. It's not a nice problem, but surely not a complete no-go situation. We're quite experienced with it ;)
I am not creating a problem. I am trying to understand how the proposed staging model resolves the problem. Apparently by pointing out problems I have experienced with the staging model in the past it leads us back to where we are. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 03.12.2013 16:33, Robert Schweikert wrote:
On 12/03/2013 01:32 AM, Stephan Kulow wrote:
Am 02.12.2013 22:44, schrieb Robert Schweikert:
So you are saying Perl and Python would not be in the same staging project, fair enough. Use a different example and think about the basic problem I was trying to illustrate. As soon as more than one package is in a staging project you have the problem I was trying to describe.
You're creating the same problem we're having in factory for a very long time. It's not a nice problem, but surely not a complete no-go situation. We're quite experienced with it ;)
I am not creating a problem. I am trying to understand how the proposed
I didn't mean "Robert" when I said "you" - just as you didn't mean "Stephan" when you said "you have the problem", did you? :)
staging model resolves the problem. Apparently by pointing out problems I have experienced with the staging model in the past it leads us back to where we are.
Yes, to some degree we move the "current factory" problem into staging projects. Factory is no longer the integration project, the staging projects will be. It's still an open question how many of those we can bear, but what we learned: one integration project is too little. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Mon, 2 Dec 2013, Michal Hrusecky wrote:
Robert Schweikert - 11:42 2.12.13 wrote:
On 12/02/2013 07:16 AM, Adrian Schr?ter wrote:
Am Sonntag, 1. Dezember 2013, 08:32:47 schrieb Robert Schweikert:
On 11/28/2013 08:49 AM, Stephan Kulow wrote: ... The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
It depends how you run it. If you have large enough Staging projects, I think we can build them entirely and merge in factory. Afterwards we to wait for the other staging projects that right.
But it is a question how many and therefore how large staging projects we have.
Yes, but larger staging projects imply unrelated things having to wait for each other.
Well, unrelated things shouldn't go into same staging project...
Lets use a simple example. Lets say we have a staging tree that has Perl and Python in it. Developer A works on Perl stuff and developer B works on Python stuff.
Shouldn't happen unless Python for some strange reason depend on perl and vice versa :-D
At least Python build-depends on perl as perl is in Base:build (it gets pulled into every build root). Richard. -- Richard Biener <rguenther@suse.de> SUSE / SUSE Labs SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746 GF: Jeff Hawn, Jennifer Guild, Felix Imend"orffer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert <rjschwei@suse.com> writes:
2.) The staging approach I can only speak from experience and thus this might sound a little lame, sorry. I have seen two implementations of the staging model in action at companies that produce large software suites. In both cases I consider the approach as a failure.
The problem in both cases is that the number of staging trees/branches/projects has an ever increasing slope, thus consuming ever more manpower to manage the ever increasing number of staging projects.
While the original problem of "how do we deal with unknown adverse interactions between updates" remains unresolved. The "solution" to this problem taken in one case was to have intermediate staging trees where "known risky updates" were tested together. Yes, staging trees upon staging trees. But this only solves the problem superficially as the target tree will move ahead and thus the staging tree by definition is always out of date. Unless the target tree is frozen until a particular staging tree is merged. Anyway it is a maze that requires potentially a lot of people.
The other problem with the staging model is that the "potentially risky interactions" knowledge is an implicit set of interactions that the staging tree managers happen to know. This is not expressed anywhere and thus makes it difficult for other people to learn. We have this problem today and from my point of view this will not be resolved with more staging trees.
The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
What I like in your little umm rant ;) is the notion of *interactions* that need to be tested. What I'd like to see in this whole discussion in addition is 'cadenced flow' and 'integration decision'. When you add these, the resulting staged flow will actually get both bug fixes and package or subsystem upgrades available fast and in stable quality, continuously (bold claim). Here's how:

Let's assume there is some stable code base, call it Factory. The goal is to get updates in there, reliably, regularly, to get it to the next level of being a stable code base.

For "leaf packages" that is simple: build the package, test its functionality, then release it.

For, for lack of a better word, "intermediate packages" (say libpng), you start like above, testing the functionality, until from that perspective it's good to release. But then you also need to get it integrated with "the rest of the world" (maybe 50-100 packages for libpng, 44 on my system).

Then there are these "multi-scope packages", like NetworkManager or the bluetooth stack, which affect several whole integration scopes, desktops in this case. They have interactions in two stages (first the desktop, then all other desktops).

There are the "transversal packages", affecting almost everything, like the toolchain.

And then there are packages affecting few other packages that nonetheless have a lot of interactions that need to be tested (kernel, xorg, ...).

Now suppose there is a cascade of staging projects, which potentially 'release', say, every week or every other week (that's the "cadence"). They build a tree structure, something like this:

[ASCII-art tree, mangled in the archive: staging branches for gcc and other toolchain pieces on one side and for the desktops (KDE, GNOME) and other system scopes on the other, all feeding into Factory]

The number of nodes from the root (Factory) to a branch corresponds to the interactions that need to be tested for what goes into that branch.

This gives growing rings of scope for interaction testing and integration success. Successful builds and automatic tests are necessary, sometimes even sufficient, for interaction test and integration success. They propagate automatically, to give a 'tentative' next build. That, however, does not affect the 'last known good' build --- that last known good state remains available, too.

Thus at each branch, every week a decision can be made: is the combination of the 'new' stuff good enough already to *pull* (!!) it in together? Or do we --- for the combination! --- have to stick with what we had so far, the last known good version?

Btw, 'remote' breaks of the last known good source version, because of some, say, toolchain upgrade, clearly indicate that said root cause, the toolchain upgrade, needs some love, too. Integration manager, set priorities... what do you want in this week's Factory? Maybe the new toolchain will not be there yet...

A GNOME release that needs a new NM or bluetooth may need a number of cycles to get to a shape where it can be merged with KDE. This will also lead to races. For example a new gcc might race a desktop integration, i.e. the new gcc works with the last known good version of the desktops. Then the new desktop integration will need to do their homework and pull in the new gcc, and until then, the new gcc will just build the last known good desktop. Also a new GNOME will have at least to ensure integration issues are resolved with the current stable KDE (last known good), and hopefully the KDE guys are good citizens and are willing to spend the time to make sure this works.
They will. They need the GNOME guys to do the same a few weeks later...

Sometimes for several weeks the integration master needs to pull some old versions (the last known good combination), just because the new combination is not ready yet.

Now how can small leaf package updates skip such a major barrier? E.g. a new gimp? At each junction it is clear what needs to be tested. If the new leaf gtk+ application, gimp, can also be integrated with the 'last known good version', the one that is still in Factory, then it can be integrated into that and thus moves on, ahead of the rest of GNOME, at the next cycle.

So if we tilt the above tree and look at it sideways, it almost looks like git integration:

[ASCII diagram, mangled in the archive: two parallel timelines like git branches, 'new stuff' on top (eventually reaching a 'merge success' point) and 'last known good' below (eventually retired, "R.I.P."), with a new gimp being pulled in along the way]

Such a system of 'staging' or integration projects gives a clear flow of both updates and upgrades into a well integrated and tested Factory, which is 'released' at a weekly cadence. The cadence also scopes the size of the integration projects: they should be small enough to allow something like a weekly or at most bi-weekly lock-step. Beta users for some integration point or branch can use this a few days ahead of the release (zypper rr + zypper dup gets them sane again). A new kernel can be available for pull for a longer while, until it has enough love and testing to actually be pulled in as The Kernel; how to manage kernel beta testing is an exercise of its own once we get a stable Factory. Branch projects can also be used by those happy with a partial integration (new KDE even if it breaks GNOME or vice versa), or for experimenting early with completely new feature sets (systemd, ...).

I believe this model overcomes some of the issues Robert has brought up about the traditional 'staging' model. It gives clear responsibilities and a clear cadence. No model can solve the problem of *who* is going to do the integration work, but this staging model at least clearly scopes and cadences what needs to be done and when to keep the flow going.

my .02 EUR S. -- Susanne Oberhauser SUSE LINUX Products GmbH +49-911-74053-574 Maxfeldstraße 5 Processes and Infrastructure 90409 Nürnberg GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
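A toy model (a sketch added here, not part of the proposal above) of the 'last known good' vs. 'tentative' decision just described: submissions land in a tentative set, and at each cadence point they are promoted only if the integration tests pass, otherwise the last known good combination stays. All names and version numbers are illustrative.

    class IntegrationNode(object):
        """One staging/integration point in the cascade (toy model only)."""

        def __init__(self, name):
            self.name = name
            self.last_known_good = {}   # package -> version currently trusted
            self.tentative = {}         # package -> version waiting to be pulled in

        def propose(self, package, version):
            # New submissions land in the tentative set first.
            self.tentative[package] = version

        def cadence_decision(self, integration_tests_pass):
            # The weekly decision: pull the new combination in only if it is
            # good enough, otherwise stick with the last known good state.
            if integration_tests_pass:
                self.last_known_good.update(self.tentative)
                self.tentative = {}
                return "promoted"
            return "kept last known good"

    if __name__ == "__main__":
        gnome = IntegrationNode("GNOME")
        gnome.last_known_good = {"gnome-shell": "3.10.1", "NetworkManager": "0.9.8"}
        gnome.propose("NetworkManager", "0.9.9")
        # Pretend the automated tests said no this week:
        print(gnome.cadence_decision(integration_tests_pass=False))
        print(gnome.last_known_good)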
On 12/02/2013 12:16 PM, Susanne Oberhauser-Hirschoff wrote:
Robert Schweikert <rjschwei@suse.com> writes:
2.) The staging approach I can only speak from experience and thus this might sound a little lame, sorry. I have seen two implementations of the staging model in action at companies that produce large software suites. In both cases I consider the approach as a failure.
The problem in both cases is that the number of staging trees/branches/projects has an ever increasing slope, thus consuming ever more manpower to manage the ever increasing number of staging projects.
While the original problem of "how do we deal with unknown adverse interactions between updates" remains unresolved. The "solution" to this problem taken in one case was to have intermediate staging trees where "known risky updates" were tested together. Yes, staging trees upon staging trees. But this only solves the problem superficially as the target tree will move ahead and thus the staging tree by definition is always out of date. Unless the target tree is frozen until a particular staging tree is merged. Anyway it is a maze that requires potentially a lot of people.
The other problem with the staging model is that the "potentially risky interactions" knowledge is an implicit set of interactions that the staging tree managers happen to know. This is not expressed anywhere and thus makes it difficult for other people to learn. We have this problem today and from my point of view this will not be resolved with more staging trees.
The staging model will not catch adverse interactions reliably. The reason is that by definition the staging tree is always out of date, unless the target tree is frozen and after one staging tree is accepted all other staging trees get rebuilt. This is not conducive to parallel development.
What I like in your little umm rant ;) is the notion of *interactions* that need to be tested.
Not ranting, not even a little bit or with a wink. All I am going to be able to bring to the table are the experiences I have had in the past with the "staging model". I will not have time to help with the implementation and it is unlikely that I will have the time to volunteer to chaperone a staging branch if the decision should be made to go that route. Ultimately those willing to do the work and those willing to be chaperones decide. I will not complain about the decision as I am not going to be able to do the work. The staging model has fundamental design problems and one can implement processes and procedures to alleviate the impact of those fundamental design issues. Only time will tell if this is ultimately sufficient if we continue to experience the growth we have seen.
What I'd like to see in this whole discussion in addition is 'cadenced flow' and 'integration decision'.
Cadence becomes a big deal, and coordination of staging projects and their order of merge into factory also becomes a big deal.
When you add these, the resulting staged flow will actually get both bug fixes and package or subsystem upgrades available fast and in stable quality, continously (bold claim) here's how:
Well we can argue about the fast part ;)
Let's assume there is some code stable base, call it Factory.
The goal is to get updates in there, reliably, regularly, to get it to the next level of being a stable code base.
For "leaf packages" that is simple: build the package, test it's functionality, then release it.
Well, I do not think it is that simple. One could argue that Perl is a leaf package. But we have perl-bootloader, and thus an update to Perl could break perl-bootloader, which in turn would be a bad thing with pretty far reaching effects. Further, perl-bootloader does not stand on its own; it uses Perl modules that would definitely be considered leaf packages. A similar argument can be made for KIWI, which depends on a lot of leaf packages, but KIWI is very important to create our ISO images. Thus, the line for leaf packages is blurry at best.
For for lack of a better word "intermediate packages" (say libpng), you start like above, testing the functionality, until from that perspective it's good to release. But then you also need to get it integrated with "the rest of the world", (maybe 50-100 packages for libpng, 44 on my system).
Then there is these "multi-scope packages", like NetworkManager or the bluetooth stack, which affect several whole integration scopes, desktops in this case. they have interactions in two stages (first the desktop, then all other desktops).
There is the "transversal packages", affecting almost everything, like the toolchain.
And then there is packages affecting few other packages that nonetheless have a lot of interactions that need to be tested (kernel, xorg, ...)
Now supposed there is a cascade of staging projects, which potentially 'release', say, every week or every other week (that's the "cadence").
They build a tree structure, something like this:
[ASCII-art tree quoted from the previous mail, mangled in the archive: staging branches for gcc and other toolchain pieces on one side and for the desktops (KDE, GNOME) and other system scopes on the other, all feeding into Factory]
The number of nodes from the root (Factory) to a branch corresponds to the interactions that need to be tested for what goes into that branch.
This gives growing rings of scope for interaction testing and integration succes. Successfull build and automatic tests are necessary, sometimes even sufficient for interaction test and integration success. They propagate automatically, to give a 'tentative' next build. That, however, does not affect the 'last known good' build --- that last known good state remains available, too.
Yes, however, what is being neglected is that there is a fundamental problem with the cadence. The cadence itself is influenced by the process, through rebuild times and other snafus that are inevitable.

For illustration purposes let's say the cadence for the autotools staging is every other Monday, and the desktops get their say every other Wednesday on the off weeks, i.e. autotools goes in weeks 1, 3, 5 and so on and the desktops merge in weeks 2, 4, 6, and so on. At the beginning of week 2 the autotools merge has to be completed in order to give the desktop staging tree sufficient time to rebuild to meet its merge window on Wednesday. During this time (Monday of week 2 until end of Wednesday in week 2) nothing else is allowed to be merged into factory or the desktop staging tree would have to be rebuilt again. That's all fine but we have a time problem....

We only have a certain number of days available in a year. Let's be optimistic and say our staging branch chaperones spend 300 days a year fiddling with the staging branches. This would provide a theoretical maximum of 300 staging branches, if we can manage to merge one every day. This however is not possible, as the build time for the project alone dictates that certain changes require build times > 1 day. This reduces the maximum number of staging branches we can have further. Let's say we end up with a theoretical maximum of 250 staging branches. Simple math dictates that with 6k+ packages we will have things in staging branches that can break independently. Therefore, one developer is stuck in the same staging tree as another developer that happens to break something. The "innocent bystander developer", who happened not to break anything, will have to wait not just the regular cadence, but the cadence plus the fix time of the unrelated breakage. This is not very satisfying for the developer that didn't break anything. In addition, if the unrelated breakage does not get fixed in time for the merge window of the staging branch, then the given staging branch has to wait for its next merge opportunity, which may be weeks away.

One way to alleviate some of the problems is to have a very long cadence for "transversal packages". Let's say we only allow toolchain updates once every blue moon. But what if there is a bug in the toolchain or some other unknown undesired interaction? Now we must have a fix, and the toolchain staging tree must get priority and be merged much sooner than its next expected merge window. With this the cadence goes out the window, as all other staging trees will have to be pushed off their cadence to accommodate the new toolchain merge.
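The capacity argument boils down to simple arithmetic; a small worked example with the illustrative numbers from above (all of them rough estimates, not measurements):

    # Back-of-envelope pipeline capacity, using the illustrative numbers above.
    working_days = 300           # days per year chaperones spend merging
    avg_days_per_merge = 1.2     # some changes need rebuilds longer than a day
    packages = 6800              # roughly what Factory carries today

    merge_slots = int(working_days / avg_days_per_merge)    # ~250 merges a year
    print("merge slots per year:", merge_slots)
    print("packages sharing each slot:", round(packages / merge_slots, 1))
    # With dozens of packages sharing every slot, an "innocent bystander" waits
    # out the cadence plus the fix time of whatever broke in the same staging tree.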
Thus at each branch, every week a decision can be made: is the combination of the 'new' stuff good enough already to *pull* (!!) it in together? Or do we --- for the combination! --- have to stick with what we had so far, the last known good version?
There is no way to know if you can pull things in together because staging branches do not get cross-built against each other. In the figure above everything is nicely spaced, but that probably does not reflect the real world. If libs and the desktops are ready at the same time one can still not merge them into factory at the same time because they have not built against each other, they have built against "current" factory. Thus, one would have to send libs for the merge to the reference branch and rebuild and retest the desktops. Therefore, all the build and testing effort of the previous desktop staging branch is out the window and useless.

To eliminate the waste in testing and build one has to wait until the reference branch is "frozen" for the merge window of a particular staging branch. Then build the staging branch, then test. Especially the testing is difficult when we talk about the DEs. I will postulate the following technical requirement:

""" The only way to protect against adverse interaction is to build and test a staging tree against a frozen reference tree. """

In our case the reference tree would be factory. The technical requirement for staging work to catch adverse interaction is therefore pretty simple. But the technical requirement creates a people problem ;) . People hate the waterfall and hurry up and wait stuff.

Therefore, what tends to happen is that multiple staging branches get merged into the reference branch based on heuristic historical data of "no adverse interaction when merging a given set of branches in the past". This data has a number of problems:

- past behavior does not guarantee future performance; if perl-bootloader or kiwi depend on a new leaf package, the heuristic data of those staging trees is useless, as a new set of interactions is created

- the heuristic knowledge is intrinsic to the chaperone of the reference branch (granted, this is not necessarily much different than it is today, but we are looking for improvement and not "the same")

- the bus factor remains 1, i.e. the chaperone of the reference branch
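The postulated requirement can be written as a simple gate; a minimal sketch (project names and revision strings are invented for the illustration) that only lets a staging project merge when it was built and tested against the revision the reference tree is currently frozen at:

    # Merge gate for the "build and test against a frozen reference" requirement.
    def may_merge(staging, frozen_reference_rev):
        return (staging["built_against"] == frozen_reference_rev
                and staging["build_ok"]
                and staging["openqa_ok"])

    reference_rev = "factory@2013-12-02"
    libs     = {"built_against": "factory@2013-12-02", "build_ok": True, "openqa_ok": True}
    desktops = {"built_against": "factory@2013-11-18", "build_ok": True, "openqa_ok": True}

    print(may_merge(libs, reference_rev))       # True: built against the frozen reference
    print(may_merge(desktops, reference_rev))   # False: must rebuild once libs have landed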
btw, 'remote' breaks of the last known good source version, because of some, say, toolchain upgrade, clearly indicate said root cause, the toolchain upgrade, needs some love, too. Integration Manager, set priorities... what do you want in this week's Factory? maybe the new toolchain will not be there yet...
A GNOME release that needs a new NM or bluetooth may need a number of cycles to get to a shape where it can be merged with KDE.
This will also lead to races. For example a new gcc might race a desktop integration, i.e. the new gcc works with the last known good version of the desktops. Then the new desktop integration will need to do their homework and pull in the new gcc, and until then, the new gcc will just build the last known good desktop.
Also a new GNOME will at least have to ensure integration issues are resolved with the current stable KDE (last known good), and hopefully the KDE guys are good citizens and are willing to spend the time to make sure this works. They will. They need the GNOME guys to do the same a few weeks later...
Sometimes for several weeks the integration master needs to pull some old versions (the last known good combination), just because the new combination is not ready yet.
Now how can small leaf package updates skip such a major barrier? e.g. a new gimp?
At each junction it is clear what needs to be tested. If the new leaf gtk+ application, gimp, can also be integrated with the 'last known good version', the one that is still in Factory, then it can be integrated into that and thus moves on, ahead of the rest of GNOME, at the next cycle.
But this implies that gimp has its own staging branch, thus one is feeding the "ever expanding number of staging branches" monster. One cannot pull a part of a staging branch without placing the pulled pieces into a staging tree of its own and building and testing that staging tree against a "frozen" reference branch.
So if we tilt the above tree and look at it sideways, it almost looks like git integration:
[diagram, git-style: the 'last known good' line runs along the bottom and the 'new stuff' branch above it; the new gimp is pulled from the branch down into the last known good line at '*', the branch later reaches 'merge success', and the old last known good line ends in 'R.I.P.']
True, but one still has to build and test the cherry picked stuff, i.e. that's where the need for yet another staging project is created. This rests on the basic assumption that only stuff built and tested against the reference branch can be merged.
Such a system of 'staging' or integration projects gives a clear flow of both updates and upgrades into a well integrated and tested Factory, which is 'released' at a weekly cadence.
The cadence also scopes the size of the integration projects: they should be small enough to allow something like a weekly or at most bi-weekly lock-step.
Beta users for some integration point or branch can use this a few days ahead of the release, (zypper rr + zypper dup gets them sane again)
A new kernel can be available for pull for a longer while, until it has enough love and testing to actually be pulled in as The Kernel; how to manage kernel beta testing is an exercise of its own once we have a stable Factory.
Branch projects can also be used by those happy with a partial integration (new KDE even if it breaks GNOME or vice versa), or experimenting early with completely new feature sets (systemd, ...).
I believe this model overcomes some of the issues Robert has brought up about the traditional 'staging' model. It gives clear responsibilities and a clear cadence.
No model can solve the problem *who* is going to do the integration work, but this staging model at least clearly scopes and cadences what needs to be done and when to keep the flow going.
There are good ideas here to alleviate some of the fundamental issues inherent in the staging model. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 19:39 2.12.13 wrote:
...
Well, I do not think it is that simple. One could argue that Perl is a leaf package.
No, one cannot. Perl is a really nice example of what is NOT a leaf package. Too many things depend on Perl. A leaf package is mc, linphone, maybe pidgin (although it has plugins, I believe these are built from the same source).
A similar argument can be made for KIWI, which depends on a lot of leaf packages, but KIWI is very important to create our ISO images. Thus, the line for leaf packages is blurry at best.
KIWI is a leaf package as no other package depends on it. KIWI maintainer should check before submitting it into Factory that it works. If some bug slips and we are not able to produce DVDs, fine, we are not going to do it for Factory every day anyway.
...
Yes, however, what is being neglected is that there is a fundamental problem with the cadence. The cadence itself is influenced by the process, through rebuild times and other snafus that are inevitable.
Yes, we will slow down, but we will make sure that stuff works and doesn't break anything. I can live with getting a new gcc a month later if my computer keeps working in the meantime.
For illustration purposes let's say the cadence for the autotools staging is every other Monday, and the desktops get their say every other Wednesday on the off weeks, i.e. autotools goes in weeks 1, 3, 5 and so on and the desktops merge in weeks 2, 4, 6, and so on. At the beginning of week 2 the autotools merge has to be completed in order to give the desktop staging tree sufficient time to rebuild to meet its merge window on Wednesday. During this time (Monday of week 2 until end of Wednesday in week 2) nothing else is allowed to be merged into factory or the desktop staging tree would have to be rebuilt again. That's all fine but we have a time problem....
Why fixed schedule? We should do it *on demand*. Only when new change comes. And we can probably skip stagings (in case we have not enough power), for simple bugfix releases. How long does it take for new gcc to get stable? A year? We can do staging for it once a year :-) -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 03.12.2013 09:33, Michal Hrusecky wrote:
Robert Schweikert - 19:39 2.12.13 wrote:
...
Well, I do not think it is that simple. One could argue that Perl is a leaf package.
No, one cannot. Perl is a really nice example of what is NOT a leaf package. Too many things depend on Perl. A leaf package is mc, linphone, maybe pidgin (although it has plugins, I believe these are built from the same source).
A similar argument can be made for KIWI, which depends on a lot of leaf packages, but KIWI is very important to create our ISO images. Thus, the line for leaf packages is blurry at best.
KIWI is a leaf package as no other package depends on it. KIWI maintainer should check before submitting it into Factory that it works. If some bug slips and we are not able to produce DVDs, fine, we are not going to do it for Factory every day anyway.
KIWI also produces FTP trees and live CDs. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert, thanks for your feedback. There is one important thing I did not state clearly which is the interactions between the branches (like libs underlying and affecting desktop, or what Coolo mentioned, libs that are part of some desktop and that are used by packages outside the desktop). Robert Schweikert <rjschwei@suse.com> writes:
On 12/02/2013 12:16 PM, Susanne Oberhauser-Hirschoff wrote:
Let's assume there is some code stable base, call it Factory.
The goal is to get updates in there, reliably, regularly, to get it to the next level of being a stable code base.
For "leaf packages" that is simple: build the package, test it's functionality, then release it.
Well, I do not think it is that simple. One could argue that Perl is a leaf package. But we have perl-bootloader, and thus an update to Perl could break perl-bootloader, which in turn would be a bad thing with pretty far-reaching effects. Further, perl-bootloader does not stand on its own; it uses Perl modules that should definitely be considered leaf packages.
A similar argument can be made for KIWI, which depends on a lot of leaf packages, but KIWI is very important to create our ISO images. Thus, the line for leaf packages is blurry at best.
There are packages that can, without much harm, be integrated late because they have very little to no impact on other packages. That, to me, is a leaf package wrt integration. perl is certainly not amongst them. And this whole tree is arranged by a reasonable (heuristic) order of integration.
Now suppose there is a cascade of staging projects, which potentially 'release', say, every week or every other week (that's the "cadence").
They build a tree structure, something like this:
[tree diagram: a long horizontal trunk leading into Factory, with staging branches merging in along the way - autotools, libs, leaf packages and gcc from one side; kernel, X, KDE and GNOME from the other.]
The number of nodes from the root (Factory) to a branch corresponds to the interactions that need to be tested for what goes into that branch.
This gives growing rings of scope for interaction testing and integration success. Successful builds and automatic tests are necessary, and sometimes even sufficient, for interaction testing and integration success. They propagate automatically, to give a 'tentative' next build. That, however, does not affect the 'last known good' build --- that last known good state remains available, too.
Yes, however, what is being neglected is that there is a fundamental problem with the cadence. The cadence itself is influenced by the process, through rebuild times and other snafus that are inevitable.
[waterfall, one common build target in all branches at any time, hundreds of staging projects, thesis on how (toolchain) changes propagate]
Ah. I definitely did not communicate clearly. Every package needs two ways to get into the next stable base, the next Factory: either it flows very quickly through the system, no matter which branch you twig off, because the integration impact is low; or it flows with the big wave that comes down the integration flow.

The first path is what I call the 'fast track'. It is for minor updates, patches, fixes, things that don't impact much and can be released with little integration testing, or sufficiently reliable automatic tests. The second path is the one for things that really need integration work. That path indeed takes longer, but I can't match your math with my model. I don't see why a useful tree would have more than a few dozen branches with a total depth of at most half a dozen stages.

Then there is an assumption that there is exactly one build and integration target for new stuff. That indeed would cause the delays and friction you describe. However, packages added at a branch are built for both what is coming next from upstream and the 'last known good state' from upstream.

First let's look from above:

[diagram: THIS BRANCH runs from UPSTREAM on the left to DOWNSTREAM on the right and meets the OTHER BRANCH at THE FORK; along the way it pulls from upstream, adds new stuff locally ('devel'), integrates with the other branch ('integrate'), and at each '*' a decision is made whether to release into the next cycle (or don't); an 'alpha' branch hangs off below.]

In a real-world factory production line (steam, dust, oil) that is using kanban, when you move a part down the line, at well defined points you will send a 'signal kanban' to branches that are going to be merged soon. Like "make me a new engine, I'll need it in a short while". This is how rebuilds and automatic tests should be triggered for what you anticipate to provide soon.

So here comes the part that I did not spell out clearly: there have to be *three* target builds for each branch, built from *one* source:

- Build for the last accepted good state, Factory unmodified. This gives you clarity on whether your local changes work.

- Build for the 'fast track' from upstream, changes that heuristically have no or only well known integration impact.

- Build for the proposed next state, the bigger update from upstream, if any.

The "chaperones" of each branch will make two major decisions during each cycle. At the beginning of each cycle they decide what to do in this cycle in this branch: small changes? big changes? and when will they start integration through 'the fork' with the 'other branch'?

They may then work on three possible things, until they make the cycle end decision:

* If there is a fast track from upstream, they will move that along, possibly adding their own fast track stuff, possibly negotiating with their upstream what has priority, upstream additions or their own. They might also release it mid-cycle to further accelerate downstream processing of this fast track.

* If there is work on their branch they work on it, possibly getting it ready as the next upgrade.

* They switch to integration with the 'other branch' to have a next integrated version ready for their downstream. They may not do this at all during a cycle. That's saying: no upgrade this time from this branch.

At the end of the cycle the "chaperones" decide whether they release their work for their downstream, so they decide what they offer to their downstream to start the next cycle with.
They also consciously decide what they provide as fast track to their downstream: the fast track they got from upstream (only doing integration with the other branch), only their own fast track stuff, holding back the fast track from upstream, or both.

The real novelty of this tree structure is clear focus and scope for integration: what do I need to ensure at this decision point? It is *not* less work. It just gives the work that needs to be done clear areas of responsibility: in each branch, the work is local. At each fork, the work is local for two branches, meaning collaboration.

At the end of each cycle there are two outputs after each fork:

- a fast track build

- a major update/upgrade build

So at the end of each cycle, there is a sane starting state for the next branch downstream to pull from and build upon. It is then their job to try with the most recent 'good to pull', the most recent 'next version'.

The more I think about it, the more the tree resembles the merging part of a git merge graph. The novelty here is that this proposal structures the branches around things that integrate easily locally, in the branches, and where the real integration then needs to happen, at the bifurcation/branch point/fork. The tree structure gives a heuristically proven order in which to usually best do this.

In the past there was concern about an explosion of builds in such a model. This doesn't happen, though: you only build for

- the last accepted good state. It gives you clarity on what you have locally changed. This will be your fallback fast track output for your downstream.

- the proposed minor updates to said last known good state. This is accepting things on the fast track upstream, for integration with your fast track upgrades. If you manage to use it successfully, *this* will be your fast track merge with the other branch and your joint offer to your downstream.

- the proposed next state from upstream. This is where you also contribute your big changes.

So there is no build explosion. You decide which of your results is good enough to be provided to your downstream in two buckets (fast track and big update/upgrade), plus the underlying factory. If you scope rebuilds so they do not percolate ahead of your integration review and test, it may even mean fewer builds.
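A minimal sketch (names invented) of the three build targets per branch and the end-of-cycle decision described above:

    # Toy model of one branch with the three build targets described above.
    class Branch:
        def __init__(self, name):
            self.name = name
            self.targets = {
                "known_good":  None,   # last accepted Factory state, unmodified
                "fast_track":  None,   # low-impact updates, upstream plus our own
                "integration": None,   # the proposed next bigger upgrade
            }

        def build(self, target, sources, ok):
            # 'ok' stands in for "built and passed the automatic tests"
            self.targets[target] = {"sources": sources, "ok": ok}

        def end_of_cycle(self):
            """Decide what this branch offers downstream for the next cycle."""
            offer = [t for t in ("fast_track", "integration")
                     if self.targets[t] and self.targets[t]["ok"]]
            # If nothing new is good enough, downstream keeps the known good state.
            return offer or ["known_good"]

    gnome = Branch("GNOME")
    gnome.build("fast_track", ["gimp-2.8.10"], ok=True)
    gnome.build("integration", ["gnome-3.12", "NetworkManager-0.9.9"], ok=False)
    print(gnome.end_of_cycle())    # -> ['fast_track']: only the small stuff moves on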
Thus at each branch, every week a decision can be made: is the combination of the 'new' stuff good enough already to *pull* (!!) it in together? Or do we --- for the combination! --- have to stick with what we had so far, the last known good version?
There is no way to know if you can pull things in together because staging branches do not get cross-built against each other. In the figure above everything is nicely spaced, but that probably does not reflect the real world. If libs and the desktops are ready at the same time one can still not merge them into factory at the same time because they have not built against each other, they have built against "current" factory.
They won't be ready at the same time in this way. At each junction point, these options exist: - just one of the branches is good enough - no branch is good enough - both branches are good to merge Only if both are good to merge, integration builds need to be created and tested and then, based on that result, be provided one level further downstream or not. So downstream will always see three options of base builds: - last known good state (no changes, aka "Factory") - fast track changes - serious new integration changes And downstream will make a conscious decision what they can digest at this point in time.
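The decision at a junction is then a three-way case; a tiny sketch of the options listed above:

    # The three possible outcomes when two branches meet at a junction.
    def junction(branch_a_ready, branch_b_ready):
        if branch_a_ready and branch_b_ready:
            return "build and test the combined integration, then hand it down"
        if branch_a_ready or branch_b_ready:
            return "hand down the one good branch, keep the other at last known good"
        return "hand down the last known good state only"

    print(junction(True, False))
    print(junction(True, True))
    print(junction(False, False))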
But the technical requirement creates a people problem ;) . People hate the waterfall and hurry up and wait stuff.
That's what the 'fast track' option is for. It should really only be used for things that create little to no pain, and it should be an easily manageable amount of changes, where the chaperones at each branch can tell what to look at.
Therefore, what tends to happen is that multiple staging branches get merged into the reference branch based on heuristic historical data of "no adverse interaction when merging a given set of branches in the past". This data has a number of problems:
- past behavior does not guarantee future performance; if perl-bootloader or kiwi depend on a new leaf package, the heuristic data of those staging trees is useless, as a new set of interactions is created
- the heuristic knowledge is intrinsic to the chaperone of the reference branch (granted, this is not necessarily much different than it is today, but we are looking for improvement and not "the same")
- the bus factor remains 1, i.e. the chaperone of the reference branch
To truly solve that you'd need to solve the halting problem... What alternative do you see to structure integration work based on heuristics or even measured past performance?
At each junction it is clear what needs to be tested. If the new leaf gtk+ application, gimp, can also be integrated with the 'last known good version', the one that is still in Factory, then it can be integrated into that and thus moves on, ahead of the rest of GNOME, at the next cycle.
But this implies that gimp has its own staging branch, thus one is feeding the "ever expanding number of staging branches" monster. One cannot pull a part of a staging branch without placing the pulled pieces into a staging tree of its own and building and testing that staging tree against a "frozen" reference branch.
No monsters under this bed: gimp is on one of the branches. It is built to 'fast track', 'known good' and 'next integration'. That's it.
So if we tilt the above tree and look at it sideways, it almost looks like git integration:
[diagram, git-style: the 'last known good' line runs along the bottom and the 'new stuff' branch above it; the new gimp is pulled from the branch down into the last known good line at '*', the branch later reaches 'merge success', and the old last known good line ends in 'R.I.P.']
True, but one still has to build and test the cherry picked stuff, i.e. that's where the need for yet another staging project is created. This rests on the basic assumption that only stuff built and tested against the reference branch can be merged.
Again: No additional staging project. Just the three build targets. And the *build* is what is handed down or rejected.

The 'last known good' is the reference tree you mention. The 'fast track' is updates with (heuristically) low impact. The 'integration' is upgrades with (heuristically) high impact.

Things on the fast track propagate quickly to become the next 'known good' state. As there is no jumping the queue at integration decision points, they will trigger rebuilds of all dependent packages in all branches. If one of them turns the update down (by not using it as the base for their fast track), communication and negotiation and fixes are needed, corresponding to what would normally happen where the two 'normally' meet. Once the 'fast track' is connected all the way from all tips to the root, it is the next 'known good'. Likewise 'integration'.

You also mentioned base libs getting ready at the same time as some app. Well, one of the two will be further up in the tree, and will arrive at the end earlier. They at the same time get into their respective next build, which then can be handed down. If we assume a tree of 6 steps and a weekly cadence that's a maximum of 6 weeks, unless the change is on a fast track.

(Lightbulb) now I get something I didn't see before :) In a situation like this, say you worked something all the way from B to ABCDE, which was painful and took several cycles.

[diagram: a base line advancing A -> AB -> ABC -> ABCDE -> ABCDEF, with branches B, C, D and E (first combined into DE) and F merging into it one after the other.]

Now A is already working on the next big thing, and they could do it faster, but not as a fast track; they might want to catch up. They don't; B is going in the next ABCDEF.

So what I realized is that this is rather layers of upgrades, generations, integration versions, releases, cadence ids, whatever you may call it. In this model, you create new builds in your branch, based on a latest common release, and these are handed downstream, and because this is cadenced, each cycle will produce an integrated result at the end. So it's not two layers (fast track and next generation), but no harm is done, because the number remains reasonably low: the number of layers is the number of cycles a branch is releasing a new base for their downstream, be it fast track (update) or full integration (upgrade). And that number should be limited. If a branch is piling up releases that are never picked up and never make it all the way to the user, something is going wrong.

Btw, if the builds are tagged by 'quality', a power beta tester user can request beta level for some branch. An end user can request release quality for some branch. Developers could also pick alpha level code for testing, before they tag it beta.

S.

-- Susanne Oberhauser SUSE LINUX Products GmbH +49-911-74053-574 Maxfeldstraße 5 Processes and Infrastructure 90409 Nürnberg GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
I sometimes wonder how you are able to get a working release out the door. I suppose I had some idea of what is involved as I worked with IBM's VM for many years but I never realised just how much work it takes. Thanks for managing it! Andy -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Stephan Kulow wrote:
Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
There has been some feedback that in all the enthusiasm of throwing ideas out we still lack a good summary of the goals behind all of those proposals. So here's my attempt to contribute to the mess by writing yet another mail :-) tl;dr skip to "Long story short". According to the numbers the amount of packages in Factory is increasing➀. However, at the same time the download statistics➁ show that the number of actual Factory users does not grow. The former can be explained by successful efforts in the past which had the goal of getting more packages. Regarding the latter the openSUSE team thinks that one of the reasons for the stagnation is that many don't perceive Factory as reliable enough to run it on their development systems. At the same time however more users that actually use Factory are needed to find issues early and fix them quickly. One needs to feel the itch to start scratching. So this is some kind of a chicken and egg problem. Therefore we proposed➀ to create a process that keeps Factory as usable as possible for our contributors (ie zypper dup should be fairly safe at any point in time). This means Factory becomes a more viable option for using it as "rolling distribution". That designation in our opinion includes also regularly releasing ISO image snapshots with working installer so people can get started somewhere. Crucial part to get there is introducing openQA➂ and staging projects➃ in the process. All the work for that cannot be put on the shoulders of just one guy. So we want to have a distributed, balanced and robust development process ie share the workload among teams with different roles. To make sure we get motivated people for those teams the process needs to be set up in a way that promotes mentoring and recognition➄. So what kind of skills do we need there? The increasing amount of packages indicates that the entry level of packaging is covered quite well in the current system. The bottlenecks we have are in the integration. What we need are more people that are up to the challenge of taking care of the "core" packages and the integration of them. So that's the kind of contributors we need to have in mind first with the new Factory. Long story short, the goals in those proposals are - more focus on hardcore package maintainers and distro integrators - ensure Factory is usable at any point in time - have a distributed, balanced and robust development process - promote mentoring and recognition cu Ludwig ➀ http://lists.opensuse.org/opensuse-factory/2013-11/msg00920.html ➁ http://lists.opensuse.org/opensuse-project/2013-11/msg00094.html ➂ http://lists.opensuse.org/opensuse-factory/2013-12/msg00132.html ➃ http://lists.opensuse.org/opensuse-factory/2013-12/msg00044.html ➄ http://lists.opensuse.org/opensuse-project/2013-12/msg00147.html -- (o_ Ludwig Nussel //\ V_/_ http://www.suse.de/ SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Hi, On 12/12/2013 10:59 AM, Ludwig Nussel wrote:
Stephan Kulow wrote:
Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
There has been some feedback that in all the enthusiasm of throwing ideas out we still lack a good summary of the goals behind all of those proposals. So here's my attempt to contribute to the mess by writing yet another mail :-)
tl;dr skip to "Long story short".
According to the numbers the amount of packages in Factory is increasing➀. However, at the same time the download statistics➁ show that the number of actual Factory users does not grow. The former can be explained by successful efforts in the past which had the goal of getting more packages. Regarding the latter the openSUSE team thinks that one of the reasons for the stagnation is that many don't perceive Factory as reliable enough to run it on their development systems.
Agreed and I doubt many will argue with this assessment.
At the same time however more users that actually use Factory are needed to find issues early and fix them quickly.
Yes, also a point that is difficult to argue. This does open the door for one of the questions I am not certain we have explored in sufficient detail. If we find a way to "stabilize" factory, or "make factory more useable all the time", how confident are we that it will do the trick? Meaning how confident are we - that we will get more factory users? - get more early bug reports? - .... Extremely difficult to answer of course, I know, thus maybe we can try a different approach to get closer to a picture that helps us. @ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low level, kernel, bootloader, glibc , X11, things after doing "zypper dup" would you be - inclined - very likely - unlikely - very unlikely to switch to factory on your every day working machine? Those that run factory already should probably not answer this question ;) What is being proposed, in a certain light certainly makes sense, to me at least, and others have expressed similar sentiments. I'd just hate to see anyone spend a ton of effort on a pipe dream, thus I think getting answers to a few simple questions as the one above can help us get a general feel for the lay of the land at least among those that already contribute to openSUSE.
One needs to feel the itch to start scratching. So this is some kind of a chicken and egg problem.
Therefore we proposed➀ to create a process that keeps Factory as usable as possible for our contributors (ie zypper dup should be fairly safe at any point in time). This means Factory becomes a more viable option for using it as "rolling distribution". That designation in our opinion includes also regularly releasing ISO image snapshots with working installer so people can get started somewhere.
Crucial part to get there is introducing openQA➂ and staging projects➃ in the process.
I think there are many open questions about the staging, and maybe that's just me, and I am more than happy to just shut up if it is just me. One of my primary concerns that has not really received an answer, I think, is the number of people that now get the "promotion" to staging tree manager.

- do we have people that work at the level of code that requires staging trees willing to take on the "new", additional work/responsibility?

- are we introducing a new set of arbitrary decision points? For this question I'll briefly revisit some earlier points in the discussion. I asked whether Richard would be willing to take on the new additional work, and Stephan pointed out that gcc has a higher level "priority/interest" over other changes that might require staging trees; Stephan mentioned the "usr merge" effort as another example of a staging tree project. So who decides what project/movement gets the "higher level of interest" stamp, presumably more help from everyone, over the "you are stuck with it" staging branch? I suppose this is where the potential change process comes into play to avoid arbitrary decisions. However, this feels a bit like changes in the factory model forcing other changes, such as a change process, upon everyone. It appears that we should be able to separate those a bit better.

- do we need to think of trying to develop/find a number of people that are interested in being primarily staging tree chaperones?
All the work for that cannot be put on the shoulders of just one guy.
Yes, but that's not a new concern, I remember having a discussion about that at oSC11.
So we want to have a distributed, balanced and robust development process ie share the workload among teams with different roles. To make sure we get motivated people for those teams the process needs to be set up in a way that promotes mentoring and recognition➄.
Well, yes. But I'd say of all things we probably struggle with the recognition part the most, from my perspective at least.
So what kind of skills do we need there? The increasing amount of packages indicates that the entry level of packaging is covered quite well in the current system. The bottlenecks we have are in the integration. What we need are more people that are up to the challenge of taking care of the "core" packages and the integration of them. So that's the kind of contributors we need to have in mind first with the new Factory.
Long story short, the goals in those proposals are - more focus on hardcore package maintainers and distro integrators - ensure Factory is usable at any point in time - have a distributed, balanced and robust development process - promote mentoring and recognition
I think there is one important point that has received some attention, but possibly not the attention it should receive. That is the consequences of these ideas/proposals.

For 12.3 and 13.1 the openSUSE Team as a combined force working in unison has done a lot to get the release out the door. I think it is generally accepted that the quality has gone up over previous releases. One must assume that the openSUSE Team is willing to work on implementing any of these ideas, and in the openQA message there was already a basic time commitment that that is going to happen. Since the openSUSE team was not sitting around twiddling their thumbs, it implies that other work has to be redistributed to free up time for openSUSE Team members to work on the "new and exciting stuff". I am making the assumption that the rest of the development community cannot simply pick up the "new and exciting" stuff either. Anyone can jump in of course at any time and contribute, but all of this is a lot of work no matter how the work is distributed, and it is a net gain of work.

That in the end is what we probably need to pay attention to. There is a net gain of work to implement new ideas in the way we develop the distribution, with a pretty constant number of fingers on the keyboard and a possibly reasonably constant number of hours. Our hours are already filled; how are we going to handle this? Something has to give. Do we have a list of things we can drop, i.e. that were done but are more nice to have than absolutely necessary? Can we do with less effort in testing at the end stages of the release cycle? Will a more everyday usable factory automatically lead to a reduced effort of testing in the end game? Is there a proposal coming that will question the 8 month release cycle? How would a longer release cycle benefit the elusive "end user" that cannot possibly be captured in one group? We have already heard from some people that run servers and say "longer is better", but I am certain we can find just as many "end users" that say 8 months is just about right. Can we be happy with a potential drop in quality to levels achieved prior to 12.3, when the release was produced with a different contribution mix?

I can keep going with maybe another 10 or more questions I think we should probably discuss and that are directly related to the changes in the development model. I think we have a long way to go to figure this out.

Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert - 14:18 12.12.13 wrote:
... @ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low level, kernel, bootloader, glibc , X11, things after doing "zypper dup" would you be
- inclined - very likely - unlikely - very unlikely
to switch to factory on your every day working machine?
Those that run factory already should probably not answer this question ;)
Well, I run factory on my less used desktop and I run it on my notebook since some beta or something when it should be calm enough. If we stabilize it, I'll run it on my notebook all the time and I'll migrate my server to Factory :-)
... One of my primary concerns that has not really received an answer, I think, is the number of people that now get the "promotion" to staging tree manager.
- do we have people that work at the level of code that requires staging trees willing to take on the "new", additional work/responsibility?
Well, people have to do it nowadays anyway; they just break Factory and have to fix it in Factory while Factory is broken, whereas now the breakage will move into a staging tree and get merged once it's fixed.
- are we introducing a new set of arbitrary decision points?
Kinda. Currently reviewers have rpmlint and several bots to help them with review; in the new workflow, they will have two additional optional "helpers" - staging projects and some basic QA.
- do we need to think of trying to develop/find a number of people that are interested in being primarily staging tree chaperones?
I do not believe that we should have a special role/people for staging tree management. It should be managed by whoever is the maintainer of the package that this project was created for. This person has the most knowledge to see/decide whether he has a bug in his package, or whether the problem lies with the packages depending on him. But there will definitely be some notification actions needed in OBS to help the manager...
So we want to have a distributed, balanced and robust development process ie share the workload among teams with different roles. To make sure we get motivated people for those teams the process needs to be set up in a way that promotes mentoring and recognition➄.
Well, yes. But I'd say of all things we probably struggle with the recognition part the most, from my perspective at least.
Well, there is this Karma thing to do the recognition...
... Will a more everyday usable factory automatically lead to a reduced effort of testing in the end game?
I believe yes. Not only testing, but also integration. Nowadays coolo has to hunt everybody down to fix their stuff so he can release a milestone. Getting a milestone to compile and produce a DVD is actually a lot of work. And if that were done in Factory during development continuously... -- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
If it's usable I'll devote one desktop machine to running Factory, reporting bugs and usability issues. That would be one new user. On Thu, Dec 12, 2013 at 11:42 AM, Michal Hrusecky <mhrusecky@suse.cz> wrote:
Robert Schweikert - 14:18 12.12.13 wrote:
... @ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low level, kernel, bootloader, glibc , X11, things after doing "zypper dup" would you be
- inclined - very likely - unlikely - very unlikely
to switch to factory on your every day working machine?
Those that run factory already should probably not answer this question ;)
Well, I run factory on my less used desktop and I run it on my notebook since some beta or something when it should be calm enough. If we stabilize it, I'll run it on my notebook all the time and I'll migrate my server to Factory :-)
... One of my primary concerns that has not really received an answer, I think, is the number of people that now get the "promotion" to staging tree manager.
- do we have people that work at the level of code that requires staging trees willing to take on the "new", additional work/responsibility?
Well, people have to do it nowadays anyway; they just break Factory and have to fix it in Factory while Factory is broken, whereas now the breakage will move into a staging tree and get merged once it's fixed.
- are we introducing a new set of arbitrary decision points?
Kinda. Currently reviewers have rpmlint and several bots to help them with review; in the new workflow, they will have two additional optional "helpers" - staging projects and some basic QA.
- do we need to think of trying to develop/find a number of people that are interested in being primarily staging tree chaperones?
I do not believe that we should have a special role/people for staging tree management. It should be managed by whoever is the maintainer of the package that this project was created for. This person has the most knowledge to see/decide whether he has a bug in his package, or whether the problem lies with the packages depending on him. But there will definitely be some notification actions needed in OBS to help the manager...
So we want to have a distributed, balanced and robust development process ie share the workload among teams with different roles. To make sure we get motivated people for those teams the process needs to be set up in a way that promotes mentoring and recognition➄.
Well, yes. But I'd say of all things we probably struggle with the recognition part the most, from my perspective at least.
Well, there is this Karma thing to do the recognition...
... Will a more everyday usable factory automatically lead to a reduced effort of testing in the end game?
I believe yes. Not only testing, but also integration. Nowadays coolo has to hunt everybody down to fix their stuff so he can release a milestone. Getting a milestone to compile and produce a DVD is actually a lot of work. And if that were done in Factory during development continuously...
-- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
-- ____________ Apply appropriate technology. Use what works without prejudice. Steven L Hess ARS KC6KGE DM05gd22 Owner Flex-1500 and Flex-3000, FT-857D, FT-817ND, FT-450 openSUSE Linux 12.3 KDE Known as FlameBait and The Sock Puppet of Doom. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/12/2013 02:42 PM, Michal Hrusecky wrote:
Robert Schweikert - 14:18 12.12.13 wrote:
... @ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low level, kernel, bootloader, glibc , X11, things after doing "zypper dup" would you be
- inclined - very likely - unlikely - very unlikely
to switch to factory on your every day working machine?
Those that run factory already should probably not answer this question ;)
Well, I run factory on my less used desktop and I run it on my notebook since some beta or something when it should be calm enough. If we stabilize it, I'll run it on my notebook all the time and I'll migrate my server to Factory :-)
... One of my primary concerns that has not really received an answer, I think, is the number of people that now get the "promotion" to staging tree manager.
- do we have people that work at the level of code that requires staging trees willing to take on the "new", additional work/responsibility?
Well, people have to do it nowadays anyway; they just break Factory and have to fix it in Factory while Factory is broken, whereas now the breakage will move into a staging tree and get merged once it's fixed.
Sorry, for not expressing this correctly. Yes, people that break factory have to help fix factory. But there are more eyes on factory, more help and more "pressure" than we will get in staging tree XYZ. Thus the incentive for "everyone" (and I use everyone very loosely) to jump in and help is much greater as compared to a staging tree. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thursday 2013-12-12 21:03, Robert Schweikert wrote:
Well, people have to do it nowadays anyway; they just break Factory and have to fix it in Factory while Factory is broken, whereas now the breakage will move into a staging tree and get merged once it's fixed.
Sorry, for not expressing this correctly. Yes, people that break factory have to help fix factory. But there are more eyes on factory, more help and more "pressure" than we will get in staging tree XYZ. Thus the incentive for "everyone" (and I use everyone very loosely) to jump in and help is much greater as compared to a staging tree.
Staging and Factory have two different purposes, as I perceive it, and as such, they complement each other. Staging is merely a software aid for the developer(s) to see how something interacts *at build time* with other packages; to discover what upstream-neglected tarballs will fall apart on an automake update. But for anything that was not already run during the build process — such as UIs — you need users, and you can only get them in Factory. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am 12.12.2013 21:40, schrieb Jan Engelhardt:
But for anything that was not already run during the build process — such as UIs — you need users, and you can only get them in Factory.
Well, parts of the plan is to integrate openQA in the build process and the results there don't need users. Of course you still need users to find the *real* problems, but you can remove most of the problems that these users struggle with. Uninstallable updates, obviously broken boots, ... Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, Dec 12, 2013 at 2:18 PM, Robert Schweikert <rjschwei@suse.com> wrote:
@ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low level, kernel, bootloader, glibc , X11, things after doing "zypper dup" would you be
- inclined - very likely - unlikely - very unlikely
to switch to factory on your every day working machine?
I currently run factory in a VM all the time, but explicitly for testing my packages, so it is lightly used and a couple of months can go by without me firing it up. I currently migrate my desktop to factory at either the beta or RC1 stage, depending on what is showing up as the most annoying bugs.

Even with all these staging projects, will there be a need for a process like the kernel has, where massive submits are allowed/encouraged for a period of time, then a stabilizing period would start? After all, just because things build doesn't mean they run well. Maybe SRs of major sub-systems could be restricted to the first 7 days of each month. In that case, I would avoid updating at the start of each month.

If I had factory installed and if periods of turmoil and stability could be defined, I would want to wait to do updates until a stability period started and a green light came from someone to say the major new functions are in and things seem to be working well. Thus if the only aspect of this that got pushed was staging projects, I would still avoid factory with the assumption that there has to be turmoil at certain phases of the development cycle. If a well-defined way existed for me to avoid those periods of turmoil I would give factory a shot.

fyi: my preference would be for the zypper up process to have a mechanism to detect periods of turmoil and ask me explicitly if I wanted to update during those periods.

Greg -- Greg Freemyer -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
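A minimal sketch of the kind of pre-update check described here; the status URL and its JSON format are purely hypothetical, nothing like it exists today:

    # Hypothetical helper: ask a (made up) project status page whether Factory is
    # in a declared "turmoil" phase before running `zypper dup`.
    import json
    import subprocess
    import urllib.request

    STATUS_URL = "https://example.org/factory-status.json"   # invented for this sketch

    def factory_in_turmoil():
        with urllib.request.urlopen(STATUS_URL) as response:
            status = json.load(response)
        return status.get("phase") == "turmoil"

    if factory_in_turmoil():
        answer = input("Factory is in a turmoil phase, update anyway? [y/N] ")
        if answer.lower() != "y":
            raise SystemExit("skipping update")
    subprocess.run(["zypper", "dup"], check=True)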
On Thursday 2013-12-12 20:18, Robert Schweikert wrote:
@ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low level, kernel, bootloader, glibc , X11, things after doing "zypper dup" would you be
- inclined - very likely - unlikely - very unlikely
to switch to factory on your every day working machine?
No (very unlikely). I favor reasonably current software, but not a continuous update stream that Factory or Tumbleweed has, as running zypper often would spend my time without bringing me an immediate benefit — the set of {packages with new versions} that I am interested in is bound (=limited). -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Am 12.12.2013 20:18, schrieb Robert Schweikert:
I think we have a long way to go to figure this out.
Hi Robert,

You appear very stressed. I wonder why - it's Christmas after all :)

Let's give this whole thing a more personal touch - instead of all that 'we's: I want to improve our development process, because *I* feel the pain with it. You don't and that's perfectly fine - but I spend every morning (this includes weekends) reviewing what hell broke loose this time. I implemented several measures and convinced my colleagues to implement even more. So our team was busy the last year or so:

- openqa improvements
- factory-auto reviewer
- groups of requests
- repo checker
- cycle detector
- Factory:Rebuild syncer
- automatic setup of staging projects
- webui improvements for reviews
- legal-auto improvements
- faster factory status page
- working mail notifications for OBS
...

But OBS lacks some features to get a good grip on staging with so many packages, so I experimented with fixed source rings (openSUSE:Factory:Build is the bootstrap cycle, openSUSE:Factory:Core is the minimal+X11 DVD, I'm not yet sure what the next ring will be, most likely KDE+GNOME live cd combined).

And it's *really* hard to do that as a side project - there is always something coming in the way, so I kind of hijacked Agustin's "2016" thingy to make the team aware about the basic problems of our current development process and we sat together to brain storm. Unfortunately we didn't even have the time to document the brain storming in a way fitting to a mailing list (I'm sure we still have the picture of the white board if you're interested in *that* form of getting informed :).

So the whole thing sat around a bit and when we discussed it again, we noticed some gaps and even more ideas. Agustin then made matters worse by insisting on having these ideas presented as part of a long term strategy - which it's not really. If you look closer, all we're proposing are 3 things:

- further improvements to openQA (that's a pretty small tool if you look closer, we can easily estimate the effort there - even though we can be wrong)
- taking staging projects seriously including QA (that will require a lot of experiments and as previously discussed it might be easy or a total failure - no one knows)
- talk about an improved trust system instead of treating all packages and all contributors the same.

The discussion so far has shown that there is more to it than you believe when you sit and brain storm, but that's perfectly fine, isn't it?

Will this have an impact on the next release? No idea! Did I know for certain we would be able to release 13.1? No, I didn't. We still managed to. As I said in another thread: I'll happily ditch 13.2 or the 8 months cycle if we find something better - and I do have my preferences there, but I don't think we even need to discuss it right now.

What I ask you (and this is Stephan not the openSUSE team): give us some credit. Let us take some risks - and don't expect that we have all answers right now.

I hear everyone talking about their expectations from the openSUSE team, but I don't see too many asking where they can help. So I guess I have a question too: Why is that?

Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12/12/2013 04:02 PM, Stephan Kulow wrote:
On 12.12.2013 20:18, Robert Schweikert wrote:
I think we have long way to go to figure this out.
Hi Robert,
You appear very stressed. I wonder why - it's Christmas after all :)
Let's give this whole thing a more personal touch - instead of all that 'we's:
I want to improve our development process, because *I* feel the pain with it.
Ack, and we discussed this probably 2 years ago that something will have to change.
You don't, and that's perfectly fine - but I spend every morning (this includes weekends) reviewing what hell broke loose this time. I implemented several measures and convinced my colleagues to implement even more. So our team was busy the last year or so:
- openqa improvements - factory-auto reviewer - groups of requests - repo checker - cycle detector - Factory:Rebuild syncer - automatic setup of staging projects - webui improvements for reviews - legal-auto improvements - faster factory status page - working mail notifications for OBS ...
All great stuff no doubt about it and this makes things better for everyone.
But OBS lacks some features to get a good grip on staging with so many packages, so I experimented with fixed source rings (openSUSE:Factory:Build is the bootstrap cycle, openSUSE:Factory:Core is the minimal+X11 DVD, I'm not yet sure what the next ring will be, most likely KDE+GNOME live cd combined).
And it's *really* hard to do that as a side project - there is always something getting in the way, so I kind of hijacked Agustin's "2016" thingy to make the team aware of the basic problems of our current development process, and we sat together to brainstorm. Unfortunately we didn't even have the time to document the brainstorming in a way fitting for a mailing list (I'm sure we still have the picture of the whiteboard if you're interested in *that* form of getting informed :).
So the whole thing sat around a bit and when we discussed it again, we noticed some gaps and even more ideas. Agustin then made matters worse by insisting on having these ideas presented as part of a long term strategy - which it's not really. If you look closer, all we're proposing are 3 things:
- further improvements to openQA (that's a pretty small tool if you look closer; we can easily estimate the effort there - even though we can be wrong) - taking staging projects seriously, including QA (that will require a lot of experiments and, as previously discussed, it might be easy or a total failure - no one knows) - talking about an improved trust system instead of treating all packages and all contributors the same. The discussion so far has shown that there is more to it than you believe when you sit and brainstorm, but that's perfectly fine, isn't it?
Will this have an impact on the next release? No idea! Did I know for certain we would be able to release 13.1? No, I didn't. We still managed to.
As I said in another thread: I'll happily ditch 13.2 or the 8 months cycle if we find something better - and I do have my preferences there, but I don't think we even need to discuss it right now.
What I ask you (and this is Stephan not the openSUSE team): give us some credit.
You have all the credit in the world from my side. As I said (in this thread, I think): all I can do is offer up my experience with a staging model. I have lived within it at the very lowest levels of application code and tools development. I will not be able to help with any implementation in OBS or other parts of the system. In the end, those that do the work have to make the decision. The distribution and its development certainly do not hinge on any packages I maintain.
Let us take some risks - and don't expect that we have all answers right now.
Sorry if it came across that way. I do not expect you, personally or as the openSUSE Team, to have all the answers. But we should at least be able to work together on a list of things that highlight potential pitfalls, so we avoid storming off, implementing things, and ending up with an "oh shit" moment. There'll be plenty of those no matter how much we talk - that's a given. But there are also plenty that can be avoided.
I hear everyone talking about their expectations from the openSUSE team, but I don't see too many asking where they can help. So I guess, I have a question too: Why is that?
Let me hazard a few guesses ;)
- everyone is extremely busy already
- people are uncertain where they can potentially help
- the intermixing of the rather lengthy discussions of tangentially related things has a negative effect on engagement

Anyway, if you or the team have thought about the concerns I have raised, even if the answers are not complete, it shouldn't be that hard to just acknowledge the risks and the potential pitfalls. If this has happened in one of the responses and I missed it, I apologize. I do not expect perfection or answers to all questions. All I can hope for is that the experience I am trying to convey/share will not fall on deaf ears. As I said, I am so hopelessly overloaded that contributing any code to this effort is currently not even thinkable. Those that do the work decide. Later, Robert -- Robert Schweikert MAY THE SOURCE BE WITH YOU SUSE-IBM Software Integration Center LINUX Tech Lead Public Cloud Architect rjschwei@suse.com rschweik@ca.ibm.com 781-464-8147 -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
* Stephan Kulow <coolo@suse.de> [12-12-13 16:03]: [...]
What I ask you (and this is Stephan not the openSUSE team): give us some credit. Let us take some risks - and don't expect that we have all answers right now. I hear everyone talking about their expectations from the openSUSE team, but I don't see too many asking where they can help. So I guess, I have a question too: Why is that?
I hazard that the problem is fit. Where do I fit? I can script a little, and I test what I use and make bug reports. I know a lot, but that lot is a little about *many* things, and I am a master of none. I run Tumbleweed and have *many* add-on repos for things not "main-stream" Tumbleweed. I have enough experience to get into trouble, and usually find my way out, but occasionally need help. The help is *usually* forthcoming and I usually learn a little more. Provide me tasks "where I fit" and I will help as time permits - and I have a lot of time when my grandson is between soccer seasons. I still see "fit" as the problem, especially for "non-programmer/packager" types. Thank you all for your many and continuing contributions. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri http://wahoo.no-ip.org Photo Album: http://wahoo.no-ip.org/gallery2 Registered Linux User #207535 @ http://linuxcounter.net -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12.12.2013 23:17, Patrick Shanahan wrote:
* Stephan Kulow <coolo@suse.de> [12-12-13 16:03]: [...]
What I ask you (and this is Stephan not the openSUSE team): give us some credit. Let us take some risks - and don't expect that we have all answers right now. I hear everyone talking about their expectations from the openSUSE team, but I don't see too many asking where they can help. So I guess, I have a question too: Why is that?
I hazard that the problem is fit. Where do I fit? I can script a little, I test what I use and make bug reports. I know a lot, but that lot is a little about *many* things, and am master of none. I run Tumbleweed and have *many* add-on repos for things not "main-stream" Tumbleweed. I have enough experience to get into trouble, and usually find my way out but occasionally need help. The help is *usually* forthcoming and I usually learn a little more.
Provide me tasks "where I fit" and I will help as time permits and I have a lot of time when my grandson is between soccer seasons.
I still see "fit" as the problem, especially for "non-programmer/packager" types.
Thank-you all for your many and continuing contributions.
See Robert? We already have the first "staging champion" :) Thanks Patrick. I have nothing to give you right away, but it's good to know there are indeed helping hands. Greetings, Stephan -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Stephan Kulow wrote:
On 12.12.2013 23:17, Patrick Shanahan wrote:
* Stephan Kulow <coolo@suse.de> [12-12-13 16:03]: [...]
What I ask you (and this is Stephan not the openSUSE team): give us some credit. Let us take some risks - and don't expect that we have all answers right now. I hear everyone talking about their expectations from the openSUSE team, but I don't see too many asking where they can help. So I guess, I have a question too: Why is that?
I hazard that the problem is fit. Where do I fit? I can script a little, I test what I use and make bug reports. I know a lot, but that lot is a little about *many* things, and am master of none. I run Tumbleweed and have *many* add-on repos for things not "main-stream" Tumbleweed. I have enough experience to get into trouble, and usually find my way out but occasionally need help. The help is *usually* forthcoming and I usually learn a little more.
Provide me tasks "where I fit" and I will help as time permits and I have a lot of time when my grandson is between soccer seasons.
I still see "fit" as the problem, especially for "non-programmer/packager" types.
Thank-you all for your many and continuing contributions.
See Robert? We already have the first "staging champion" :)
Thanks Patrick. I have nothing to give you right away, but it's good to know there are indeed helping hands.
Once we publish a list of jobs/tasks that need doing, I'm certain we will quickly find that there are plenty of helping hands available. -- Per Jessen, Zürich (-1.8°C) http://www.hostsuisse.com/ - dedicated server rental in Switzerland. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Hi, engagement should be part of development. I wrote elsewhere that, from my experience managing development projects, mixing people who work full time with others who contribute only a few hours is very challenging. Making this approach efficient requires quite some effort from the core team as well. This effort cannot be a side task for the full-time developers; it needs to be a full-time task for at least one of them. I am determined to work on this area, so we can engage people like you in the development of the changes. It won't be easy, but we will work on it. It can make a difference. On Friday 13 December 2013 20:39:41 Per Jessen wrote:
Stephan Kulow wrote:
On 12.12.2013 23:17, Patrick Shanahan wrote:
* Stephan Kulow <coolo@suse.de> [12-12-13 16:03]: [...]
What I ask you (and this is Stephan not the openSUSE team): give us some credit. Let us take some risks - and don't expect that we have all answers right now. I hear everyone talking about their expectations from the openSUSE team, but I don't see too many asking where they can help. So I guess, I have a question too: Why is that?
I hazard that the problem is fit. Where do I fit? I can script a little, I test what I use and make bug reports. I know a lot, but that lot is a little about *many* things, and am master of none. I run Tumbleweed and have *many* add-on repos for things not "main-stream" Tumbleweed. I have enough experience to get into trouble, and usually find my way out but occasionally need help. The help is *usually* forthcoming and I usually learn a little more.
Provide me tasks "where I fit" and I will help as time permits and I have a lot of time when my grandson is between soccer seasons.
I still see "fit" as the problem, especially for "non-programmer/packager" types.
Thank-you all for your many and continuing contributions.
See Robert? We already have the first "staging champion" :)
Thanks Patrick. I have nothing to give you right away, but it's good to know there are indeed helping hands.
Once we publish a list of jobs/tasks that need doing, I'm certain we will quickly find that there are plenty of helping hands available.
-- Agustin Benito Bethencourt openSUSE Team Lead at SUSE abebe@suse.com -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert wrote:
If we find a way to "stabilize" factory, or "make factory more useable all the time", how confident are we that it will do the trick? Meaning how confident are we - that we will get more factory users? - get more early bug reports? - ....
Extremely difficult to answer of course, I know, thus maybe we can try a different approach to get closer to a picture that helps us.
@ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low-level things (kernel, bootloader, glibc, X11) after doing "zypper dup", would you be
- inclined
- very likely
- unlikely
- very unlikely
to switch to factory on your every day working machine?
Unlikely - my office tools/applications simply have to remain stable and in working order. -- Per Jessen, Zürich (-0.8°C) http://www.hostsuisse.com/ - dedicated server rental in Switzerland. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
Robert Schweikert wrote:
On 12/12/2013 10:59 AM, Ludwig Nussel wrote:
Stephan Kulow wrote: At the same time, however, more users that actually use Factory are needed to find issues early and fix them quickly.
Yes, also a point that is difficult to argue. [...] Extremely difficult to answer of course, I know, thus maybe we can try a different approach to get closer to a picture that helps us.
@ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low-level things (kernel, bootloader, glibc, X11) after doing "zypper dup", would you be
- inclined
- very likely
- unlikely
- very unlikely
to switch to factory on your every day working machine?
Hard to answer at this point too. Depends on whether we get the right balance between having the latest and greatest hot new features quickly and not breaking too badly.
I think there are many open questions about the staging, and maybe that's just me, and I am more than happy to just shut up if it is just me.
It's not just you. I also have my doubts, but at the same time I think it's worth a try. Especially since we now have the chance to not introduce this as a stand-alone feature but rather back it up with other technical as well as social actions.
One of my primary concerns that has not really received an answer, I think, is of the number of people that now get the "promotion" to staging tree manager.
As Michal already wrote, the current idea is to make the packager whose submission originally triggered the staging project responsible. If it turns out that some people are interested in helping out in staging projects no matter what topic they are about, that would be even better of course :-)
- are we introducing a new set of arbitrary decision points? For this question I'll briefly revisit some earlier points in the discussion. I asked whether Richard would be willing to take on the new additional work, and Stephan pointed out that gcc has a higher level of "priority/interest" than other changes that might require staging trees; Stephan mentioned the "usr merge" effort as another example of a staging tree project. So who decides which project/movement gets the "higher level of interest" stamp, and presumably more help from everyone, over the "you are stuck with it" staging branch? I suppose this is where
I guess it's kind of natural. If you get stuck with your staging branch because no one wants to help, then maybe the change was not that great after all.
the potential change process comes into place to avoid arbitrary decisions. However, this feels a bit like changes in the factory model forcing other changes, such as a change process, upon everyone. It appears that we should be able to separate those a bit better.
I'm not sure I can follow you here.
All the work for that cannot be put on the shoulders of just one guy.
Yes, but that's not a new concern, I remember having a discussion about that at oSC11.
Which actions were taken back then to address the concern?
For 12.3 and 13.1 the openSUSE Team as a combined force working in unison has done a lot to get the release out the door. I think it is generally accepted that the quality has gone up over previous releases.
I take that as a compliment :-)
Do we have a list of things we can drop, i.e. things that were done but are more nice-to-have than absolutely necessary?
What is absolutely necessary? progress.opensuse.org lists most of the not-so-technical tasks at least. http://en.opensuse.org/openSUSE:Public_release_action_plan_12.2 has lists of things we started with before we had progress, if you want to look at something simpler. Some tasks were already dropped for 13.1, for example taking care of the manuals. On the technical side there's for sure also room for simplification. Do we need 14 ISO images, for example? How about dropping the Live and Promo images? i586? Rescue? Do we need 6 "desktops"? Full hard disk encryption?
Can we do with less effort in testing at the end stages of the release cycle?
Will a more everyday usable factory automatically lead to a reduced effort of testing in the end game?
Depends. Not having to deal with really basic problems like a crashing installer can mean less effort, or more focus on polishing with the same resources.
Is there a proposal coming that will question the 8-month release cycle? How would a longer release cycle benefit the elusive "end user" that cannot possibly be captured in one group? We have already heard from some people that run servers and say "longer is better", but I am certain we can find just as many "end users" that say 8 months is just about right.
The success of Tumbleweed and Evergreen at least suggests that openSUSE releases leave things to be desired on both ends of the spectrum. cu Ludwig -- (o_ Ludwig Nussel //\ V_/_ http://www.suse.de/ SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg) -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thursday 12 December 2013 14.18:11 Robert Schweikert wrote:
@ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low-level things (kernel, bootloader, glibc, X11) after doing "zypper dup", would you be
- inclined
-- Bruno Friedmann openSUSE Member GPG KEY : D5C9B751C4653227 irc: tigerfoot -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 12.12.2013 20:18, Robert Schweikert wrote:
@ ALL TECHNICAL CONTRIBUTORS/DEVELOPERS (sorry for shouting) - If we collectively manage to "calm" factory down to the point where it will always boot and you do not have to fiddle with the very low-level things (kernel, bootloader, glibc, X11) after doing "zypper dup", would you be
- inclined
- very likely
- unlikely
- very unlikely
to switch to factory on your every day working machine?
I would be inclined to look for something not "calmed down", as I'm very happy with factory on my main machine as it is today. Where will all the fun be if it always just works? :-)
Those that run factory already should probably not answer this question ;)
Oh, come on. We should really still provide something that resembles today's factory for the brave! Actually, for me, running factory (today) is less painful than upgrading twice a year: I have small breakages every now and then, but seldom really big stuff, and I have to fix one thing per week. After updating my server from 12.3 to 13.1, I had to fix up lots of small things, all at once, to get it working as intended again. And I won't even talk about updating my kids' machines (they use GNOME -- nothing anyone would ever even consider when running Factory :-)), which is always a hell of a task to get running again. I'll probably switch them to Factory / XFCE as well. Best regards -- Stefan Seyfried "If your lighter runs out of fluid or flint and stops making fire, and you can't be bothered to figure out about lighter fluid or flint, that is not Zippo's fault." -- bkw -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On Thu, 2013-12-19 at 14:45 +0100, Stefan Seyfried wrote:
And I won't even talk about updating my kids' machines (they use GNOME -- nothing anyone ever would even consider when running Factory :-)),
Why? :) I've been running Factory/GNOME since 2010 without ever re-installing the system from scratch on this very machine. Dominique -- Dimstar / Dominique Leuenberger <dimstar@opensuse.org> -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 19.12.2013 22:10, Dimstar / Dominique Leuenberger wrote:
On Thu, 2013-12-19 at 14:45 +0100, Stefan Seyfried wrote:
And I won't even talk about updating my kids' machines (they use GNOME -- nothing anyone ever would even consider when running Factory :-)),
Why? :) I've been running Factory/GNOME since 2010 without ever re-installing the system from scratch on this very machine.
GNOME updates have been painful for me: Stuff that worked before is no longer there in the next version. Extensions no longer work and/or are not available for the current version. You can no longer suspend the system without somebody explaining how it works, and there is no extension to bring the "suspend system" button back. My kids are really pissed about these constant changes, and they don't use Factory (yet :-) That's the "update to next release" problem. While I was running GNOME on Factory (for a pretty short time), it just broke too often (and even though I liked some of the ideas, the complete package was not my cup of tea). What I like about XFCE is that there apparently are just enough developers to keep it working, and that apparently nobody is trying to implement his "vision" :-) But it is probably just a preference: I prefer to run the kernel of the day and a few other more-recent-than-Factory packages, mostly from areas where I have some experience. If these things break, I'm prepared and can handle that. You prefer running the latest unstable GNOME -- if it breaks, you can handle that. To each his own, I suppose :-) Best regards, seife -- Stefan Seyfried "If your lighter runs out of fluid or flint and stops making fire, and you can't be bothered to figure out about lighter fluid or flint, that is not Zippo's fault." -- bkw -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
On 11/28/2013 07:49 AM, Stephan Kulow wrote: <snip>
Integrating these to make a good distribution is real work. And one of my favourite songs (in that context) goes:
No one said it would be easy But no one said it'd be this hard No one said it would be easy No one thought we'd come this far
<snip>
One thing I tried was to setup "rings" to help easing the very painful staging projects (with 6800 packages, every staging project as we use them is a monster). That experiment has shown rings to be worthy way to check, but they won't work as I thought with the OBS as it is. We need to think bigger. So we tried to come up with an idea on how to improve the factory development process that includes a more clever way to utilize staging projects and openQA.
As this development process is a bit hard to explain in email, Alberto and Ancor prepared an interactive diagram:
https://progress.opensuse.org/workflow/factory-proposal.html
Let me add a few observations that may give you a place to look for ideas. Essentially, trying to keep factory working at all times presents the classic rolling-release challenge: "How do you make the big changes - systemd, boot loader, gcc, LSB migration, etc. - while keeping a working factory distribution and minimizing the configuration and setup impact on users?" If you view factory from a rolling-release standpoint, modeling staging around the lessons others have learned providing a rolling release may simplify what you are trying to accomplish here. Yes, you have release targets that will become the next numbered openSUSE, but factory continues to roll on and should remain a usable and functional set of packages throughout the development cycle.

One distro that has done a very good job at making the rolling release work is Archlinux. In implementing their rolling release they employ two basic collections of packages: a "testing" repository, where packages incorporate the latest stable source from all vendors, and then ultimately move to "core" and become the Archlinux packages. Their "testing" is essentially what you envision as "staging", and "core" is what you have as factory. The biggest difference in their approach is a more distributed, hub-and-spoke package maintainer setup, with individual maintainers responsible for large logical groups of packages that work their way through "testing" into "core". This lets them identify the sets of packages that present the biggest challenge and impact when moving from "testing" to "core" - just as you envision "groups" of packages remaining in "staging" until they can move to factory.

I can't give a dissertation on the exact assignment of packages to maintainers, the distribution of responsibilities, etc., but what I can tell you is that they do (and have done well for a while) exactly what you are discussing doing for factory. Rather than reinventing the wheel, it may well be worth seeing what can be learned there to save some trial and error here and to reduce both the number of times factory gets "broken" and the configuration/setup jolts to factory users. (A toy sketch of this two-repository promotion idea follows at the end of this message.) <snip>
There are several problems with the current "everything through devel project" approach we need to solve. Our ideas are just ideas, but I had several discussions in various places and nobody offered a better idea. So we really would like to start with it and I would like to hear your concerns so they can be part of the final solution.
We have more ideas, but we can only achieve that if we get help, so let me finish with another favourite of mine
What would you think if I sang out of tune? Would you stand up and walk out on me? Lend me your ears and I'll sing you a song And I'll try not to sing out of key Oh, I get by with a little help from my friends
Greetings, Stephan
Sheer poetry... -- David C. Rankin, J.D.,P.E. -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
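To make the "grouped" movement described above a bit more concrete, here is a toy model in Python - it is neither Arch's tooling nor anything that exists in OBS, and the package names are made up; it only encodes the promotion rule: a group of packages stays in "testing" until every member is ready, then the whole group moves to "core" at once.

# Toy model only (hypothetical names, no real repositories involved):
# packages enter "testing" individually, but are promoted to "core" in groups.
from dataclasses import dataclass, field

@dataclass
class Repos:
    testing: dict = field(default_factory=dict)   # package name -> ready?
    core: set = field(default_factory=set)

    def submit(self, package):
        """A new or updated package always enters testing first, unproven."""
        self.testing[package] = False

    def mark_ready(self, package):
        """Record that a package has passed whatever checks apply."""
        self.testing[package] = True

    def promote_group(self, group):
        """Move a whole group testing -> core, but only if every member is ready."""
        if not all(self.testing.get(p, False) for p in group):
            return False            # the group keeps cooking in testing
        for p in group:
            del self.testing[p]
            self.core.add(p)
        return True

repos = Repos()
for pkg in ("automake", "gnome-shell", "gnome-session"):
    repos.submit(pkg)
repos.mark_ready("automake")
print(repos.promote_group({"automake", "gnome-shell"}))   # False: gnome-shell not ready yet

In the staging-project variant discussed in this thread, "ready" would presumably be decided by openQA and the repo checker rather than by hand.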
David C. Rankin - 15:59 12.12.13 wrote:
...
One distro that has done a very good job at making the rolling release work is Archlinux. In implementing their rolling release they employ two basic collections of packages. The use a "testing" repository where packages incorporate the latest stable source from all vendors and then ultimately move to "core" and become the Archlinux packages. Their "testing" is essentially what you envision as "staging" and "core" is what you have as factory.
Having staging projects goes one step further. There are a few aspects that speak for staging projects instead of yet another testing layer, IMHO:

a) A staging project has a limited scope, so when something breaks, it's much easier to figure out what caused it. It does waste some resources, I'll admit that.

b) People will not test a separate "testing" layer and will concentrate on Factory.

c) And once you decide to move packages from testing to stable, you have the same issue as when you want to move packages from a devel project to Factory. Unless you move absolutely everything, there will be some dependencies that you forgot in testing, and things will not work the same in stable...

-- Michal HRUSECKY SUSE LINUX, s.r.o. openSUSE Team Lihovarska 1060/12 PGP 0xFED656F6 19000 Praha 9 mhrusecky[at]suse.cz Czech Republic http://michal.hrusecky.net http://www.suse.cz -- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
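Point c) is essentially a dependency-closure problem: whatever subset you decide to promote, you also have to promote everything it was built and tested against that is still sitting in testing, or the result behaves differently in stable. A minimal sketch with hypothetical package names (not real metadata):

# Illustration only: compute which packages must move together.
def closure(packages, deps):
    """Return the given packages plus everything they transitively depend on."""
    todo, seen = list(packages), set(packages)
    while todo:
        for dep in deps.get(todo.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                todo.append(dep)
    return seen

# Hypothetical: gnome-shell in testing was built against newer mutter and glib2.
deps = {"gnome-shell": ["mutter", "glib2"], "mutter": ["glib2"]}
print(closure({"gnome-shell"}, deps))   # gnome-shell, mutter, glib2 (order may vary)
# Promoting only gnome-shell would leave mutter and glib2 behind in testing.

A staging project sidesteps the problem by making the set that was built and tested together also the set that is submitted together.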
participants (32)
- Adrian Schröter
- agustin benito bethencourt
- Alberto Planas Dominguez
- Andreas Schwab
- Bernhard M. Wiedemann
- Bruno Friedmann
- Christian Boltz
- David C. Rankin
- Dimstar / Dominique Leuenberger
- Dominique Leuenberger a.k.a. Dimstar
- gm1mqe@aol.com
- Greg Freemyer
- Jan Engelhardt
- Jos Poortvliet
- Josef Reidinger
- Ludwig Nussel
- Michael Schroeder
- Michal Hrusecky
- Olaf Hering
- Patrick Shanahan
- Per Jessen
- Richard Biener
- Richard Brown
- Robert Kaiser
- Robert Schweikert
- Sascha Peilicke
- Stefan Seyfried
- Stephan Kulow
- Steven Hess
- Susanne Oberhauser-Hirschoff
- Tomáš Chvátal
- Vincent Untz