On 12/03/2013 04:41 AM, Michal Hrusecky wrote:
Sascha Peilicke - 10:17 3.12.13 wrote: <snip>
- [snip]
This is just a proposal on how staging projects could work. The main goal of this proposal is to partially move the responsibility of the integration process into the community of packagers so:
- The staging project maintainer has to cooperate with the package maintainer to get his SR in.
To whom does the SR belong here? Is it the staging project maintainer who wants it, or the packager? IMO it's usually the latter who wants the SR to get in, while the former wants to get rid of the staging project again.
Actually both. It was meant that the person maintaining the staging project has to work together with other people to get rid of his staging project and get stuff in,
Correct. I agree that having basically one person doing this (coolo today) is not a good approach, and we need to fix this problem.

What I am not so confident about is that those touching the very core packages have the interest/knowledge/energy/time to chaperone a staging branch. If AJ pulls a new glibc from upstream and tons of stuff breaks in the staging project, we are basically asking AJ to be coolo and run after all the package maintainers that now have broken stuff. From my point of view that doesn't really resolve the basic problem; it just shifts it onto the shoulders of someone else, and that someone else most likely has less time to chase all the broken stuff than coolo does.

I am not advocating that we stay with what we have; I just fail to see how the staging of updates improves the situation overall. Certainly coolo will do less chasing, and that's a good thing, but will the distribution as a whole improve, or are we more likely to get stuck with older versions because the new chaperones of the staging branches do not have the time/energy etc. to chase everything that happens to break?
but it also works the other way around. The packager needs to cooperate with the staging project maintainer to get his new and cool version in.
- Package maintainers have motivation to fix their stuff; otherwise new versions don't get included. I think this was discussed elsewhere already, but not-so-core community members often ask why they should fix their package if the breakage came from elsewhere (e.g. new glibc vs. a random game pkg). But it's
Well, it works the other way around as well: why should the glibc maintainer fix something that uses an API that was deprecated a long time ago? The important part is that they should work it out together; otherwise neither of them gets in.
That's not quite correct. The broken package is already in Factory, using the deprecated API. Thus glibc is blocked by something that is "nominally" broken already. The developer who submitted the glibc changes has to chase the packagers with broken stuff, as mentioned above. But there is also a good chance that the packager with the broken stuff is not intimately familiar with the code and thus has no idea how to fix it in the first place.

Yes, these are all problems that rest on coolo's shoulders today, and that's not good. However, moving them onto someone else's shoulders does not make the problem go away. I think what we are actually doing is discouraging changes to packages that have a large impact, such as gcc, glibc, and others. As I mentioned in another thread, we do not want to encourage "dump and run", but this appears to go toward the other extreme. Basically, if one is not willing to chaperone a staging project, better not to send anything that is potentially disruptive.

Later,
Robert

--
Robert Schweikert                      MAY THE SOURCE BE WITH YOU
SUSE-IBM Software Integration Center               LINUX
Tech Lead
Public Cloud Architect
rjschwei@suse.com
rschweik@ca.ibm.com
781-464-8147