On Sunday, 1 December 2013, Robert Schweikert wrote:
1.) Putting pressure on the submitter
Putting pressure on the submitter is a good concept to avoid "dump and
run" scenarios, i.e. you put your code in and everyone else has to fix
the fallout. However, stated like that, the submitter can very easily
feel overwhelmed and left alone, and thus the submission may never take
place.
What I think we need is a process/environment that holds the
submitter sufficiently responsible to avoid "dump and run" while at
the same time providing enough support such that the submitter does
not feel left alone and overwhelmed.
While "the submitter fixes everything he breaks" would be ideal,
I'd define the goal as:
The submitter has to coordinate fixing everything he breaks.
This can mean:
- the submitter fixes it
- the submitter works with the maintainers of the broken packages to get
  them fixed
- the submitter asks for help on the factory ML
We should also have a rule saying (assuming it didn't happen in a
staging project):
If the fixes don't happen in a reasonable time, revert the commit
that caused the breakage.
I know "reasonable time" is vague, but we'll probably need it that way
because it depends on the number of broken packages, the time until the
next release, etc.
In a staging model I have no idea how to get that enforced.
That's easy - the package can move from staging to factory after the
fallout is fixed ;-)
2.) The staging approach
Staging trees upon staging trees. But this only solves the problem
superficially, as the target tree will move ahead and thus the staging
tree by definition is always out of date. Unless the target tree is
frozen until a particular staging tree is merged.
The staging tree should link all packages from factory (except the
changed package), so it can't be outdated. It just needs to be rebuilt.
We'll see if we have enough build power ;-)
(For speedup, copying the factory binaries to the staging project and
only rebuilding the updated package and its dependencies might save some
time.)
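As a rough sketch of how such a staging project could look in the OBS
project metadata (the project name, repository name, and architecture
here are assumptions for illustration, not an actual proposal):

```xml
<project name="openSUSE:Factory:Staging:foo">
  <title>Staging area for an updated foo package</title>
  <description>Holds only the changed package; everything else is
    inherited from Factory via the project link.</description>
  <!-- project link: packages not present here are taken from Factory,
       so the staging tree can never become outdated -->
  <link project="openSUSE:Factory"/>
  <repository name="standard">
    <!-- build against Factory's existing binaries instead of
         rebuilding the world from scratch -->
    <path project="openSUSE:Factory" repository="standard"/>
    <arch>x86_64</arch>
  </repository>
</project>
```

With a setup along these lines, only the changed package and its
reverse dependencies would need to rebuild in staging.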
This seems to concentrate fully on build-time fallout. While that's
certainly part of the quality of Factory, a failed build cannot break
my running system, because I cannot install a package that failed to
build.
Yes, with enough build power we can re-build the world for every
tiny change. But what does it really mean if the tiny change causes
something to no longer build? It means that our dependencies are
too weak (foo requires bar-devel instead of bar-devel = 10.2) or
incomplete? Or that the now failing packages are simply broken?
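To illustrate the "too weak" case with the foo/bar example above (spec
file syntax; the version 10.2 is just the number from the example):

```
# In foo.spec - weak: any version of bar-devel satisfies this, so an
# incompatible bar-devel update slips into Factory without foo's
# breakage being attributed to it.
BuildRequires:  bar-devel

# Strict: the dependency mismatch surfaces immediately when bar-devel
# moves away from 10.2, pointing at the package that needs fixing.
BuildRequires:  bar-devel = 10.2
```

The same distinction applies to runtime Requires: a versionless
dependency hides exactly the kind of breakage a rebuild-the-world test
is supposed to find.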
That said, how does ensuring everything builds enhance the user
experience when you have Factory installed?
With the Debian 'testing' distribution approach you scale QA by
making people who use 'unstable' (aka Factory) do the testing and file
bugs, which block packages from migrating from 'unstable' to
'testing' until they are fixed. So, to throw in another name
(than the apparently misleading Tumbleweed), 'testing' is a
rolling release built from 'unstable'. Do we want a rolling-released
Factory?
Richard Biener <rguenther(a)suse.de>
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer
To unsubscribe, e-mail: opensuse-factory+unsubscribe(a)opensuse.org
To contact the owner, e-mail: opensuse-factory+owner(a)opensuse.org