On Thursday, May 16, 2013 02:55:54 AM Marguerite Su wrote:
> On Thu, May 16, 2013 at 12:37 AM, Alberto Planas Dominguez
> <aplanas@suse.de> wrote:
>> The number of combinations that we need to cover is 240, and we need to do this for every stable build that comes from Factory, with more attention before milestones, betas or RCs.
>> If we add a new medium, this number grows by 48 more scenarios that we need to test, and this is only for the installation part, where the desktop has less impact.
> Hi, Alberto,
> I understand your concern and really appreciate your work to make our distro unbreakable.
> But this growing trend will certainly drive us mad someday.
I can agree with that, but the future can be quite the opposite: the core distribution can be smaller, but with higher quality / more tested components. Either way, this decision is in the community's hands, and my job is to help with whatever the future of openSUSE as a distribution turns out to be.
> I don't know if our openQA can do all the testing, but I think virtual machines can't replace human beings in the near future anyway. So you'll always be lacking capable hands.
You are right here again. VMs have limits, and we need to be careful that with this kind of test we are not hunting VM bugs instead of bugs in the distribution. Automated tests have their own limits too. But if there is a percentage of the work that can be automated with certain guarantees, I think it is a good idea to work on it.
> As testers can't grow as fast as maintainers, why not make maintainers themselves do the testing by policy instead of by ethics?
If you want to implement some basic QA in the process, you need some degree of reproducibility and tracking: you need to know which tests were run, by whom, and what the results were. There are tools for that, like Testopia [1], but in my opinion they are aimed more at a real QA department, and Testopia is also a slow tool with a very tedious workflow. Can we impose such a thing on the maintainers? It is a lot of work that needs to be done on every integration, and not only when a new version of the package is made.

A different option is that the maintainer (or the community) provides a small perl script that uses the openQA API to test the basic functionality of the package, and an automatic process launches this test whenever a new integration lands in Factory.
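To make this concrete, here is a minimal sketch of what such a maintainer-provided test module could look like. It is a hypothetical example written against the os-autoinst/openQA perl test API (the package, the needle names and the module itself are assumptions for illustration, not an existing test):

    # htop.pm -- hypothetical openQA test module for the "htop" package
    use base "basetest";
    use strict;
    use warnings;
    use testapi;

    sub run {
        # Work on a root text console inside the VM
        select_console "root-console";

        # Install the package; the test fails if zypper returns an error
        assert_script_run "zypper -n in htop";

        # Start the program and compare the screen against a previously
        # recorded reference image (a "needle")
        type_string "htop\n";
        assert_screen "htop-running";

        # Quit and check that we get the shell prompt back
        send_key "q";
        assert_screen "root-console-prompt";
    }

    1;

Once a module like this exists, openQA can run it for every new Factory build and archive the screenshots and results, so the "which test, by whom, with what result" tracking comes for free.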
> E.g.: some DEs are "officially tested by the tester team", some DEs with a small user base are "maintainer guaranteed". We just test the base system and GNOME/KDE.
This work can't be avoided. For 12.3 there was a weekend hackathon where contributors and maintainers like Dominique or Robert (and a lot more people) spent a full weekend testing and fixing bugs: there is no automatic test that can replace such a thing. But if some of the manual tests can be expressed as perl scripts (like testing the network, testing grub2, the integration between dbus and the DE, etc.), you can reproduce those tests for every release and be sure that we do not reintroduce one of the previously fixed bugs.
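As an illustration, a manual check like "does the network come up after installation" fits in a few lines of the same test API. Again a hypothetical sketch; the commands and the module are assumptions, not an existing openQA test:

    # network_check.pm -- hypothetical regression test for basic networking
    use base "basetest";
    use strict;
    use warnings;
    use testapi;

    sub run {
        select_console "root-console";

        # An interface must have an IPv4 address after boot
        assert_script_run "ip -4 addr show | grep -q 'inet '";

        # Name resolution and outbound connectivity must work
        assert_script_run "ping -c 1 download.opensuse.org";
    }

    1;

Once such a script exists, rerunning it for every milestone, beta and RC costs nothing, which is exactly the regression guarantee that one-off manual testing cannot give.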
> And actually that's just the situation right now... lightweight DEs always have few maintainers, and if they don't even guarantee their working state, I certainly won't use them. Actually those DEs under development in OBS are all "maintainer guaranteed" for now; they've already started getting users, but they have to wait for openQA to test them and make them official.
Actually they do not need to wait for openQA to create a DE. But if we create a new official DVD for this DE, I think we need to provide the same tests as for the other mainstream DEs like KDE or GNOME. But this is only my opinion. Also, the main problems are not in the DE itself, but in the integration points with other components, like NetworkManager, systemd, ...
> So why not, in the mid term, give them an entrance to a second DVD with a policy like "if you want in, show me 100 people's test results", giving them an "official unstable" entrance? It's kind of like creating a DVD on SUSE Studio yourself, but getting backing and promotion from openSUSE, because you've proved you're usable although you still need further professional testing.
If we had more than 100 test results for every milestone, beta and RC, covering every arch, DE, FS, ..., the automatic test tool could be avoided completely. But the problem is, again, in the integration part: a change in one component can break something in KDE, E17 or GNOME, and you need routine tests to detect this.
> I think that's what KlyDE does for now. I see Jos, Will and AJ on it, so I know it's of course usable even if it's on SUSE Studio now. All we need is to add an entry on s.o.o and tell users it hasn't been officially tested yet.
> It's kind of like "fix it, then give it out" versus "give people something to play with first, then fix it". I think users may prefer the latter; we just need some way to keep the responsibility off ourselves. We can just tell them: "okay, I'm going to test it, but my hands are full, so it's on the schedule; you're encouraged to find fixes and workarounds".
I agree that this is the perfect approach for the development process. If you want to integrate a new DE, this is the way to go. My only concern is the next step: shipping this DE as an official openSUSE medium, not as a package or as an installation option in YaST. That is the point where I think we need to integrate those tests, because then we are the ones distributing this DE.
> At the same time, we can still improve our openQA's capacity in the long term. They do not conflict with each other.
Exactly, I do not see any conflict during the development / integration stage.

[1] Testopia: http://www.mozilla.org/projects/testopia/
> Greetings
> Marguerite
Thanks,
Alberto Planas

--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org