On Monday, 21 September 2009 05:59:16, Refilwe Seete wrote:
oS Testing Core Team,
Since it looks like an IRC meeting may be a little while away, perhaps
we can deal with more things on the mailing list before we lose too
much momentum. I think one of the most important things we can do is
decide what we want to test against and when we want to move on from a
given release. This then gives all the team members a rough schedule
to follow, and it also lets the dev team know what to expect from us -
at least as far as testing the distro on real hardware goes.
A quick and dirty general roadmap of a release looks like this:
1. Check in new and updated packages
2. Decide what new features will be targeted at the release
3. Develop new features
4. Progressively freeze elements [Milestone 5 & 6]
5. Final Round of Testing [Release Candidates]
6. Post-release critical bug-fixing [Gold Master]
The points we can test are:
1. Factory, as components offered on top of a stable release (ex: KDE:Factory)
2. Factory, as a complete rolling distribution, testing updates as they come
3. Factory, taking regular builds of our own choosing
4. Official Snapshots, such as Milestones/RCs/GM
IMHO, the oS TCT should also test the final release. Since
the team was formed pretty late in the roadmap, we should run the
final release for at least 2-4 weeks to help knock out bugs that
squeaked through. This should be done for 11.2 at least.
Looking ahead to 11.3, things are pretty wide open. Judging from some
of the changes in Factory and comments online, I believe that openSUSE
developers would really appreciate it if more people ran Factory.
However, there may be a reduction in our effectiveness if we end up
fighting against obvious but big bugs (bad kernel builds,
uninstallable images, etc.). Ideally we would thoroughly test both
Factory as a distribution and the Milestones, but that's probably not realistic.
My first instinct is to test the crap out of all the snapshots, as that
will also provide the simplest scheduling mechanism - we just test a
given snapshot until the next one comes out... thus we test a final
release until the devs deliver Milestone 0 of the next version. We
can also leverage that time between GM and Milestone 0 to review the
last run and adjust our procedures accordingly.
However, we should decide as a team what to focus on - even if it
means splitting the team down the middle into a Snapshot Team and a Rolling Team.
This sounds familiar to me; at the company I work for we have two phases of testing.
Phase 1: All subprojects go through an extensive functional test, where we
try to find as many defects as possible.
After this phase has reached a certain level of coverage (this depends on the
test plans and the overall schedule) we switch to a different test bed.
Phase 2: In this phase we run system-wide integration tests to make sure that
all subprojects work together smoothly.
Adopting this for our testing would mean that we focus the testing of a
Milestone on certain areas and find as many defects as possible. For this I
suggest that we distribute as many areas (applications, subsystems, etc.) as
possible across the Testing Core Team.
Once we reach the Release Candidate/Gold Master phase, we switch our focus to
how the distro will actually be used.
Here is a rough example of what I mean:
Phase 1 Testing areas:
1. Installation setup
- Do an installation from scratch and check that all selections made
are installed without any error
- Do an upgrade from a previous final release
- Do an upgrade from a previous Milestone
- Do an upgrade of a specific package
- Add hardware and install the related packages
2. Network services
- Set up various network services and check that they work flawlessly.
- Install additional services and check them.
3. Office tools
and so on. This list can be derived from the package groups in YaST.
Phase 2 Testing areas:
- Test a standard installation from scratch and check that all components are
installed correctly and work together
- Test an upgrade of a standard installation from a previous release
- Test a KDE4 desktop system
and so on. This approach covers more of the scenario a user would follow when
doing an upgrade and/or an installation from scratch.
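To make the results of such per-area checks easy to compare between testers, something like the following could be used. This is only a hypothetical sketch of my own, not an existing oS TCT tool; the two sample checks are placeholders, and real checks would be the Phase 1/Phase 2 items above (e.g. "zypper dup from the previous Milestone succeeded" or "the network service answers").

```shell
#!/bin/sh
# Hypothetical harness: run a list of smoke checks and report pass/fail
# in a uniform format that can be posted to the mailing list.
pass=0
fail=0

# run_check <description> <command...>
# Runs the command silently and counts the result.
run_check() {
    desc=$1
    shift
    if "$@" >/dev/null 2>&1; then
        pass=$((pass + 1))
        echo "PASS: $desc"
    else
        fail=$((fail + 1))
        echo "FAIL: $desc"
    fi
}

# Placeholder checks -- substitute the real per-area checks here.
run_check "root filesystem is present" test -d /
run_check "POSIX shell is available" command -v sh

echo "result: passed=$pass failed=$fail"
```

Each tester would only swap in the checks for the areas assigned to them, so the summary lines stay comparable across machines.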
BTW, switching to a different test bed is not necessary, because we need to
cover as much hardware as possible.
But I am not sure whether this fits into the overall testing strategy of Novell
and the QA team. So I am stuck a little bit, because I don't know where to go from here.
Am Steinebrück 23
Phone : 0211/788 5115
Mobile : 0172/210 4989
Email : juergen(a)radzuweit.eu
To unsubscribe, e-mail: opensuse-testing+unsubscribe(a)opensuse.org
For additional commands, e-mail: opensuse-testing+help(a)opensuse.org