On Tue, Nov 22, 2011 at 2:40 AM, Duaine Hechler wrote:
On 11/22/2011 12:59 AM, Per Jessen wrote:
Roger Luedecke wrote:
Part of the problem is that people don't do early testing, so we can't catch bugs before release. Everybody wants it stable, but few are willing to risk some instability to assure a better result for us all.
I'm not sure we have any data to really substantiate that. At some point, I did sort of half-way propose that we should be using a test-case tracking system, but given the size of this project, it's probably not a very good idea. 15 years ago I helped write and document about 1000 test-cases for a project I was managing. There were about 20 people involved in total, and even those 1000 cases were too much.
Well, back in my day on the mainframe, I wrote many, many "system"-level utilities and shortcuts for the operators all the way up to higher management.
I used what is called a "devils advocate" approach. As I was writing the code, I would take a step back and try to think of all the possibilities of how and where it would break. As part of this approach, I would take the main task and break it down into subtasks and begin testing at that level, then start putting it together. So by the time I released it into production, it would be nearly 100% error free.
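That subtask-first, "how could this break" style translates directly into unit testing. A minimal Python sketch, with a hypothetical line-formatting task standing in for the mainframe utilities (the function names and widths here are illustrative, not from the original code):

```python
# Hypothetical example: break a "format a print line" task into
# subtasks, then probe each one for the ways it could break
# before assembling the whole thing.

def truncate(text: str, width: int) -> str:
    """Subtask 1: clip text that exceeds the line width."""
    return text[:width]

def pad(text: str, width: int) -> str:
    """Subtask 2: right-pad text out to the full line width."""
    return text.ljust(width)

def format_line(text: str, width: int) -> str:
    """Main task, assembled from the already-tested subtasks."""
    return pad(truncate(text, width), width)

# "Devil's advocate" checks: attack each piece at its edges.
assert truncate("hello", 3) == "hel"       # oversize input
assert truncate("", 3) == ""               # empty input
assert pad("hi", 5) == "hi   "             # short input
assert format_line("hello world", 5) == "hello"
assert len(format_line("x", 10)) == 10     # width is always honored
```

Testing at the subtask level first means that when the assembled task fails, the fault is almost always in the glue, not the pieces.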
The worst case was when my work wanted to take "line" mode (normal impact print data) and convert it to "page" mode, a.k.a. AFP (Advanced Function Printing) data, for the IBM 3800 Laser Printer. That meant building a HEX data stream to include actual print data, definition sequence numbers, font selection, X & Y coordinates, margins, page size definitions, a forms overlay stream (like adding grid lines - boxes with borders or line shading, etc.), adding an image hex stream (graphs, logos, etc.) and, last but not least, handling multi-section pages. IF ANY PART of the data stream was wrong, the printer would either stop, print garbage, or go completely nuts.
BTW, my part was the easy part, because once my data stream definitions were right, I would pass them to the application programmers who had to - convert - and - build - the data stream to run on an OCTAL mainframe (Bull / Honeywell), which in turn meant they had to use COBOL COMP-3 fields, which turned them into BINARY data fields.
AND, if the application programs did not build the AFP data stream in the right sequence, it would do the same as above.
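The COMP-3 (packed decimal) fields mentioned above pack two decimal digits per byte, with the sign carried in the final nibble (0xC for positive, 0xD for negative). A minimal Python sketch of that packing, just to illustrate the format - the function name and the fixed-width default are mine, not from the original mainframe code:

```python
def to_comp3(value: int, digits: int = 5) -> bytes:
    """Pack an integer into COBOL COMP-3 (packed decimal) bytes.

    Each decimal digit occupies one nibble; the final nibble holds
    the sign (0xC positive, 0xD negative). Illustrative sketch only.
    """
    sign_nibble = "D" if value < 0 else "C"
    s = str(abs(value)).rjust(digits, "0")
    if len(s) > digits:
        raise ValueError("value does not fit in the declared digit count")
    # Digit nibbles plus the sign nibble must total an even count,
    # so pad with a leading zero digit when needed.
    if len(s) % 2 == 0:
        s = "0" + s
    return bytes.fromhex(s + sign_nibble)

# A PIC S9(5) COMP-3 field holding 123 packs into three bytes:
assert to_comp3(123) == b"\x00\x12\x3c"
# Negative values flip the sign nibble to 0xD:
assert to_comp3(-45, digits=3) == b"\x04\x5d"
```

Get one nibble wrong anywhere in a stream like this and, as described above, the downstream consumer has no way to resynchronize - hence the printer stopping or printing garbage.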
Ahhhhhh..... Those Were The Days ........
Duaine
Duaine, you should check out some of Bernhard Wiedemann's work for openSUSE QA. He has some automated QA logic, but I think it aims at just ensuring we have basic installation, zypper, and desktop functionality. Check out: http://openqa.opensuse.org/results/?sort=-7&hours=300&match=

So let's assume you want to know how a 64-bit install of openSUSE with the LXDE desktop from the NET install CD works. You can look it up on that page. First, you get basic quantitative results on that page, but on the left you'll find a link to this detail page: http://openqa.opensuse.org/results/openSUSE-NET-x86_64-Build0039-lxde

It shows still images from various places in the install. You will also find a link to this movie: http://openqa.opensuse.org/opensuse/video/openSUSE-NET-x86_64-Build0039-lxde...

The whole process is automated and recorded, as you can see. As to how the magic happens: the openSUSE Factory process self-identifies consistent snapshots. They tend to happen once or twice a week, but sometimes it's longer between snapshots. AIUI, for a snapshot to be identified, everything in the Factory repo has to be compiled and there cannot be many build failures. That causes a Factory snapshot to trigger automatically. For each snapshot, various boot/live/install CDs/DVDs get created, Bernhard's automated installs kick off, and the results update the QA page above.

It is really pretty cool stuff as far as I'm concerned.

Greg
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org