[opensuse-project] Crazy idea - use bugzilla metrics to gauge quality?
On opensuse-factory, Marco Calistri wrote:
P.S. Too many bug reports - perhaps it would be better to wait a bit longer before a GM release.
Let me throw in a crazy idea - could we use bugzilla metrics to gauge the quality of an upcoming release? Could we perhaps even use them to delay if the numbers don't meet our requirements?

Some possible metrics:
# of bugs in status [x,y,z] in subsystem [a,b,c]
# of bugs with priority or severity [x]

Comments?

-- Per Jessen, Zürich (9.1°C)
-- To unsubscribe, e-mail: opensuse-project+unsubscribe@opensuse.org For additional commands, e-mail: opensuse-project+help@opensuse.org
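[Editorial note: the status/subsystem/severity counts proposed above could be sketched as a small script over exported bug records. This is a minimal illustration only - the field names (`status`, `subsystem`, `severity`) and the sample data are hypothetical, not taken from the actual openSUSE Bugzilla schema.]

```python
from collections import Counter

def bug_counts(bugs, statuses, subsystems):
    """Count bugs whose status and subsystem both match the given filters."""
    return sum(1 for b in bugs
               if b["status"] in statuses and b["subsystem"] in subsystems)

def counts_by_severity(bugs):
    """Tally bugs per severity level."""
    return Counter(b["severity"] for b in bugs)

# Illustrative data only - real numbers would come from a Bugzilla export.
bugs = [
    {"status": "NEW",      "subsystem": "kernel", "severity": "critical"},
    {"status": "ASSIGNED", "subsystem": "yast",   "severity": "major"},
    {"status": "RESOLVED", "subsystem": "kernel", "severity": "minor"},
]

# Open bugs (NEW or ASSIGNED) in the kernel and yast subsystems.
print(bug_counts(bugs, {"NEW", "ASSIGNED"}, {"kernel", "yast"}))  # 2
print(counts_by_severity(bugs))
```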
Per Jessen wrote:
Let me throw in a crazy idea - could we use bugzilla metrics to gauge the quality of an upcoming release? Could we perhaps even use them to delay if the numbers don't meet our requirements?
Forgot to mention - we might use the same numbers to gauge the amount of testing, i.e. "delay release or drum up more testing" if the numbers aren't right.

-- Per Jessen, Zürich (9.5°C)
On Mon, 14 Mar 2011 07:59:20 +0100, Per Jessen wrote:
Per Jessen wrote:
Let me throw in a crazy idea - could we use bugzilla metrics to gauge the quality of an upcoming release? Could we perhaps even use them to delay if the numbers don't meet our requirements?
Forgot to mention - we might use the same numbers to gauge the amount of testing, i.e. "delay release or drum up more testing" if the numbers aren't right.
My initial thought is that it's not a bad idea, but one can get an inaccurate picture of the product quality if the reasons why bugs aren't being filed are not known.

For example, if on release 1 we have 100 bugs and 89 of them are resolved, then that's a resolution rate of 89%. If on release 2, we only have 20 bugs reported and all 20 are resolved (a resolution rate of 100%), does that mean it's a higher-quality release, or does it mean that people didn't submit bugs (for whatever reason), or does it mean that the initial release quality was so high there weren't actually that many problems?

To be a useful metric, there has to be some context around the numbers.

Jim
-- Jim Henderson
Please keep on-topic replies on the list so everyone benefits
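[Editorial note: the resolution-rate arithmetic in the example above can be written down explicitly. The only non-obvious point is the degenerate case: zero reports should not silently read as 100% quality, for exactly the reason raised in this post.]

```python
def resolution_rate(resolved, reported):
    """Fraction of reported bugs that were resolved, as a percentage."""
    if reported == 0:
        # No reports could mean "no testing", not "no defects".
        raise ValueError("no bugs reported - the rate is undefined, not 100%")
    return 100.0 * resolved / reported

print(resolution_rate(89, 100))  # release 1: 89.0
print(resolution_rate(20, 20))   # release 2: 100.0
```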
2011/3/14 Jim Henderson
On Mon, 14 Mar 2011 07:59:20 +0100, Per Jessen wrote:
Let me throw in a crazy idea - could we use bugzilla metrics to gauge the quality of an upcoming release? Could we perhaps even use them to delay if the numbers don't meet our requirements?
Forgot to mention - we might use the same numbers to gauge the amount of testing, i.e. "delay release or drum up more testing" if the numbers aren't right.
I think it's a great idea too.
My initial thought is that it's not a bad idea, but one can get an inaccurate picture of the product quality if the reasons why bugs aren't being filed are not known.
For example, if on release 1 we have 100 bugs and 89 of them are resolved, then that's a resolution rate of 89%.
If on release 2, we only have 20 bugs reported and all 20 are resolved (resolution rate of 100%), does that mean it's a higher quality release, or does it mean that people didn't submit bugs (for whatever reason), or does it mean that the initial release quality was so high there weren't actually that many problems?
To be a useful metric, there has to be some context around the numbers.
I am probably not the one who will come up with a solution to that, but I would recommend presenting something like (based on your example):

89% solved of 100 problems
100% solved of 20 problems

So that people can get a wider picture of the whole situation and its quality. It reminds me a bit of the torrent seeds-and-peers situation...
Just a thought,
Kostas
-- http://opensuse.gr http://amb.opensuse.gr http://own.opensuse.gr http://warlordfff.tk
me I am not me ------- Time travel is possible, you just need to know the right aliens
And what about the bugs fixed upstream?
NM
-- Nelson Marques
/* Scars remind us of where we have been, but they do not dictate where we are going */
On Mon, 14 Mar 2011 19:52:00 +0200, Kostas Koudaras wrote:
I am probably not the one who will come up with a solution to that, but I would recommend presenting something like (based on your example): 89% solved of 100 problems; 100% solved of 20 problems. So that people can get a wider picture of the whole situation and its quality. It reminds me a bit of the torrent seeds-and-peers situation...
Somewhat, yes, though I've often seen statistics like that misused to show that product quality has improved simply because the number of reports dropped from 100 to 20. That isn't necessarily an indicator of quality, since people's willingness to report depends on a number of factors, including whether they feel their issues are being addressed.

(I'm not saying that's a problem with openSUSE, just a general statement on the use and misuse of that kind of statistic.)

Jim
-- Jim Henderson
Hi, On Mon, Mar 14, 2011 at 07:54:56AM +0100, Per Jessen wrote:
Let me throw in a crazy idea - could we use bugzilla metrics to gauge the quality of an upcoming release? Could we perhaps even use them to delay if the numbers don't meet our requirements?
Some possible metrics:
# of bugs in status [x,y,z] in subsystem [a,b,c]. # of bugs with priority or severity [x]
Comments?
This is already done internally at Novell by the support department, but I think I am not allowed to share the measurements here.

However, I am not a fan of measuring quality with bugzilla bugs, although I can see that this type of statistic can give a hint if something is utterly wrong. I would say a difference of less than 10% would not mean much.

-- Bye, Stephan Barth
Novell Technical Services, Worldwide Support Services Linux
SUSE LINUX GmbH, GF: Felix Imendörffer, HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409 Nuremberg
Stephan Barth wrote:
Hi,
This is already done internally at Novell by the support department, but I think I am not allowed to share the measurements here.
If you're keeping track of data that is purely openSUSE, why not?
However I am not a fan of measuring quality with bugzilla bugs,
I use the word "quality" with great care - I completely agree that quality is difficult to measure, let alone by simply counting bugs. Perhaps this should be seen more as a way to manage quality?
although I can see that this type of statistic can give a hint if something is utterly wrong. I would say a difference of less than 10% would not mean much.
I was hoping to avoid diving straight into the numbers/metrics - my post was really intended as "here's a crazy idea, please shoot it down", and I would prefer to wait a few days before we start looking at what kind of metrics and criteria we might want to apply.

However, here's perhaps something to fuel the debate: track the number of changes per subsystem, as well as the number of bugs reported. This could be further qualified with severities, amount of activity, reason for closing, etc. Depending on the subsystem (some are more prone to errors, some more to testing, I suspect), prior to a scheduled release candidate we could evaluate the status (changes vs testing vs bugfixing) and determine whether we would want to postpone due to insufficient testing or insufficient bug-attention, for instance.

Enough fuel for now.

-- Per Jessen, Zürich (10.1°C)
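[Editorial note: the "changes vs testing vs bugfixing" evaluation proposed above could be sketched as a per-subsystem gate. All thresholds and field names here are invented for illustration - the post deliberately leaves the actual metrics and criteria open.]

```python
def release_gate(changes, bugs_reported, bugs_fixed,
                 min_bugs_per_change=0.1, min_fix_ratio=0.8):
    """Rough readiness check for one subsystem (hypothetical thresholds).

    Too few bug reports relative to churn suggests the subsystem was not
    actually tested; too few fixes relative to reports suggests the
    reports did not get enough attention.
    """
    if changes and bugs_reported / changes < min_bugs_per_change:
        return "postpone: insufficient testing"
    if bugs_reported and bugs_fixed / bugs_reported < min_fix_ratio:
        return "postpone: insufficient bug-attention"
    return "ok"

print(release_gate(changes=200, bugs_reported=5,  bugs_fixed=5))   # heavy churn, few reports
print(release_gate(changes=100, bugs_reported=40, bugs_fixed=20))  # reports not being fixed
print(release_gate(changes=100, bugs_reported=30, bugs_fixed=28))
```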
On Tue, 15 Mar 2011 20:26:46 +0100, Per Jessen wrote:
track number of changes per subsystem, as well as number of bugs reported. Could be further qualified with severities, amount of activity, reason for closing etc.
You would probably also want to identify differences between changes introduced as enhancements vs. changes introduced to fix defects. Just saying that a subsystem hasn't changed isn't good if it could be enhanced and made better.

Jim
-- Jim Henderson
Jim Henderson wrote:
On Tue, 15 Mar 2011 20:26:46 +0100, Per Jessen wrote:
track number of changes per subsystem, as well as number of bugs reported. Could be further qualified with severities, amount of activity, reason for closing etc.
You would probably also want to identify differences between changes introduced as enhancements vs. changes introduced to fix defects. Just saying that a subsystem hasn't changed isn't good if it could be enhanced and made better.
Jim
Yes, absolutely - changes ought to be qualified just as bugs are. There are also significant, revolutionary changes such as rsyslog, systemd, plymouth, mariadb and libreoffice that would/could be counted differently.

I'm not sure how we get control of, or statistics on, the changes that are happening, but that might have to wait until we can say "we need better change control in order to have better quality management".

-- Per Jessen, Zürich (9.9°C)
participants (5)
- Jim Henderson
- Kostas Koudaras
- Nelson Marques
- Per Jessen
- Stephan Barth