On Wed, 15 Jan 2014 10:52:29 +0100, Jos Poortvliet wrote:
I think there is a mismatch between what we (Henne, myself and others) are trying to achieve and what you (and Robert and others) seem to read into it.
Please describe what that mismatch is. I'm not sure there is, but perhaps that is the case.
But then recognizing major issues that have been resolved, and by whom - in the context of the overall project - might have value. Something like a list of collaborators in the release notes, or an acknowledgement of the contributions of those who made the release possible.
Exactly. In every Free Software project, credit where it is due (and the respect of your peers) plays a huge role. So does seeing the impact of what you do. While there are many different reasons why people contribute, these two are shared by at least a majority of our contributors.
That is what this is about: making visible what impact your work has (e.g. showing the number of users of packages on OBS) and allowing others to see what you do (e.g. showing the packages you maintain on your profile).
From there on, we can do more elaborate things with this information, like calculating an activity metric or making a top ten of bugfixers. Or not. That is something we can decide at a later stage, and even experiment with and get rid of if it doesn't work.
Measurement of contribution is a tricky thing. In the years I've been doing forums work, that has always been one of the bigger challenges: quantifying the quality of a contribution. I see that discussion taking place in another part of this thread. Usually the end result is a quantitative measurement rather than a qualitative one, because quality is much more difficult to measure in a meaningful and precise way, certainly without some sort of manual process (say, a peer review or a quality review). Qualitative data is very difficult to gather in an automated way.

Another example is measuring the effectiveness of training (an example I use because I'm familiar with the methods used to measure instructor and course performance qualitatively). There, you gather data from the students about their experience in the class. Ideally, to get a good measurement of training effectiveness, you also have to come back some time after the course and identify how much of the students' improvement in job performance since the class was due to what they learned in it. It's possible to do that kind of qualitative analysis, but it's not easy. It's a problem that has been researched by economists (because ultimately it's a measurement of efficiency, ROI, and cost), and a pretty sound methodology exists, but getting there wasn't painless.

Is fixing 10 bugs more valuable than fixing one bug? Answering that question requires a qualitative analysis of the bugs in question, the performance of the individuals fixing them, and a measurement of the skills involved. Fixing a kernel panic is more valuable than fixing 10 cosmetic bugs in the UI, so a count of bugs fixed is ultimately meaningless without attention to the impact, severity, and skills needed to fix each issue. Similarly, fixing a kernel panic caused by an improperly initialized pointer might be seen as an easier fix than one that involves a defect arising from the interactions within an entire subsystem.

That's the problem a competitive "top ten" type of measurement introduces, and it's important at least to understand it. There's always the possibility that the one person who fixes a kernel panic will be demotivated by seeing their contribution ranked in one of those "top ten" lists, because they only fixed the one bug while others fixed 10 or 20 bugs in the same time period. It's important to recognize that not all bugs are equal.
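To make the contrast concrete, here is a minimal sketch of how a raw bug count and a severity-weighted score can disagree. The weights, names, and data are made up purely for illustration; this is not an actual openSUSE or Bugzilla metric.

    # Hypothetical illustration: raw bug counts vs. severity-weighted scores.
    # Weights and sample data are invented for the example only.

    SEVERITY_WEIGHT = {
        "cosmetic": 1,
        "minor": 2,
        "major": 5,
        "critical": 20,   # e.g. a kernel panic
    }

    # (contributor, severity) for each fixed bug
    fixed_bugs = [
        ("alice", "critical"),                      # one kernel panic fix
        *[("bob", "cosmetic") for _ in range(10)],  # ten cosmetic UI fixes
    ]

    def raw_counts(bugs):
        counts = {}
        for who, _severity in bugs:
            counts[who] = counts.get(who, 0) + 1
        return counts

    def weighted_scores(bugs):
        scores = {}
        for who, severity in bugs:
            scores[who] = scores.get(who, 0) + SEVERITY_WEIGHT[severity]
        return scores

    print(raw_counts(fixed_bugs))       # {'alice': 1, 'bob': 10}  -> bob tops the list
    print(weighted_scores(fixed_bugs))  # {'alice': 20, 'bob': 10} -> alice tops the list

Even this only moves the problem one step back: someone still has to choose the weights, which is exactly the qualitative judgement described above.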
Also, please realize that motivation differs between people. Some do find a top ten kind of cool, others don't. The fact that you might not does not mean it has no value for others. For a short spell AJ and I put weekly contributor stats for Factory on news.o.o, and we heard back from several people that they found it cool to get in there. I'm not saying it makes a huge difference, but why not recognize and motivate those who do appreciate it?
Ultimately, the goals of the project have to be met. If creating a competitive environment for people fixing bugs (for example) helps with that, then of course I'm for it. But more often than not, it's been my experience that when you set up a competitive environment (as happens so often in the business world), the approach does harm that wasn't intended. Sales people competing for clients end up not working together to land the really big score, because their motivation is their own commission and there can only be one "salesperson of the year" - and they want it badly enough not only to keep the sales for themselves, but also to make it more difficult for coworkers to complete theirs. It's a tactic I've seen over and over again. I'm not saying that sort of thing happens here, only that it's something to be aware of.

Jim

-- 
Jim Henderson
Please keep on-topic replies on the list so everyone benefits