Sorry, I stand corrected. I mislabelled the researcher as CERT rather than Aberdeen. But your comments are in line with my thoughts.

<rant> I work as a clinical practitioner (R.N.) in health care, and analyzing research "conclusions" and the possibilities for implementing them in practice is a primary function of my profession. I was taught to be critical of studies because of the implications for my patients' health status. I was also taught the scientific method, and though the scientific method is actually fairly poor at really "proving" anything, it is designed above all to disprove things (and to replicate, which is used as a form of proof). One of the primary ways to debunk a study is to look at the methods used to derive the data in a meaningful way and then pair those methods with the study's conclusions.

Using raw data that is not properly put into context is the oldest trick in the book for making things appear to make sense without actually relating the data to the variables and the study question; it is poor work and cannot be relied upon. Just spewing forth numbers, putting them into a table or graph, and then using that to support one's claims does not make them valid. There is no mention of techniques, groups, selection of groups, selection of variables (or what the variables even are, such as the "constant"), or the manner in which the data was collected - at least not in a manner I would take seriously as valid and worth consideration. They bank on the fact that most people don't know, or forget, that correlation is not causation.

Unfortunately, especially in marketing and advertising, this technique is preferred: raw data in graphs and tables used to support anecdotal claims. An example is the claim "4 out of 5 dentists prefer <insert product>". The reality is that companies send out samples of a product as a marketing tool, with instructions to participate in the data gathering. The problem is that, often, 4 out of 5 dentists don't return the item or respond to any of the included media from the company that sent it. This plays right into the hands of the marketers and advertisers, who translate it into the claim that since the dentist (or M.D., or whatever) didn't return the sample, A) they must be using it and B) they prefer it. This couldn't be further from the truth. The fact is most of these samples go in the trash or a drawer, and the marketing departments are quite aware of this. Nonetheless, they use this behavior to further their claims about a product or service.

This seems to be the case in much of the analysis I see of IT and computing products and services. That is fine (well, sort of) if you're trying to sell the latest and greatest video card or NIC (except in the case of M$'s new line of routers, hubs, etc., which are buggy and call home way too much - arghhh). But if it involves liability related to data loss or theft, commerce, online transactions of any sort, or just secure and cost-effective day-to-day business use, then it is akin to negligence on the part of the supposed researcher. People rely on these groups and organizations to make decisions about what to use and how to spend their money. And it's all about the money. CIOs and IT managers need reliable sources of information that are both properly collected and presented in a form that is concise and understandable.
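To put some numbers on that dentist trick (the figures below are invented purely for illustration, not taken from any real survey), here is a quick Python sketch of how silently counting every non-response as an endorsement balloons the headline figure:

# Purely illustrative, made-up numbers: how counting non-responses
# as endorsements inflates a "preference" statistic.
samples_sent = 100            # dentists who were mailed a free sample
responses = 20                # dentists who actually replied
prefer_among_responders = 15  # responders who said they prefer the product

# Honest figure: preference among those who actually answered.
honest_rate = prefer_among_responders / responses

# Marketing figure: everyone who never replied is assumed to use and prefer it.
marketing_rate = (prefer_among_responders + (samples_sent - responses)) / samples_sent

print(f"Honest claim:    {honest_rate:.0%} of responding dentists prefer it")
print(f"Marketing claim: {marketing_rate:.0%} of all dentists 'prefer' it")

Run it and the "marketing" number comes out near-universal (95% versus 75%), even though only a fifth of the recipients said anything at all.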
This is why people who really want to understand things of this nature use multiple sources of information and look at such things as who funded the studies and what the affiliations of the researchers are. I just wish people had a repository where they could go to compare and contrast information, and also understood the importance of doing so. <end rant> Cheers, Curtis. On Monday 30 December 2002 11:16, James Mohr wrote:
On Monday 30 December 2002 16:38, Curtis Rey wrote:
Objectivity by CERT is a misnomer; they are not objective in the least. They are, however, a for-profit organization open to the highest bidder.
Case in point: they did a report on an AMD flaw in which they cited numerous reasons why Intel made a better product. The fact of the matter is they never did a single interview with anyone at AMD and derived their "facts" for the study from perceptions commonly held by the general public. So what we see with CERT reports is more often a product of their own market strategy - that is, to produce reports with a "wow" factor in order to get people to subscribe to their service. They are all about producing reports with a marketing target; they are not an objective research organization. The report exists to tell people what they want to hear - namely those who work for or support M$. It isn't worth the time it takes to read. They completely lack any sort of meaningful data and offer up anecdotal evidence that is weakly confirmable, if at all. I'm not saying this because I'm a Linux supporter. I'm saying it because their research methods are not mentioned and no real analysis techniques are offered in support of the "data" they claim in the story/report.
Curtis
Hang on there. Aberdeen did the report, not CERT. (If memory serves me correctly.) However, what you are saying is basically true, just about Aberdeen. I personally think their "research methods" **were** mentioned, and that was simply to quote numbers based on the CERT advisories and create some interesting, if not confusing, statistics. Using raw numbers, independent of any real frame of reference, and then creating a set of statistics is a common and (unfortunately too often) accepted "research method".
Regards,
jimmo
-- Billboard Writer vs. Literature = Microsoft vs. Computing,