To date, unfortunately, the software research literature is still peppered with references to the Chaos Reports from the Standish Group, which insist that the software industry is in a deep, terrible crisis. (The latest report, from 2009, states that only 32% of software projects are successful, while 44% are challenged and 24% fail. This is an improvement over the original and most often cited study from 1994, which claimed that 16% of projects were successful, 53% challenged, and 31% failed.)
Methodologically, the Chaos Reports are a mess. Part of their methodology is not disclosed, under a trade-secrets excuse; the part that is visible shows serious flaws in the analysis. I’ve written about pointers to good critiques of the report in the literature, as well as a crass dismissal of those critiques by Jim Johnson, the Chairman of the Standish Group, here.
Now there is yet another critique, one that takes a different approach from the previous ones. The January/February issue of IEEE Software contains a great paper by J. Laurenz Eveleens and Chris Verhoef, “The Rise and Fall of the Chaos Report Figures”:
Our research shows that the Standish definitions of successful and challenged projects have four major problems: they’re misleading, they’re one-sided, they pervert the estimation practice, and they result in meaningless figures.
Those problems are convincingly documented in the paper. The authors end with a cameo from Johnson:
We communicated our findings to the Standish Group, and Chairman Johnson replied: “All data and information in the Chaos reports and all Standish reports should be considered Standish opinion and the reader bears all risk in the use of this opinion.”
We fully support this disclaimer, which to our knowledge was never stated in the Chaos reports.
By now I tend to dismiss any paper that uses the Chaos reports to back up its claims, and I don’t see any reason why you shouldn’t do the same.