When was the last time you read something in an ACM or IEEE journal that changed how you program or the tools you use? Ever?
The answer, in my case, is never, and I suspect the same is true for many others. I'm genuinely curious to hear about cases where reading an academic article on software development brought about a tangible change in a developer's behaviour, but I don't think they're commonplace.
For all of us tasked by society with improving the effectiveness of software development organizations through research (if you're a software engineering researcher, that's you), this should be something to be ashamed of. Why do we have this disconnect? I can think of a few possible reasons:
Researchers waste their time working on problems that are pointless or irrelevant to practitioners. Often true, I’m afraid, judging by the drop in industry involvement in academic conferences and by the topics explored in our most prestigious publications. Sometimes we’re mesmerized by a technically challenging puzzle, or self-deluded into thinking that since many research findings take decades to trickle down, we shouldn’t concern ourselves with applicability at all. And so we lock ourselves in an ivory tower, and the rest of the world just forgets we’re even there.
Practitioners expect something that researchers can't honestly provide: claims, for instance, that following recipe X will yield an 18% increase in performance for the average organization. There are so many variables, and context matters so much, that we simply can't offer precise, predigested statements like that.
Researchers and practitioners have different understandings of what counts as evidence. Some researchers (myself included, at one point) seem to think that a small experiment with a few students working on a toy example is enough to get the ball rolling. Others (myself, now) are convinced that for many software research problems the best (or even the only) way to tackle them is a rich qualitative study. Both camps fare poorly with many practitioners, who were educated to see carefully controlled physics experiments as the gold standard of science. To them, the small-experiment-with-students folks look like sloppy amateurs, and the rich-qualitative-study folks like postmodern poseurs who can't give a straight answer. Techie culture doesn't appreciate great sociological studies the way it appreciates great physics experiments; here we are at a disadvantage.
The academic forum isn’t the right place to connect. Academese is full of minutiae, and it reads like a foreign language to the rest of the world. But what is the right forum, then? Halfway publications, like IEEE Software, are better, but still seem fairly detached from the real world in many ways. Trade magazines are moribund. Books are a possibility. As for more modern media, I don’t know of any blogs dedicated to disseminating software research findings the way Mark Guzdial’s blog does for CS education, but that seems like an attractive alternative.
Habit is a powerful force. Smokers know that smoking kills, but they keep at it. Once we find a route that takes us where we want to go reasonably quickly, we become unwilling to try alternatives. This inertia is even stronger at the organizational level. And so, even if we discover better ways to organize and develop software, asking many professionals and teams to change their ways is too much: what they’re doing is good enough, and our proposals aren’t worth the risk.
What do you think? How can we fix this? Or, alternatively, have you actually changed your ways after reading an academic paper? How did it happen?