Publishing into the void

A few days ago, Greg Wilson tweeted:

When is the last time you read something in an ACM or IEEE journal that changed how you program or the tools you use? Ever?

The answer is never, in my case, and I suspect it is the same for many others. Though I’m really curious to hear about cases where reading an academic article on software development brought about a tangible change in a developer’s behaviour, I don’t think they’re at all commonplace.

For all of us tasked by society with the goal of improving the effectiveness of software development organizations through research (if you’re a software engineering researcher, that’s you), this should be something to be ashamed of. Why do we have this disconnect? I can think of a few possible reasons:

Researchers waste their time working on problems that are pointless or irrelevant to practitioners. Often true, I’m afraid, judging by the drop in industry involvement in academic conferences and by the topics explored in our most prestigious publications. Sometimes we’re mesmerized by a technically challenging puzzle, or self-deluded into thinking that since many research findings take decades to trickle down, we shouldn’t concern ourselves with applicability at all. And so we lock ourselves in an ivory tower, and the rest of the world just forgets we’re even there.

Practitioners are expecting something that researchers can’t honestly provide. For instance, claims that following recipe X will yield an 18% increase in performance for the average organization. There are so many variables, and context is so important, that we just can’t offer precise predigested statements like that.

Researchers and practitioners have different understandings of what counts as evidence. Some researchers (myself included, at one point) seem to think that a small experiment with a few students working on a toy example is enough to get the ball rolling. Others (myself, now) are convinced that for many software research problems the best (or even the only) way to tackle them is a rich qualitative study. Both of these camps fare poorly with many practitioners, who were educated to see carefully controlled physics experiments as the gold standard of science. To them, the small-experiment-with-students folks look like sloppy amateurs, and the rich-qualitative-study folks like postmodern poseurs who can’t give a straight answer. The techie culture doesn’t appreciate great sociological studies the way it appreciates great physics experiments; here we are at a disadvantage.

The academic forum isn’t the right place to connect. Academese is full of minutiae, and it reads like a foreign language to the rest of the world. But what is the right forum, then? Halfway publications, like IEEE Software, are better, but still seem fairly detached from the real world in many ways. Trade magazines are moribund. Books are a possibility. As for more modern media, I don’t know of any blogs dedicated to disseminating software research findings the way Mark Guzdial’s does for CS education, but this is an attractive alternative.

Habit is a powerful force. Smokers know that smoking kills, but they keep at it. Once we find a route that will take us where we want to go reasonably quickly, we become unwilling to try out alternatives. This inertia is even stronger at an organizational level. And so, even if we discover better ways to organize and develop software, changing their ways is too much to ask of many professionals and teams: what they’re doing is good enough, and our proposals not worth the risk.

What do you think? How can we fix this? Or, alternatively, have you actually changed your ways after reading an academic paper? How did it happen?

    About Jorge Aranda

    I'm currently a Postdoctoral Fellow at the SEGAL and CHISEL labs in the Department of Computer Science of the University of Victoria.

    15 Responses to Publishing into the void

    1. An alternative is that the question is based on a mistaken idea about the relationship between software research and software practice. I wouldn’t expect leading edge SE research to lead to immediate changes in an individual programmer’s tools or practices.

      Studies of diffusion of innovation show a time lag of anywhere between 5 and 15 years for new research (in various fields) to transition into practical applications, and even longer to become something that’s commercially profitable. Plus a significant fraction (some say 9/10) of research never makes it that far; research is inherently risky, and most of it turns out not to be a useful path. Unfortunately you can’t get the 1/10 that does pay off without doing all the rest, because you can’t know in advance which ideas will work.

      I’ve seen no reason to think that software research should be any different from other research fields in this respect.

      • Jorge Aranda says:

        Thanks, Steve.

        That time lag, as I understand it, should only be a lag in the most abstract way. That is, an idea comes up, and if it shows promise, then after a lot of hard work (and further publications, tools, dead-ends, etc.) it begins to gain adoption, until it is in widespread use. Between inception and adoption there is (or there should be) no silence. Some of the stuff we publish should be stuff that is closer to the end of that cycle, and so I think it’s fair to demand that even a tiny fraction of it should lead directly to useful changes. The question is whether this happens at all.

        I’m OK with the idea that academic journals and conferences are not the right channel for this—that they’re not the right forum to connect, as I say above. But is there one in place? If so, what is it?

    2. There is a recent interview with Linus Torvalds, where he amusingly states:

      [….] It started out as a patch from a PhD student that did something totally different (it was designed to do file namespace replication over multiple machines) and was really a classic university research project – pretty odd, rather hacky, and not really useful in general. And I just got really excited about the patch, and took the crazy idea and made it do things it was never really meant to do. And these days our whole filename cache is based on that, and the original reason for it is just an obscure detail and was never actually used by anybody afaik

      On the other hand, it depends on where you put the incentives. I think researchers are evaluated by papers published, not by their impact on industry. Hence, once the paper is published, their work is pretty much done. For instance, in Free/Libre/Open Source Software (FLOSS) projects, how and when have researchers given something back after studying FLOSS communities? They get in touch with a community to study how it works, but they don’t report the results back or even send a pointer to the paper. It would be much better to present the results at a FLOSS conference, where the feedback could be far richer, but that is also harder.

      • Re-reading it, my comment sounds a bit rude, but it was not my intention.

      • Jorge Aranda says:

        Your comment doesn’t come off as rude at all, Germán, and I think many of us (I’ll admit being guilty of this several times already) don’t really follow up with our research participants the way we should to bring about positive changes. I think, like you, that it has a lot to do with our incentive structure: once we get a publication, we move on.

    3. I’ve sometimes wondered if other engineering fields require a research project to identify research impact the way that software engineering apparently does.

    4. Neil says:

      Greg’s comment presupposes that practitioners are changing their practice in the first place. Like Steve says, diffusion research suggests this change is difficult to bring about, even in the face of overwhelming evidence. How much more research do we really need to accept that anthropogenic global warming (AGW) is happening? I’m not convinced the problem lies entirely with the research community.

      It would be interesting to take a “best practice” and study how many developers use it. Like TDD. Is it accepted as a best practice? How did it get disseminated? How many developers are using it the way it should be used?

      • Jorge Aranda says:

        I’m actually convinced that the problem *does not* lie entirely with the research community—but that shouldn’t stop us from trying to fix it.

        I like your research question. I suspect that people mean very different things when they talk about, for instance, TDD. The description of the practice is actually loose enough to allow for several interpretations. Part of the enchantment or disenchantment with such practices may be caused by these differences in meaning.

    5. I’ve been a “practitioner” in my own limited way for 10 years (*cough* *cough*, Habit is a powerful force…. uhhh…. guilty!). We certainly changed our practice! Where I worked, there were at least 4 or 5 major shifts in internal practices over that 10-year period:

      2000: We need to use XML.

      2002: We need to use Aspect-Oriented programming. And SOAP. And we should be refactoring.

      2004: We need to be Agile.

      2006: We need to use Ruby-On-Rails. And Spring.

      2008: We need to use scripting languages (Groovy, JRuby, Jython). Static typing is passé.

      So for example, in 2006 the whole team goes to an industry conference from Friday to Sunday. When we come back to work on Monday, all the rock stars spend the next 3 to 12 months programming Ruby-On-Rails, while the supporting cast & crew stick to their usual copy/paste/comment-out/never-delete paranoid, cautious Java-style development. These shifts in culture were also used to torture potential hires during the job interview process: “Describe a time you refactored your inversion-of-control through a REST web service…”

      Sometimes these fads do result in real improvements to our internal software engineering (everyone seems pretty unanimous about TDD inside the company), but it seems accidental to me. I cannot help but think of Mackay’s “Extraordinary Popular Delusions and the Madness of Crowds.” Cynically, I think the greatest impact in software engineering is due to people like these seventeen trying to sell more books: http://agilemanifesto.org/. At least in the Java / Python / Ruby world! Seriously, what proportion of the annual income of Fowler, Beck, Gamma, or Dave Thomas comes from direct software development, and what proportion comes from book sales (and related speaking tours)? If I were in their shoes, I’d focus on writing & selling more books. Improving software would be a very secondary concern.

      If SE research wants to have significant and immediate impact, it has to get in line and buy some column-inches just like all the others. The developers have limited bandwidth to consider options, alternatives, and new ways of doing things. It’s a classic marketing problem. And you’re up against incumbents who have real financial incentives to keep the attention to themselves.

      If the SE field is serious about impact, SE researchers should try to better leverage their one inherent marketing advantage: direct contact with the undergrads, masters, and doctoral students who will soon be decision makers in their own careers. But be careful what you wish for! I have nightmares where everyone only programs Aspects, where un-aspected code becomes a rare and precious commodity, mined from the raw-Java-Sands of Athabasca….. :-p

      • Jorge Aranda says:

        Yeah, the signatories of the Agile Manifesto probably have had a greater impact on software engineering in the last decade than the research community, and for the good, I think, having seen their proposals carried out in practice. But I’m not sure this is a marketing problem, as I don’t see a need to compete with them: there is plenty of room for empirical evaluation, tweaking, corroboration, and refutation of their ideas. So far there’s not much of that, though.

        • Laurent Bossavit says:

          I have a bibliography of about 130 empirical papers related to various Agile practices, of which 50 are on TDD and 30 on pair programming. Are you perhaps underestimating the amount of attention given by researchers to Agile ideas?

          Granted, this attention is somewhat dispersed. There isn’t (yet) a journal specializing in research on Agile ideas, and what papers there are in this area suffer from a lack of dissemination because ACM, IEEE, and Springer lock them behind paywalls, to be forgotten by everyone except academics…

        • Jorge Aranda says:

          Well, if there are 50 (good quality) empirical studies on TDD and 30 on pair programming, then yes, maybe I’m underestimating the attention researchers are giving to these topics! Still, I assure you even this is a tiny fraction of our community’s output and emphasis.
