ICSE 2011 Panel – slides and recap

(Updated August 7th to include David Weiss’ slides)

Here are the slides from the five participants of our “What Industry Wants from Research” ICSE 2011 panel who gave us permission to share them.

Lionel Briand:

Peri Tarr:

Tatsuhiro Nishioka:

Wolfram Schulte:

David Weiss:

By all indications this was a useful, popular, and thought-provoking panel, and I’m glad it turned out the way it did. A few notes about it. First, our perceptions that software development research and practice are disconnected, and that this is a bad thing, were shared and unchallenged across the board. There seem to be a lot of people who are concerned by this problem and who want to do something about it.

Second, the panel naturally turned into a conversation about what researchers can do to overcome this problem. On this there were many good pointers, but I found Peri Tarr’s perspective most enlightening: connecting research and practice is not just a matter of sharing research results or of listening to practitioners to understand their problems; it is about building trust in the research-practice partnership. This is especially true (though she did not say this) in a field like ours, where trust is so badly damaged. But she also pointed out that, given the way our academic system is set up, following through with this advice may hurt a researcher’s academic career prospects.

Third, if we were to do this again, the one thing I would change is to ensure more organizational diversity; to represent the open source perspective, for instance, or scientific software development. During the panel, “software industry” drifted somewhat into “software business,” which is fortunately still not quite the right characterization of software practice out there.

Fourth, a few calls for better measurements and quantitative data arose from the panel, just as they did from our interviews. For those of us convinced of the inadequacy of plain numbers to account for some of the subtleties of software development on their own, there is a serious question here: how can we overcome this epistemological barrier?

About Jorge Aranda

I'm currently a Postdoctoral Fellow at the SEGAL and CHISEL labs in the Department of Computer Science of the University of Victoria.

9 Responses to ICSE 2011 Panel – slides and recap

  1. BadInfluence says:

    Hi Jorge, thanks for the interesting post.
    Can you say a bit more about these “calls for better measurements and quantitative data” that keep coming up? What kinds of measurements and experiments is industry calling for? I think it’s wrong to dismiss them with the argument that they don’t understand qualitative research methods. For example, case studies are usually very readable, whereas you need to know at least some statistics to understand an experiment. Could it be more of a problem of generalisability?
    Howell

    • Jorge Aranda says:

      They would like it (and who wouldn’t?) if we reported that implementing some practice or some tool will yield a net gain of X percent in productivity, for instance. But no research method can provide this kind of finding, at least not at this point.

      That qualitative studies do not generalize well is a common misconception. Their generalization challenges are comparable to those of quantitative studies, even if people unfamiliar with them do not perceive them as such—hence the epistemological barrier I mention above.

      • BadInfluence says:

        Hi Jorge, thanks for your reply. Ah, if only I could report a context-independent, domain-independent, reproducible gain of X percent in anything! You make some interesting points about the generalisation challenges of different research methods – do you have any references on the topic?

      • Jorge Aranda says:

        Hi Howell,

        I can give you a couple of references regarding case studies (my usual method of choice). The standard one is Robert K. Yin’s book, “Case Study Research,” which does a pretty good job of addressing this misconception by pointing out the difference between generalization to a theory and statistical generalization from a sample to a population (an experiment allows you to do the latter, but you still need to do the former no matter your choice of method). The second reference is Flyvbjerg’s “Five Misunderstandings About Case Study Research”.

        As Yin’s book and Flyvbjerg’s paper point out, there are plenty of well-known examples of generalizations brought about by case studies (which, I should note, include both quantitative and qualitative data when feasible). But I don’t think either of them discusses my favourite example of tremendously generalizing and tremendously powerful qualitative work: Darwin’s “The Origin of Species.”

  2. Pingback: Announcing “It will never work in theory” | Catenary

  3. Pingback: I have a research solution. Now I just need to find a real problem for it | My Research Rants

  4. Pingback: ICSE 2011 Panel on “What Industry Wants from Research” « Margaret-Anne Storey

  5. Pingback: Lucky industry girl finds fulfillment in academia | The CHISEL Group, University of Victoria, BC