Cuevano

I have just realized that the last post in this blog is from a while ago, when I was looking for a job, before I left my university position. I’ve happily moved on, and my blogging, though currently infrequent, has also moved on—to this site.

Posted in General | 1 Comment

Looking for a job

My time at the University of Victoria will be over in a few months, so I’m beginning to look for a job. More details here.

Posted in Uncategorized | Leave a comment

Migrating my personal blog

Although most of my blogging time lately has gone into the Never Work in Theory blog, I may keep posting here about software development research that doesn’t fit there. Meanwhile, I’ve moved my other blog, which also used to live on wordpress.com, to its own domain: you can now find it at http://cuevano.ca. I’ll be posting there about personal news and thoughts, so please update your bookmarks/subscriptions/feeds if you want to keep up to date!

Posted in General | 2 Comments

Empirical Software Engineering at American Scientist

(Crossposted from Never Work in Theory.)

A feature article on recent developments in empirical software engineering, written by Greg Wilson and me, has just been published in the November-December issue of American Scientist. The electronic version is available here. Thanks to Morgan Ryan, our editor at American Scientist, for all his help in preparing this piece!

Posted in Academia, Software development | Leave a comment

The IROP paper

If you keep track of recent developments in empirical software engineering, you may have already heard of the fantastic IROP study. I was too busy writing a paper to blog about it when Andreas Zeller presented it at PROMISE 2011, but here I go, in case you haven’t read it.

Basically, Zeller, Thomas Zimmermann, and Christian Bird did what I’m afraid some researchers in our field do on a regular basis: take some mining tools and some data, go nuts with them, and abuse them in the most absurd ways imaginable. Luckily, Zeller, Zimmermann, and Bird did it on purpose, as a parody.

Here’s what they did: take Eclipse data on code and errors, and correlate the two to find good predictors of bugs. Sounds sensible. But they did the correlation at the ASCII character level. It turns out that, for Eclipse 3.0, the characters most highly correlated with errors are the letters ‘i’, ‘r’, ‘o’, and ‘p’. What is a sensible researcher to do when faced with these findings? Well, take those letters out of the keyboard, of course! Problem solved:

The IROP keyboard
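
(For the curious, here is a minimal sketch, in Python and with entirely hypothetical inputs, of the kind of analysis the parody runs: count how often each letter appears in each file, correlate those frequencies with each file’s defect count, and rank the letters. None of this comes from the actual paper; it only illustrates how mechanically a mining pipeline can manufacture an absurd “finding”.)

```python
# Sketch of character-level bug "prediction" (hypothetical data, not the authors' code).
# Each input pair is (source_text, bug_count) for one file.
from collections import Counter
from string import ascii_lowercase

from scipy.stats import pearsonr


def character_bug_correlations(files):
    """Rank letters by how strongly their frequency correlates with bugginess."""
    bug_counts = [bugs for _text, bugs in files]
    ranking = {}
    for letter in ascii_lowercase:
        # Relative frequency of this letter in each file.
        freqs = [Counter(text.lower())[letter] / max(len(text), 1)
                 for text, _bugs in files]
        if len(set(freqs)) <= 1:
            continue  # no variation across files; correlation is undefined
        r, _p = pearsonr(freqs, bug_counts)
        ranking[letter] = r
    # Highest correlation first; in the parody, 'i', 'r', 'o', and 'p' top the list.
    return sorted(ranking.items(), key=lambda kv: kv[1], reverse=True)
```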

They then go over a supposed validation study, thoroughly half-baked, with three interns who reported great success in adapting to life without ‘i’, ‘r’, ‘o’, and ‘p’ on their keyboards. Sample feedback from the trial:

We can shun these set majuscules, and the text stays just as swell as antecedently. Let us just ban them!

Near the end, the authors go over everything that’s wrong with their approach (lack of theoretical grounding, dishonest use of statistics, and a long et cetera). It’s a fun read, and instructive. Research, in general, needs more parodies. If you like this one, some of my other favourites are:

Posted in Academia, Software development | 6 Comments

Over at “Never Work in Theory”…

Just a reminder: over at the Never Work in Theory blog, we’ve already got a couple dozen papers with empirical findings that (we think) are relevant to software practice. They’re beginning to cover a wide area: from parallel programming to teamwork dynamics to requirements prioritization to organizational structure. If you think of good and interesting papers that we haven’t discussed yet, drop me a note!

Posted in General | 4 Comments

Greg Wilson at the University of Victoria

Greg Wilson, editor of Beautiful Code, Making Software, and The Architecture of Open Source Applications, author of several other books, creator of the Software Carpentry project, and all-around great guy, will be giving a presentation at the University of Victoria tomorrow (July 7th) at 1pm: “What I learned in seven and a half years of being a professor that might be useful for those of you contemplating academic careers”. Room ECS 660. My guess is that you’ll get a lot out of it if you’re at all connected to software development, even if you’re not contemplating an academic career. All welcome!

Posted in Academia, Software development | Leave a comment

Announcing “It will never work in theory”

As you may know, a few colleagues and I have been trying to find ways to close the gap between software development research and practice. We believe that in recent years there has been much research that practitioners would find sound and useful, if they knew it existed and how to interpret it. We also believe that many researchers would benefit from learning from practitioners, and negotiating with them, which questions are important in real life and which parts of their proposals do not ring true in practical experience.

One of the problems we found is that there are few good channels to communicate and discuss interesting software engineering research news. This is particularly so in the electronic media space: although a couple handfuls of researchers have their own blogs, they tend to discuss their own work in them, not to provide pointers to interesting stuff happening at other labs and in other areas of the field. Talking about this state of affairs, Greg Wilson pointed out that we really don’t have something like Lambda the Ultimate for software development, and that we really should.

And so we created It will never work in theory, a new software development research blog. As we explain in the introductory post, we want this blog to be a bridge between research and practice. To begin with, it’s modeled after LtU: we’ll be posting abstracts or excerpts from academic papers that are relevant in practice, and we hope that, eventually, a mixed community of researchers and software professionals will grow to discuss them. We’ll see how it evolves. So if you’re interested in the topic, please visit the site, subscribe, and join the conversation. And if you think of good material that we should cover (note: preferably not your own work! 😉 ), please send it my way.

Posted in Academia, Community, Software development | Leave a comment

ICSE 2011 Panel – slides and recap

(Updated August 7th to include David Weiss’ slides)

Here are the slides from the panelists of our “What Industry Wants from Research” ICSE 2011 panel who gave us their permission to share them.

Lionel Briand:

Peri Tarr:

Tatsuhiro Nishioka:

Wolfram Schulte:

David Weiss:

By all indications this was a useful, popular, and thought-provoking panel, and I’m glad it turned out the way it did. A few notes about it. First, our perceptions that software development research and practice are disconnected, and that this is a bad thing, were shared and unchallenged across the board. There seem to be a lot of people who are concerned about this problem and who want to do something about it.

Second, the panel naturally turned into a conversation about what researchers can do to overcome this problem. On this there were many good pointers, but I found Peri Tarr’s perspective the most enlightening: connecting research and practice is not just a matter of sharing research results or of listening to practitioners to understand their problems; it is about building trust in the research-practice partnership. This is especially true (though she did not say this) in a field like ours, where trust is so badly damaged. But she also pointed out that, given the way our academic system is set up, following through with this advice may hurt a researcher’s academic career prospects.

Third, if we were to do this again, the one thing I would change is that I would try to ensure more organizational diversity; to represent the open source perspective, for instance, or scientific software development. During the panel, “software industry” drifted somewhat into “software business,” which is fortunately still not quite the right characterization of software practice out there.

Fourth, a few calls for better measurements and quantitative data arose from the panel, just as they did from our interviews. For those of us convinced that plain numbers alone cannot account for some of the subtleties of software development, there is a serious question here: how can we overcome this epistemological barrier?

Posted in Academia, Software development | 9 Comments

How do practitioners perceive software engineering research?

(Note: this is cross-posted at Margaret-Anne Storey’s blog and at Greg Wilson’s blog, but please post your thoughts here, on my blog. This post is based on the work of its coauthors, Jorge Aranda and Margaret-Anne (Peggy) Storey, as well as of Daniela Damian, Marian Petre, and Greg Wilson.)

Listening to software professionals over the past few years, we sometimes get the impression that software development research began and ended with Fred Brooks’ case study of the development of the IBM 360 operating system, summarized in “The Mythical Man-Month,” and with his often-quoted quip that adding people to a late project only makes it later. Now and then, mentions of Jerry Weinberg (on ego-less programming) and of DeMarco and Lister (on how developers are more productive if they’re given individual offices) pop up, and for the most part, it seems as if the extent of what software development academics have to offer to practitioners is a short list of folk sayings tenuously validated by empirical evidence. The fact that Brooks, Weinberg, DeMarco, and Lister are not academics — or were not at the time of these contributions, as in the case of Brooks — only makes the academic offerings look worse.

And yet, the software development academic community is quite large and increasingly empirical. The International Conference on Software Engineering (ICSE), its most important gathering, consistently draws a crowd of over a thousand researchers. Researchers mine software repositories, perform insightful ethnographic studies, and build sophisticated tools to help development teams become more efficient. Many researchers, from junior Masters students to tenured professors, jump at the opportunity to study and help software organizations. In other words, there is a significant academic offering of results on display. But if we look at the list of ICSE attendees, we discover that industrial participation is very low (less than 20% last year), and there seems to be very little dissemination of scientific findings overall. What is going on? Are we wasting our time studying problems that practitioners do not care about? Or do we have a communication problem? Are practitioners expecting help with intractable problems? And most importantly, how can we change this situation?

To explore these questions, we decided to interview leading practitioners. Over the past few months, we talked to CEOs, senior architects, managers, and creators of organizations and products most of us would recognize and use. We asked them to tell us their perceptions of our field and how they think we could improve our relationship with them. One outcome of these interviews was the organization of a panel at ICSE, where people who straddle the line between research and practice will use insights from these interviews as a starting point to discuss the apparent industry-research gap.

We are still thinking about how to disseminate the observations from our ongoing interviews. For now, we want to broadcast some of the most important points from our conversations here, in blog post format, hoping to give them as much exposure as possible.

Perceptions of software research

For those of us venturing out of the ivory tower to do empirical research, it shouldn’t be a surprise that many practitioners have a general disregard for software development academics. Some think our field is dated, and biased toward large organizations and huge projects. Others feel that we spend too much time on toy problems that do not scale and that, as a result, have little applicability to real, complex software projects. Most of our interviewees felt that our research goals are unappealing or simply not useful. This was one of the strongest threads in our conversations: one person told us that our field is this “fuzzy stuff at a distance that doesn’t seem to affect [him] much;” another, that we ignore the real cutting-edge problems that organizations face today; and one more, a senior architect about to make the switch to academia himself, gave a rather scathing critique of the field.

“[I’m afraid] that industrial software engineers will think that I’m now doing academic software engineering and then not listen to me. (…) if I start talking to them and claim that I’m doing software engineering research, after they stop laughing, they’re gonna stop listening to me. Because it’s been so long since anything actually relevant to what practitioners do has come out of that environment, or at least the percentage of things that are useful that come out of that environment is so small.”

Part of the problem seems to be that we have only been able to offer professionals piecemeal improvements. Software development is essentially a design problem, a wicked problem, and it is not amenable to silver bullets (as, ahem, Fred Brooks argued convincingly decades ago). But the immaturity and difficulty of software development still make it a prime domain for the presence and profit of snake oil salesmen — people that are not afraid to advertise their miraculous formulas, grab the money and run. Honest academics, reporting improvements of 10% or 20% for a limited domain and under several constraints, have a hard time being heard above the noise.

Difficulty in applying our findings

The problem with piecemeal improvements has another angle: many professionals can’t be bothered to change their processes and practices for gains as small as 10% or 20%, since overcoming their organizational inertia and taking on significant risk may cost more than the benefits they’d accrue.

“(…) it would depend in part of how cumbersome your techniques are; how much retraining I’m going to have to do on my staff. (…) I might decide that even if you’re legit and you actually do come up with 15%, that that’s not enough to justify it.”

This puts us in a bit of a quandary, as we’re extremely unlikely to come up with any technique that will guarantee a considerable improvement for software organizations. At the same time, they’re extremely unlikely to adopt anything that doesn’t guarantee substantial improvements or that requires them to change their routines significantly. There are, however, a few ways out of this problem. One is to propose lightweight, low-risk techniques. Another is to aim for organizational change at the periphery, in pilot projects, rather than at the core, hoping that the change will be appealing enough to spread through the organization. But it’s an uphill battle nonetheless.

What counts as evidence?

Another, perhaps bigger problem lies in the perception of what counts as valid scientific evidence. For better or worse, software developers have an engineering mindset, and have an idea of science as the calm and reasoned voice of hard data among the cackling of anecdote. The distinction between hard data and anecdote is binary, and hard data, according to most of our interviewees, is quantitative data; anything else is anecdote and should be dismissed.

“without measurements you can’t… it’s all too wishy-washy to be adopted.”

“managers are coin operated in some sense. If you can’t quantify it in terms of time or in terms of money, it doesn’t make much difference to them. (…) I think there does need to be some notion of a numeric or at least an objective measure.”

“So when you’re gonna tell me that I’m wrong, which is a good thing, you know you gotta have that extra ‘yeah, we ran these groups on parallel and guess what, here are the numbers'”

Why is this a problem? Because over the years, we as a community have come to realize that many of the really important software development problems are not amenable to study with controlled experiments or with (exclusively) quantitative data. Ethnographies, case studies, mixed-method studies, and other approaches can be as rigorous as controlled experiments, and for many of the questions that matter they can be more insightful, but they don’t have the persuasive aura of a string of numbers or a p value. Faced with this perception, we have two choices: first, give practitioners what they (think they) want, controlled experiments to the exclusion of everything else (never mind that these often cannot actually answer the questions that matter to professionals in a scientifically sound manner); or second, push for better dissemination of our results and methods, making the argument that there’s more to science than trial runs and statistical significance, and helping practitioners distinguish between good and bad science, whatever its methods of choice.

Dissemination of results

Although it was clear from talking to our interviewees that the dissemination of scientific results is almost non-existent, this seems to be a problem we can address more easily than the others. Of course, as our interviewees reminded us, presenting research findings to non-academics is difficult: you need to be a good storyteller, and you need passion, clear data, and a strong underlying argument. Still, to some extent this is feasible.

In any case, it became evident that academic journals and conferences are not the right venues to reach software professionals overall. Blog posts may help communicate some findings (but it is hard to be heard above the noise), and books could help too (especially if you have Brooks’ writing abilities). Another alternative is intermediate journals and magazines, like IEEE Software and ACM Queue. One interviewee suggested that we should be visiting industry conferences way more often; when a researcher ventures into an industry conference with interesting data, it does seem to generate excitement and good discussions, at the very least.

Areas of interest

We asked our interviewees what questions we should focus on; that is, what problems they struggle with on a frequent basis that researchers might tackle on their behalf. A few themes arose from their lists of potential problems:

  • Developer issues were very common. These include identifying wasteful use of developer time, keeping older engineers up to date with a changing landscape (an interesting riff on the rather popular research question of bringing new engineers up to speed with the current organizational landscape), identifying productive programmers and efficient ways to assemble teams, overcoming challenges of distributed software development, achieving better effort prediction, learning to do parallel programming well, and identifying mechanisms to spread knowledge of the code more uniformly throughout the organization.
  • Evaluation issues also arose frequently. Essentially, these consist of having academia perform the role of fact checker or auditor of proposals that arise from consultants, opinion leaders, and other influential folks in the software development culture. Many interviewees were curious to find out, for instance, to what extent agile development works as well as its evangelists claim, but their curiosity also extends to other processes, techniques, and tools.
  • Design issues came up as well. One in particular seemed interesting: figuring out why some ideas within a project die after a lot of effort has been spent on them. This could lead to techniques for identifying, early on, ideas that are probably doomed to failure, so that the team can minimize the resources spent on them.
  • Tool issues were rather popular, and for many of the tools that our interviewees mentioned there is already some good work from our community that can hopefully be turned into tools the mainstream will successfully adopt. Our interviewees were interested in tools that would warn a developer about to enter a conflicting area of the code, in good open source static analysis tools, in test suite analytics, and in live program analysis tools that scale well.
  • Code issues, though less common, were interesting as well. In particular: studying and helping to deal with the blurred line between project code and configuration code (and treating configuration code with the same care and level of tool-set sophistication that we give to project code), and providing a better foundation for higher-level abstractions such as modeling languages.
  • User issues arose more frequently than they seem to in our academic literature. Several of our interviewees wanted to bring user experience to the forefront, and some were concerned that software development skill and user experience gut instinct were rarely found in sufficient quantities in the same professional. One of them wanted to bring the kind of mining techniques that we use to analyze software repositories into an analysis of customer service audio and email data.

So, as you can see, our interviewees brought up plenty of interesting research questions. Some of these questions are more approachable than others; some have already been addressed numerous times in research and are therefore now mainly in need of better dissemination of findings.

In summary, the managers, creators, and architects we interviewed confirmed our fear that the academic software research community is extremely disconnected from software practice. This seems to be partly our fault (we often do not work on the issues that practitioners worry about, and we rarely reach out to them purposefully), and partly due to a misconception of what it means to do science and what counts as valid evidence in our domain.

We hope to further explore these initial insights from industry at our upcoming panel at ICSE. We have sought panelists who straddle the line between research and practice to share their perspectives on what compelling evidence would look like to industry and what they consider the important questions for academia, to suggest how we can more effectively disseminate results, and to suggest how we can engage in productive collaborative research that benefits both sides. In the meantime, we welcome your comments on this post! And stay tuned, as we will follow up with a summary of the discussion from the ICSE panel.

Update: I forgot to add a reminder for new readers that we cover some ground on these issues in the “Making Software” book, which summarizes some of the things we know about software development and why we think they are true. Our interviews are a follow-up to that work.

Posted in Academia, Software development | 32 Comments