How do practitioners perceive software engineering research?

(Note: this is cross-posted at Margaret-Anne Storey’s blog and at Greg Wilson’s blog, but please post your thoughts here, on my blog. This post is based on the work of its coauthors, Jorge Aranda and Margaret-Anne (Peggy) Storey, as well as of Daniela Damian, Marian Petre, and Greg Wilson.)

Listening to software professionals over the past few years, we sometimes get the impression that software development research began and ended with Fred Brooks’ case study of the development of the IBM 360 operating system, summarized in “The Mythical Man-Month,” and with his often-quoted quip that adding people to a late project only makes it later. Now and then, mentions of Jerry Weinberg (on ego-less programming) and of DeMarco and Lister (on how developers are more productive if they’re given individual offices) pop up, and for the most part, it seems as if the extent of what software development academics have to offer to practitioners is a short list of folk sayings tenuously validated by empirical evidence. The fact that Brooks, Weinberg, DeMarco, and Lister are not academics — or were not at the time of these contributions, as in the case of Brooks — only makes the academic offerings look worse.

And yet, the software development academic community is considerable in size and increasingly empirical. The International Conference on Software Engineering (ICSE), its most important gathering, consistently draws a crowd of over a thousand researchers. Researchers mine software repositories, they perform insightful ethnographic studies, and they build sophisticated tools to help development teams become more efficient. Many researchers, from junior Masters students to tenured professors, jump at the opportunity to study and help software organizations. In other words, there is a significant academic offering of results on display. But if we look at the list of ICSE attendees, we discover that industrial participation is very low (less than 20% last year), and there seems to be very little dissemination of scientific findings overall. What is going on? Are we wasting our time studying problems that practitioners do not care about? Or do we have a communication problem? Are practitioners expecting help with intractable problems? And most importantly, how can we change this situation?

To explore these questions, we decided to interview leading practitioners. Over the past few months, we talked to CEOs, senior architects, managers, and creators of organizations and products most of us would recognize and use. We asked them to tell us their perceptions of our field and how they think we could improve our relationships with them. One outcome of these interviews was the organization of a panel at ICSE, where people who straddle the line between research and practice will use insights from these interviews as a starting point to discuss the apparent industry-research gap.

We are still thinking about how to disseminate the observations from our ongoing interviews. For now, we want to broadcast some of the most important points from those conversations here, in blog post format, hoping to give them as much exposure as possible.

Perceptions of software research

For those of us venturing out of the ivory tower to do empirical research, it shouldn’t be a surprise that many practitioners have a general disregard for software development academics. Some think our field is dated, and biased toward large organizations and huge projects. Others feel that we spend too much time with toy problems that do not scale, and as a result, have little applicability in real and complex software projects. Most of our interviewees felt that our research goals are unappealing or simply not useful. This was one of the strongest threads in our conversations: one person told us that our field is this “fuzzy stuff at a distance that doesn’t seem to affect [him] much,” another, that we ignore the real cutting-edge problems that organizations face today, and one more, a senior architect about to make the switch to academia himself, gave a rather scathing critique of the field.

“[I’m afraid] that industrial software engineers will think that I’m now doing academic software engineering and then not listen to me. (…) if I start talking to them and claim that I’m doing software engineering research, after they stop laughing, they’re gonna stop listening to me. Because it’s been so long since anything actually relevant to what practitioners do has come out of that environment, or at least the percentage of things that are useful that come out of that environment is so small.”

Part of the problem seems to be that we have only been able to offer professionals piecemeal improvements. Software development is essentially a design problem, a wicked problem, and it is not amenable to silver bullets (as, ahem, Fred Brooks argued convincingly decades ago). But the immaturity and difficulty of software development still make it a prime domain for the presence and profit of snake oil salesmen — people who are not afraid to advertise their miraculous formulas, grab the money, and run. Honest academics, reporting improvements of 10% or 20% for a limited domain and under several constraints, have a hard time being heard above the noise.

Difficulty in applying our findings

The problem with piecemeal improvements has another angle: many professionals can’t be bothered to change their processes and practices for gains as small as 10% or 20%, since overcoming their organizational inertia and forcing themselves to incur significant risks may be more costly than the benefits they’d accrue.

“(…) it would depend in part of how cumbersome your techniques are; how much retraining I’m going to have to do on my staff. (…) I might decide that even if you’re legit and you actually do come up with 15%, that that’s not enough to justify it.”

This puts us in a bit of a quandary, as we’re extremely unlikely to come up with any technique that will guarantee a considerable improvement for software organizations. At the same time, organizations are extremely unlikely to adopt anything that doesn’t guarantee substantial improvements or that requires them to change their routines significantly. However, there are a few ways out of this problem. One of them is to propose lightweight, low-risk techniques. Another is to aim for organizational change at the periphery, in pilot projects, rather than at the core, hoping that the change will be appealing enough that it will spread through the organization. But it’s an uphill battle nonetheless.

What counts as evidence?

Another, perhaps bigger problem lies in the perception of what counts as valid scientific evidence. For better or worse, software developers have an engineering mindset, and have an idea of science as the calm and reasoned voice of hard data among the cackling of anecdote. The distinction between hard data and anecdote is binary, and hard data, according to most of our interviewees, is quantitative data; anything else is anecdote and should be dismissed.

“without measurements you can’t… it’s all too wishy-washy to be adopted.”

“managers are coin operated in some sense. If you can’t quantify it in terms of time or in terms of money, it doesn’t make much difference to them. (…) I think there does need to be some notion of a numeric or at least an objective measure.”

“So when you’re gonna tell me that I’m wrong, which is a good thing, you know you gotta have that extra ‘yeah, we ran these groups on parallel and guess what, here are the numbers'”

Why is this a problem? Because over the years, we as a community have come to realize that many of the really important software development problems are not amenable to study with controlled experiments or with (exclusively) quantitative data. Ethnographies, case studies, mixed-method studies, and others can be as rigorous as controlled experiments, and for many of the questions that matter, they can be more insightful — but they don’t have the persuasive aura of a string of numbers or a p value. Faced with this perception, we have two choices. The first is to give practitioners what they (think they) want: controlled experiments to the exclusion of everything else, never mind the fact that these often cannot answer, in a scientifically sound manner, the questions that matter to professionals. The second is to push for better dissemination of our results and methods, making the argument that there’s more to science than trial runs and statistical significance, and helping practitioners distinguish between good and bad science, whatever its methods of choice.

Dissemination of results

Although, from talking to our interviewees, it was clear that the dissemination of scientific results is almost non-existent, this seems to be a problem that we can address more easily than the others. Of course, presenting research findings to non-academics, as our interviewees reminded us, is difficult; you need to be a good storyteller, you need passion, clear data, and a strong underlying argument. To some extent, this is feasible.

In any case, it became evident that academic journals and conferences are not the right venues to reach software professionals overall. Blog posts may help communicate some findings (but it is hard to be heard above the noise), and books could help too (especially if you have Brooks’ writing abilities). Another alternative is intermediate journals and magazines, like IEEE Software and ACM Queue. One interviewee suggested that we should be visiting industry conferences way more often; when a researcher ventures into an industry conference with interesting data, it does seem to generate excitement and good discussions, at the very least.

Areas of interest

We asked our interviewees what questions we should focus on; that is, what problems they struggle with frequently that researchers might tackle on their behalf. A few themes arose from their lists of potential problems:

  • Developer issues were very common. These include identifying wasteful use of developer time, keeping older engineers up to date with a changing landscape (an interesting riff on the rather popular research question of bringing new engineers up to speed with the current organizational landscape), identifying productive programmers and efficient ways to assemble teams, overcoming challenges of distributed software development, achieving better effort prediction, learning to do parallel programming well, and identifying mechanisms to spread knowledge of the code more uniformly throughout the organization.
  • Evaluation issues also arose frequently. Essentially, these consist of having academia perform the role of fact checker or auditor of proposals that arise from consultants, opinion leaders, and other influential folks in the software development culture. Many interviewees were curious to find out, for instance, to what extent agile development works as well as its evangelists claim, but their curiosity also extends to other processes, techniques, and tools.
  • Design issues came up as well. One in particular seemed interesting: figuring out why some ideas within a project die after a lot of effort has been spent on them. This could lead to techniques that identify, early on, ideas that are probably doomed to failure, so that the team can minimize the resources spent on them.
  • Tool issues were rather popular, and for many of the tools our interviewees mentioned there is already good work from our community that can hopefully be turned into tools the mainstream will adopt. Our interviewees were interested in tools that would warn a developer about to enter a conflicting area of the code (a minimal sketch of this idea follows the list), in good open source static analysis tools, in test suite analytics, and in live program analysis tools that scale well.
  • Code issues, though less common, were interesting as well. In particular, studying and providing help in dealing with the blurred line between project code and configuration code (and treating configuration code with the same care and level of tool-set sophistication that we give to project code), and providing a better foundation for higher-level abstractions such as modeling languages.
  • User issues arose more frequently than they seem to in our academic literature. Several of our interviewees wanted to bring user experience to the forefront, and some were concerned that software development skill and user experience gut instinct were rarely found in sufficient quantities in the same professional. One of them wanted to bring the kind of mining techniques that we use to analyze software repositories into an analysis of customer service audio and email data.
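
To make the first of those tool requests concrete, here is a minimal sketch of a “conflict warning” check. It is our own illustration, not something an interviewee described or an existing tool from the literature: it simply mines recent git history and warns when a file you have modified locally was also touched recently by another author. The function names and the 14-day window are arbitrary choices for the example.

    # Sketch of a "conflict warning" helper: warn when files modified in the
    # working tree were also changed recently by other authors.
    import subprocess
    from collections import defaultdict

    def run_git(*args):
        """Run a git command in the current repository and return its stdout."""
        return subprocess.run(["git", *args], capture_output=True,
                              text=True, check=True).stdout

    def recently_touched(days=14):
        """Map each file path to the set of authors who changed it in the last `days` days."""
        log = run_git("log", f"--since={days} days ago", "--name-only",
                      "--pretty=format:AUTHOR:%an")
        authors_by_file = defaultdict(set)
        author = None
        for line in log.splitlines():
            if line.startswith("AUTHOR:"):
                author = line[len("AUTHOR:"):]
            elif line.strip():
                authors_by_file[line.strip()].add(author)
        return authors_by_file

    def warn_on_conflicts(me):
        """Print a warning for locally modified files recently changed by someone else."""
        modified = [l[3:] for l in run_git("status", "--porcelain").splitlines() if l]
        history = recently_touched()
        for path in modified:
            others = history.get(path, set()) - {me}
            if others:
                print(f"warning: {path} was recently changed by {', '.join(sorted(others))}")

    if __name__ == "__main__":
        warn_on_conflicts(me=run_git("config", "user.name").strip())

A real tool would of course have to look across branches and at changes that have not been pushed yet, and to reason about overlapping regions rather than whole files, which is precisely where the interesting research questions start.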

So as you can read, there were plenty of interesting research questions brought up by our interviewees. Some of these questions are more approachable than others; some have already been addressed numerous times in research and are therefore now in need of better dissemination of findings.

In summary, the managers, creators, and architects we interviewed confirmed our fear that the academic software research community is extremely disconnected from software practice. This seems to be partly our fault (we often do not work on the issues that practitioners worry about, and we rarely reach out to them purposefully), and partly due to a misconception of what it means to do science and what counts as valid evidence in our domain.

We hope to further explore these initial insights from industry at our upcoming panel at ICSE. We have sought panelists who straddle the line between research and practice to share their perspectives on what compelling evidence would look like to industry and what they consider the important questions for academia, and to suggest how we can disseminate results more effectively and engage in productive collaborative research that benefits both sides. In the meantime, we welcome your comments on this post! And stay tuned, as we will follow up to summarize the discussion from the ICSE panel.

Update: Forgot to add a reminder for new readers that we cover some ground on these issues in the “Making Software” book, which summarizes some of the things we know about software development and why we think they are true. Our interviews are a follow-up on that work.

About Jorge Aranda

I'm currently a Postdoctoral Fellow at the SEGAL and CHISEL labs in the Department of Computer Science of the University of Victoria.

32 Responses to How do practitioners perceive software engineering research?

  1. One thing I see often in other fields of research is that knowledge generated at universities often gets disseminated via start-ups. Say a PhD student comes up with a great solution for a problem and, after graduating, opens a start-up and brings that knowledge to the “real” world.
    I am not sure, but from what I have seen in my short time as a PhD student so far, there seems to be less of an entrepreneurial spirit that pushes students to go and change the world. One example that comes to mind where it worked is the tool Mylyn, which started out as research and is now brought to industry via a start-up founded around that very research-generated concept.

  2. Chris Parnin says:

    Nice summary of sanity. Part of the problem comes from the mismatch of engineering and scientific problems. Science likes to slice up problems to understand basic principles. This is a slow, but solid, way of building knowledge. The issue is that if your problem is moving faster, and with more moving parts, than “science” can study, then we’re wasting resources on solutions that will not have a bearing on society. For example, look at how long it took Java to take generics from idea (1994) to implementation (2004): 10 years. It wasn’t even “mainstream” until about 2008. If it takes that long for a language feature to make an impact, then how will any of our ideas fare any better? Will the problem still be around 15 years later?

    There are all sorts of problems we try solving that might just be rare in practice but sound important. Thanks for looking into this!

    The hope is that already we’re seeing younger researchers use twitter, blogs, github, etc. to socialize and disseminate ideas at a much faster pace than before. My favorite example is Zef’s mobl [http://www.mobl-lang.org/]. Who knew that DSLs (something talked about in the ’60s) could finally have an impact!

    • Jorge Aranda says:

      The hope is that already we’re seeing younger researchers use twitter, blogs, github, etc. to socialize and disseminate ideas at a much faster pace than before.

      Yes. Personally, an even greater hope I have for younger researchers is the recent move away from prescription and towards description and understanding.

  3. Irving Reid says:

    Industry analyst firms are one way to get through to the director/CxO level. Convince Gartner, Forrester, et al. that you’ve got something, and they’ll spread it for you. Their conferences are also worth hitting.

  4. Pingback: How do practitioners perceive software engineering research? « Margaret-Anne Storey

  5. Ian Bull says:

    I think part of the problem can be seen in the title of the article “How do practitioners perceive software engineering research?”. There is an implicit distinction between practitioners and researchers… us and them, them and us.

    Many good software engineers were born out of academia. Many did research (at the Master’s, PhD, and even faculty level). Many still do research in their day jobs (study the performance of their work, look for ways to improve it, etc…). Many software engineers enjoy sitting down and discussing “hard problems” over a few pops. Engineers often attend conferences, read articles and take great pride in bettering their work.

    However, engineers don’t want to hear “we studied X and you should do Y”. Instead, they want to hear “I was having a problem with X and starting doing Y and it helped because of Z”.

    I think we need to bring these two communities much closer together. The “practitioners” should be presenting the research and researchers should be living and breathing the practice. To further Adrian’s point, I believe Mylyn was successful (and yes, having millions of “practitioners” use your results is successful in my mind) because the researcher and the practitioners were one and the same.

    Just my $0.02.

    • Jorge Aranda says:

      You’re a good example of what you’re proposing, Ian, but I don’t think there are that many. Part of the problem is that it does take lots of effort to learn how to make good software, and lots to learn how to do good research. So the claims of an engineer who says “I was having a problem with X…”, etc., will rarely withstand scrutiny from a scientific perspective.

      In other words, although I think, as you seem to, that there should be many more hybrid scientists/engineers, I also think there is a (very important) need for strong scientists in the area, even if they do not develop software themselves.

      • Ian Bull says:

        I would agree with you, except that many SE researchers (I unfortunately did this too) don’t simply “research”; instead, they attempt to create tools / processes to help improve* the development process.

        *define improve however you like.

        As you mention, shipping software is hard. However, many researchers will abstract away the difficult parts (chalk them up to limitations), and solve the easy parts. I’ve seen many tools that will help teams of software developers collaborate: assuming the teams are novices (1st or 2nd year students) working on 1,000 LoC projects.

        It’s often those limitations where the crux of the problem lies.

        Now when I say “researcher and practitioner should be one and the same”, I don’t mean physically. I think researchers should immerse themselves in development teams / organizations and find practitioners to be champions for their work. The results should be presented at both “academic” and “industrial” conferences, and over time these should be brought closer together.

      • Jorge Aranda says:

        It seems to me you’re arguing for far more action research than we’ve had in software development, and if so, I entirely agree!

  6. Ken Hertz says:

    My experience (at a large custom software shop) is that a main problem is creating the system specifications. Our customer would be represented by several people, all domain experts but not all pointing the same way. The resulting multi-volume specification could include self-contradictions. Specific terms could be interpreted differently by customer and vendor — even when they had the same native language.
    This is perhaps a management issue rather than a software development issue, but such are not independent. See also Adam Barr’s Proudly Serving My Corporate Masters: What I Learned in Ten Years as a Microsoft Programmer.

  7. I think this article hits a lot of nails right on the head. Good job. I consider myself a practitioner and a researcher. I worked for several years in industry, have always done research with industry and I try to run my development for research infrastructure using industrial best practices. Doing this, I see how incredibly hard it is, when you are time constrained, with so many pressures pulling you in different directions, to do anything other than muddle through! It’s largely due to poor tools and lack of money to invest in making them better.

    Here are some additional thoughts:

    1. A lot of SE research really is of no current practical use, or of use only in small niches of practice. This includes formal methods, which may be great for safety critical systems, but not for anything else. So much of what researchers are developing in process, quality, testing etc. is also not of current use because even the basic techniques are not being well deployed: You can’t expect more esoteric academic results to see the light of day when practitioners don’t do the basics yet. The chasm between academia and industry gets wider as the research gets farther and farther away from the adoption.

    2. The most incredibly valuable (to practitioners) tools and techniques have generally come out of industry or the open source community, sometimes with involvement of academics on the periphery. I am thinking of agile approaches, particularly test-driven development, Eclipse, new programming languages, etc.

    3. Another category of tools and techniques just doesn’t get off the ground because it requires a large investment by tool developers to make it work really well. Most tools are poor because not enough quality engineering is put into them, or are too expensive for most engineers to use, let alone academics. The main example I am thinking of here is model-driven development. It has so much promise, and indeed proof that it works is there (e.g. in the automotive industry), but the tools are either expensive and proprietary or are poor. Academics are blocked from being able to make big contributions because of the large amount of nuts and bolts development effort required (which we don’t have the money for). Industry is generally interested in developing end-user products, not the tools that would help them (which would be ‘overhead’).

    4. Research that may truly benefit practitioners often fails, at least at first, because peer reviewers don’t like it. I have often been told my research is not formal enough, not formally evaluated, etc. So what if I know my technique makes it easier to develop good software (as shown in small-scale evaluations); the peer reviewers demand proof from industrial practice or a lot of time-consuming formalization. Well, I am never going to get that industrial practice or that postdoc for the formalization, am I, if peer reviewers reject grants that would pay for it, or papers that would lead to grants being accepted?

    5. Academics do contribute in a huge way: To educating the next generation of software engineers. We often disseminate our results in that manner. However too many academics, lacking industrial exposure, still continue to propagate long discredited concepts such as waterfall development, or promote formalism, process etc. as the be all and end all.

    6. I look at my colleagues doing other types of engineering and note that they often can develop quality tools, often because their engineering problems are less ‘wicked’, or because they work in an area where there is lots of money for tools development.

    7. Large infusions of government and industrial funding really do make a difference. I am thinking of what a difference CSER made to Canadian SE research between 1996 and 2006; that effect is still being felt.

    Sorry I can’t be at the ICSE panel, I have a conflicting ICSE event.

    I am going to post the above comment, and reference this article in
    my blog http://tims-ideas.blogspot.com/2011/05/on-chasm-between-academic-and.html

    • Jorge Aranda says:

      Great points, Tim, and especially this:

      Academics do contribute in a huge way: To educating the next generation of software engineers. We often disseminate our results in that manner. However too many academics, lacking industrial exposure, still continue to propagate long discredited concepts such as waterfall development, or promote formalism, process etc. as the be all and end all.

      Furthermore, this propagation of long-discredited concepts in software engineering courses helps reinforce the perception that academia doesn’t know what it’s talking about—hence widening the gap.

      • Not too many academics that I know propagate waterfall development – but it’s still the normal development mode for military systems. If you are doing serious systems engineering with many different contractors, it’s hard to do without specifications and documents.

  8. Interesting piece. My response got a little long-winded, so I made it into a blog post. Briefly, the uninformed perception I have of SE research is that a lot of effort is spent in attempts to optimise the software development endeavour and not enough is spent trying to just understand how programmers and teams work to build software.

  9. Great post Jorge. First of all, I want to congratulate all of you for this work. I was at UVic during my PhD in 2007, for one year, at Segal, and it is good to read this from members of both Segal and Chisel groups.

    I would like to contribute by saying that in 2008 I had the chance to participate, as a panelist, in an industry-academia panel in Bangalore, at Wipro. Most of the results of your interviews were explored in that panel, but I want to add one more: the different expectations of a company and a researcher. At that panel, it was clear to everybody that companies, in the end, want results, profit. Researchers want good problems to investigate, and maybe generate good results, most often translated into publications. But most of the time companies don’t have 2-4 years to wait for some results (a Master’s or a PhD thesis). And this is part of the challenge: how to manage expectations.

    I consider myself part academia and part industry. I do research and supervise students in Brazil, but I also teach courses and act as a coach of software development teams. And it is difficult to align common interests. A company can do research on a hot topic for a year, and call you to collaborate. But in the next year the topic might change. So what do we do, if a student is already researching that topic? As researchers we cannot simply “follow” a company’s idea or problem. One way to minimize these problems is to think of mid- to long-term partnerships, when possible. My group in Brazil (www.munddos.com) has been having this experience since 2001, when we decided to embark on the field of Distributed Software Development. Since then, we were able to develop long-term relationships with some companies, and I’m sure this was one of the reasons we keep doing industry-related research.

    Hope this contributes to this interesting discussion.

    • Jorge Aranda says:

      True, the differences in goals and expectations between industry and academia are a factor when you need funding or significant in-kind investments from a company, and given that our field depends increasingly on corporate funding, this is a problem. But I’ve found that if all you request from an organization is access, and some time for interviews, observations, or surveys, chances are good they’ll open their doors to you.

      • Rafael Prikladnicki says:

        Agree! I have had positive experiences with this. The only concern here is with confidentiality and how data will be analyzed and reported. But in general it works well.

  10. Marian Petre says:

    A small addendum to ‘what counts as evidence’: for a number of our informants, there appeared to be a conflict between what they say they value as evidence (typically expressed in terms of scientific method, quantitative results, and statistical significance) and the stories they tell about how they have actually made decisions (typically based on the opinions of key colleagues, on distillation of experience, and on practical factors). There’s a ‘programmed response’ about evidence that’s at odds with the pragmatic reality of what influences their opinions and decision-making — they talk quantitative evidence, but by their own accounts they more often use qualitative evidence. This is consistent with things like Everett Rogers’ work on ‘diffusion of innovations’, which describes the role of social systems and opinion leaders in the spread of new ideas.

  11. Ian Bull says:

    I’ve been thinking a bit more about this (because I find it fascinating), and I wonder what we can learn from other fields. How do accounting researchers (those who study the “process” of accounting, or the accountants themselves) disseminate their results to accountants? How do those who study dentists (not dentistry, but the dentists themselves) pass their advice / knowledge / findings back to their subjects?

    What about other professions: construction workers, secretaries, and even educators.

    Going back to the title of the blog, I wonder if this is really: How do practitioners perceive research?

    • Jorge Aranda says:

      That’s a great set of questions, Ian. For the professions you list I have no answers, but I too would like to know. I can think of a few disciplines where research of the practice has some clout in practice and some good dissemination channels: management, psychology/therapy, medicine. I think it’s a good idea to figure out what they’re doing that works.

      • Galax says:

        I am a Public Accountant in Mexico. I can tell you how we do it.
        We are an organized profession. We affiliate to a local association which in turn is affiliated to a national federation, the Mexican Institute of Public Accountants (IMCP).
        The IMCP promotes accounting research in all its fields (auditing, finances, etc.) and every month publishes and distributes to all its members the articles it receives. It also publishes specialized books.
        The IMCP also appoints committees to study and propose the standardized guidelines to be applied to financial information and to auditing procedures. Once a committee drafts a proposal, it is distributed to all the individual members of the profession in order to get feedback and comments, which in turn may be incorporated into the final version of the guideline or, in some cases, into a new draft and a second round of distribution and feedback.
        Every member of the association is required to earn a certain number of points each year by attending conferences, teaching, lecturing, writing serious comments or proposals on the above-mentioned drafts, writing published articles, or serving on the different committees. In my case I have to get at least 65 points a year (equivalent to 65 hours of conference attendance). A yearly certification, mandatory for certain legal uses, is issued to complying members.
        Accounting is not a science (although some accountants argue it is). There are few full time researchers. So, part time researchers are simultaneously part time practitioners. It has worked well for us, but I doubt this model would fit to the software development field.

      • Jorge Aranda says:

        Thanks for the info! I also doubt this would work for us. In the software development field some people call for licensing developers, or for having some sort of organization (like the IMCP you talk about) for software engineers. But it’s an idea doomed to failure: there are really no barriers to entry for becoming a software developer, and no artificially imposed one could work, except perhaps for a few domains (government contractors, for instance).

  12. Pingback: ICSE 2011 Panel – slides and recap | Catenary

  13. dmg says:

    Jorge,

    how many SEng profs and researchers do you know that could design _and_ build a complex software system? Many don’t even know how to program!

    Parnas used to say that we should spend our sabbaticals working in industry, to better understand those who (we claim) we are trying to help.

    My gut feeling tells me that a big part of the problem is the reviewing process we use. It reminds me of how Hypertext died, when it failed to create the WWW (I will tell you the story of his paper describing the WWW).

    For a comparison of fields, look at the program of SIGGRAPH’2011. It almost feels as if the research track is just part of a trade show.

    Nonetheless, I think the work you are all doing is very important.

    –dmg

    • Jorge Aranda says:

      I agree with that opinion of Parnas’. On the matter of whether a software engineering researcher needs to know how to design and build a complex software system, I’m less sure. (If the researcher is going to *teach* how to do it, then yes, not knowing is a case of intellectual dishonesty.) Being a good software developer helps you do better research, for sure, but so does being a good sociologist or psychologist.

      That said, on a personal level I’m worried that my lack of recent practice is hurting my research. I used to develop software for a living, but I stopped doing that seven years ago, and now, among new environments and frameworks, I feel I can barely code my way out of a paper bag. (I’m trying to fix that, though.)

  14. Pingback: Announcing “It will never work in theory” | Catenary

  16. I stopped going to ICSE some years ago simply because of the utter irrelevance to practice of most of the papers.

    The problem here is not really the researchers but the tenure and promotion system. Serious SE research needs to study systems of significant scale and this takes a lot of time – one paper in 3 years would be pretty good. But this doesn’t fit with the academic tenure and promotion criteria, so there are lots of papers about toy systems and small-scale empirical studies.

    I don’t see any sign of the academic system changing – if anything, it is getting worse. A sad consequence of an academic measurement culture.

  17. I don’t think other disciplines have the diversity of CS and SE. In Engineering, everyone agrees that their job is to be practical, and working with practitioners is recognised as worthwhile and sometimes essential. But in CS there is a range of people from mathematicians through business folk to psychologists as well as engineers, and there is no common culture. Theory folks will publish 5 or 6 papers a year, and others simply try to compete; otherwise, they think, rightly or wrongly, their work won’t be regarded as highly.

  18. Pingback: Lucky industry girl finds fulfillment in academia | The CHISEL Group, University of Victoria, BC

  19. Perhaps a reason why one interviewee commented that “the percentage of things that are useful that come out of [software engineering research] is so small”, is that it is often too broad and generic? For example, the engineering problems faced by a large team (or even group of teams) developing multiple, interdependent, enterprise-scale systems at a financial institution, are very unlikely to be the same as those faced by a small start-up developing a ground-breaking, social web app. There are completely different requirements for e.g. scalability, availability, fault tolerance, quality assurance, time to market, or even legal and regulatory requirements. Is it even possible to produce research findings that will apply significantly to both areas?

    Borrowing a note from Ian Bull’s post about how other fields do it, perhaps it could be worthwhile to look at how clinical research is performed. From my (relatively limited) understanding, clinical research is usually applicable to only one, or a few, medical disciplines (e.g. intensive care, geriatrics, psychiatry, oncology, etc). Why? Because while they are all part of the same domain (health care), they have completely different problems and requirements.

    Finally, I’d just like to say that I found this post quite interesting, and I’m looking forward to hearing more about your findings. 🙂
