Naur’s “Programming as Theory Building”

A critique from Alistair Cockburn on how the agile movement is under attack from Taylorism led me to an essay by Dave West on the philosophical incompatibilities between lean and agile techniques, and this in turn led me finally to read Peter Naur’s 1985 text, “Programming as Theory Building” (also available in Appendix B of Cockburn’s “Agile Software Development” book). I don’t know why I had not read it earlier. Not only did I find it a brilliant example of the kind of clear argumentation that I think is missing from much software research today; I also found that it should have been a key building block of my Ph.D. thesis: for the first time since I finished it, I felt the urge to go back and tinker with it some more. Perhaps I did read it at some point, absorbed it, and forgot about it.

Naur explains what he’s after in the abstract to his paper:

(…) it is concluded that the proper, primary aim of programming is, not to produce programs, but to have the programmers build theories of the manner in which the problems at hand are solved by program execution.

The actual code that the programmers deliver is not the point of programming. That code will probably soon need to be changed again: it lives in a state of constant flux. Instead, the real goal of the members of a development team is to understand in depth the problem they are trying to solve and the solution they are developing to solve it. If the team builds an appropriate theory, its software will be a better fit for the context in which it will perform, and the team members will find it easier to carry out the inevitable modifications and enhancements to it. In fact, Naur is quite stark about how important the theory is and how unimportant the code: he claims that code in isolation from its developers is dead, even though it may remain useful in some ways:

During the program life a programmer team possessing its theory remains in active control of the program, and in particular retains control over all modifications. The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.

Reading Naur’s paper I felt a very deep connection to the ideas I put forward in my thesis: the theories of Naur’s programmers are essentially mental models in the sense in which I (and many others before me) present them, and both he and I claim that the overarching goal of a software development organization is to build those models (or theories) during the life of the project. I could actually restate my thesis contributions as extensions to Naur’s sketch in two ways: first, I explore what I think is the main challenge that software team members face today: building consistent mental models (or, in the terms of the thesis, developing a shared understanding) of the world, among potentially large groups of people, in the face of abundant, shifting, and tacit information, and unclear or exploratory goals. Second, I outline some attributes of team interaction that make such a challenge easier to overcome.

I was glad to see that several of my conclusions mirror Naur’s. He argues that programming methods (taken as sets of work rules for programmers that tell them what to do next) are unhelpful from a Theory Building view because we can’t really systematize theory production: like other knowledge construction endeavours, it is an organic process. Developers can, and perhaps should, have a set of techniques and tools at their disposal, but they are ultimately in charge of choosing the actions that will best help them build their theories at any given time. Naur also argues that documentation is not an appropriate mechanism to transmit knowledge in software projects, an observation that I explore when I discuss the differences between the Shared Understanding paradigm and the more prevalent paradigms in software research (which I named Process Engineering and Information Flow). He claims that since the main end result of a development effort is the inarticulated theory that the programmers have built, “the notion of the programmer as an easily replaceable component in the program production activity has to be abandoned,” an observation that I think is better received now than it was at the time (it was taken as one of the organizing principles of the agile movement), and that in my own analysis I labeled proportionality of action and responsibility.

I really enjoyed reading someone far smarter than I am presenting these arguments clearly and concisely. I only wonder: how is it that more than 25 years later we still need to be making roughly the same points—how is it that they still feel fresh, largely uncharted, and in need of advocacy?

Posted in Academia, Conceptual Models, Philosophy, Software development | 17 Comments

Inflo is out

I had forgotten to post this announcement: Inflo, an online tool to collaboratively construct arguments, is out! Jono Lung, the brains behind the idea and a friend of mine, explains:

Inflo is an on-line tool for collaboratively constructing arguments.  It’s wiki meets spreadsheets.  It’s a bit like a spreadsheet in that you can enter numbers and formulae in individual nodes (cells).  But, unlike a traditional spreadsheet, each node has its own permanent URL corresponding to a snapshot in time that can be sent around or used as part of other arguments.

I’ve been playing a little bit with the tool, exploring Jono’s arguments, such as whether printing a paper or reading it on screen is more carbon efficient. It seems to me that some of Inflo’s concepts might take a little effort to get used to; fortunately, Jono has also created a user manual for the tool in the form of an Inflo argument, which makes a good starting point for people unfamiliar with the idea.
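To make the idea more concrete, here is a toy sketch of what such a node might look like. This is purely my own guess at a possible data model (invented names, invented numbers, and a made-up URL scheme), not Inflo’s actual implementation:

```python
import uuid

class Node:
    """A spreadsheet-like cell with a permanent, shareable identity."""
    def __init__(self, label, formula, inputs=()):
        self.label = label
        self.formula = formula            # a function over the input nodes' values
        self.inputs = list(inputs)        # the other nodes this node builds on
        self.snapshot_id = uuid.uuid4()   # stands in for the permanent snapshot URL

    @property
    def url(self):
        # Hypothetical URL scheme, for illustration only.
        return f"https://inflo.example.org/node/{self.snapshot_id}"

    def value(self):
        return self.formula(*(n.value() for n in self.inputs))

# A tiny argument: yearly CO2 from printing = pages printed * grams per page.
pages = Node("pages printed per year", lambda: 10_000)
per_page = Node("grams of CO2 per page", lambda: 5.0)
total = Node("grams of CO2 per year", lambda p, g: p * g, inputs=[pages, per_page])

print(total.url)      # a permalink you could send around or cite in another argument
print(total.value())  # 50000.0
```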

Posted in Conceptual Models, Information visualization, Recommendations | Leave a comment

Nature – Climate Change

There is a new journal from Nature on topics (physical and social) surrounding climate change. In the first issue, Kurt Kleiner has a pretty good essay on open data and open climate software. Among his interviewees are Steve Easterbrook and Greg Wilson, the two best mentors I could possibly have had at the University of Toronto. Worth reading!

Posted in Academia, Recommendations | Leave a comment

Guildenstern and epistemology

Apropos of nothing in particular, this quote from Stoppard’s brilliant Rosencrantz and Guildenstern Are Dead. After losing nearly a hundred coin tosses in a row to Rosencrantz, who bets heads every time, Guildenstern suspects there’s something funky going on with reality—but should he rely on his own experience as a valid indication of anything?
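(His suspicion is arithmetically sound, by the way. For a fair coin, the probability of a run of $n$ heads is $2^{-n}$; taking $n = 100$ for round numbers:

$$P(\text{100 heads in a row}) = 2^{-100} \approx 8 \times 10^{-31}.$$

Long before the hundredth toss, the reasonable bet is that the coin, or reality itself, is rigged.)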

Guildenstern: A man breaking his journey between one place and another at a third place of no name, character, population or significance, sees a unicorn cross his path and disappear. That in itself is startling, but there are precedents for mystical encounters of various kinds, or to be less extreme, a choice of persuasions to put it down to fancy; until—“My God,” says a second man, “I must be dreaming, I thought I saw a unicorn.” At which point, a dimension is added that makes the experience as alarming as it will ever be. A third witness, you understand, adds no further dimension but only spreads it thinner, and a fourth thinner still, and the more witnesses there are the thinner it gets and the more reasonable it becomes until it is as thin as reality, the name we give to the common experience… “Look, look!” recites the crowd. “A horse with an arrow in its forehead! It must have been mistaken for a deer.”

Posted in Books, Philosophy | 2 Comments

Then a Miracle Occurs?

I think one of the reasons many people are uncomfortable with qualitative research lies in the difficulty of doing data analysis properly. You prepare a study design with carefully worded research questions, you have a bunch of interview guides, you do your observations, perform your interviews, take lots and lots of notes, and then you’re supposed to take all that and boil it down to a set of easily communicable findings and (if your data and epistemological persuasion allow you) generalizations. That data analysis step may feel phony, a bit like that classic old cartoon:

Then a miracle occurs

In his Case Study Research book, Yin acknowledges that this is the step that is least developed, methodologically speaking. And though we may attack it with all kinds of coding techniques and structure, nothing guarantees that we’ll actually extract the essence from our data. You actually need to think, to question yourself, to try many things, see them fail, and try again. That’s what makes it hard.

I used to think this was a drawback of qualitative research when compared to its quantitative sibling, and I was not alone in my belief. But I’ve come to realize that, deep down, quantitative research suffers from the same problem—it’s just that there the problem is usually ignored. In an experiment, for instance, you may observe an effect in your sample, and you may be able to generalize to your population with enough statistical significance. But this (in a field like mine) doesn’t get you very far. You then need to examine whether the effects you observed would hold in an uncontrolled environment, with lots of other confounding factors, in different contexts, and for people and organizations with differing motivations. Your experimental data does not help you there—you again need to think, to question yourself, and so on, though if you have good numbers you may be giving short shrift to those concerns in a rushed “Threats to Validity” discussion near the end of your paper.

(Incidentally, Yin also talks about this in his book—he discusses the difference between generalizing to a population and generalizing to a theory, and how experiments need to—or should—do both, but case studies, by design, are usually only concerned with the latter.)

As I am currently at this stage with an empirical study, my mind goes back to this idea of the difficulty of doing this kind of research. I always feel like there’s a thread hanging up there somewhere—that if I jump high enough I can reach it. Or as if there’s a hidden melody that I can uncover if I listen intently. And I try and grope, and sometimes I find something and I feel exhilarated, and sometimes I find nothing and I want to just forget about it all. But if there were a straightforward list of steps to follow, this wouldn’t be research, and it wouldn’t be interesting.

Posted in Academia | 4 Comments

Like learning Old Norse

Since I’m doing fieldwork at a software organization these days, the difficulty of getting one’s bearings in a new environment is very much on my mind. Several researchers have reported on these difficulties in the case of newcomers to a software project (for instance, Dagenais & Co., Begel & Co., and Sim & Co.), but what we may not realize is that we, as researchers, go through a similar set of difficulties when learning the terms and dynamics of an organization we are not familiar with, and that getting over them takes some time.

I saw this firsthand a couple of days ago, when a colleague who hadn’t been with me at the field site in the previous weeks joined me for a few interviews. While I was conducting the interviews, I realized that she was probably not catching the meaning of much of the conversation: we were using acronyms, names of other people, references to previous events, to tools or products, to decisions or plans. By then I was familiar with nearly all of these, but she wasn’t: she was at the point I had been weeks before, when I first walked into the organization.

This brought to mind a scene in an otherwise forgettable movie from 1999, The 13th Warrior, in which Antonio Banderas plays a Middle Eastern nomad fighting alongside a band of Vikings. It summarizes in three minutes the process that takes us a few weeks to go through:

(But note! Contrary to what the clip above shows, the researcher does not end up referring to his or her participants as pig-eating sons of anything! Also note that the researcher usually does not look like Antonio Banderas. Oh, and that, at least in software organizations, the participants rarely act surprised if the researcher suddenly starts speaking their language.)


Posted in Academia, Software development | 2 Comments

Software development according to Game Dev Story

A few days ago I found out about Game Dev Story through this list of meta games (hat tip to Lila Fontes). As the name suggests, it’s a game about founding and running a game development company. Since I’ve long been interested in the idea of using software development simulators to probe our assumptions about software development, I decided to go ahead and spend a few bucks to see what it was about (exclusively in the interest of research, of course!).

Game Dev Story screenshot

As a game it’s entertaining but bland, and I wouldn’t really recommend it. It’s addictive, but in a bad way, like potato chips are addictive—you keep wanting to play one more turn, and one more turn after that, and you end up asking yourself why you wasted several hours on a mediocre game. So it’s probably better to stay away from it.

But seeing it as a snapshot of what some game developers think game development is like (or rather, of the cartoon of game development they choose to portray), I had lots of fun. So what is software development like according to this game? Some slightly cynical observations:

Game development follows a bizarre waterfall process. The phases seem to be Inception (where you choose what kind of game you’ll develop; more on that below), Design (you hand your choices to a trusted employee, who works from the idea), Development (where your employees mainly add to the Fun and the Creativity of the game), Post-Alpha Release (emphasis on Graphics), Post-Beta Release (emphasis on Sound), and Debugging (where your employees clean out the bugs added in the previous phases). It’s quite possible that the phases are like that mainly for gameplay reasons, and I’m sure most game developers would agree they are just silly.

The requirements are really simple. They’re just about choosing the type and genre of the game you want to build. For instance, Robot Shooter (I’m disappointed that Ogre Racing didn’t do very well…). The designer will take it from there. This phase takes zero time.

Group dynamics (and coordination) are not a factor. Everyone just brings in their attributes (in the case of game development: programming, scenario creation, graphics, sound, and stamina) and gets to work independently. Your success depends almost entirely on aggregated individual genius. There are no coordination overheads or conflicts—the mythical man-month is not mythical at all.

You always know how many bugs there are. No need for any kind of testing, because you have perfect knowledge of the quality of your product.

It’s fairly easy to ship bug-free code. As a manager, it’s just a matter of waiting a little longer until all bugs are sorted out. Debugging, by the way, never introduces bugs of its own.

Heavy turnover is not a problem. In fact, it’s encouraged—you use your first employees as stepping stones to get the first couple of products out to the market and earn enough money to hire better people, whom you’ll fire in turn to get the real stars. You could also level up and train your employees, but it seems to take too long and cost too much. New hires don’t have a ramp-up problem; they are immediately productive. Laid-off employees don’t take product or organizational knowledge away with them.

One-hundred-and-twenty-hour work weeks are great. When your employees get tired, they go home to rest a little. If your product is in trouble, though, you can always bring out the energy drinks. Your employees’ energy will be completely restored, they’ll pull an extremely productive all-nighter, and they’ll carry on as efficiently as ever once the effect wears off. You can repeat this as many times as you wish, if you have enough energy drinks in stock. Your employees will never ever quit. The only hint of burnout, perhaps, is that if you ask the same person to take care of the same part of a product twice in a row, they won’t feel as inspired as the first time around.

The boss does nothing. He (it’s always a he) just sits back and watches the rest of the team work while the millions and awards roll in. Commentary on the value that CEOs bring to a company aside, this seems like a dubious strategy for a budding start-up.

I could go on—there are actually some bits of the “simulation” that I liked, such as the fact that generalists are much better for your company than specialists (they can contribute to many aspects of the product and can alternate responsibilities with their peers)—but I think you get the idea. What interests me most here is seeing which aspects of software development people think should be modeled, and how they see their own profession: in a way it’s a form of theory building (see the sketch below). If you know of other games that do this kind of thing (besides SimSE, which I tried out a few years ago), I’d really like to hear about them!
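Just for fun, here is my reading of the model the game seems to implement, boiled down to a toy Python sketch; each comment maps to one of the observations above. Every name, number, and formula is invented for illustration: this is a caricature of the game’s behaviour as I observed it, not its actual code.

```python
import random

PHASES = ["inception", "design", "development", "alpha", "beta"]

class Employee:
    def __init__(self, programming, graphics, sound, stamina=100):
        self.programming = programming
        self.graphics = graphics
        self.sound = sound
        self.stamina = stamina

def develop_game(team, energy_drinks=0):
    """Run one project through the game's bizarre waterfall."""
    quality, bugs = 0, 0
    for phase in PHASES:
        for dev in team:
            # No coordination overhead or conflicts: progress is just
            # the sum of individual attributes (no mythical man-month).
            quality += dev.programming + dev.graphics + dev.sound
            bugs += random.randint(0, 3)  # the bug count is always perfectly known
            dev.stamina -= 25
            if dev.stamina <= 0 and energy_drinks > 0:
                energy_drinks -= 1
                dev.stamina = 100  # full recovery: nobody burns out, nobody quits
    # Debugging: wait long enough and you always ship bug-free code,
    # and fixing bugs never introduces new ones.
    while bugs > 0:
        bugs -= sum(dev.programming for dev in team)
    return quality, max(bugs, 0)

# Hire, ship, fire, repeat.
team = [Employee(programming=8, graphics=5, sound=3),
        Employee(programming=4, graphics=9, sound=6)]
print(develop_game(team, energy_drinks=2))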

Posted in Organizations, Software development | 7 Comments

Publishing into the void

A few days ago, Greg Wilson tweeted:

When is the last time you read something in an ACM or IEEE journal that changed how you program or the tools you use? Ever?

The answer is never, in my case, and I suspect it is the same for many others. Though I’m really curious to hear about cases where reading an academic article on software development has brought about a tangible change in a developer’s behaviour, I don’t think they’re all that common.

For all of us tasked by society with the goal of improving the effectiveness of software development organizations through research (if you’re a software engineering researcher, that’s you), this should be something to be ashamed of. Why do we have this disconnect? I can think of a few possible reasons:

Researchers waste their time working on problems that are pointless or irrelevant to practitioners. Often true, I’m afraid, judging by the drop in industry involvement in academic conferences and by the topics explored in our most prestigious publications. Sometimes we’re mesmerized by a technically challenging puzzle, or self-deluded into thinking that since many research findings take decades to trickle down, we shouldn’t concern ourselves with applicability at all. And so we lock ourselves in an ivory tower, and the rest of the world just forgets we’re even there.

Practitioners are expecting something that researchers can’t honestly provide. For instance, claims that following recipe X will yield an 18% increase in performance for the average organization. There are so many variables, and context is so important, that we just can’t offer precise predigested statements like that.

Researchers and practitioners have different understandings of what counts as evidence. Some researchers (myself included, at one point) seem to think that a small experiment with a few students working on a toy example is enough to get the ball rolling. Others (myself, now) are convinced that for many software research problems the best (or even the only) way to tackle them is a rich qualitative study. Both of these camps fare poorly with many practitioners, who were educated to see carefully controlled physics experiments as the gold standard of science. To them, the small-experiment-with-students folks look like sloppy amateurs, and the rich-qualitative-study folks like postmodern poseurs who can’t give a straight answer. Techie culture doesn’t really appreciate great sociological studies the way it appreciates great physics experiments; here we are at a disadvantage.

The academic forum isn’t the right place to connect. Academese is full of minutiae, and it reads like a foreign language to the rest of the world. But what is the right forum, then? Halfway publications, like IEEE Software, are better, but still seem fairly detached from the real world in many ways. Trade magazines are moribund. Books are a possibility. As for more modern media, I don’t know of any blog dedicated to disseminating software research findings the way Mark Guzdial’s does for CS education, but this is an attractive alternative.

Habit is a powerful force. Smokers know that smoking kills, but they keep at it. Once we find a route that will take us where we want to go reasonably quickly, we become unwilling to try out alternatives. This inertia is even stronger at an organizational level. And so, even if we discover better ways to organize and develop software, changing their ways is too much to ask of many professionals and teams: what they’re doing is good enough, and our proposals not worth the risk.

What do you think? How can we fix this? Or, alternatively, have you actually changed your ways after reading an academic paper? How did it happen?

Posted in Academia, Software development | 15 Comments

“Agile” as an organizational form

Why do old organizations die? Their death runs counter to our intuition of organizations as rational entities: if an organization has established itself and secured economic stability, then, through efficient and rational management of its resources, it will almost always have a serious advantage over its younger challengers—and once it prevails over them, it will incorporate the spoils of its victories into its structure, presenting an even greater (at some point insurmountable) challenge to its future competitors. And yet, though we see old organizations try to do this, we also witness their inability to keep up with the times, to understand the new context in which they need to perform, and to snatch the opportunities that appear before them. I’m not keen on anthropomorphizing organizations, but it is almost as if they, like humans, became rather sclerotic as they aged, or as if they lost their senses: their hearing dulled, their sight blurred, and their teeth decaying, suffering from countless illnesses, slouching towards the fall that will end their days.

An explanation that I like for this phenomenon is the concept of organizational inertia. Broadly, organizations need to be reliable to survive (you wouldn’t frequent a store if you didn’t know whether you’d find it open for business, what kinds of goods to expect there, at roughly what prices, of what quality, and so on). Reliability requires stability, but stability’s evil twin is sclerosis, or inertia. And so successful organizations (stable organizations) will exhibit a high resistance to changing their ways, and this resistance, which helps them when the going is good, will be their downfall in the end.

(Note that this runs counter to the popular view of change as something that organizations do, and should do, all the time, and of managers as rational players pulling the levers of the social machinery, generally in control of the activities and resources of their organizations. Organizational inertia tells us, first, that change is usually not desirable—it’s risky and unreliable—and second, that when change is desirable, it is often unfeasible. A corollary is that when organizations claim to be changing, or to have changed, it’s a fair bet that their changes are superficial and inconsequential, mere wishful thinking or PR efforts.)

There are exceptions to this, of course, and you may be thinking of mentioning one or two in the comments. Generally speaking, though, the theory seems to be statistically sound—the book Organizational Ecology, by Hannan and Freeman, presents the idea and several empirical studies that support it. Hannan and Freeman, in fact, take the argument further: using Stinchcombe’s concept of organizational forms (Stinchcombe argues not only that organizations preserve their form over time, but that their form is a product of their time), they explain that we can actually see cohorts of organizations with essentially the same forms rising and falling, being created and becoming extinct, as if they were biological species in an ecosystem. Organizational forms do evolve over time, but Hannan and Freeman argue that this evolution is Darwinian, not Lamarckian: because of its inertia, an organization is stuck with the form it was given in its early stages, which is very similar to that of the rest of its cohort, and which worked well in the organization’s youth.

Now, in the context of software development organizations, what forms do we see? Though there could be many low-level variations depending on the domain, the concept of organizational forms is meant to be broad and comprehensive, not concerned with minutiae. With this in mind, I see at least two clear basic forms to date: on one hand, traditional software firms; on the other, open source organizations, which have a social structure and a set of incentives, behaviours, and values clearly different from those of traditional firms. It would be quite difficult for a traditional organization to transition into an open source group, and vice versa; this is in accordance with Hannan and Freeman’s argument.

I had a harder time determining whether Agile organizations have a form that is truly different from that of traditional firms (a question I struggled with while writing a paper on this topic). There are some indications that they don’t: their hierarchical structure is very similar to that of traditional firms, as are many of their goals and the incentives of their employees. Additionally, I hear all the time about firms “becoming Agile,” “switching to Scrum,” “testing XP,” and so on. If the change to Agile is as common as all that, then either Hannan and Freeman are wrong, or Agile, deep down, is not that different from business as usual.

But I’ve studied some Agile firms, and I’ve been impressed by the striking contrast they present when compared to traditional organizations. They favour generalists over specialists, enforcing knowledge sharing to an extent that seems wasteful to traditional firms. They embrace team autonomy and self-management to an extent that seems irresponsible to their counterparts. They welcome uncertainty in requirements and deliverables to an extent that threatens the long-term viability of their products, according to others. They insist upon having the customers adapt to doing things their way, with a zeal that seems to border on arrogance to more accommodating organizations. And they demand a level of personal interaction, of co-location, and of up-close communication that many developers find frankly uncomfortable and counterproductive.

It is these deeper elements, and the values from which they spring, that I think lie at the heart of Agile development. This is why it’s so hard for an organization to truly transform its modus operandi. “We’re becoming Agile” doesn’t usually mean “we’re fully switching towards a model of autonomous, self-managed generalists working in close quarters to satisfy ever-shifting requirements whether the customer (and the boss) likes it or not;” it means “we’re asking our developers to write unit tests before they code, we’re keeping a backlog, and we’re referring to our project manager as the Scrum Master.” No wonder that Agile, for many, is simply a buzzword, and that they end up disappointed after their trials.

When I talk about truly Agile firms to managers and developers in more traditional software organizations, they often reply that it all sounds nice, but that it couldn’t work in their setting. They would say this despite my assurances that I’m talking about firms that are contextually very similar to theirs, working on projects that they themselves could be working on. This would make me think that they must not be getting it, but on closer examination, and bringing in the work of Hannan and Freeman, I think they might be right: Agile demands a radically different arrangement of work and of responsibilities, and for most firms already used to more traditional software development, that may simply not be in their DNA.

I don’t know what the outcome of this market competition between Agile and traditional software firms will be. For a long time there may be room for both organizational forms, or it may well be that Agile won’t be as effective as traditional development on the whole. It doesn’t help the Agile case that so many organizations have co-opted the term as a tired buzzword to impress their clients (nowadays everyone is Agile, or “as Agile as they can be”). But I suspect that the software development field today is fertile ground for the underlying proposals and values of the movement, and that we’ll see them bear many more fruits, under this or another name, in years to come.

Posted in Organizations, Software development | 2 Comments

The thorny and the obvious

This discussion between Laurent Bossavit and Steve McConnell makes for very interesting reading: Bossavit critiques McConnell’s Making Software chapter on differences in programming productivity (original in French here), arguing that the studies it cites do not establish as a fact that some programmers are an order of magnitude better than others; McConnell responds, nobly and patiently, justifying the citations and the order-of-magnitude claim that they support.

Bossavit’s critique seems slightly tinged with indignation at discovering how scientific sausages are made. (Incidentally, Bruno Latour, whom he discusses at some length throughout his piece, is a prime exponent of such sausage making.) Bossavit goes back to some of the studies cited by McConnell and finds that they are not controlled laboratory experiments, or that their sample size is fairly small, or that the participants were debugging rather than programming proper, or some other problem. He therefore finds McConnell’s litany of citations suspect: none of them conclusively establishes as a fact that some programmers are an order of magnitude better than others, though taken all together they form an intimidating wall of academic texts that encourages the reader to take McConnell’s summary, erroneously, for a fact. In his reply, McConnell convincingly shows that the scientific evidence for an order-of-magnitude difference in individual programming productivity is far more solid than Bossavit makes it out to be. Not conclusive, perhaps, but as strong as it gets in our field to date.

However, with the issue of McConnell’s chapter settled, there are still several important observations to extract from Bossavit’s critique, read more generally as a discussion of software development research (which was, I believe, Bossavit’s intent), that were overlooked in the subsequent discussion.

The first is our tendency to protect some of our questionable claims with a layer of citations. Whole subfields of software research have sprouted from such clever gardening; by the time they wither, their creators will have long secured themselves, having achieved tenure and respectability years before. It is truly a pain, on some occasions, to dig through the list of references in an initially exciting paper, only to find that it rests on the flimsiest empirical support. Even if in this case Bossavit’s criticism was unwarranted, it holds for many other academic papers in our area.

On the other hand, paired with this tendency to offer questionable citations is the tendency to demand them in the literature we read and review, even in support of fairly obvious statements. In a sense, we’ve become rather lazy, preferring an (Author, Year) string over the (slight) effort of considering whether a claim makes sense through simple argumentation or experience. We demand statistical significance rather than clarity of thought.

This is the case with the whole productivity issue. I know that there are people who are at least an order of magnitude better programmers than others; I have seen them, and I suspect most other software developers and researchers have, too. It’s just part of the difficulty of the task and of the variety of human nature. I also know runners, jugglers, writers, cooks, managers, and scientists who, by any sensible criteria, are far beyond the abilities of some of their peers. We don’t really need a series of double-blind controlled experiments with thousands of participants around the globe to establish this; our resources are better spent otherwise, and in the case of programmer productivity the sources McConnell offers, methodologically weak as they might be, are more than enough to convince ourselves that there are no surprises here, and move on.

The real question, the thorny issue, is the nebulousness of our constructs—in this case, the construct of development productivity. Bossavit gets to this near the end of his critique, but his arguments appear to have been ignored in the subsequent debate. What is productivity? For starters, Bossavit reminds us (and again, there’s no need to demand citations or studies here; our experience confirms the statement) that some people have a net negative productivity. Initial measurement efforts were naive: lines of code have long been discredited as an accurate indicator of anything. Other programming-centric measures (such as function points) risk missing the essence of productivity: as Greg Wilson likes to say, “a week of work saves an hour of thought,” and such hours of thought are not amenable to straightforward measurement, as they tend to produce very little code. And what about the more subtle components of productivity? Perhaps you, like me, have worked with someone who may not be particularly skilled, technically speaking, but who has some other attribute (charisma, empathy, drive, a sense of purpose) that amplifies the productivity of the team many times over. How can we include such considerations in our productivity construct, seeing as they are enmeshed in our understanding of it?

The problem is that, for as long as a construct is as weakly built as that of development productivity, any experimentation we carry out is bound to be unsatisfactory. We know that some people are more productive than others; whether they are exactly five, ten, or twenty-seven times more productive is not a question we can settle at this point (or, perhaps, ever—and by the way, I’m not sure this is really where we want to go as a society, but that’s a different topic). If we can spare research effort to explore productivity in more detail, I suggest we aim it at settling these theoretical and conceptual issues first, rather than at more careful and methodical experimentation.

Posted in Academia, Software development | 23 Comments