“Weather” and “climate” in ethnographic work

In the past I’ve made it clear that for software research I personally prefer to do qualitative case studies over other empirical methods. I can see the value in data mining, experiments, and surveys, but I’ve argued that those methods have significant weaknesses that are often unacknowledged in our field. In fairness, however, qualitative work is no less susceptible to pitfalls, and I think failing to make the weather/climate distinction is probably one of the greatest.

To explain what I mean, think of our reactions to the weather. Lately it seems that whenever we get a cold snap or a freak weather event somewhere, some people come out and say that this is clear evidence against global warming. The latest example is the recent snowstorm in the Northeast US. Stephen Colbert summarizes the problem brilliantly. You can watch his video on the Huffington Post (I would link to it from here, but the Comedy Network won’t let me):

After showing clips of Fox News correspondents explaining that the weather is burying Al Gore’s “hysterical” theories, Colbert joined in on the ridiculous logic, deeming it “simple observational research: whatever just happened is the only thing that is happening.”

Using the same rationale as Fox News, Colbert couldn’t help but point out that, due to it being nighttime, the city was covered in darkness. “Based on this latest data, we can only assume that the sun has been destroyed.” Who’s to blame for this “forever-night”? Gore, of course.

Failing to discriminate between weather and climate is a novice mistake in understanding climate change, of course. But I wonder if those of us doing qualitative software research are sometimes caught off-guard by the equivalent mistake in our own observations. We join a team for a few hours, days, or weeks, and report back on the “weather” we observed as if it was the “climate” of the team. Without a historical record, it is quite difficult to determine whether what we are observing is representative of the climate under study, or rather, which elements of our observation are representative and which are freak weather events.

As an example, last summer I was doing fieldwork at a software organization. On the very first day of my observations I sat with a team that was going through a crisis: their demo wasn’t working as expected, their client was getting restless, the team had been growing significantly to meet a long list of urgent requirements, and many team members were frantic and tense. The second day of observations was similar, but milder. By the fifth day there was still some underlying tension; the crisis had dissolved, although some of its causes were still latent. Two weeks later the issue was mostly under control.

While I was conducting my observations, a couple of people on the team told me, on several occasions, that this was not the way their team usually worked—in other words, that I was observing a snowstorm. I made sure to take note of that, but I confess that on a more instinctual level I didn’t fully believe them. It was only later, when I repeatedly saw the team in calmer contexts and interviewed people to get their impressions of their usual climate, that I came to understand how misleading my initial perception had been.

From a research perspective, this turned out to be a great event to witness: you rarely get the chance to watch an evolving snowstorm with your notepad ready. But it was only a great event because I realized it was unique and treated it as such; I can think of several other qualitative studies, mine and others’, where the danger of extrapolating from unique events hasn’t been fully explored or acknowledged.

How can we address this problem? My preferred mechanism (and the one Robert Yin suggests repeatedly) is to collect and triangulate information from as many sources as possible. It also helps to be constantly skeptical of anything you hear; I’ve lost count of how many times I’ve been given conflicting accounts of the same events. And of course, you should know that the first few days of your observations are going to be rather unrepresentative simply because of your presence: you’re altering the weather yourself.

Ultimately qualitative work is impossible to pull off spotlessly, and there’s always a nagging fear that you’re over- or under-representing some observations, that you’re not doing things right. Then again, no matter which empirical method you choose, you should have that fear, and you should act on it.


About Jorge Aranda

I'm currently a Postdoctoral Fellow at the SEGAL and CHISEL labs in the Department of Computer Science of the University of Victoria.
This entry was posted in Academia.

6 Responses to “Weather” and “climate” in ethnographic work

  1. Your time scale comparison is interesting and the Colbert quip comedic. I’m not sure how applicable the comparison is to small software teams, though. Sure, there are things we can learn about small software teams by studying them, but it may be harder to make a claim about any particular group, even one that you are studying.

    What is the turnover rate of the team? One person a year? And surely members of the team are experiencing life-changing events. Relationships between individuals are constantly evolving. Even if you observe a team from its start until its dissolution, you’ll likely find it’s much like a stream: you can never step into the same stream twice.

    • Jorge says:

      Sure, teams are not static and may actually be quite unstable. To speak of long-term regularities in any one of them might be overreaching. But we can still see patterns of behaviour that are present in some of them, and we can attempt to generalize based on them.

  2. Jan says:

    Your time scale comparison is really really interesting. Thanks!

  3. ferzaga says:

    Your research method is valid; to get some statistical significance, just choose a diverse sample and it’s OK. The qualitative method isn’t necessarily better: people lie in questionnaires too, nobody likes a long one, and everybody starts answering quickly just to get through it.

    When I do an interview, I listen to it many times, and well… you have to believe the informants and try to read between the lines.

    Ethnography is subjective, as you already know; use Feyerabend to defend your method.

    • Jorge says:

      Well, I *can’t* achieve statistical significance with a qualitative study, but that’s OK if what I’m trying to do is generalize to a theory, not a population.

      I do love Feyerabend though; thanks for the reminder!

  4. ferzaga says:

    Sorry, I made a typo: I meant the quantitative method.
