Yesterday, a grad student asked me whether I would take a survey he was doing for his research. I've struggled to get participants in the past myself, and seeing that the survey would only take a couple of minutes, I accepted.
It was a survey about the urban design of a particular place at the University of Toronto, which was fine with me. Halfway through the survey, though, I realized he was trying to put some answers in my mouth. He asked me whether I liked that place, and when I said I did he replied: “Really? There’s nothing to like there.” I insisted that I liked the landscape around it; he objected, pointing out that it was just a grass field. I kept insisting, and he grudgingly wrote down my answer.
We went through this process several times. The last straw came when I said the lighting at that place was good and he responded that "it's pretty dark there right now," circling the "bad lighting" answer. He only erased that answer after I lost my patience and told him that these were my answers and if he wanted others he should ask someone else.
I know what this survey's results will look like. Some large percentage of users of this space, it will say, are terribly dissatisfied with it, thus providing support for whatever project this student is designing. What gets me is that the results of a survey as poorly and dishonestly executed as this one will carry greater weight than any non-quantitative argument, simply because they produce a percentage in the end. We're in love with quantitative evidence, no matter how poorly it is constructed.
As I left the place that evening I looked around with a critical eye. There were definitely some areas that could be improved. Come to think of it, I thought, it was plain to see that the lighting was actually pretty bad, and no survey results will convince me otherwise.