I Work Hard to Doubt Your Research


I’ve been known to read a study or two. I can back up my point with this research or that. Today, I was in a meeting where I easily pulled up 20 years’ worth of research to make my point. And while I’m not a statistician or an economist, I can evaluate whether a study is worthy of my attention better than most folks I run into. Treatment and control groups, t-tests, p-values, quasi-experiments – thanks to more semesters of graduate-level statistics courses than I’d ever intended to complete, I am functionally literate.

So, even though I appreciate a randomized controlled trial and can revel in rejecting the null hypothesis, it may seem surprising that I work so hard to maintain a bias of doubting even the most well-constructed study.

When it comes to what I privilege as a belief, I’ll point you to the sociologists and anthropologists who examine a phenomenon closely, take care to understand as much of everything around it as they can and present their findings by saying, “This thing happened, and here are the elements and conditions that happened when it happened.” Then, they turn around and return to watching, calling back as they leave, “We are going to keep watching to find out if it still happens when other things happen.”

Why, though, do I work so hard to maintain a bias in favor of this descriptivist approach? I think of it the other way around: I’m resisting the sexiness of numbers. An implied certainty can creep in when numbers are used to explain why something happens. Whatever quantitative study you choose to believe is basically saying, “If X, Y, and Z are equal, then we can say with this level of certainty that this thing will happen when you do that other thing.” It’s that first part of the statement that keeps me suspicious of education research. Tell me the last time a teacher was able to control for all relevant variables when deciding which practice to employ in her classroom.

This is not to say I put much stock in the sociologists’ ability to predict the future. It is only to say I take comfort in the implied humility of reporting your results by acknowledging they are the conclusions you arrived at when trying to figure things out by watching this time.

It is also not to say I pooh-pooh a well-constructed experimental study. I hear and read each one I encounter as, “Here’s a pretty good guess of what will happen when you do these things and know this about the population to which you’re doing it.”

All of this is how I think about dictionaries. Dictionaries are descriptivist tools. Adding a new word to an edition of a dictionary does not freeze that word in time, prescribing how it is to be used in language forevermore. Like the work of a sociologist, a dictionary’s contents are meant as a snapshot of language, putting newly deployed words alongside those already in existence. When Homer Simpson’s “d’oh” first found its way into the dictionary, few (if any) people started using it in their everyday speech or formal writing as a result.

For words with which I’m not initially familiar, the dictionary can act as our statisticians’ studies do. Looking up “fat” after my early-’90s self was told that’s how I looked would help me to understand what had been meant by the statement. Here, too, there’s a flaw. Without knowing the context implied by the dictionary’s definition, I may walk away thinking the statement meant I was corpulent when it was meant to imply I was “phat.” The definition was the dictionary’s best guess.

I rely on dictionaries to help me navigate new terms in the same way I look to the results of well-designed studies to tell me about new ideas or practices – with a bias of believing they are providing me the best guess available at the time.


This post is part of a daily conversation between Ben Wilkoff and me. Each day Ben and I post a question to each other and then respond to one another. You can follow the questions and respond via Twitter at #LifeWideLearning16.