We are not recruiting our teachers from the bottom third of high school students going to college.
– McKinsey & Company
Of all the sobering statistics bouncing around the halls here, the claim that we recruit our teaching population from the bottom third of high school students going to college is the one I can’t go a week without hearing cited.
The problem, people are arguing, is our teachers aren’t smart enough to do the job.
I’m done with it.
The same people we are asking to improve schools, to whom we entrust our children, and whom we consistently ask to work harder for more hours are also supposed to do so while we call them stupid?
Then, beautifully, we stand around and wonder why we can’t attract more candidates to the field.
“Only the worst performing people want this job,” we say to potential recruits, “Come apply.”
As tempted as I am to look into the math of all of this, others have taken care of it. David Wees has this compelling post that pulls apart the implicit meaning of the statistic, writing, “I think the US public should be very insulted by this argument rather than being up in arms about how poorly qualified their teachers are.”
Things really heat up in the comments section.
Wees also points to a post by Larry Ferlazzo, who actually digs into the math behind the claim and tears greatly at its validity.
At the end of it all, Ferlazzo writes, “In other words, this bottom-third thing does seem to me to be a bunch of baloney.”
I don’t want to rewrite Wees’s or Ferlazzo’s arguments, or this relevant piece from Scott McLeod.
What has been troubling me is the glomming on to the idea that a student’s grades in school are reflections of his intelligence rather than reflections of how well that student played the game of school.
I’m not discounting the idea that a student who received strong grades in school could also possess other, more disruptive intelligences.
I’m asking us not to discount the idea that the student who received weaker grades in school might be just as intelligent.
In the comments of Ferlazzo’s post, the question is raised of the correlation between higher ACT and SAT scores and a person’s effectiveness as a teacher. A moderate or even high correlation wouldn’t surprise me. I can see little reason why a person who scored well on a standardized test wouldn’t be more effective at helping others learn to score well on such tests.
And, finally, I wonder about anyone’s willingness to use a test administered to 16-18 year olds as a measure for their intelligence as adults. From a developmental standpoint, their brains and bodies are in a state of absolute flux. I’m more intelligent than I was at 17, but I don’t know that I’d do as well on the ACT.
I am certain 30-year-old me or even 22-year-old me was better prepared to lead a classroom than 17-year-old me.
The McKinsey report from which the statistic originated has been around since 2007 and has probably been used to denigrate teachers since then. Any of the people I’ve heard cite the stat this semester could have spent the same 10 minutes I did researching to understand its flaws.
I worry they didn’t.