If You’re Going to Do Test Prep, Don’t Be Horrible (20/365)


We will, at a later date, get to my railing against test preparation. For today, though, let’s take a look at the problem below.

It is not unlike a problem a well-meaning teacher might put in front of their students. Because exposure/preparation/familiarity. Our well-meaning teacher would take this or something like it from a released bank of items on Standardized Exam Ultra and give it to their students to complete. Why not? The items are aligned to standards, the students will see things like this when they take the real Standardized Exam Ultra, so let’s get them started.

Pedagogy aside (and it’s quite difficult not to go off that way), a completed and scored set of these practice items would give the teacher little to no useful information by which he could shift his practice.

Let’s say 90 percent of the class misses question three. What would scoring this question tell our teacher? Well, it would tell him that, in a class of 30 students, 27 got the question wrong. What he’d be likely to claim is that this shows his students don’t understand verb tenses, commas, and apostrophes. That’s quite a bit to fit into a single question. In reality, there are at least 27 different paths these incorrect students could have taken to the wrong answer. Getting this question wrong only informs instruction insofar as our teacher was wondering whether students got it wrong or right.

If our teacher insists on using this form of assessment, I’d point to a few easy tweaks that would provide greater clarity while increasing the cognitive load on students. I’d even go so far as to claim these tweaks would increase the likelihood of the students arriving at the correct answer.

My first alteration is to include a space under each question asking students how they reached the answer they chose. This reveals the paths students took not only to incorrect answers, but to correct answers as well. Suddenly, our teacher is able to better understand process. It also moves student reflection into the assessment itself. Rather than asking students why they missed a problem post hoc, we are building a mechanism for students to consider the why of their choices in the moment. Here is the first place I’d argue our group of 27 is more likely to shrink as students pause to think about what they’re doing in ways exercises like these don’t naturally demand. We even put value on answers of “I don’t know.”

What if the shift above isn’t the only move we make? What if we add another space for feedback? This time, though, our students have passed their papers to a classmate, and our teacher has asked each classmate to look at her peer’s answer and justification and then reply with her thoughts on that justification. At this point, we’ve begun a conversation, if a small one, where our students are no longer looking only at the answers, but looking at the reasoning and asking whether they arrived anywhere worth going.

Think, then, of the information our teacher is working with when he collects these papers. Rather than simply knowing whether a given student got a question correct or incorrect, he now also knows how they got there and has given the whole class an opportunity to chime in on that thinking at the same time.

The next day, he might pull together students who took similar approaches to reaching the incorrect answer and offer some targeted instruction in correcting their missteps. He might not point out the errors, but put our three correct students in a group with nine other peers and have them work in small groups to consider how they answered and why they made the choices they did.

It is conceivable our class might grow frustrated that our teacher won’t simply tell them the correct answer. This is good. I’m assuming these students have computers disguised as phones in their pockets and bookbags. When the discussion reaches its highest frustration, our teacher might say, “Okay, see if you can find help online.”

Here, we’ve taken something boring and inauthentic and built a community around it. We’ve manufactured value in something that originally would have told us only whether students picked the correct letter or the incorrect letter. While this is not preferable to authentic, contextualized tasks that would also lead students to practice these skills, it is certainly better than we found it.

7/365 What If We Considered What We Want Students to Believe?

A friend of mine, a scientist, was talking to me the other day about the beauty of the scientific method. “You do an experiment,” he said, “to find out what happens.”

The conversation centered on the idea of not trying to find a specific thing, but trying to find something. I pointed out that any scientific experiment was trying to find a specific thing; the difference was that my way of thinking was upfront about what it was attempting to find. His was looking for something, but didn’t say what it was until it had been found.

After a break caused by classwork and assignments, I’m back to Eleanor Duckworth’s The Having of Wonderful Ideas. The latest chapter focuses on the beliefs we want to curate in our students and the implicitness of such wants.

Duckworth identifies four tenets of beliefs (the full list appears below). Most interesting is her assertion that wanting to do or learn things because “it’s fun” is not the same as asserting that learning should be fun.

Hitting home for me was Duckworth’s assertion that we want to play to all four reasons for belief in anything we teach, but that three of the four fall away when it comes time for assessment, for the sake of ease of execution. Yes, we want you to be interested in something because it is fun, but we will assess you based on your understanding of the real world.

Duckworth outlines beliefs as being vested in:

  • The way things are.
  • It’s fun.
  • I-can.
  • People-can-help.

While most of education may hold the attempt to help students believe in all four as its driving force, Duckworth argues (and I agree) that we end up assessing student knowledge based on their understanding of “the way things are.”

For the first few years, I wanted students to investigate reading for all four of the reasons listed above, but my projects/tests bore remarkable witness to the importance of the first only.

Later, I made the love of reading and texts my goal for each year of teaching; the others were supplemental, and the reading and learning in the classroom were better.

It all makes me want to turn to teachers and ask them to look at their tests. Which of the four are you looking at in your assessments? If it’s the world as it is, are you preparing students to create a world as it should be?

 

Things I Know 311 of 365: Schools need question portfolios

Always the beautiful answer who asks a more beautiful question.

– e.e. cummings

I stood in the snack food aisle today, in awe of what we can do to a potato. Beyond ridges or smooth, the modern potato chip can look like pretty much anything we want it to look like and taste like pretty much anything we want it to taste like.

Humankind has mastered the potato.

Take that, blight!

After the awe, I started to wonder. How do we do it? How do we make this batch of potato chips taste like dill pickles and that batch taste like prawns? When I buy ketchup-flavored potato chips, is it because they used ketchup or they found the chemicals necessary to make potatoes taste like ketchup? I had to start looking for the dishwashing liquid because the potato chips were too interesting.

On the drive home, I started thinking about potato chips and how we keep track of students’ learning.

Portfolio assessment has been around for a while and more resources have been devoted to its use and misuse than I care to plumb. What if we’re doing it wrong?

What if, instead of or in addition to student work, we were to keep a portfolio of the questions students asked?

Imagine a question portfolio that followed students throughout their time in school that reminded them and their teachers of the questions with which they’d wrestled as they learned. What would it look like if, attached to each question, was the latest iteration or the lineage of answers the student had crafted for that question?

What difference would it mean to create a culture of learning where parents were encouraged to ask their children, “What questions did you ask today in school?”

I have a suspicion that in valuing questions, we’d have no other choice but to make schools into places where students had the space to answer the questions they thought most intriguing. It also seems likely to me that a student who has been taught the value of a good question and been given the support, resources, and space to seek answers will have no trouble learning anything that’s necessary throughout her life.

We do a decent job of telling kids there are no stupid questions, but a horrible job at showing them that the act of questioning isn’t stupid.

Once I got home, I remembered I’d read a passage about the science of potato chips in David Bodanis’s The Secret House. I found it on my shelf and started searching for answers to my grocery store questions.

What questions did you ask today?

Things I Know 260 of 365: I’m not sure what I did right

When we fail in this diagnostic role we begin to worry about ‘assessment.’

– David Hawkins

I’m struggling to write tonight. I’ve been struggling to write for the last few days.

I’ve an assignment due tomorrow – 8-10 pages, and I can’t get myself invested in it. Or, I’m too invested in it.

For the last assignment in this class, I submitted work of which I was proud. I spent time and thought on the assignment. I worked to refine my thinking and understand which other thinkers served as progenitors to my ideas.

I submitted my work feeling I had been thoughtful and diligent. I had learned something new and refined old thinking.

When I got my assignment back, I struggled to find positive comments. I struggled to find comments that were in response to my ideas.

I didn’t need praise lobbed at me or ego stroking. I just needed a clear sign of where I was on the right track; otherwise, I started to question whether I was anywhere near that track.

Because I am who I am, I submitted a re-write of the assignment. Redoubling my efforts, I consulted the rubric even more closely the second time than the first.

While my grade on the second attempt was higher than the grade on the first, I’m still sitting here stymied as I work to complete this new assignment.

It’s a horrible feeling.

I don’t know what I did well in the last assignment upon which I can build for this go-round. I have lists of things to avoid, but I don’t know what I’m good at in context of trying to do what’s been asked of me.

I’ll write more tonight.

I’ll write more tomorrow.

I’ll turn in my assignment tomorrow.

I’ll be hesitant to feel proud.

And the thing that kills me – that absolutely drives me batty – is that the work I did on the first assignment and on the re-write was fine work. I am still proud of that work.

But there’s a teacher’s opinion in there. There’s a teacher’s opinion muddying the waters of my learning.

And I’m really hating the fact that it matters to me.

Things I Know 178 of 365: Report cards should be better

Zachary has been a joy to work with this year. I will miss him next fall.

– Mary Cavitt, my kindergarten teacher

From kindergarten forward, the majority of schools get progressively worse at telling students and parents what’s being learned and how well students are learning it.

A few days ago, my mom stumbled upon a folder marked “ZAC – School” while searching for immunization records.

Not the least of the documents in the folder were both my first and my last report cards as a K-12 student.

I found my final high school report card first.

When I saw it, my eyes flashed to class rank, then GPA, then a quick scan to remind me of my final courses and teachers.

A few pieces of paper later, 24 years after it was issued, I found my kindergarten report card.

It required a little more time for consumption. Nowhere did it tell me where I ranked among the other 5- and 6-year-olds. I had no idea as to my kindergarten GPA either.

The only real use for the report card was a detailed accounting of my progression as a student throughout the year.

Groan.

I could remember every piece of information from that first year of K-12. I would be hard-pressed to recount half of 1% of anything covered in my senior macroeconomics class. I’d honestly forgotten I’d taken macroeconomics until I saw the report card.

In the comments section for each quarter, Mrs. Cavitt wrote a short message to my mom alerting her to my progress and letting her know I was being seen by my teacher.

In the “Comment Explanation” section of my senior report card – nothing.

As a kindergartener, I had little use for my report card. It was a document for the adults in my life to examine and use as a starting place for conversation.

In my later years, the report card held much value. It was a quarterly mile marker of my progress toward college and beyond. Still a communication between the adults in my life, it raised more questions than answers. I have no idea how I got that B in my first quarter of English IV, nor do I know what improved in the second quarter that led to an A.

I’m certain my parents asked questions on these very topics. I’m sure I stumbled through my answers and took stabs at the multitude of possible reasons for my grades.

I try to imagine, though, what would have transpired were I not as successful as a student.

If I’d been lost in the tall grass of high school with Cs, Ds and the occasional F, this report card would have served no purpose other than to reinforce my failures and dumbfound my parents.

If I’d not had such dedicated parents, the conversations would have stopped there and the frustrations would have continued to mount.

The modern middle and high school report card is an arcane relic made supremely ironic in light of the millions of dollars spent nationally in the name of gathering data.

I’m certainly aware of the systemic impediments in place, but improving communications with students and parents on individual learning need not include standardized tests and computer-generated reports.

At the end of my kindergarten year, in math, I could name the four basic shapes, count to 43, add, subtract, print my numerals and much more.

At the end of my senior year of high school, in math, I got a B.

Which measure focused on the learning?

Things I Know 131 of 365: If the thinking is good, I don’t care about citation

Old teachers never die, they just grade away.

– Unknown

Saturday, my mom graduates from her Master’s program.

Tonight, as we talked on the phone, she was checking her grades as they showed up online. She reported the points she’d earned on her assignments, and I logged in to my program’s website and looked at the points I’d earned in my last course.

We exchanged point information as badges of honor.

“I earned 388 of 390 points,” I said, “but I lost those two points because of inconsistent APA citations.”

It’s true.

The less-than-perfect score with which I finished my last course was a result of formatting.

For a few entries on a list of works I’d referenced, I capitalized all of the first letters of the books’ titles rather than the first letter of only the first word as the American Psychological Association decrees.

In my defense, the books themselves had the first letter of each word capitalized.

While the Modern Language Association honors such formatting choices, the APA judges this level of capitalization as showy and ostentatious.

I remember when my score for that particular assignment came back to me with the notes from my instructor.

“The APA format of some entries need improvement.”

I was devastated.

It wasn’t for the reasons you’d think. Sure, my formatting was a bit off, but he’d scored my thinking as perfect.

In the last 30 years, I’ve had many thoughts. They’ve been varied in their depth and their breadth. Some were decent. Others were not so hot. I will admit now, not one single thought I’ve ever had has been perfect.

On that assignment and every other assignment for the course, I received perfect marks on my thinking and learning.

I began to worry I’d reached Maslow’s self-actualization, and it wasn’t all it had been cracked up to be.

There is, of course, at least one other possibility.

Given the portions of the assignment that had definite objective qualifiers, my instructor was able to give a less-than-perfect grade and feel justified in his thinking. There were standards, after all.

In the squishier, more subjective areas of the assignment, where the quality of thinking, not the quality of writing or citation, was at question, leeway was abundant and doubt was given more benefit than it had earned.

I’m not saying I should have failed.

I earned an A for the course, and worked diligently for it.

My thoughts, though, were imperfect and should have been assessed as such. In some of my thinking, I was lazy. For some of my wording, I was imprecise. As each assignment unfolded, I learned such lackadaisical strategies would yield the same reward as being more careful with both my language and my thinking.

I found the bar, sat atop it and never imagined what could be higher.

I’m working with my senior classes to help them practice their skills at close reading. Almost every day they analyze a piece of text for its linguistic, semantic, structural or cultural machinations.

It’s tough work and a skill to be refined.

As I assess their attempts, I’m tempted to give the same marks to the “almost” answers as I would to the “exactly” answers.

I resist.

They can think more deeply.

They should think more deeply.

That will remain the skill I assess, and my standards will remain high.

If they cite their work with some strange bastardization of MLA and APA, I’ll be happy. So long as it’s thoughtful.

Things I Know 119 of 365: Report cards can be so much more

It is difficult to imagine a more potent lever for changing the priorities of schools than the evaluative measures we employ.  What we count counts.  What we measure matters.  What we test, we teach.

– Elliot W. Eisner, “The Meaning of Alternative Paradigms for Practice”

Writing narrative report cards is difficult. It is time-consuming and difficult.

At the end of the first and third quarters, SLA teachers write narrative report cards for each student.

Narratives don’t replace traditional report cards; they augment them.

Four years ago, my first round of narratives snuck up on me. I joined the faculty midway through the first quarter. I’d barely learned the students’ names and was being asked to write a few paragraphs about their strengths and weaknesses as well as set a few goals for the remainder of the school year.

While each student got a couple personalized sentences, that first round of narratives included a lot of copying and pasting.

It wasn’t until I sat in parent conferences with my advisees and read what my colleagues had written in their narratives that I started to understand what narratives could be for the students.

My second attempt was much better.

In year two, I learned that writing the narratives to the students rather than about the students helped me to feel I was connecting with them as I wrote. It also helped me to remember I was writing about a person, fighting off the slight tendency to write about my students in a dry and clinical manner.

In years three and four, I felt the greatest shift in my classroom practice as influenced by narratives. While looking at my grade book helped inform what I wrote to my students about their learning over the course of a quarter, the data it provided quantified students’ learning when I was trying to qualify it.

I began to use the note function with assignments in the grade book to track thinking that was particularly poignant. The use of Google Docs in the classroom made almost every piece of written work instantly searchable. I could copy and paste again, but this time it was excerpts from student work or comments I’d left that illustrated areas of strength or weakness in the quarter.

This past quarter, I asked students to keep longitudinal records of their thinking regarding the books they read in class. Each record had a section dedicated to a key literary concept. When writing narratives, I could track students’ abilities to articulate how a book’s author used figurative language to tell a story. If the records were blank or incomplete, I could comment on that as well.

Because narratives can be time-consuming and difficult, I’d created systems and structures throughout the quarter that could feed my reporting to offer a detailed assessment of student learning.

Because I’d built these systems and structures, my students and I could track the learning, reading and writing happening in the classroom. Not quite a portfolio, I’d built a web of data.

Writing a good narrative requires detail. I built assignments that supplied that detail. Multiple-choice, fill-in-the-blank, true-and-false, and matching assessments won’t work in my classroom. They don’t provide me with the deep understanding of my students’ progress I know I’ll need when sitting down to write narratives.

I changed my approach to teaching because I needed a better way to write about my students’ learning. Because I changed my approach, I came to better know about my students’ learning.

Writing narrative report cards is time-consuming and difficult.

I’m a better teacher for it.

Things I Know 30 of 365: Feedback can be tricky

Do not say a little in many words, but a great deal in a few.

– Pythagoras

For a pretty large chunk of the day, yesterday, I was in my office – lights off, bottle of lavender essence open, Balmorhea playing on iTunes.

I was working to complete an implementation plan for the inquiry project assigned as part of my grad program.

By the end of it all, my desk was covered in printed resources and my web browser was creaking under the weight of all my open tabs.

I submitted my 6 hours of work ahead of schedule, hopeful it rose to the challenge presented by the assignment.

For the plan, I’d suggested some ideas the practicality of which I was unsure. As I juggled them in my head, I was fairly certain I’d culled the best of the ideas. Still, I was uncertain.

This afternoon, I logged in to the course to find my assignment had been graded. I’d earned 45 out of 45 points. Relieved, I turned my attention to the comments field to see how the ideas had played out with my facilitator:

The plan summary clearly articulates a focused problem statement: the specific goals, which are measurable; the specific solutions you have chosen for this project; the preparatory steps; and the expected outcomes for the inquiry project. The weekly plans are clear, creative, and appropriate with evidence of insight and thoughtful planning.

While I’m pleased with my score, it doesn’t really do much for me as feedback.

Neither do the comments.

Two circumlocutory sentences with words that certainly sound as though they should mean something, but no.

Today, I had the honor of moderating a panel discussion on how schools can foster student innovation. While I can carry on a conversation with a tree stump, I’ve never moderated anything. For 90 minutes, amid some interesting audio issues, I attempted to probe the minds of five deeply thoughtful educators. I was, in a word, nervous.

While the audience clapped when they were supposed to and several strangers told me “good job” when everything had concluded, I was uncertain of the job I’d done.

Later, sitting in the office snarfing a bag of popchips and downing lukewarm coffee, I checked in on Twitter.

From Chris, I saw “@MrChase is an amazing moderator,” with a picture of the panel in progress.

Michael replied with, “So true…You are rocking, Zac.”

And from Ben, “You did an amazing job. Period. You=my hero.”

I realize they are tweets. Even re-typing them here, I feel a bit silly.

Still, those three lines contained more feedback than any of the acrobatic language from my facilitator.

I know these three. Through the relationships we’ve cultivated, I’ve come to understand their expectations and what it means to earn their approval. While I see the hyperbole in what they’ve said, I also know they do not offer up public praise lightly.

I understood their expectations, and they offered up their opinions using clear language.

I know I completed neither the implementation plan nor the panel moderation perfectly.

The feedback I received on both was positive. In fact, the implementation plan score implies I did nothing wrong.

Still, I’ll never message my facilitator seeking advice for improvement. The relationship is too distant, the language too opaque.

Should I ever need to moderate again, though, I’ll seek the advice of these three, knowing they will evaluate me with a notion to help me be a better version of myself.