There’s a lot of talk these days about evidence-based instructional practices, so much that I’ve started to worry we aren’t thinking enough about what that actually means.
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Interleaving is not a well-known term among those who teach, and its meaning can’t be surmised from the name alone, but it’s a well-researched study strategy with positive effects on learning. Interleaving involves incorporating material from multiple class presentations, assigned readings, or problem types in a single study session. It’s related to distributed practice—studying more often for shorter intervals (i.e., not cramming)—but it is not the same thing. Typically, when students study and when teachers review, they go over what was most recently covered, or they deal with one kind of problem at a time.
Brief—that pretty much describes exam debriefs in many courses. The teacher goes over the most commonly missed questions, and the students can ask about answers but generally don’t. These kinds of debriefs don’t take up a lot of class time, but that’s about all that can be said for them. For some time now, I’ve been suggesting that students, not the teacher, should be correcting the wrong answers. The students are the ones who missed the questions.
I’ve been ruminating lately about tests and wondering if our thinking about them hasn’t gotten into something of a rut. We give exams for two reasons. First, we use exams to assess the degree to which students have mastered the content and skills of the course. But like students, we can get too focused on this grade-generating function of exams. We forget the second reason (or take it for granted): exams are learning events. Most students study for them, perhaps not as much or in the ways we might like, but before an exam most students are engaged with the content. Should we be doing more to increase the learning potential inherent in exam experiences?
Have your students ever told you that your tests are too hard? Tricky? Unfair? Many of us have heard these or similar comments. The conundrum
I’ve been rethinking my views on quizzing. I’m still not in favor of quizzes that rely on low-level questions where the right answer is a memorized detail or a quizzing strategy where the primary motivation is punitive, such as to force students to keep up with the reading. That kind of quizzing doesn’t motivate reading for the right reasons and it doesn’t promote deep, lasting learning. But I keep discovering innovative ways faculty are using quizzes, and these practices rest on different premises. I thought I’d use this post to briefly share some of them.
Here are two frequently asked questions about exam review sessions: (1) Is it worth devoting class time to review, and (2) How do you get students, rather than the teacher, doing the reviewing? Instead of answering those questions directly, I decided a more helpful response might be a set of activities that can make exam review sessions more effective.
It was just a passing comment in a student’s email reply to me concerning some questions I had raised on her most recent paper. She answered my inquiries and then basically thanked me for “grading with such grace.” This is not a word I have ever associated with my grading. Tough? Yes. Thorough? You bet. But grace? Doesn’t that imply my being too easy? Had I given more credit than the student deserved?
The current state of student assessment in the classroom is mediocre, vague, and reprehensibly flawed. In much of higher education, we educators stake a moral high ground on positivistic academics. Case in point: assessment. We claim that our assessments within the classroom are objective, not subjective. After all, you wouldn’t stand in front of class and say that your grading is subjective and that students should just deal with it, right? Can we honestly examine a written paper or virtually any other assessment in our courses and claim that we grade completely devoid of bias? Let’s put this idea to the test. Take one of your assessments previously completed by a student. Grade the assignment using your rubric. Afterwards, have another educator in the same discipline grade the assignment using your exact rubric. Do your colleague’s grade and yours match? How far off are the two grades? If your assessment is truly objective, the grades should be exact. Not close but exact. Anything else reduces the reliability of your assessment.
It was always the same scenario. I’d be feeling a great sense of accomplishment because I had spent hours grading a set of English papers—painstakingly labeling errors and writing helpful comments. Everything was crystal clear, and the class could now move on to the next assignment. Except it wasn’t, and we couldn’t. A few students would inevitably find their way to my office, plunk their papers down on my desk, and ask me to explain the grade. Something had to change. I knew exactly why I was assigning the grades, but I obviously needed to find a more effective way of communicating these reasons to my students.