For many of us faculty, adding a new teaching strategy to our repertoire goes something like this. We hear about an approach or technique that sounds like a good idea. It addresses a specific instructional challenge or issue we’re having. It’s a unique fix, something new, a bit different, and best of all, it sounds workable. We can imagine ourselves doing it.
Engagement in a continuous, systematic, and well-documented student learning assessment process has been gaining importance throughout higher education. Indeed, implementation of such a process is typically a requirement for obtaining and maintaining accreditation. Because faculty need to embrace learning assessment in order for it to be successful, any misconceptions about the nature of assessment need to be dispelled. One way to accomplish that is to “rebrand” the entire process, that is, to change how it is perceived.
If there’s a perfect grading system, it has yet to be discovered. This post is about point systems—not because they’re the best or the worst but because they’re widely used. It is precisely because they are so prevalent that we need to think about how they affect learning.
It would be nice if we had some empirical evidence to support our thinking. I’m surprised that so little research has been done on this common grading system. Does it promote more effective learning (as measured by higher exam scores or overall course grades) than letter grades or percentages? Does it motivate students to study? Does it make students more grade oriented or less so? Does it provoke more grade anxiety than other systems or less? Does it make a difference whether we use a 100-point system or a 1,000-point system? We all have our preferences—and sometimes even reasons—for the systems we use, but where’s the evidence? I can’t remember reading anything empirical that explores these questions—if you have, please share the references.
There’s a lot of talk these days about evidence-based instructional practices, so much so that I’ve gotten worried we aren’t thinking enough about what it means.
Interleaving is not a well-known term among those who teach, and it’s not a moniker whose meaning can be surmised, but it’s a well-researched study strategy with positive effects on learning. Interleaving involves incorporating material from multiple class presentations, assigned readings, or problems in a single study session. It’s related to distributed practice—studying more often for shorter intervals (i.e., not cramming). But it is not the same thing. Typically, when students study and when teachers review, they go over what was most recently covered, or they deal with one kind of problem at a time.
Brief—that pretty much describes exam debriefs in many courses. The teacher goes over the most commonly missed questions, and the students can ask about answers but generally don’t. These kinds of debriefs don’t take up a lot of class time, but that’s about all that can be said for them. For some time now, I’ve been suggesting that students, not the teacher, should be correcting the wrong answers. The students are the ones who missed the questions.
I’ve been ruminating lately about tests and wondering if our thinking about them hasn’t gotten into something of a rut. We give exams for two reasons. First, we use exams to assess the degree to which students have mastered the content and skills of the course. But like students, we can get too focused on this grade-generating function of exams. We forget the second reason (or take it for granted): exams are learning events. Most students study for them, perhaps not as much or in the ways we might like, but before an exam most students are engaged with the content. Should we be doing more to increase the learning potential inherent in exam experiences?
Have your students ever told you that your tests are too hard? Tricky? Unfair? Many of us have heard these or similar comments. The conundrum
I’ve been rethinking my views on quizzing. I’m still not in favor of quizzes that rely on low-level questions, where the right answer is a memorized detail, or of quizzing strategies whose primary motivation is punitive, such as forcing students to keep up with the reading. That kind of quizzing doesn’t motivate reading for the right reasons, and it doesn’t promote deep, lasting learning. But I keep discovering innovative ways faculty are using quizzes, and these practices rest on different premises. I thought I’d use this post to briefly share some of them.
Here are two frequently asked questions about exam review sessions: (1) Is it worth devoting class time to review? and (2) How do you get students, rather than the teacher, to do the reviewing? Instead of answering those questions directly, I decided a more helpful response might be a set of activities that can make exam review sessions more effective.