
Hearing Your Student Evaluations Differently?
Not long after I had received most of my students’ mid-semester survey results, I came across an AI tool that would create a song. The tool is

Those who write about teaching persona (the slice of our identities that constitutes the “public teaching self”) encourage us to start by reflecting on the messages we want to send to students. A dialogue with ourselves is a useful beginning, but for the last days of a semester another option might be more intriguing and revealing.

No matter how much we debate the issue, end-of-course evaluations count. How much they count is a matter of perspective. They matter if you care about teaching. They frustrate you when you try to figure out what they mean. They haven’t changed much, and they are regularly administered in ways at odds with research-recommended practices. And faculty aren’t happy with the feedback they provide. A survey (Brickman et al., 2016) of biology faculty members found that 41% of them (from a wide range of institutions) were not satisfied with the current official end-of-course student evaluations at their institutions, and another 46% were only satisfied “in some ways.”
It takes a certain amount of courage to talk with students about course evaluation results. I’m thinking here more about formative feedback the teacher solicits during the course, as opposed to what’s officially collected when it ends. Despite how vulnerable revealing results can make a teacher feel, there are some compelling reasons to have these conversations and a powerful collection of benefits that may result from doing so.
Shortly after 2000, higher education institutions started transitioning from paper and pencil student-rating forms to online systems. The online option has administrative efficiency and economics going for it. At this point, most course evaluations are being conducted online. Online rating systems have not only institutional advantages but also advantages for students: students can take as much (or little) time as they wish to complete the form, their anonymity is better preserved, and several studies have reported an increase in the number of qualitative comments when evaluations are offered online. Other studies document that overall course ratings remain the same or are slightly improved in the online format.
Recent research verifies that when looking at small differences in student ratings, faculty and administrators (in this case, department chairs) draw unwarranted conclusions. That’s a problem when ratings are used in decision-making processes regarding hiring, reappointment, tenure, promotion, merit increases, and teaching awards. It’s another chapter in the long, sad story of how research on student ratings has yet to be implemented in practice at most places, but that’s a book, not a blog post. Here, my goal is to offer some reminders and suggestions for when we look at our own ratings.
Reading students’ comments on official end-of-term evaluations—or worse, online at sites like RateMyProfessors.com—can be depressing, often even demoralizing. So it’s understandable that some faculty look only at the quantitative ratings; others skim the written section; and many others have vowed to never again read the public online comments. It’s simply too painful.
How else might you respond? Here are seven suggestions for soothing the sting from even the most hurtful student comments:
Course evaluations are often viewed as a chore, one of those unpleasant obligations we complete at the end of each course. In the Teaching Professor Blog post “End-of-Course Evaluations: Making Sense of Student Comments,” Maryellen Weimer is bang-on in stating that the comments students dash off can be more confusing than clarifying.
With an increasing number of rating systems now online, the question of who completes those surveys (since not all students do) is one with important implications. Are students dissatisfied with the course and the instruction they received more likely to fill out the online surveys? If so, that could bias the results downward. But if students satisfied with the course are more likely to evaluate it, that could introduce bias in the opposite direction.