If there’s a downside to another academic year coming to a successful close, it’s reading course evaluations. This post explores how we respond to those one or two low evaluations and the occasional negative comments found in answers to the open-ended questions. Do we have a tendency to overreact? I know I did.
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Are students taking their end-of-course evaluation responsibilities seriously? Many institutions ask them to evaluate every course and to do so at a time when they’re busy with final assignments and stressed about upcoming exams. Response rates have also fallen at many places that now have students provide their feedback online. And who hasn’t gotten one or two undeserved low ratings—say, on a question about instructor availability when the instructor regularly came early to class, never missed a class, and faithfully kept office hours? Are students even reading the questions?
Course evaluations are often viewed as a chore, one of those unpleasant obligations we complete at the end of each course. In the Teaching Professor Blog post “End-of-Course Evaluations: Making Sense of Student Comments,” Maryellen Weimer is bang-on in stating that the comments students dash off can be more confusing than clarifying.
Incredible changes have occurred in the brief 25 years I have spent as a professor in higher education. In the area of technology alone, significant innovations have impacted the way people work, play, and learn. The benefits these technological advances bring to faculty and students are incalculable.
Yet, some areas of higher education have undergone very little change.
With each semester’s end comes the often-dreaded course evaluation process. Will the students be gentle and offer constructive criticism, or will their comments be harsh and punitive? What do students really want out of a course, anyway? A better time to think about course evaluations is at the beginning of the semester. At that point, an instructor can be proactive in three areas that I have found lead to better course evaluations.
Professors teach in a vacuum; we enter the classroom, deliver our lessons, and leave, rarely getting any feedback on the quality of our instruction before the end of the semester, when formal faculty evaluations are completed by students. Other than grades on tests and other assessments, we really don’t know for sure if students are learning what we are teaching, and we often don’t have a good handle on whether our instruction is working.
Student ratings can provide helpful and legitimate feedback. Unfortunately, all too often, students give very little time or thought to end-of-course evaluations, or they use them as an opportunity to make mean-spirited comments about the instructor. And, all things being equal, an instructor who teaches a challenging course will score lower than an instructor whose course is less rigorous.
Two researchers used end-of-course ratings data to generate a cohort of faculty whose ratings in the same course had significantly improved over a three-year period. They defined significant improvement as a 1.5-point increase on an 8-point scale. In this cohort, more than 50 percent of faculty had improved between 1.5 and 1.99 points, another 40 percent between 2.0 and 2.99 points, and the rest even more.
Online Course Quality Assurance: Using Evaluations and Surveys to Improve Online Teaching and Learning
In order to improve online programs, courses, and instruction, you have to first determine your goals, select metrics that will tell you what you want to know, analyze those metrics for clues about needed changes, and then make those changes. It may sound simple, but it isn’t.
If evaluation sounds good in theory but feels bad in practice, it may be that you or others are operating under some common misconceptions.