Student Evaluations of Instructors
It’s that time of the year when end-of-course ratings and student comments are collected. When the feedback arrives, the quality often disappoints, and if the feedback is collected online, fewer students even bother to respond. Most of the comments are dashed-off half-thoughts, difficult to decipher. Complaints aren’t accompanied by constructive suggestions. Yes, some students do say really nice things, but others sound off with pretty awful comments. Still, I don’t think students are entirely at fault here.
Course evaluations are often viewed as a chore, one of those unpleasant obligations we complete at the end of each course. In the Teaching Professor Blog post “End-of-Course Evaluations: Making Sense of Student Comments,” Maryellen Weimer is bang-on in observing that the comments students dash off can be more confusing than clarifying.
At most colleges, courses are starting to wind down and that means it’s course evaluation time. It’s an activity not always eagerly anticipated by faculty, largely because of those ambiguous comments students write. Just what are they trying to say?
I think part of the reason for the vague feedback is that students don’t believe the evaluations are taken all that seriously. On top of that, they’re in the middle of the usual end-of-semester stress, with big assignments due and final exams looming. It’s just not the best time to be asking for feedback, and so students dash off a few comments that instructors are then left to decipher.
Incredible changes have occurred in the brief 25 years I have spent as a professor in higher education. In the area of technology alone, significant innovations have impacted the way people work, play, and learn. The benefits these technological advances bring to faculty and students are incalculable.
Yet, some areas of higher education have undergone very little change.
With each semester’s end comes the often-dreaded course evaluation process. Will the students be gentle and offer constructive criticism, or will their comments be harsh and punitive? What do students really want out of a course, anyway? A better time to think about course evaluations is at the beginning of the semester. At that point, an instructor can be proactive in three areas that I have found lead to better course evaluations.
Professors often teach in a vacuum: we enter the classroom, deliver our lessons, and leave, rarely getting any feedback on the quality of our instruction before the end of the semester, when formal faculty evaluations are completed by students. Other than grades on tests and other assessments, we don’t know for sure whether students are learning what we are teaching, and we often don’t have a good handle on whether our instruction is working.
Student ratings can provide helpful and legitimate feedback. Unfortunately, all too often students give very little time or thought to end-of-course evaluations, or they use them as an opportunity to make mean-spirited comments about the instructor. And, all other things being equal, an instructor who teaches a challenging course will tend to score lower than an instructor whose course is less rigorous.
Do you ever wonder which is more important: educating your students or producing satisfied customers? When student ratings are the sole measure of teaching assessment, many faculty start to wonder. This seminar offers strategies for evaluating teaching effectiveness based on what really counts: student learning.
Online Seminar • Recorded on Thursday, August 25, 2011
The close of the academic year brings with it the end of courses and the usual student ratings of those courses. Among the many concerns related to this activity are those pertaining to certain items on the rating form: some items ask questions that are irrelevant, given what and how we teach. Of course, that doesn’t seem to prevent students from offering evaluations in those areas.
Two researchers used end-of-course ratings data to identify a cohort of faculty whose ratings in the same course had significantly improved over a three-year period, defining significant improvement as at least a 1.5-point increase on an 8-point scale. In this cohort, more than 50 percent of the faculty had improved between 1.5 and 1.99 points, another 40 percent between 2.0 and 2.99 points, and the rest by even more.