With an increasing number of rating systems now online, the question of who completes those surveys (since not all students do) has important implications. Are students who are dissatisfied with the course and the instruction they received more likely to fill out the online surveys? If so, that could bias the results downward. But if satisfied students are more likely to evaluate the course, that could introduce bias in the opposite direction.
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Incredible changes have occurred in the brief 25 years I have spent as a professor in higher education. In the area of technology alone, significant innovations have impacted the way people work, play, and learn. The benefits these technological advances bring to faculty and students are incalculable.
Yet, some areas of higher education have undergone very little change.
With each semester’s end comes the often-dreaded course evaluation process. Will the students be gentle and offer constructive criticism, or will their comments be harsh and punitive? What do students really want out of a course, anyway? A better time to think about course evaluations is at the beginning of the semester. At that point, an instructor can be proactive in three areas that I have found lead to better course evaluations.
Professors teach in a vacuum: we enter the classroom, deliver our lessons, and leave, rarely getting any feedback on the quality of our instruction before the end of the semester, when students complete formal faculty evaluations. Other than grades on tests and other assessments, we don't know for sure whether students are learning what we are teaching, and we often don't have a good handle on whether our instruction is working.
Student ratings can provide helpful and legitimate feedback. Unfortunately, all too often, students give very little time or thought to end-of-course evaluations, or they use them as an opportunity to make mean-spirited comments about the instructor. And, all things being equal, an instructor who teaches a challenging course will score lower than an instructor whose course is less rigorous.
Two researchers used end-of-course ratings data to identify a cohort of faculty whose ratings in the same course had significantly improved over a three-year period. They defined significant improvement as a 1.5-point increase on an 8-point scale. In this cohort, more than 50 percent of faculty had improved between 1.5 and 1.99 points, another 40 percent between 2.0 and 2.99 points, and the rest by even more.
If evaluation sounds good in theory but feels bad in practice, it may be that you or others are operating under some common misconceptions.
Unless they have a real problem with how the course was run, most students fill out end-of-course evaluations so quickly that there's often very little valuable information in them. Here are two ways that Wayne Hall, psychology professor at San Jacinto College in Texas, elicits helpful feedback on his courses:
Online instructors receive poor evaluations for any number of reasons, including lack of experience, inadequate training, and poor communication skills. Other times, the poor reviews are more reflective of the course design than the instructor who’s teaching the course. That distinction is unimportant to the students.
Despite all the high-tech communication tools available to online instructors today, including discussion boards, email, IM, wikis, podcasts, blogs, and vlogs, every once in a while Dr. B. Jean Mandernach likes to use a tool that was invented way back in 1876: the telephone.