The close of the academic year brings with it the end of courses and the usual student ratings of those courses. Among the many concerns raised about this activity is the presence of certain items on the form: items that ask irrelevant questions, given what and how we teach. Of course, that doesn’t seem to prevent students from offering evaluations in those areas.
The collection of items on student rating forms can be thought of as an operational definition of good teaching and we all know that good teaching can be defined in many different ways. One interesting thing you can do with these instruments is to go through and cross off or change the items that don’t fit with your definition. I’m not suggesting you revise the instrument and then administer it, but making the changes for your own edification enables you to see where you agree and disagree with the definition proposed by the instrument.
Too many rating instruments remind me of that old expression that a camel looks like a horse designed by a committee. They are assembled via a political process where those with labs want questions about labs and those with studio courses want questions about those. Usually the compromise involves including both.
When an instrument is empirically constructed, the process of deciding what goes on it involves something called validity. David Kember and Doris Leung offer a simple description of validity, saying it is established “if an instrument actually provides a measure of what it purports to measure” (p. 342). There are two kinds of validity: face validity and content validity. Face validity means the wording of the items refers to what is being measured. That’s pretty straightforward and not really a problem on most course evaluation instruments. Content validity implies that an instrument includes all the dimensions, aspects, or parts of the construct and that those parts are represented in a balanced way. That’s a problem with something like good teaching, where definitions differ and are not universally agreed upon.
Empirically developed rating instruments assemble their collection of items based on the reports of various interested parties. In the case of teaching, that has meant students (current and former), teachers (with special emphasis on the views of good teachers), and administrators. In the mid-1970s, Ken Feldman, whose meta-analyses on various aspects of ratings are legendary, reviewed the research on the ingredients and components of effective teaching (as reported by these groups) and, from that large and not well-organized database, derived a set of 19 characteristics. His work and others like it justify the inclusion of items that commonly appear on rating forms: things like the teacher’s preparation and organization of the course, the teacher’s enthusiasm for the subject, and the teacher’s availability and helpfulness.
Times change, and as Kember and Leung point out, the characteristics that emerged from Feldman’s analysis of the literature focus on teaching. Eleven of Feldman’s 19 characteristics begin with “teacher’s,” and four more deal with the content and its presentation. “The model is of the teacher-centred content-oriented type [of instruction]. The dimensions fit well with didactic teaching, but it is hard to see the applicability of many of the dimensions to other more student-centred forms of teaching.” (p. 342) Kember and Leung’s newer instrument is more attuned to the goals and objectives of learner-centered teaching. If that’s of interest, their article can be consulted.
My goal here is twofold. First, when you look at your results, consider how the instrument defines good teaching. How closely does that definition correspond with your own? Second, recognize that definitions of teaching are not all equally acceptable. If you’re using an instrument to acquire feedback for yourself, then you can and should ask students for feedback in areas relevant to your definition. But if the instrument is being used to assess teaching across the institution, then the item selection process should be governed by what is known about aspects of instructional practice that can be linked to learning outcomes.
Reference: Kember, D., & Leung, D. Y. P. (2008). Establishing the validity and reliability of course evaluation questionnaires. Assessment & Evaluation in Higher Education, 33(4), 341-353.