HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
The relatively new Scholarship of Teaching and Learning in Psychology journal has a great feature called a “Teacher-Ready Research Review.” The examples I’ve read so far are well organized, clearly written, full of practical implications, and well referenced. This review of multiple-choice tests (focused mostly on the questions themselves) is no exception. Given our strong reliance on this test type, a regular review of common practices in light of research is warranted.
Given class sizes, teaching loads, and a host of other academic responsibilities, many teachers feel as though multiple-choice tests are the only viable option. Their widespread use justifies a regular review of those features that make these tests an effective way to assess learning and ongoing consideration of those features that compromise how much learning they promote.
Recent pedagogical interests have me wading through research on multitasking and revisiting what’s happening with cheating. In both cases, most of us have policies that prohibit the activity or, in the case of electronic devices, curtail it. The evidence that policies in both areas are ineffective is pretty overwhelming: lots of students are still cheating and using phones in class. Come to think of it, I’m not sure other common policies, such as those on attendance, deadlines, and participation, are all that stunningly successful either. I’m wondering why, and guessing there’s a whole constellation of reasons.
For many professors, student assessment is one of the most labor-intensive components of teaching a class. Items must be prepared, rubrics created, and instructions written. The work continues as the tests are scored, papers read, and comments shared. Performing authentic and meaningful student assessment takes time. Consequently, some professors construct relatively few assessments for their courses.
While most faculty stick with the tried-and-true quiz and paper assessment strategies for their online courses, the wide range of technologies available today offers a variety of assessment options beyond the traditional forms. But what do students think of these different forms?
Scott Bailey, Stacy Hendricks, and Stephanie Applewhite of Stephen F. Austin State University experimented with different assessment strategies in two online courses in educational leadership and surveyed students afterward on their impressions of each one. The students were asked to score the strategies using three criteria: 1) enjoyment, 2) engagement with the material, and 3) transferability of knowledge gained to practice. The resulting scores allowed the investigators to rank the various strategies from least to most preferred by students.
When an exam approaches, virtually all students agree they need to study and most will, albeit with varying intensity. Most will study the same way they always have—using the strategies they think work. The question students won’t ask is: How should I study for this exam? They don’t recognize that what they need to learn can and should be studied in different ways.
I admit that I’m an assessment geek, nerd, or whatever name you’d like to use. I pore over evaluations, rubrics, and test scores to see what kinds of actionable insights I can glean from them. I’ve just always assumed that it’s part of my job as a teacher to do my very best to make sure students are learning what we need them to learn.
As a new teacher, one of the resources I found most helpful in shaping my grading practices was Grant Wiggins’s advice on feedback and assessment. Meaningful feedback, he suggests, is much more than assigning a grade or even offering recommendations for improvement. Rather, meaningful feedback is descriptive, “play[ing] back” the student’s performance and connecting it to the learning outcomes of the course.
The following conceptions of feedback were offered by a group of students studying to become physical therapists. They were asked to recall a situation during their time in higher education when they felt they’d experienced feedback. Then they were asked a series of questions about the experience and about feedback more generally: “What is feedback? How would you describe it? How do you go about getting it? How do you use it?” (p. 924) The goal of the study was to investigate students’ conceptions of feedback. Student conceptions involve underlying personal beliefs, views, and ideas, unlike student perceptions, which concern how particular feedback is understood. Analysis of transcripts from the interviews reveals four conceptions of feedback held by this student group.