Assessing Student Learning
Editor’s note: The following is an excerpt from Student-Generated Reading Questions: Diagnosing Student Thinking with Diverse Formative Assessments, Biochemistry and Molecular Biology Education, 42 (1), 29-38. The Teaching Professor Blog recently named it to its list of top pedagogical articles.
As instructors, we make myriad assumptions about the knowledge students bring to our courses. These assumptions influence how we plan for courses, what information we decide to cover, and how we engage our students. Often a mismatch between what we expect students to know and how they actually think about a topic goes undetected until it is too late, after we examine student performance on quizzes and exams. Narrowing this gap requires well-crafted formative assessments that help us diagnose student learning throughout the teaching process.
I’m “reflecting” a lot these days. My tenure review is a few months away, and it’s time for me to prove (in one fell swoop) that my students are learning. The complexity of this testimonial overwhelms me because in the context of the classroom experience, there are multiple sources of data and no clear-cut formula for truth.
Chemistry professor Steven M. Wright has written a one-page essay about his niece, Julia, learning how to downhill ski. She was ready for her first ride on the chairlift, and Wright was helping her. He's a professor, so he covered the topic in a well-organized, easy-to-understand way. It was a short, five-minute lecture that ended with a repeat of the main point: "keep your ski tips up when you get on the lift."
I remember with horror and embarrassment the first multiple-choice exam I wrote. I didn’t think the students were taking my course all that seriously, so I decided to use the first exam to show just how substantive the content really was. I wrote long, complicated stems and followed them with multiple answer options and various combinations of them. And it worked. Students did poorly on the exam. I was pleased until I returned the test on what turned out to be one of the longest class periods of my teaching career. I desperately needed the advice that follows here.
As Ron Berk (known for his pithy humor) observes, the multiple-choice question “holds world records in the categories of most popular, most unpopular, most used, most misused, most loved and most hated.” According to one source I read, multiple-choice questions were first used around the time of World War I to measure the abilities of new Army recruits. As class sizes have grown and the demands on teacher time expanded, they have become the favorite testing tool in higher education.
When you are a math teacher, you are often faced with the dilemma of whether to award partial credit for a problem that is incorrect but demonstrates some knowledge of the topic. Should I give half-credit? Three points out of five? My answer has typically been to give no credit…at first. However, taking a page from my colleagues in the English department (and grad school), I do allow for revisions, which ends up being a much better solution.
Last semester I implemented a different kind of final exam. In the past I have used standard multiple-choice and short-answer exams. I was thinking about making a change when I discovered Beyond Tests and Quizzes: Creative Assessment in the College Classroom, edited by Richard J. Mezeske and Barbara A. Mezeske. The second chapter, "Concept Mapping: Assessing Pre-Service Teachers' Understanding and Knowledge," describes an assessment method that tests higher-level thinking. The author shared his experience using concept maps as a final exam, included an example of the final exam project, offered rubrics for grading, and discussed the advantages and disadvantages of the strategy. I decided this was the change I was going to make.
It seems like everyone is talking about the flipped classroom. But how do you use this new model to construct lessons and assessments that reinforce student learning?
We give students exams for two reasons: First, we have a professional responsibility to verify their mastery of the material. Second, we give exams because they promote learning. Unfortunately, too often the first reason overshadows the second. We tend to take learning outcomes for granted. We assume the learning happens, almost automatically, provided the student studies. But what if we considered how, as designers of exam experiences, we might maximize their inherent potential? Would any of these possibilities make for more and better learning from the exams your students take?
It’s a conversation most faculty would rather not have. The student is unhappy about a grade on a paper, project, exam, or for the course itself. It’s also a conversation most students would rather not have. In the study referenced below, only 16.8 percent of students who reported they had received a grade other than what they thought their work deserved actually went to see the professor to discuss the grade.