A college student opens the double doors and walks into a large conference room full of 65 long tables, set end-to-end and stacked six rows deep. Taking it all in, he asks his classmate, “How do we know where to put our projects?” before realizing that instructions with randomly assigned locations are projected on the screen for all to see. He carefully places his project down on spot #45, along with his required “Executive Summary,” a two-page document that provides his self-assessment and rationale: why he chose his project, what class content it led him to research and learn more deeply, and how his project directly helped fulfill the four stated course outcomes.
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Assessment for Learning (AfL), sometimes referred to as “formative assessment,” has become part of the educational landscape in the U.S. and is heralded as a way to significantly raise student achievement, yet we are often uncertain what it is and what it looks like in practice in higher education. To clarify, AfL includes the formal and informal processes that faculty and students use during instruction to gather evidence for the purpose of improving learning. The aim of AfL is to improve students’ mastery of the content and to equip and empower them as self-regulated, lifelong learners.
A common rhetorical move we professors make when students object to a grade is to reframe the discussion. We’ll say, “Let’s be clear. I didn’t give you this grade. You earned it.” And if it were appropriate, we might underscore our zinger with a smugly snapped Z. But stop and think about it. When we make the “you earned it” move, we simply attempt to shift the debate away from the fairness or interpretation of our standard and ask students to justify their effort by that standard, which really wasn’t their complaint.
It’s good to regularly review the advantages and disadvantages of the most commonly used types of test questions and the test banks that now frequently provide them.
Students are stressed. A recent survey revealed that mental health issues, including severe stress, are on the rise. In 2016, 65% of students reported experiencing overwhelming anxiety during the previous 12 months, which is an increase of more than 7% from the 2013 data (National College Health Assessment, 2016). We also know from decades of research that arousal levels are strongly related to performance: not enough arousal and you don’t perform well, but too much arousal (which becomes stress/anxiety) and your performance is negatively impacted (Colman, 2001). Therefore, anything we can do as instructors to reduce students’ stress should have a positive impact on their mental health and academic performance.
Here’s an interesting way to incorporate collaboration in a quizzing strategy, with some pretty impressive results.
Beginning with the mechanics: students took three quizzes in an introductory pharmaceutical science course. First, they completed the quiz individually. After answering each question, they indicated how confident they were that their answer was correct—5 for absolutely certain and 1 for not knowing and guessing. Then, for a period of time (length not specified in the article), they were allowed to collaborate with others seated near them on quiz answers. After that discussion, they could change their quiz answers if they desired. At that point, they again rated their confidence in the correctness of their answers. Quiz answer sheets and confidence ratings were then turned in. Immediately, the correct quiz answers were revealed, and once again students had the opportunity to discuss the answers with each other.
If there’s a downside to another academic year coming to a successful close, it’s reading course evaluations. This post explores how we respond to those one or two low evaluations and the occasional negative comments found in answers to the open-ended questions. Do we have a tendency to overreact? I know I did.
The relatively new Scholarship of Teaching and Learning in Psychology journal has a great feature called a “Teacher-Ready Research Review.” The examples I’ve read so far are well organized, clearly written, full of practical implications, and well referenced. This one on multiple-choice tests (mostly the questions on those tests) is no exception. Given our strong reliance on this test type, a regular review of common practices in light of research is warranted.
Given class sizes, teaching loads, and a host of other academic responsibilities, many teachers feel as though multiple-choice tests are the only viable option. Their widespread use justifies a regular review of those features that make these tests an effective way to assess learning and ongoing consideration of those features that compromise how much learning they promote.