There is growing interest in the pedagogical literature in something called feedforward. As the name implies, it is the counterpart of feedback: where feedback provides input after the fact, feedforward offers input focused on the future. It lets students know what they should or could be doing differently next time. If the next assignment is similar, the "do differently" is specific advice on changes that will improve it. If the next assignment is different, the "do differently" identifies what's not the same about it and what needs to be done in a different way.
Recent research confirms that when examining small differences in student ratings, faculty and administrators (in this case, department chairs) draw unwarranted conclusions. That's a problem when ratings figure into decisions about hiring, reappointment, tenure, promotion, merit increases, and teaching awards. It's another chapter in the long, sad story of how research on student ratings has yet to be put into practice at most places, but that's a book, not a blog post. Here, my goal is to offer some reminders and suggestions for when we look at our own ratings.
I just finished putting together some materials on grading policies for a series of Magna 20-Minute Mentor programs, and I'm left with several important takeaways about the powerful role these policies play. I'm not talking here about the grades themselves but about the policies we choose as teachers.
Is this situation at all like what you're experiencing? Class sizes are steadily increasing, students need more opportunities to practice critical thinking skills, and you need to keep the amount of time devoted to grading under control. That was the situation facing a group of molecular biology and biochemistry professors teaching an advanced recombinant DNA course. They designed an interesting assessment alternative that addressed all three challenges.
Flipped learning environments offer unique opportunities for student learning, as well as some unique challenges. By moving direct instruction from the class group space to the individual students' learning spaces, time and space are freed up for the class as a learning community to explore the most difficult concepts of the course. Likewise, because students are individually responsible for learning the basics of new material, they gain regular experience employing self-regulated learning strategies, experience they would not get in an unflipped environment.
Students perform poorly in our courses for a variety of reasons. Here are some students you’ve likely encountered over the years, as well as a few ideas on the type of feedback that best helps them turn things around.
A colleague of mine recently adopted a new technology tool that has changed her life: she purchased, and has become a devoted user of, a fitness band. The wristband tracks her movement and sleep. Although fitness bands are cool tech tools, their "magic" is rooted in the continuous feedback they provide on one's progress toward fitness goals determined by age, height/weight, and activity level. This amazing device has helped my colleague lose 40 pounds and quadruple her activity level in the last seven months. Watching her response and seeing her success have caused me to revisit what we know about the power of formative assessment as a learning tool.
If you’re a regular reader of this blog, you’re already aware that flipped instruction has become the latest trend in higher education classrooms. And for good reason. As first articulated by Bergmann and Sams, flipped instruction personalizes education by “redirecting attention away from the teacher and putting attention on the learner and learning.” As it has evolved, the idea of flipped instruction has moved beyond alternative information delivery to strategies for engaging students in higher-level learning outcomes. Instead of one-way communication, instructors use collaborative learning strategies and push passive students to become problem solvers who synthesize information instead of merely receiving it. More recently on this blog, Honeycutt and Garrett referred to the FLIP as “Focusing on your Learners by Involving them in the Process” of learning during class, and Honeycutt has even developed assessments appropriate for flipped instruction. What’s been left out of the conversation about flipped classrooms, however, is why and how we might also need to flip assessment practices themselves.
Editor’s note: The following is an excerpt from Student-Generated Reading Questions: Diagnosing Student Thinking with Diverse Formative Assessments, Biochemistry and Molecular Biology Education, 42 (1), 29-38. The Teaching Professor Blog recently named it to its list of top pedagogical articles.
As instructors, we make myriad assumptions about the knowledge students bring to our courses. These assumptions influence how we plan for courses, what information we decide to cover, and how we engage our students. Often there is a mismatch between what we expect students to know and how they actually think about a topic, and it goes undetected until too late, when we examine student performance on quizzes and exams. Narrowing this gap requires well-crafted formative assessments that help us diagnose student learning throughout the teaching process.