There is growing interest in the pedagogical literature in something called feedforward. As the name implies, it is the counterpart of feedback: where feedback provides input after the fact, feedforward offers input focused on the future. It lets students know what they should or could be doing differently next time. If the next assignment is similar, the “do differently” is specific advice on changes that will improve it. If the next assignment is different, the “do differently” identifies what’s not the same about it and what needs to be done in a different way.
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Is this situation at all like what you’re experiencing? Class sizes are steadily increasing, students need more opportunities to practice critical thinking skills, and you need to keep the amount of time devoted to grading under control. That was the situation facing a group of molecular biology and biochemistry professors teaching an advanced recombinant DNA course. They designed an interesting assessment alternative that addressed all three challenges.
Tests and quizzes are often the primary means of assessing online learner performance. However, there are more effective and less problematic alternatives, as Rena Palloff and Keith Pratt point out. The two are online instructors and coauthors of numerous online learning books, including Lessons from the Virtual Classroom: The Realities of Online Teaching (2013).
After going out for tacos, our students can review the restaurant on a website. They watch audiences reach a verdict on talent each season on American Idol. When they play video games—and they play them a lot—their screens are filled with status and reward metrics. And after (and sometimes while) taking our classes, they can go online to www.ratemyprofessors.com.
Although there is almost universal agreement that critical thinking needs to be taught in college, now perhaps more than ever before, there is much less agreement on its definitions and dimensions. “Critical thinking can include the thinker’s dispositions and orientations; a range of specific analytical, evaluative, and problem-solving skills; contextual influences; use of multiple perspectives; awareness of one’s own assumptions; capacities for metacognition; or a specific set of thinking processes or tasks.” (p. 127)
Sometimes, in informal conversations with colleagues, I hear a statement like this: “Yeah, not a great semester; I doled out a lot of C’s.” I wonder: did this professor create learning goals that were unattainable for most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect upon our presuppositions regarding grading.
“In this article, we describe an easily adoptable and adaptable model for a one-credit capstone course that we designed to assess goals at the programmatic and institutional levels.” (p. 523) That’s what the authors claim in the article referenced below, and that’s what they deliver. The capstone course they write about is the culmination of a degree in political science at a public university.
“Creating a climate that maximizes student accomplishment in any discipline focuses on student learning instead of assigning grades. This requires students to be involved as partners in the assessment of learning and to use assessment results to change their own learning tactics.” (p. 136) The authors of this comment continue by pointing out that this assessment involves the use of formative feedback and that feedback has the greatest benefit when it addresses multiple aspects of learning. This kind of assessment should contain feedback on the product (the completed task) and feedback on progress (the extent to which the student is improving over time). The article then describes a number of formative feedback activities that illustrate how students can be involved as partners in the assessment process. Their involvement means that formative feedback can be given more frequently.
In the mid-1990s, college faculty members were introduced to the concept of classroom assessment techniques (CATs) by Angelo and Cross (1993). These formative assessment strategies were learner-centered, teacher-directed, ongoing activities rooted in good teaching practice. They were designed to provide relatively quick and useful feedback to the faculty member about what students did and did not understand, thereby enhancing the teaching and learning process.
Stronger than the multiple-choice question, yet not quite as revealing (or as time-consuming to grade) as the essay question, the short-answer question offers a great middle ground: the chance to measure a student’s brief composition of facts, concepts, and attitudes in a paragraph or less.