Learning to Fail or Failing to Teach?

Student Presentations

I recently took a position at a new institution and was asked to teach a senior seminar course. I determined that the best way for students to demonstrate synthesis of knowledge was to develop a series of presentations investigating topics, problems, or issues in the field of kinesiology. Students were tasked with investigating these issues and developing solutions from the current body of research. The first semester I taught the course, students were given a checklist of requirements and a rubric for each presentation. I spent a short amount of time discussing the presentations during class and answering any questions students had. After each presentation, they received their rubric and written feedback on their performance. As the table shows, the average grade actually went down with each presentation students gave during my first semester teaching the course.

Average (AVE) grades and standard deviations (SD) for each presentation, listed by semester

Class         Presentation 1    Presentation 2    Presentation 3
              AVE    (SD)       AVE    (SD)       AVE    (SD)
Fall 2016     91.54  (3.67)     89.76  (3.46)     89.06  (5.61)
Spring 2017   85.64  (5.83)     91.38  (3.43)     92.29  (3.33)
Fall 2017     92.92  (5.58)     96.15  (2.74)     91.91  (5.94)

During that same time, I was also investigating the implementation of low-stakes assignments and "learning to fail" as a pedagogical approach. Instead of counting the presentations equally, as I did in the first semester (20% each), for the Spring 2017 semester I instituted a weighted grading system (10%, 20%, and 30%, respectively), which allowed students to fail on the first presentation but still recover their overall grade through the higher stakes of the second and third presentations. I provided only the checklist and rubric for the presentation and did not spend time in class explaining my expectations. After everyone finished the first presentation, I went into more detail about my expectations and the changes students could make before the second and third presentations. While this group performed about 6% lower on the first presentation, they rebounded to score roughly 2% and 3% higher on the second and third presentations than students in the Fall 2016 class. I was very excited to see the improvement in grades and effort, but I also wondered whether offering the same intervention before the first presentation would have similar results.

This past semester I implemented that strategy. I found that providing students with an example and explaining where past students had failed, before the first presentation, produced increased performance and improvement across all three presentations. This final group earned higher grades than both previous classes on the first presentation (1% and 7% higher, respectively) and the second (6% and 4% higher), and performed similarly to the Spring 2017 class on the final presentation (0.3% lower).

I believe that part of the reason this approach worked well in the course was the dedicated class time I set aside to explain to students where previous cohorts had struggled. While a larger sample size would be needed to support this conclusion statistically, the results suggest that the threat of failure, combined with specific instruction on how to avoid it, helped students produce higher quality work. However, it should also be noted that the lowest standard deviations in presentation grades were found in the second group, which had the truest experience of failure during the first presentation. When students and instructors have different expectations for an assignment, the resulting assessment is also misaligned. Thus, it is essential to find a way to provide instruction and feedback that facilitates clarity between the expectations of the instructor and the understanding of the student (Nicol & Macfarlane-Dick, 2006). In this way, if the goal is to help all students reach similar levels of success, then learning to fail might be the pedagogical kick in the butt necessary to facilitate student achievement.

Nicol, David, and Debra Macfarlane-Dick. "Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice." Studies in Higher Education 31, no. 2 (2006): 199-218.

Christopher Viesselman is an assistant professor of kinesiology and director of the Master of Science in athletic training at Grand View University.