
Grading for Growth: Reconsidering Points, Purpose, and Proficiency
“Will this be on the test?” If that question immediately makes your heart race, muscles tense, or your face do an unflattering cringe type of
If you teach, you know about learning outcomes. Unless you inherited your courses from someone else, you’ve developed lists of them. You’ve probably had to submit these lists to the administration to be reviewed and possibly revised.
I’ve sat on the Curriculum Committee at two different higher education institutions. I’ve also participated in college assessment committees and accreditation committees at both the
After a fifteen-year hiatus from teaching musicianship classes (I typically teach undergraduate music theory core classes and graduate classes), I taught Musicianship 1 last semester
For both new and veteran faculty, inheriting a syllabus to teach from is like being blindfolded on a long journey and told, “Don’t worry, you’ll know it when we get there.” Following someone else’s map requires a lot of trust. There may be road hazards the mapmaker wasn’t aware of, alternate routes that would get you there more directly, or even good reason to choose another mode of transportation altogether.
We give students exams for two reasons: First, we have a professional responsibility to verify their mastery of the material. Second, we give exams because they promote learning. Unfortunately, too often the first reason overshadows the second. We tend to take learning outcomes for granted. We assume the learning happens, almost automatically, provided the student studies. But what if we considered how, as designers of exam experiences, we might maximize their inherent potential? Would any of these possibilities make for more and better learning from the exams your students take?
There are those in the academic community who dread hearing and reading about assessment. But aside from the mandatory reporting required by credentialing and accreditation agencies, how can faculty members be sure that all of the assessment activities they are required to report actually produce change and are not just more paperwork?
Sometimes, in informal conversations with colleagues, I hear a statement like this: “Yeah, not a great semester; I doled out a lot of C’s.” I wonder: did this professor set learning goals that were unattainable for most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect on our presuppositions about grading.
Students frequently wonder and sometimes ask, “Why are we doing this? Why do I need to know this? Why are we spending so much time on this? Why do we have to do this busywork?”
When students don’t see the connection between the content and activities of the course and their future lives, they question what’s happening and what we ask them to do. Research confirms that perceived relevance is a critical factor in maintaining student interest and motivation. It also contributes to higher student ratings on course evaluations.
There’s a tacit rule that most college teachers abide by: I won’t mess with your course if you agree not to mess with mine. Gerald Graff observes and asks, “This rule suits the teacher, but how well does it serve students?” (p. 155)