Recent pedagogical interests have me wading through research on multi-tasking and revisiting what’s happening with cheating. In both cases, most of us have policies that prohibit, or in the case of electronic devices, curtail the activity. Evidence of the ineffectiveness of policies in both areas is pretty overwhelming. Lots of students are cheating and using phones in class. Thinking about it, I’m not sure other common policies such as those on attendance, deadlines, and participation are all that stunningly successful either. I’m wondering why and guessing there’s a whole constellation of reasons.
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
For many professors, student assessment is one of the most labor-intensive components of teaching a class. Items must be prepared, rubrics created, and instructions written. The work continues as the tests are scored, papers read, and comments shared. Performing authentic and meaningful student assessment takes time. Consequently, some professors construct relatively few assessments for their courses.
While most faculty stick with the tried-and-true quiz and paper assessment strategies for their online courses, the wide range of technologies available today offers a variety of assessment options beyond the traditional forms. But what do students think of these different forms?
Scott Bailey, Stacy Hendricks, and Stephanie Applewhite of Stephen F. Austin State University experimented with different assessment strategies in two online courses in educational leadership, and surveyed students afterward on their impressions of each one. The students were asked to score the strategies using three criteria: 1) enjoyment, 2) engagement with the material, and 3) transferability of knowledge gained to practice. The resulting ratings allowed the investigators to rank the various strategies from least to most preferred by students.
When an exam approaches, virtually all students agree they need to study and most will, albeit with varying intensity. Most will study the same way they always have—using the strategies they think work. The question students won’t ask is: How should I study for this exam? They don’t recognize that what they need to learn can and should be studied in different ways.
I admit that I’m an assessment geek, nerd, or whatever name you’d like to use. I pore over evaluations, rubrics, and test scores to see what kinds of actionable insights I can glean from them. I’ve just always assumed that it’s part of my job as a teacher to do my very best to make sure students are learning what we need them to learn.
As a new teacher, one of the resources I found most helpful in shaping my grading practices was Grant Wiggins’s advice on feedback and assessment. Meaningful feedback, he suggests, is much more than assigning a grade or even offering recommendations for improvement. Rather, meaningful feedback is descriptive, “play[ing] back” the student’s performance and connecting it to the learning outcomes of the course.
The following conceptions of feedback were offered by a group of students studying to become physical therapists. They were asked to recall a situation during their time in higher education when they felt they’d experienced feedback. Then they were asked a series of questions about the experience and about feedback more generally: “What is feedback? How would you describe it? How do you go about getting it? How do you use it?” (p. 924) The goal of the study was to investigate students’ conceptions of feedback. Student conceptions involve underlying personal beliefs, views, and ideas, unlike student perceptions, which explore how the feedback is understood. Analysis of transcripts from the interviews reveals four conceptions of feedback held by this student group.
For many faculty, adding a new teaching strategy to our repertoire goes something like this. We hear about an approach or technique that sounds like a good idea. It addresses a specific instructional challenge or issue we’re having. It’s a unique fix, something new, a bit different, and best of all, it sounds workable. We can imagine ourselves doing it.
Engagement in a continuous, systematic, and well-documented student learning assessment process has been gaining importance throughout higher education. Indeed, implementation of such a process is typically a requirement for obtaining and maintaining accreditation. Because faculty need to embrace learning assessment in order for it to be successful, any misconceptions about the nature of assessment need to be dispelled. One way to accomplish that is to “rebrand” (i.e., change perceptions) the entire process.
If there’s a perfect grading system, it has yet to be discovered. This post is about point systems—not because they’re the best or the worst but because they’re widely used. It is precisely because they are so prevalent that we need to think about how they affect learning.
It would be nice if we had some empirical evidence to support our thinking. I’m surprised that so little research has been done on this common grading system. Does it promote more effective learning (as measured by higher exam scores or overall course grades) than letter grades or percentages? Does it motivate students to study? Does it make students more grade oriented or less so? Does it provoke more grade anxiety than other systems or less? Does it make a difference whether we use a 100-point system or a 1,000-point system? We all have our preferences—and sometimes even reasons—for the systems we use, but where’s the evidence? I can’t remember reading anything empirical that explores these questions—if you have, please share the references.