HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Student Learning Outcomes
I’ve sat on the Curriculum Committee at two different higher education institutions. I’ve also participated in college assessment committees and accreditation committees at both the
For both new and veteran faculty, inheriting a syllabus to teach from is like being blindfolded on a long journey and told, “Don’t worry, you’ll know it when we get there.” Following someone else’s map requires a great deal of trust. There are road hazards the mapmaker may not be aware of; there may be alternate routes that get you there more directly; and it may even be prudent to choose another mode of transportation altogether.
We give students exams for two reasons: First, we have a professional responsibility to verify their mastery of the material. Second, we give exams because they promote learning. Unfortunately, too often the first reason overshadows the second. We tend to take learning outcomes for granted. We assume the learning happens, almost automatically, provided the student studies. But what if we considered how, as designers of exam experiences, we might maximize their inherent potential? Would any of these possibilities make for more and better learning from the exams your students take?
There are those in the academic community who dread hearing and reading about assessment. But aside from the mandatory reporting required by credentialing and accreditation agencies, how can faculty members be sure that all of the assessment activities they must report actually produce change and are not just more paperwork?
Sometimes, in informal conversations with colleagues, I hear a statement like this: “Yeah, not a great semester. I doled out a lot of C’s.” I wonder, did this professor create learning goals that were unattainable for most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect upon our presuppositions regarding grading.
Students frequently wonder and sometimes ask, “Why are we doing this? Why do I need to know this? Why are we spending so much time on this? Why do we have to do this busywork?”
When students don’t see the connection between the content and activities of the course and their future lives, they question what’s happening and what we ask them to do. Research confirms that perceived relevance is a critical factor in maintaining student interest and motivation. It also contributes to higher student ratings on course evaluations.
There’s a tacit rule that most college teachers abide by: I won’t mess with your course if you agree not to mess with mine. Gerald Graff asks, “This rule suits the teacher, but how well does it serve students?” (p. 155)
My son Alex is an average 20-year-old college sophomore. He gets OK grades, and like many people his age, seems more interested in video games than school. Looking at him, you might think that nothing in particular excites him.
Student learning outcomes assessment can be defined in many ways, but Lisa R. Shibley, PhD, assistant vice president for Institutional Assessment and Planning at Millersville University, has a favorite definition. It comes from Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education by Barbara E. Walvoord, which states that student learning outcomes assessment is “the systematic collection of information about student learning, using time, knowledge, expertise, and resources available in order to inform decisions about how to improve learning.”