Student Learning Outcomes
We give students exams for two reasons: First, we have a professional responsibility to verify their mastery of the material. Second, exams promote learning. Unfortunately, too often the first reason overshadows the second. We tend to take learning outcomes for granted, assuming the learning happens almost automatically, provided the student studies. But what if we considered how, as designers of exam experiences, we might maximize exams' inherent potential? Would any of these possibilities make for more and better learning from the exams your students take?
There are those in the academic community who dread hearing and reading about assessment. Setting aside the mandatory reporting required by credentialing and accreditation agencies, how can faculty members be sure that the assessment activities they are required to report actually produce change rather than just more paperwork?
Sometimes, in informal conversations with colleagues, I hear a statement like this: “Yeah, not a great semester, I doled out a lot of C’s.” I wonder, did this professor create learning goals that were unattainable by most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect upon our presuppositions regarding grading.
Students frequently wonder and sometimes ask, “Why are we doing this? Why do I need to know this? Why are we spending so much time on this? Why do we have to do this busywork?”
When students don’t see the connection between the content and activities of the course and their future lives, they question what’s happening and what we ask them to do. Research confirms that perceived relevance is a critical factor in maintaining student interest and motivation. It also contributes to higher student ratings on course evaluations.
There’s a tacit rule that most college teachers abide by: I won’t mess with your course if you agree not to mess with mine. Gerald Graff asks, “This rule suits the teacher, but how well does it serve students?” (p. 155)
My son Alex is an average 20-year-old college sophomore. He gets OK grades, and like many people his age, seems more interested in video games than school. Looking at him, you might think that nothing in particular excites him.
Student learning outcomes assessment can be defined in many different ways, but Lisa R. Shibley, PhD, assistant vice president for Institutional Assessment and Planning at Millersville University, has a favorite definition. It comes from Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education by Barbara E. Walvoord: student learning outcomes assessment is “the systematic collection of information about student learning, using time, knowledge, expertise, and resources available in order to inform decisions about how to improve learning.”
“Self-regulation is not a mental ability or an academic performance skill; rather it is the self-directive process by which learners transform their mental abilities into academic skills.” (p. 65) That definition is offered by Barry Zimmerman, one of the foremost researchers on self-regulated learning. It appears in a succinct five-page article that offers a very readable overview of research in this area.
Online courses are increasingly being developed by teams of instructional designers, curriculum specialists, and instructional technologists. In most cases, these courses feature standardized content, such as a common syllabus and assignments, along with reusable course modules and learning objects.
Are our students learning? Are they developing? Are we having an impact? These questions are only a small sample of those that faculty ask before, during, and after each course that they teach. Faculty often attempt to answer such questions using the evidence they have—student remarks during class and office hours, student performance on examinations or homework assignments, student comments solicited via teaching evaluations, and their own classroom observations. While these forms of evidence can be useful, such informal assessments also can be misleading, particularly because they are generally not systematic or fully representative.