Sometimes feedback leads to better performance, but not all the time and not as often as teachers would like, given the time and effort they devote to providing it. It’s easy to blame students who seem interested only in the grade—do they even read the feedback? Most report that they do, but even those who pay attention to it don’t seem able to act on it—they make the same errors in subsequent assignments. Why is that?
Sometimes, in informal conversations with colleagues, I hear a statement like this: “Yeah, not a great semester, I doled out a lot of C’s.” I wonder, did this professor create learning goals that were unattainable by most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect upon our presuppositions regarding grading.
Of all the activities that go into educational assessment, ironically, two of the most rewarding are also two of the most overlooked: 1) sharing the results with stakeholders and 2) using the results to effect change.
After devoting so much time and energy to creating assessments, far too often what happens is someone takes the data that’s been gathered and compiles a dense, statistics-laden report that is difficult to find, read, or understand. Meanwhile, everyone else turns their attention to more pressing matters, happy they finally got rid of that annoying pebble in their shoe.
You put a lot of hard work into creating student assessments. And then what? With all the time spent developing and administering assessments, it’s a shame not to reap the benefits of your efforts. This seminar will teach you how to summarize and use your assessment results.
Online Seminar • Recorded on Wednesday, July 13, 2011
Most college teachers assume that more tests are better than a few. Why? What caused us to decide on three or four unit tests followed by a final? Is there evidence that students don’t do as well in courses where there are only a midterm and a final? Why do we think that more tests might be better? And what do we mean by better? Higher grades? More learning?
Student learning outcomes assessment can be defined in a lot of different ways, but Lisa R. Shibley, PhD, assistant vice president for Institutional Assessment and Planning at Millersville University, has a favorite definition. It comes from Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education by Barbara E. Walvoord, which states that student learning outcomes assessment is “the systematic collection of information about student learning, using time, knowledge, expertise, and resources available in order to inform decisions about how to improve learning.”
Are our students learning? Are they developing? Are we having an impact? These questions are only a small sample of those that faculty ask before, during, and after each course that they teach. Faculty often attempt to answer such questions using the evidence they have—student remarks during class and office hours, student performance on examinations or homework assignments, student comments solicited via teaching evaluations, and their own classroom observations. While these forms of evidence can be useful, such informal assessments also can be misleading, particularly because they are generally not systematic or fully representative.
Using multiple test trials was something I had never considered until I found myself in a newly assigned course with an old syllabus. The previous course, which consisted of 310 total points, included 140 (45 percent) testing-based points. In addition to a 100-point final exam, there were four 10-point quizzes. I was intrigued by the quiz format, which allowed students to take each quiz up to three times over the course of a week, with the average score added to the grade book.
It’s a new year, but the same old challenges exist. Given today’s financial challenges, colleges and universities are all working harder than ever to be careful stewards of limited resources and to demonstrate their effectiveness to stakeholders, constituents, and the public.
In their new book, Designing Effective Assessment: Principles and Profiles of Good Practice, Trudy Banta, Elizabeth Jones, and Karen Black provide assessment profiles from a wide variety of institutions and units. In advance of her online seminar titled Principles and Profiles of Good Practice in Assessment, Dr. Banta answered questions about the book and some of the topics she will discuss in next week’s seminar.