Grading serves multiple purposes. While the most obvious purpose is to evaluate students’ work — as a measure of competency, achievement, and meeting the expectations of the course — grading can also be a key to communication, motivation, organization, and faculty/student reflection. It’s for that reason that Virginia Johnson Anderson, EdD, calls grading “a context-dependent, complex process.”
HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
Assessing Student Learning
A new edition of a classic book on the curriculum suggests eight lessons from the learning literature with implications for course and curriculum planning. Any list like this tends to simplify a lot of complicated research and offer generalizations that apply most, but certainly not all, of the time. Despite these caveats, lists like this are valuable. They give busy faculty a sense of the landscape and offer principles that can guide decision making, in this case about courses and curricula.
Assessing institutional effectiveness is a noble pursuit, but measuring student learning is not always easy. As with so many things we try to quantify, there’s
Curriculum, instruction, and assessment: the three fundamental components of education, whether online or face to face. Author Milton Chen calls these the “three legs of the classroom stool” and reminds us that each leg must be equally strong for the “stool” to function properly, balanced and supportive. Habitually, the questions “What am I going to teach?” and “How am I going to teach it?” weigh more heavily on an instructor’s mind than “How will I assess?” As a result, the assessment “leg” of the classroom stool is often the weakest of the three, the least understood and the least effectively implemented.
Academically Adrift is provoking plenty of discussion throughout American higher education, and with good reason. While there are valid concerns about the methodology, instrumentation, and overreaching inferences of Richard Arum and Josipa Roksa’s research study, many of their conclusions are important ones that have been confirmed by others.
Student learning outcomes assessment can be defined in a lot of different ways, but Lisa R. Shibley, PhD, assistant vice president for Institutional Assessment and Planning at Millersville University, has a favorite definition. It’s from Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education by Barbara E. Walvoord, and it states that student learning outcomes assessment is “the systematic collection of information about student learning, using time, knowledge, expertise, and resources available in order to inform decisions about how to improve learning.”
“Self-regulation is not a mental ability or an academic performance skill; rather it is the self-directive process by which learners transform their mental abilities into academic skills.” (p. 65) That definition is offered by Barry Zimmerman, one of the foremost researchers on self-regulated learning. It appears in a succinct five-page article that offers a very readable overview of research in this area.
When graded papers get a quick glance before being shoved into a backpack or deposited into the trash can on the way out of class, it’s often hard for teachers to summon the motivation to write lots of comments on papers. That’s why I was pleased to find evidence in two studies that students do value written comments on their work.
Are our students learning? Are they developing? Are we having an impact? These questions are only a small sample of those that faculty ask before, during, and after each course that they teach. Faculty often attempt to answer such questions using the evidence they have—student remarks during class and office hours, student performance on examinations or homework assignments, student comments solicited via teaching evaluations, and their own classroom observations. While these forms of evidence can be useful, such informal assessments also can be misleading, particularly because they are generally not systematic or fully representative.
Using multiple test trials was something I had never considered until I found myself in a newly assigned course with an old syllabus. The previous course, which consisted of 310 total points, included 140 testing-based points (45 percent). In addition to a 100-point final exam, there were four 10-point quizzes. I was intrigued by the quiz design format that allowed students to take each quiz up to three times over the course of a week, with the average score added to the grade book.