Tests and quizzes are often the primary means of assessing online learner performance; however, as Rena Palloff and Keith Pratt, online instructors and coauthors of numerous online learning books, including Lessons from the Virtual Classroom: The Realities of Online Teaching (2013), point out, there are more effective and less problematic alternatives.
After going out for tacos, our students can review the restaurant on a website. They watch audiences reach a verdict on talent each season on American Idol. When they play video games—and they play them a lot—their screens are filled with status and reward metrics. And after (and sometimes while) taking our classes, they can go online to www.ratemyprofessors.com.
Despite almost universal agreement that critical thinking needs to be taught in college, now perhaps more than ever, there is much less agreement on its definitions and dimensions. “Critical thinking can include the thinker’s dispositions and orientations; a range of specific analytical, evaluative, and problem-solving skills; contextual influences; use of multiple perspectives; awareness of one’s own assumptions; capacities for metacognition; or a specific set of thinking processes or tasks” (p. 127).
Are your students too answer-oriented? Are they pretty much convinced that there’s a right answer to every question asked in class? When preparing for exams, do they focus on memorizing answers, often without thinking about the questions?
To cultivate interest in questions, consider having students write exam questions. Could this be a way to help teachers generate new test questions? Don’t count on it. Writing good test questions, ones that make students think and really ascertain whether they understand the material, is hard work. Given that many students are not particularly strong writers to begin with, they won’t write good test questions automatically. In fact, you probably shouldn’t try the strategy unless you’re willing to devote some time to developing test-writing skills.
Sometimes, in informal conversations with colleagues, I hear a statement like this: “Yeah, not a great semester. I doled out a lot of C’s.” I wonder, did this professor create learning goals that were unattainable for most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect on our presuppositions regarding grading.
“In this article, we describe an easily adoptable and adaptable model for a one-credit capstone course that we designed to assess goals at the programmatic and institutional levels.” (p. 523) That’s what the authors claim in the article referenced below, and that’s what they deliver. The capstone course they write about is the culmination of a degree in political science at a public university.
“Creating a climate that maximizes student accomplishment in any discipline focuses on student learning instead of assigning grades. This requires students to be involved as partners in the assessment of learning and to use assessment results to change their own learning tactics.” (p. 136) The authors of this comment continue by pointing out that this assessment involves the use of formative feedback and that feedback has the greatest benefit when it addresses multiple aspects of learning. This kind of assessment should contain feedback on the product (the completed task) and feedback on progress (the extent to which the student is improving over time). The article then describes a number of formative feedback activities that illustrate how students can be involved as partners in the assessment process. Their involvement means that formative feedback can be given more frequently.
In the mid-1990s, college faculty members were introduced to the concept of classroom assessment techniques (CATs) by Angelo and Cross (1993). These formative assessment strategies were learner-centered, teacher-directed ongoing activities that were rooted in good teaching practice. They were designed to provide relatively quick and useful feedback to the faculty member about what students did and did not understand in order to enhance the teaching and learning process.
Stronger than multiple choice, yet not quite as revealing (or as time-consuming to grade) as the essay question, the short-answer question offers a great middle ground: the chance to measure a student’s brief composition of facts, concepts, and attitudes in a paragraph or less.
I started using an online grade book as a convenience for myself. Here, finally, was a grade book that couldn’t get lost or stolen, and it would be automatically backed up by the IT department every night. The accumulated scores could also be downloaded directly into a spreadsheet for calculation of grades, a shortcut that reduced the possibility of errors.