All enterprises require measurement to enable their management. Testing, grading, and the evaluation of education programs are the metrics by which we measure students, teachers, and schools.
There are two main forms of assessment commonly used in the online classroom. Both formative and summative assessments evaluate student learning and help instructors guide instructional planning and delivery. While the purpose of a summative assessment is to check for mastery after instruction, formative assessment focuses on informing teachers of ways to improve student learning during lesson delivery (Gualden, 2010). Each type of assessment has a specific place and role within education, both traditional and online.
We give students exams for two reasons: First, we have a professional responsibility to verify their mastery of the material. Second, we give exams because they promote learning. Unfortunately, too often the first reason overshadows the second. We tend to take learning outcomes for granted. We assume the learning happens, almost automatically, provided the student studies. But what if we considered how, as designers of exam experiences, we might maximize their inherent potential? Would any of these possibilities make for more and better learning from the exams your students take?
The guidelines suggested below propose how critical thinking skills can be assessed “scientifically” in psychology courses and programs. The authors begin by noting something about psychology faculty that is true of faculty in many other disciplines, which makes this article relevant to a much larger audience: “The reluctance of psychologists to assess the critical thinking (CT) of their students seems particularly ironic given that so many endorse CT as an outcome…” (p. 5). Their goal, then, is to offer “practical guidelines for collecting high-quality LOA (learning outcome assessment) data that can provide a scientific basis for improving CT instruction” (p. 5). The guidelines are relevant to individual courses as well as to the collections of courses that comprise degree programs. Most are relevant to courses or programs in many disciplines; others are easily made so.
April 22 - Assessing Assessment: Five Keys to Success
There are those in the academic community who dread hearing and reading about assessment. But aside from the mandatory reporting required by credentialing and accreditation agencies, how can faculty members be sure that all of the assessment activities they are required to report actually produce change and are not just more paperwork?
After going out for tacos, our students can review the restaurant on a website. They watch audiences reach a verdict on talent each season on American Idol. When they play video games—and they play them a lot—their screens are filled with status and reward metrics. And after (and sometimes while) taking our classes, they can go online to www.ratemyprofessors.com.
The liberal arts college where I teach recently underwent review for accreditation. Like many other colleges and universities, we were criticized for our lack of assessment. Faculty resistance, it seems, may be the biggest barrier to implementing institutional assessment measures (Katz, 2010; Weimer, 2013). Both Weimer and Katz attributed faculty resistance to fears that assessment data could be used for “comparison shopping” and “educational consumerism.” While these fears are justified, at my college another fear prevails: the fear that assessment will lead to hand-holding strategies that discourage independent thought in our students and fail to adequately prepare them for professional life.
March 1 - The Effects of Collaborative Testing
Although letting students work together on exam questions is still not a common instructional practice, it has been used more than might be expected and in a variety of ways. Sometimes students work together in groups; other times they work with a partner. Sometimes those groups are assembled by the instructor; other times students are allowed to select their partners or group members. Sometimes the groups share multiple exam experiences; other times they work collaboratively only once. Sometimes the group submits one exam, with everyone in the group receiving that grade; other times students may talk about exam questions and answers but submit exams individually.
“We ought to be up to the task of figuring out what it is that our students know by the end of four years at college that they did not know at the beginning.” That’s how Stanley Katz begins a well-written essay that explores the assessment movement in higher education.
January 3 - Critical Thinking: Definitions and Assessments
Despite almost universal agreement that critical thinking needs to be taught in college, now perhaps more than ever before, there is much less agreement on definitions and dimensions. “Critical thinking can include the thinker’s dispositions and orientations; a range of specific analytical, evaluative, and problem-solving skills; contextual influences; use of multiple perspectives; awareness of one’s own assumptions; capacities for metacognition; or a specific set of thinking processes or tasks.” (p. 127)
November 1 - Should Student Effort Count?
We’ve all had conversations with students who want effort counted in their grade: “But I tried so hard … I studied for hours … I am really working in this course.” The question is, should effort count? Less commonly asked, however, is whether it should count in both directions. Students want effort to count when they try hard but their performance doesn’t show it. But what about when an excellent performance results without much effort? Should this lack of effort lower the grade? Beyond these theoretical questions are the pragmatic ones: Can effort be measured fairly, objectively? If so, what criteria are used to assess it?