Most professors would admit that they’ve found themselves frustrated when grading papers. Yes, sometimes those frustrations might stem from students ignoring your clear, strategic, and
“Any questions?” “Is everybody with me?” “Does this make sense?” I have asked my students these vague types of questions many times, and the most common response was…silence. But how should I interpret the silence? Perhaps the students understand everything completely and therefore have no questions. Maybe they have questions but are afraid to ask them out of fear of looking stupid. Or it could mean that they are so lost they don’t even know what to ask! Only our boldest students would say, “Um, you lost me 10 minutes ago, can you repeat the whole thing again?”
Assessment for Learning (AfL), sometimes referred to as “formative assessment,” has become part of the educational landscape in the U.S. and is heralded as a way to significantly raise student achievement, yet we are often uncertain what it is and what it looks like in practice in higher education. To clarify, AfL includes the formal and informal processes that faculty and students use during instruction to gather evidence for the purpose of improving learning. The aim of AfL is to improve students’ mastery of the content and to equip and empower them as self-regulated, lifelong learners.
For many professors, student assessment is one of the most labor-intensive components of teaching a class. Items must be prepared, rubrics created, and instructions written. The work continues as the tests are scored, papers read, and comments shared. Performing authentic and meaningful student assessment takes time. Consequently, some professors construct relatively few assessments for their courses.
There’s a lot to be gained from considering ideas and arguments at odds with current practice. In higher education, many instructional practices are accepted and replicated with little thought. Fortunately, there are a few scholars who keep asking tough questions and challenging conventional thinking. Australian D. Royce Sadler is one of them. His views on feedback and assessment are at odds with the mainstream, but his scholarship is impeccable, well-researched, and logically coherent. His ideas merit our attention, make for rich discussion, and should motivate us to delve into the assumptions that ground current policies and practices.
As a new teacher, one of the resources I found most helpful in shaping my grading practices was Grant Wiggins’s advice on feedback and assessment. Meaningful feedback, he suggests, is much more than assigning a grade or even offering recommendations for improvement. Rather, meaningful feedback is descriptive, “play[ing] back” the student’s performance and connecting it to the learning outcomes of the course.
Classroom Assessment Techniques, or CATs, are simple ways to evaluate students’ understanding of key concepts before they get to the weekly, unit, or other summative-type assessment (Angelo & Cross, 1993). CATs were first made popular in the face-to-face teaching environment by Angelo and Cross as a way to allow teachers to better understand what their students were learning and how improvements might be made in real time during the course of instruction. But the same principle can apply to online teaching as well.
“Enabling interaction in a large class seems an insurmountable task.” That’s the observation of a group of faculty members in the math and physics department at the University of Queensland. It’s a feeling shared by many faculty committed to active learning who face classes enrolling 200 students or more. How can you get and keep students engaged in these large, often required courses that build knowledge foundations in our disciplines?
The current state of student assessment in the classroom is mediocre, vague, and reprehensibly flawed. In much of higher education, we educators stake a moral high ground on positivistic academics. Case in point: assessment. We claim that our assessments within the classroom are objective, not subjective. After all, you wouldn’t stand in front of the class and say that your grading is subjective and that students should just deal with it, right? Can we honestly examine a written paper or virtually any other assessment in our courses and claim that we grade completely free of bias? Let’s put this idea to the test. Take one of your assessments previously completed by a student. Grade the assignment using your rubric. Afterwards, have another educator in the same discipline grade the assignment using your exact rubric. Do your colleague’s grade and yours match? How far off are the two grades? If your assessment is truly objective, the grades should be exact. Not close but exact. Anything else reduces the reliability of your assessment.
Flipped learning environments offer unique opportunities for student learning, as well as some unique challenges. By moving direct instruction from the class group space to the individual students’ learning spaces, time and space are freed up for the class as a learning community to explore the most difficult concepts of the course. Likewise, because students are individually responsible for learning the basics of new material, they gain regular experience employing self-regulated learning strategies that they would not get in an unflipped environment.