HIGHER ED TEACHING STRATEGIES FROM MAGNA PUBLICATIONS
There’s a lot to be gained from considering ideas and arguments at odds with current practice. In higher education, many instructional practices are accepted and replicated with little thought. Fortunately, a few scholars keep asking tough questions and challenging conventional thinking. The Australian scholar D. Royce Sadler is one of them. His views on feedback and assessment are at odds with the mainstream, but his scholarship is impeccable, well-researched, and logically coherent. His ideas merit our attention, make for rich discussion, and should motivate us to delve into the assumptions that ground current policies and practices.
As a new teacher, one of the resources I found most helpful in shaping my grading practices was Grant Wiggins’s advice on feedback and assessment. Meaningful feedback, he suggests, is much more than assigning a grade or even offering recommendations for improvement. Rather, meaningful feedback is descriptive, “play[ing] back” the student’s performance and connecting it to the learning outcomes of the course.
The current state of student assessment in the classroom is mediocre, vague, and reprehensibly flawed. In much of higher education, we educators stake a moral high ground on positivistic academics. Case in point: assessment. We claim that our assessments within the classroom are objective, not subjective. After all, you wouldn’t stand in front of a class and say that your grading is subjective and that students should just deal with it, right? Can we honestly examine a written paper, or virtually any other assessment in our courses, and claim that we grade completely devoid of bias? Let’s put this idea to the test. Take one of your assessments previously completed by a student. Grade the assignment using your rubric. Afterwards, have another educator in the same discipline grade the assignment using your exact rubric. Do your colleague’s grade and yours match? How far apart are the two grades? If your assessment is truly objective, the grades should be identical. Not close but identical. Anything else reduces the reliability of your assessment.
Flipped learning environments offer unique opportunities for student learning, as well as some unique challenges. By moving direct instruction from the class group space to the individual students’ learning spaces, time and space are freed up for the class as a learning community to explore the most difficult concepts of the course. Likewise, because students are individually responsible for learning the basics of new material, they gain regular experience employing self-regulated learning strategies that they would not get in an unflipped environment.
A few weeks ago, a colleague emailed me about some trouble she was having with her first attempt at blended instruction. She had created some videos to pre-teach a concept, incorporated some active learning strategies into her face-to-face class to build on the video, and assigned an online quiz so she could assess what the students had learned. After grading the quizzes, however, she found that many of the students struggled with the concept. “Maybe,” she wondered, “blended instruction won’t work with my content area.”
Measuring student success is a top priority for ensuring the best possible student outcomes. Over the years, instructors have implemented new and creative strategies to assess student learning in both traditional and online higher education classrooms. Assessments range from formative assessments, which monitor student learning through quick, efficient, and frequent checks on learning, to summative assessments, which evaluate student learning with “high-stakes” exams, projects, and papers at the end of a unit or term.