Editor’s note: The following is an excerpt from Student-Generated Reading Questions: Diagnosing Student Thinking with Diverse Formative Assessments, Biochemistry and Molecular Biology Education, 42 (1), 29-38. The Teaching Professor Blog recently named it to its list of top pedagogical articles.
As instructors, we make myriad assumptions about the knowledge students bring to our courses. These assumptions influence how we plan our courses, what information we decide to cover, and how we engage our students. Often there is a mismatch between what we expect students to know and how students actually think about a topic, and it goes undetected until too late, after we examine student performance on quizzes and exams. Narrowing this gap requires well-crafted formative assessments that help us diagnose student learning throughout the teaching process.
The explosion of educational technologies in the past decade or so has led many to wonder whether the landscape of higher education teaching and learning will be razed and reconstructed in some new formation. But whatever changes might occur to the learning environments we construct for our students, the fundamental principles by which human beings learn complex new skills and information are unlikely to undergo a massive transformation anytime soon. Fortunately, we seem to be in the midst of a flowering of new research and ideas from the learning sciences that can help ensure that whatever approach we take to the classroom—from traditional lecture to flipped classes—maximizes student learning in our courses.
A few weeks ago, a colleague emailed me about some trouble she was having with her first attempt at blended instruction. She had created some videos to pre-teach a concept, incorporated some active learning strategies into her face-to-face class to build on the video, and assigned an online quiz so she could assess what the students had learned. After grading the quizzes, however, she found that many of the students struggled with the concept. “Maybe,” she wondered, “blended instruction won’t work with my content area.”
Measuring student learning accurately is a top priority for ensuring the best possible student outcomes. Through the years, instructors have implemented new and creative strategies to assess student learning in both traditional and online higher education classrooms. Assessments range from formative assessments, which monitor student learning through quick, efficient, and frequent checks; to summative assessments, which evaluate student learning through "high stakes" exams, projects, and papers at the end of a unit or term.
Many instructors will argue that student participation in class is important. But what’s the difference between participation and engagement? What does good participation or engagement look like? How can you recognize it? And how can you tell if a student is not engaged?
Online teaching is growing at a rapid pace. To meet the increasing demand of online education, many courses have been designed to enable the instructor to be more of a facilitator rather than an active participant in the classroom space (Ragan, 2009). However, building an active, student-centered learning environment in online classes is needed to prevent instructors from becoming stagnant and to motivate and inspire them to take on a variety of roles as the students’ “guide, facilitator, and teacher” (Ragan, 2009, p. 6). This article will discuss the unique needs of the online student and suggest three strategies to meet these needs through effective, innovative online instruction.
It seems like everyone is talking about the flipped classroom. But how do you use this new model to construct lessons and assessments that reinforce student learning?
The liberal arts college where I teach recently underwent review for accreditation. Like many other colleges and universities, we were criticized for our lack of assessment. Faculty resistance, it seems, may be the biggest barrier to implementing institutional assessment measures (Katz, 2010; Weimer, 2013). Both Weimer and Katz attributed faculty resistance to fears that assessment data could be used for "comparison shopping" and "educational consumerism." While these fears are justified, at my college another fear prevails: the fear that assessment will lead to hand-holding strategies that will discourage independent thought in our students and leave them inadequately prepared for professional life.
This study begins with some pretty bleak facts. It lists other research documenting the failure rates for introductory courses in biology, chemistry, computer science, engineering, mathematics, and physics. Some are as high as 85 percent; only two are less than 30 percent. “Failure has grave consequences. In addition to the emotional and financial toll that failing students bear, they may take longer to graduate, leave the STEM [science, technology, engineering and math] disciplines or drop out of school entirely.” (p. 175) The question is whether there might be approaches to teaching these courses (and others at the introductory level) that reduce failure rates without decreasing course rigor.
“Creating a climate that maximizes student accomplishment in any discipline focuses on student learning instead of assigning grades. This requires students to be involved as partners in the assessment of learning and to use assessment results to change their own learning tactics.” (p. 136) The authors of this comment continue by pointing out that this assessment involves the use of formative feedback and that feedback has the greatest benefit when it addresses multiple aspects of learning. This kind of assessment should contain feedback on the product (the completed task) and feedback on progress (the extent to which the student is improving over time). The article then describes a number of formative feedback activities that illustrate how students can be involved as partners in the assessment process. Their involvement means that formative feedback can be given more frequently.