June 4, 2014
“I Tried It and It Didn’t Work!”
Someone sought me out recently to say that she’d tried something I had recommended and it didn’t work. “You need to stop recommending that to people,” she told me. “How many times did you try it?” I asked. “Once and the students hated it,” she responded. This rather direct feedback caused me to revisit (and revise) a set of assumptions that can create more accurate expectations when implementing new instructional approaches.
No strategy, policy, activity, or assignment “works” the same way for every student.
Students experience course events and activities differently, depending on their background knowledge, prior learning experiences, and what happened before class at home, on the job, or over the weekend. For some of them the new instructional activity will be one of those great learning experiences, for others it will be satisfactory, and for some it will fall short of what either of you hoped. It’s hard to predict how many students will fall into each of these categories or the spaces in between.
No strategy, policy, activity, or assignment “works” for every teacher.
How you plan and execute a new activity matters. If you spend time on the design and implementation details, it will likely be a great learning experience for more students. Doing it well also means doing it your way. What you heard or read is how someone else made it work. Planning for an instructional change should include figuring out what will make it work for you. If you try something and it doesn’t work, it could be a bad idea, but it could also be that it doesn’t fit comfortably with how you teach.
No strategy, policy, activity, or assignment “works” in every course.
What you teach also plays a role in what you can do and how successful it will be. Some content is easier to discuss. Some lends itself to demonstration. Some content can be mastered in groups. It’s useful to consider how the nature of the content shapes the choice of instructional method. The shape of what we teach doesn’t lock us into a predetermined set of approaches, but it does make some things easier to implement than others. It’s important to consider what works in light of what you teach.
Make predictions but don’t be surprised by the outcome.
You want to select those new strategies, policies, or activities with high probabilities of success. And you want to enhance those chances further with careful planning, thorough student preparation, and your best effort. But don’t assume doing everything right guarantees success, and the implication here goes both ways: there’s also merit in trying some things that don’t seem all that likely to work. Instructional changes often produce surprising results.
No new approach is the best it can be the first time you try it.
How many times should you try something that doesn’t work? Do we tell students to give up after one try? And once is definitely not enough if the decision to abandon is based on student feedback focused on their feelings. New approaches can look good on paper and turn out to be not so good in practice. But often they can be fixed—by fussing with the design details and trying again.
The success of any strategy, policy, activity, or assignment ought to be measured by how well it promotes learning.
We can’t ignore student response to what’s happening in class. Nobody wants to teach a class where students “hate” everything, and a class where students “like” everything is just as problematic. We do have professional standards to uphold. But how students feel about an instructional approach shouldn’t be the main measure of its success. Did the new approach get students dealing with content and developing relevant skills? To answer that question, teachers must look at their learning outcomes and make some predictions about how the change will help students meet those outcomes. Then they must decide what kind of evidence indicates that learning has occurred. Lastly, they must collect and analyze that evidence. Then, and only then, is it appropriate to decide how successful the change has been. That way, any “it didn’t work” conclusions can be more fact-based than feeling-based.
My thinking about “what works” continues to be significantly influenced by this great article: Tanner, K. D. “Reconsidering ‘What Works.’” CBE—Life Sciences Education, 2011, 10 (Winter), 329–333.
© Magna Publications. All Rights Reserved.