Just as pocket calculators, personal computers, and smartphones have posed threats to students learning math skills, AI (artificial intelligence) seems to be the new tool poised to undermine the use of writing assignments to assess student learning.
In November 2022, a tool called ChatGPT made headlines for its ability to “write” any content. As an instructional designer, I immediately heard from worried faculty convinced the sky might be falling, wondering what chance they had in the face of robots that could write student papers.
After some reflection, I have come to believe that, in the long run, worrying about how students might use AI to cheat is not the most productive question to focus on. The better question is, even in the era of AI, how can we best teach our students? Below are three methods of designing writing assignments in the face of an AI incursion.
Method 1: Ignorance is bliss
At the extremes, we have the “ignorance is bliss” and “resistance is futile” responses. These attitudes are lumped together because both favor avoiding the core issue. In the former, an instructor may simply be unaware that students can now type a writing prompt into a website and copy the answer it generates into a document to submit. In the latter, an instructor may be aware of AI’s ability to write, but may metaphorically throw up their hands at the overwhelming notion that they can no longer know whether a student has written a submitted paper.
At worst, instructors with this mindset could resign themselves to grading work written by AI and hope most students are still writing their own papers and learning from feedback. For instructors who evaluate writing to help students develop their skills, responding to anything a student did not write would be a waste of time – and those students would have little invested in reviewing the feedback.
For instructors who are aware of AI’s ability to write a paper but who don’t feel ready to tackle the robot head-on, the key strategy is one already used to thwart students from passing off another’s work as their own.
- Employ plagiarism checkers. Just as we have never known for sure that a student’s classmate or sibling didn’t write their paper, we now fear we will not be able to discern if a computer has done their work. Many instructors already rely on plagiarism checkers. A traditional plagiarism detector cannot tell us who wrote a paper if that paper is not in the database it checks against, but there is now at least one detector dedicated to sniffing out AI-generated content. If an epidemic of AI work is submitted in school, or even if instructors are convinced of the possibility, there will probably be a proliferation of tools to detect AI writing. As promising as this may sound, I want to add a caveat: In over ten years of teaching freshman English, I learned that the more I policed student work, the less energy I had to be a good teacher. Be prudent in how much effort you devote to this strategy.
Method 2: Know thy enemy
Second is the “know thy enemy” approach. AI isn’t going away. It’s going to expand and improve and become more nuanced. Instead of focusing solely on detection, instructors can work to circumvent the submission of AI text in the first place. The strategies of this method rely on designing work that AI cannot perform. Here is a representative sample, in order of increasing promise.
- In-class writing. Use in-class writing prompts. The popular conception is that if you watch your students write, they can’t cheat. But in-class writing doesn’t produce every type of writing or engage every skill we want to assess. It may skip the writing process in favor of a product, and it may mostly measure how someone writes under pressure. Although in-class writing can successfully be adapted to measure comprehension and subject matter knowledge, it does not appear to be the best method of assessing various forms of writing.
- Writing alternatives. Assign visual organizers or other assignments instead of papers. In time, AI will probably generate any form of assignment we can devise. For now, though, instructors could measure how well a student’s thesis is supported by ideas, evidence, and arguments, and whether optimal organization is used. This could lead to presentations in place of written papers, or even collaborative writing sessions during class, if appropriate for the course outcomes.
- Topics that avoid AI’s wheelhouse. Assign highly specific prompts. AI is less likely to convincingly address prompts written with granular specificity. This is even more true if the prompt relates to a discussion that occurred in class or some other content that students encountered (guest speakers, peer presentations, field trips, in-class debates, etc.), of which the AI is not aware. If you require students to include unique and specific knowledge in their writing, AI has little chance of supplying it.
- Writing based on human experience. Assign writing that relies on student perspective, experience, and cultural capital. This approach aligns with a diversity, equity, and inclusion model of designing writing assignments that could result in the most meaningful analysis and synthesis of information. The instructor is just as likely to learn from their students’ work as the students are. One underlying premise here is that AI will not produce texts with resonant personal perspective; but even if AI can replicate this type of writing, a second premise is that a writing assignment that invites students to share the ways in which their lives intersect with academia will motivate students to write their own papers.
Perhaps the final suggestion in this list harkens back to the “ignorance is bliss” approach, in which instructors hope students write their own papers. I see a difference, though, and suspect students will, too: the motivation behind the two methods differs, with the latter seeking to evolve and improve the student experience of the assignment.
Method 3: If you can’t beat them, join them
Finally, we have the “if you can’t beat them, join them” approach, in which instructors embrace the reality of AI-written content and work with their students to demystify and deconstruct the textual artifacts AI produces. This approach is best suited to classes that have ample time both to perform a rhetorical analysis of AI writing and to reconsider the expectations and assessments of writing assignments.
- Rhetorical analysis. Deconstruct the very act of AI writing. Discuss how AI “learns” to write. What assumptions about good writing are revealed when AI writing is analyzed? What is AI incapable of doing in its writing? Are there writing situations where AI should be more or less trusted? What is the role of the human in generating and proofreading AI text?
- Peer review. Conduct a peer review and/or class discussion of AI writing. Analyze what it writes. What content does AI include? What does it not include? How does AI organize its writing? What sentence structures does AI favor? Analyze the style in terms of voice, tone, diction, and syntax. Is there rhythm in AI language? Can the full rhetorical situation be deduced by analyzing an AI text? How could the text better address the rhetorical situation?
- Revision. Revise an AI-generated text. Aside from correcting factual errors, have students experiment with rearranging the contents of an AI-written piece. Have students expand the paragraphs, combine the sentences, add support, and rewrite conclusions. Use the AI text as a starting point, as an opportunity. Students may find it difficult to improve upon “perfection,” but also may find it easier to revise the writing of a soulless program than that of their peers.
- Class presentations. Present a comparison/contrast of AI versus human writing. Without knowing the author, can students tell which text is written by a human and which by AI? Who writes better? Which writing “sounds” better? Compare the pieces line by line: thesis statements, voice, organization, evidence and support, arguments and logic, overall impact, and persuasiveness.
- Refinement. Try to make AI refine its writing with a focus on the rhetorical situation. Have students compose several variations of the same prompt to fine-tune the result that AI produces. Are there limits to how much we can refine the writing? Are there trade-offs of one element being sacrificed when another is included or enhanced? Have students try to dial in the rhetorical situation by adjusting for audience, purpose, voice, tone, etc. Ultimately, is it easier to have AI write the perfectly appropriate text for a specific situation or to write it on our own?
There is no wrong or right method of addressing the advent of AI in a writing class. Any instructor might employ a variety of these strategies. The ideas presented here are not exhaustive, but are offered to promote thought and add perspective. There is so much more to writing than the act of composing sentences that I do not think we need to fear AI will be the death knell of composition in education.
In fact, AI may encourage a brave new exploration of higher-order thinking skills. There are surely larger conversations to have about the role of composition courses in higher education—and of assessments in all courses—but the argument can be made that AI is a tool and students who learn to use that tool are learning a valuable skill.
In ten years, maybe Skynet will be writing everyone’s five-paragraph essays and none of this will matter. Or maybe we’re panicking over another Y2K. Either way, AI will certainly still be plugging away a generation from now, and we can adjust today so that it does its work hand-in-hand with higher education.
Eric Prochaska taught English for over ten years before pivoting to instructional design. He currently works at Mt. Hood Community College in Oregon, where he helps faculty design online courses and activities.