I remember talking to my students about GenAI for the first time, almost three years ago. We covered the uses and limitations of some tools. We talked about the danger of losing our own voice. We talked about the importance of writing in our own words, even if the end product isn’t as neat as AI-generated prose. I remember taking over the conversation, asserting what GenAI is good for and what it’s bad at. I fell into a pattern of announcing “this is fine” and “don’t do that.”
At the time, it was the best I could do.
Since then, I have spent a lot of time refining my approach to GenAI in the classroom. I realized that my initial approach, though sound at the time, deprived my students of agency. I allowed my students to share their ideas about the technology, but then I stepped in to make all the decisions that really mattered.
My approach now is very different. I ask my students to design their own writing processes, implement them, and reflect on them. Some of my students work GenAI into the process, while others prefer to leave it alone. I empower my students to decide whether and how GenAI should play a role in their work. At the same time, I want them to be mindful of how easy it is to give our voices away when we use this technology.
The key to making this all work is AI Transparency Statements.
These statements encourage students to use GenAI responsibly and to think more deeply about their relationship to the technology, while also building a culture of trust and accountability. They are invaluable.
In the rest of this article, I will cover the two steps I ask students to complete in their AI Transparency Statements. Feel free to replicate and adapt them.
Step 1: Students describe how they used GenAI, according to the AI Assessment Scale
As mentioned above, my students design multi-step processes for completing their projects. In the first part of the AI Transparency Statement, I ask them to go back to those steps and self-report whether and how they used GenAI. Depending on the context, I sometimes also ask them to submit chat transcripts with their statements.
I refer them to the AI Assessment Scale (AIAS), which was developed by Leon Furze, Mike Perkins, Jasper Roe, and Jason MacVaugh. The scale assigns a number, from 1 to 5, to each level of GenAI usage:

Level 1: The student used no GenAI at all.
Level 2: The student used GenAI for pre-task activities, such as planning and brainstorming.
Level 3: The student used GenAI for assistance in completing the task itself. For example, some students use GenAI programs to get feedback, or to rephrase a few sentences or a paragraph.
Level 4: The student used GenAI to complete the task, under their own direction.
Level 5: The student used GenAI to find innovative solutions to a problem and to boost their own creativity.
Often, professors treat the AIAS as a faculty-facing tool. For example, we might tell students to stick to Level 2 for an essay outline: they can use a GenAI program for brainstorming and getting started, but we want them to create the outline themselves.
But for me, the real power of the AIAS is as a student-facing tool. We can ask students to assign a rating to each part of their process, and then walk through why they chose it. This encourages responsible AI usage as well as meta-awareness.
Step 2: Students defend their use or non-use of GenAI
Once students assign AIAS numbers to their process, I then ask them to defend their use or non-use of GenAI. Why did they feel that Level 3 usage of GenAI was appropriate for that part of the process? What would have been lost if they had gone down to Level 2 or up to Level 4?
As they defend their use or non-use of GenAI, I want them to think about power and personal voice. Did that specific use of GenAI empower them or did it take their power away?
The goal here is to encourage students to link their decisions to larger questions about personal empowerment. After all, the responsible use of AI isn't just about upholding academic integrity. It's about making decisions that don't give away our voice.
Final Thoughts
Using AI Transparency Statements doesn't AI-proof assignments, if such a thing is even possible. Rather, they are a chance to highlight the importance of self-reflection and intentionality as we interact with these systems, not only in our classes but in our everyday lives.
Without these skills, it’s incredibly easy to lose ourselves.
Jason Gulya is a Professor of English and Communications at Berkeley College, where he chairs the AI and Academic Integrity Committee and serves on the AI Task Force. His first book was "Allegory in Enlightenment Britain: Literary Abominations" (Palgrave Macmillan, 2022), and he co-wrote "Artificial Intelligence, Real Literacy" (2025) with Paul Matthews. He is currently researching the rise of GenAI, process-based teaching, and alternative assessment.
Reference
Furze, Leon. "Updating the AI Assessment Scale." August 28, 2024. https://leonfurze.com/2024/08/28/updating-the-ai-assessment-scale/