“Now is the time to understand more, so that we may fear less.”
—Marie Curie
In my role as a fellow at my university’s Center for Teaching and Learning, I’ve had dozens of conversations with faculty across disciplines—and one pattern has become impossible to ignore. The gulf between those working to integrate AI into their teaching and those swearing off its use entirely is growing wider by the month. It’s not just about comfort with technology; it’s about pedagogical identity, ethics, trust, and the role of higher education in a rapidly changing world.
Some faculty are experimenting with co-written essays and AI-graded orals. Others are defaulting to analog tools like in-class handwritten exams. Still others are choosing not to address AI at all—perhaps hoping it will fade, or that someone else will lead the conversation. But as AI tools become more deeply embedded in how students learn, communicate, and imagine their futures, “not engaging” isn’t neutrality. It’s a message. And our students are listening.
AI may or may not upend higher education, but in the meantime, it’s prompting urgent questions: What are we assessing? What do we value? How do we prepare students not just to perform, but to think, reflect, and adapt in a world where generative tools are the norm?
This isn’t a call to panic, or to blindly adopt new technologies. Instead, it’s an invitation: to engage, to listen, and to rethink what meaningful learning looks like in the age of AI.
Head in the Sand Isn’t Helping
Faculty skepticism toward AI isn’t unfounded. Many of the concerns I’ve heard from colleagues are serious and principled: worries about surveillance and data privacy, the environmental toll of training large language models, the ethical murkiness of how datasets are scraped and whose labor powers “intelligent” systems. Others raise deeply humanistic questions: What happens to student creativity when a machine can draft a paper in seconds? What becomes of learning when shortcuts are so easy to take?
These concerns aren’t trivial. But neither is the cost of pretending this technology doesn’t exist.
I recently reached out to a student who had taken a “W” in one of my courses. When I asked why he withdrew, he cited discomfort with the role AI was playing in the class. Not because of cheating or confusion, but because the presence of AI made him question the point of the course—and maybe of college altogether. If AI could do the work, what was the value of his contribution? Why were we still doing things the old way if the world had already changed?
That moment hit hard. Not because it revealed something broken in my class, but because it surfaced the very questions many students are too hesitant—or too disillusioned—to ask aloud.
Higher education is at an inflection point. If we refuse to engage with the forces shaping our students’ futures, we risk becoming irrelevant not because AI replaces us, but because we’ve chosen not to show up.
The Concerns Are Real—and Bigger Than Cheating
It’s easy to reduce the AI debate in education to one issue: cheating. And yes, generative AI makes it easier than ever to outsource writing, coding, or even lab reports. But the ethical landscape of AI is much broader, and in many ways more troubling, than academic dishonesty alone.
There’s the environmental cost. Training a single large language model can consume more electricity and water than some small towns use (though geography plays a key role in the cost and availability of these resources). As educators who ask students to confront the realities of climate change, we should ask whether adopting energy-intensive technologies contradicts our values.
There’s also the labor behind the intelligence. Many AI models rely on vast troves of human-generated data, scraped without consent, and cleaned or labeled by underpaid workers in precarious conditions. Even as these systems are hailed as revolutionary, they’re often built atop invisible, exploitative structures.
And then there are the questions of voice and bias. Who gets to decide what “good writing” or “correct” analysis looks like when the algorithms were trained on dominant cultural norms? What knowledge gets privileged, and what gets erased?
These are valid reasons to hesitate. To question. To push back.
But they’re also reasons to talk—to surface the complexities with our students rather than shielding them from the conversation. Because AI isn’t just a technological shift; it’s a mirror reflecting what we value in education, labor, and society at large. And the only way to use that mirror well is to look into it—together.
Engagement Starts with Transparency
Even if you don’t use AI in your teaching—and even if you don’t want students to use it—ignoring it isn’t a neutral act. In today’s classroom, silence sends a message. And more often than not, students interpret that message as either indifference or confusion.
That’s why the most important place to start is also the simplest: your syllabus.
Whether you fully embrace AI, permit it in limited ways, or prohibit it entirely, make your expectations visible. Be specific about when, how, and why students are or are not allowed to use generative tools. If AI is restricted for certain assignments, explain the rationale. If it’s allowed, clarify what constitutes appropriate use—and what crosses the line into misrepresentation.
This isn’t just about compliance or classroom management. It’s about modeling critical thinking. When we articulate our stance on AI, especially with nuance, we teach students how to approach emerging technologies with intention rather than fear or opportunism. We show them that tools are never neutral, and that ethical use requires context, purpose, and reflection.
A well-crafted AI policy in the syllabus isn’t just a rules section. It’s a pedagogical opportunity. It invites students to see learning as more than task completion—and faculty as more than enforcers of boundaries.
Co-Creating AI Policies with Students
One of the most illuminating experiences I’ve had with AI in the classroom wasn’t about a tool—it was about a conversation.
At the beginning of the term, I asked my students to help draft our class AI policy. Not just to vote on what was allowed or not, but to actually engage with the deeper questions:
- What is AI good for?
- When does using it help us learn, and when might it interfere?
- Why might we choose not to use it, even if it’s available?
Some students were hesitant. Others were skeptical. A few openly admitted they hadn’t given the issue much thought. But the discussion that unfolded was rich, not because we arrived at a perfect consensus, but because we didn’t.
In fact, when I posed the question, “Can we come to a single, consensus class policy we all agree on?” several students said no. That moment of disagreement was a gift. It opened up a space to explore divergent opinions and values. What emerged was a shift from a single rule to individually described and justified policies: each student reflecting on how they might or might not use AI in their own work, and under what conditions.
The results were powerful. Students who had initially seen AI as a shortcut began to see it as a tool, one that, like any other, required skill, responsibility, and judgment. Others who feared AI’s presence in the classroom said they felt more respected, more seen, and more in control after having the chance to express their discomfort and shape the rules that would govern their learning.
Co-creating policy didn’t make the complexity go away. But it did something more important: it positioned students as thinking partners, not passive recipients. It made clear that education is not about enforcing compliance—it’s about wrestling with hard questions together.
Rethinking Assessment in an Age of Automation
If there’s one place where AI has rattled the foundations of higher education, it’s assessment. From auto-generated essays to AI-coded solutions, the fear is clear: how do we know students are actually doing the work themselves?
Some instructors have responded by doubling down: returning to blue books, in-class timed writing, or closed-note exams. But as one colleague of mine pointed out, even that has limits. Most of our students aren’t used to writing by hand anymore. For some, these methods feel less like rigor and more like an arbitrary barrier to demonstrating what they know.
Others are experimenting. A colleague now uses oral exams (with AI tools assisting in grading) to reduce bias and create a more authentic picture of student understanding. The approach is labor-intensive, but it’s also more human. It asks students to speak to what they’ve learned, to explain their reasoning, to think on their feet.
Is that scalable for every class? No. But it raises an important question: What exactly are we assessing, and why? Are we testing memory, or synthesis? Output, or process? Compliance, or curiosity?
AI is forcing us to reconsider not just how we assess learning, but what we think learning is.
Maybe it’s time to move beyond one-size-fits-all exams and instead explore multimodal assessments: things like revision portfolios, audio reflections, collaborative projects, or scaffolded assignments that make “cheating” harder not through restriction, but through design.
Yes, that takes time. Yes, it takes creativity. But perhaps what’s most needed isn’t new tools or tighter rules, but the courage to leave behind methods that no longer serve our students or our goals.
AI as Catalyst, Not Catastrophe
It’s tempting to view AI as the problem. And in some ways, it is, especially when it accelerates inequities, obscures labor, or undermines trust. But it’s also a mirror. It reflects our uncertainty about what we’re teaching, how we’re measuring it, and why any of it matters.
For those of us in higher education, that reflection can feel uncomfortable. But discomfort isn’t the enemy. It’s often the beginning of clarity.
AI has surfaced long-standing tensions: between efficiency and depth, standardization and creativity, performance and learning. It’s challenged our assumptions about what knowledge looks like and who gets to demonstrate it. And in doing so, it’s offered us a rare opportunity: to rethink what we’re really doing in the classroom.
This doesn’t mean rushing to adopt every new tool. It doesn’t mean ignoring legitimate ethical concerns. It means refusing to retreat into defensiveness or nostalgia. It means showing up, for our students, for each other, and for the evolving work of teaching and learning.
Our students don’t need us to have all the answers. They need us to model how to live with the questions. They need to see that thoughtful, ethical, human learning is still possible, especially in a world full of algorithms.
So no, this moment doesn’t call for panic. But it does call for pedagogy. And purpose. And the willingness to meet the future, not with fear but with imagination.
Demian Hommel, PhD, teaches introductory and upper-division human geography courses in the College of Earth, Ocean, and Atmospheric Sciences at Oregon State University. He is also a fellow with the institution’s Center for Teaching and Learning, working to advance excellence in teaching and learning across his campus and beyond.