When The Atlantic asks how far colleges should go to limit AI’s harms, Tyler Austin Harper answers: as far as it takes—even to campus-wide device bans, cutting Wi-Fi, and corralling the internet in supervised labs. It’s a bracing proposal. It’s also the wrong fight. The most profound problems in higher education are structural and pedagogical, not technological. Prohibition doesn’t fix these problems; it hides them.
If an assignment collapses the moment a chatbot exists, the problem is the assignment, not the century. Treating AI as incompatible with the liberal-arts mission is a false binary that confuses means and ends. Our goals remain judgment, originality, evidence, and ethical responsibility. Students in every era have pursued those ends using the tools of their time, from slide rules and calculators to databases and search engines. The real question isn’t “AI or liberal arts?” It’s “What kinds of teaching and assessment cultivate liberal learning in a world where AI exists?”
Too many courses still prioritize generic products over transparent processes. A single, high-stakes paper submitted at 11:59 p.m., with no drafting, defense, data audit, or reflection, is an open invitation to outsourcing, whether to friends, ghostwriters, or models. That’s not a moral failing of students; it’s a predictable outcome of design. Changing the rules changes the incentives. When we shift toward process portfolios, version histories, live problem-solving, studio critiques, and oral exams, the calculus shifts as well. Students can still consult tools, but they must now demonstrate what they know, how they got there, and why it holds up.
Blanket bans offer the comfort of clarity without the work of reform. They centralize policing when what we need is pedagogical redesign. They valorize scarcity where we need literacy. And they mistake ignorance for innocence. Graduates will enter professions already saturated with AI-mediated workflows. Refusing to teach critical use (when to abstain, how to verify, how to cite, and how to protect privacy) isn’t principled; it’s abdication.
The liberal arts aim to form people who can govern technologies, not be governed by them. That requires practice: evaluating outputs, spotting hallucinations, interrogating bias, and weighing labor and intellectual property questions. You don’t teach that by turning a campus into a Faraday cage. You teach it by building judgment in the open.
If cheating is easy, change the game. The path forward is neither capitulation nor cloister. Teach the technology. Fix the system.
Reclaim the Why
If the debate over AI in classrooms sounds panicked, it’s because we’ve let the means eclipse the ends. We rarely say out loud what college is for. When the mission gets fuzzy, tools feel existential. When the mission is clear, tools become governable.
Higher education does three things: it forms judgment, teaches method, and builds capability. Everything else, from facilities and schedules to platforms and even cherished assignments, serves those three aims. Seen this way, the question is not whether AI “belongs” but how our policies advance those ends. Banning technology doesn’t form judgment; it withholds the very practice by which judgment is learned.
We want students who can be discerning about tools, not ruled by them. That means deciding when not to use them, verifying outputs when they do, attributing properly, protecting privacy, and surfacing the social and environmental costs of computation. We want students to trace claims to evidence, to model uncertainty, and to submit their work to critique. We want graduates whose communication, analysis, and collaboration skills are stronger because they learned to work with contemporary systems without surrendering agency.
Reasserting purpose lets us replace purity tests with learning goals, blacklists with blueprints, and panic with pedagogy. If our mission is formation, method, and capability, then the task ahead is clear: teach the tech, redesign the tasks, and make integrity normal rather than heroic.
Redesign the Work, Reduce the Temptation
If AI accelerates shortcuts to polished products, make the process visible and valuable. Begin by grading the arc, not just the artifact. Many STEM disciplines already do this through problem-set reasoning, lab notebooks, and partial-credit rubrics that reward demonstrated thinking. Humanities and social sciences can learn from that rigor. Ask students to submit outlines, drafts, prompt histories, and short decision logs narrating what they tried, where they used tools, and how they verified claims. When process counts, shortcuts lose value.
Pair this with small public explanations: a five-minute vignette, a whiteboard walkthrough, a short code demo, a “teach-back” to a peer. Understanding has a sound: invite students to let us hear it.
Localize knowledge so generic outputs no longer fit. Draw from local archives, campus data, or partner organizations. When assignments are rooted in place and process, unexamined text has little worth.
Finally, trade the lonely, high-stakes submission for iterative work. Proposals, drafts, critique, revision: these make learning visible and reduce panic, which is the breeding ground for misconduct. Students can still brainstorm with AI, but they must defend their claims, show their sources, and own their choices. The goal is not “AI-proof” assignments but learning-centered ones, robust enough that learning happens even when the tools exist.
Across disciplines, the principle holds. Historians defend a thesis in short oral exams. Engineers explain design trade-offs and unit checks. Computer scientists annotate AI-assisted code and present live demos. The technology doesn’t disappear; judgment takes center stage.
Equity Isn’t Optional
Calls for total bans on devices and AI may sound decisive, but they are inequitable and, in many cases, exclusionary. Students who rely on screen readers, captioning, translation, or speech-to-text need those technologies to access the curriculum at all. During a “tech blackout,” they are effectively barred from participation. That is not equity; it’s erasure.
An inclusive standard begins with Universal Design for Learning: multiple ways to engage content and demonstrate understanding while holding the same outcomes. It continues with disclosure rather than prohibition. Require students to label their use of both assistive and generative tools, explain how they verified accuracy, and describe privacy safeguards. Such transparency preserves integrity without denying access.
There’s also a moral imperative. Community and attention are built by pedagogy and shared intellectual risk, not by cutting networks. Withholding contemporary tools does not create equality; it denies the means of participation. If we are serious about inclusion, technology must be treated as an enabler, not a privilege.
Teach the Problems, Don’t Hide Them
Yes, models hallucinate. Yes, there are risks to privacy, intellectual property, energy, and labor. These are not reasons to retreat but to teach.
Make verification a habit. Have students test model outputs against sources and reward them for finding errors. Normalize transparent attribution: a short “AI use” note listing prompts, outputs retained, transformations made, and checks performed. Credit model assistance as you would a research assistant whose name you can’t print.
Governance matters too. Never upload sensitive data to consumer tools. Use institutionally governed options and procurement standards that prioritize accessibility, privacy, auditability, and labor transparency. If energy and computing costs concern us, as they should, teach mitigation. A compute or query budget works like an open-book exam: the resource is permitted, but its use is deliberately bounded, which trains students to think critically about trade-offs and efficiency.
Fix the System, Not Just the Rules
Misuse blooms where the system invites it: oversized classes, generic prompts, single-shot grading, and little time for practice or feedback. To reduce misconduct, we must fix capacity, not just add policies. That means investing in TA lines, grader training, writing support, and smaller class sizes where students can receive meaningful interaction.
The labor implications are real. If we design assignments that resemble thesis projects rather than scantron tests, we must fund the human infrastructure to support them. The end of the automated grading machine will require more educators, not fewer, and that is a feature, not a flaw, of genuine learning.
A culture of integrity depends on alignment between expectations and assessment. Honor codes alone are rhetoric; transparent, iterative assessment makes integrity practical and visible.
A Practical Blueprint for Campuses
Universities don’t need Faraday cages. They need governance that is shared, legible, and enforceable. A tiered policy helps: some courses (e.g., clinical competencies) forbid AI entirely; others permit it with disclosure; advanced courses teach domain-specific integration. Every submission should include a short statement of tool use and verification steps, with non-disclosure treated as a violation.
Support faculty through assessment studios where instructors redesign assignments, rubrics, and disclosure language. Offer a short AI literacy credential for students, covering bias, verification, privacy, ethical use, and sustainability. Establish vetted tool lists with procurement standards for accessibility, privacy, IP, and energy reporting. Where appropriate, set efficiency or compute budgets and teach students how to meet them.
None of this is glamorous. All of it is necessary.
Educate for the World We Live In
The image of a tech-free campus is romantic, but liberal education was never about hiding from the world. It was about learning to navigate it. Our graduates will design, code, analyze, and create with AI-mediated systems. Shielding them from these realities doesn’t protect learning; it produces fragility.
Prohibition is an admission of curricular defeat. We can do better. Teach the tech. Fix the system. Build habits of mind that make tools serve truth instead of replacing it. That is not surrender to our moment; it is leadership in it.
Demian Hommel, PhD, teaches introductory and upper-division human geography courses in the College of Earth, Ocean, and Atmospheric Sciences at Oregon State University. He is also a fellow with the institution’s Center for Teaching and Learning, working to advance excellence in teaching and learning across his campus and beyond.
Reference
Harper, Tyler Austin. “The Question All Colleges Should Ask Themselves About AI.” The Atlantic, September 11, 2025. https://www.theatlantic.com/culture/archive/2025/09/ai-colleges-universities-solution/684160/