The rapid integration of generative artificial intelligence (GenAI) into higher education has reignited a familiar moral panic around academic dishonesty. Much of the immediate institutional response has centered on detection and enforcement, a reaction that treats academic dishonesty as an individual moral failure rather than a symptom of broader systemic issues (Bertram Gallant, 2008). Students, faced with overloaded schedules, mounting debt, and an unforgiving emphasis on grades, often make pragmatic decisions about where to invest their time and effort, gravitating toward efficiency and certainty over exploration and risk. In this environment, the ethical clarity around “cheating” becomes murky—not because students don’t understand right from wrong, but because the structures around them reward efficiency and penalize risk. AI tools such as ChatGPT or Grammarly simply fit into this equation as time-saving devices—much like calculators, Google, or even essay mills before them. What has shifted is not the motivation, but the accessibility. A student’s use of AI often reflects the transactional way learning is experienced in contemporary institutions.
The proliferation of AI tools has not so much introduced a new problem as it has exposed the fragility of existing educational structures. Before we focus on what AI changes, we must confront what it reveals: that higher education has long incentivized a culture of performance over learning. Research has shown that when students perceive academic work as irrelevant, overly procedural, or divorced from their goals and identities, they disengage—regardless of whether AI is available (Ambrose et al., 2010). In this light, the question is not “How do we catch them?” but rather, “What kind of learning environments have we built that make outsourcing feel logical or even necessary?” This reframing shifts the pedagogical conversation from one of compliance and control to one of capacity-building and curiosity. If we can use this moment to ask how to foster a culture of learning that values growth over grades, process over product, and understanding over output, then AI may serve not as a threat to education but as a catalyst for its reinvention.
Why Students Turn to AI
When students turn to generative AI to complete assignments, they are not necessarily demonstrating a lack of understanding or unethical behavior—they are often responding rationally to the conditions of their academic lives. In surveys and interviews, students routinely cite overwhelming workloads, unclear expectations, time scarcity, and mental health struggles as reasons they seek shortcuts or support tools (McCabe, Treviño, & Butterfield, 2001; Pascoe, Hetrick, & Parker, 2020). AI fits seamlessly into a landscape where efficiency is often more highly rewarded than curiosity. Faced with tasks that feel repetitive, decontextualized, or performative, many students learn that their survival depends on doing just enough, not necessarily doing it deeply.
When a student uses AI to write a discussion post or summarize an article, the act is not just a form of evasion—it’s often a judgment about the value of the task itself. If the assignment doesn’t ask them to do something personally meaningful, intellectually stimulating, or clearly useful for their future goals, it becomes an obligation to manage rather than a learning opportunity to embrace. This disconnect is further amplified by the unspoken norms that teach students to prioritize grades, speed, and performance over exploration, failure, and growth (Margolis, 2001). In such a climate, using AI can feel less like cheating and more like optimization. Students aren’t necessarily trying to game the system—they’re playing the game as they perceive it was designed.
The challenge, then, is not to eliminate AI from the learning process, but to design learning experiences that make authentic engagement the easier, more meaningful choice. Rather than dismissing AI use as evidence of student laziness or misconduct, educators might ask: What does the pattern of AI usage tell us about how students experience our courses? What if the real issue isn’t student behavior but assignment design and instructional culture? By attending to these questions, we shift the focus from enforcement to empathy—from trying to control student actions to trying to understand their motivations and constraints.
Process Over Product
If generative AI makes it easier for students to produce work without deeply engaging in the learning process, it invites faculty to reconsider what their assessments are really measuring. Too often, traditional assessments prioritize polished outputs—essays, quizzes, presentations—over the messy, iterative, and uncertain processes that lead to genuine understanding. These conventional formats may reward conformity and correctness, but they do little to cultivate the flexibility, reflection, and critical thinking required in a world shaped by automation and ambiguity.
In contrast, AI offers a lens through which educators can reassess their learning goals: Are students being asked to think or to comply? To demonstrate mastery or to demonstrate growth? Increasingly, the competencies that matter most in both life and work—such as collaboration, problem framing, ethical reasoning, and the ability to adapt—are not easily captured through static, time-bound tasks (National Academies of Sciences, Engineering, and Medicine, 2018). Nor are they easily outsourced to AI. These are precisely the skills that thrive when assessment shifts from evaluating finished products to examining students’ thinking and decision-making processes along the way.
Emerging research on assessment design suggests that when students are asked to reflect on how they used AI, evaluate its limitations, compare its outputs to their own, or build on AI-generated responses with original insights, they begin to see these tools not as shortcuts but as thought partners (Mollick & Mollick, 2023; Popenici & Kerr, 2017). This approach not only fosters academic integrity, but also helps students develop the judgment, discernment, and contextual understanding that AI cannot replicate. Moreover, process-oriented assessments are more inclusive. They allow students with diverse learning styles, linguistic backgrounds, and levels of prior knowledge to demonstrate learning in multiple ways. They also encourage metacognition—students thinking about their own thinking—which is known to enhance long-term retention and transfer of learning (Ambrose et al., 2010). Rethinking assessment does not mean lowering standards; it means realigning them with the kinds of learning we claim to value. When assessments emphasize authentic engagement over mere output, they become more resilient to automation—and more relevant to the lives students are preparing to lead.
Disciplinary Innovation
While discussions about AI in education often remain abstract or policy-focused, meaningful change is already taking root within disciplines as faculty reimagine how to integrate AI into the logic of their fields. Rather than banning its use outright, many instructors are embedding AI into course design in ways that mirror authentic disciplinary practices. The goal is not just to accommodate new tools, but to cultivate students’ judgment in using them—aligning learning with the actual thinking, creating, and problem-solving expected in professional contexts.
Creative Disciplines
In design, media production, and the arts, instructors are moving beyond static portfolio reviews and embracing reflective, process-based documentation. Students are asked to keep iterative journals that detail how their ideas evolved, what role AI tools played in generating or refining concepts, and how creative decisions were made at each stage. This encourages fluency with new tools while reinforcing core disciplinary values: originality, intentionality, and critique. As Sullivan (2010) and McArthur & White (2021) argue, creative practice is inherently dialogic, and integrating AI into that dialogue allows students to develop their own voices in conversation with machine-generated suggestions.
Simulating Complexity
In fields such as public administration, business, and marketing, instructors are shifting from case-study write-ups to scenario-based simulations and role-playing exercises. These assessments ask students to evaluate data, propose interventions, justify decisions, and communicate across stakeholder perspectives. AI-generated content may be introduced as one data source among many, prompting students to critique its assumptions, identify potential biases, or integrate it with qualitative insights. This simulates real-world complexity and ethical ambiguity—core aspects of professional judgment that cannot be easily automated (Farrell, 2023; Wiggins, 2016).
Modeling Scientific Reasoning
In the sciences, instructors are piloting assignments that blend AI-assisted analysis with human critique. For example, students might be given an AI-generated interpretation of experimental data and asked to assess its validity, identify methodological flaws, or propose alternative explanations. These tasks promote skills fundamental to scientific reasoning: skepticism, data literacy, and peer review. As Wieman (2017) and Holmes, Wieman, & Bonn (2015) have shown, such approaches encourage deeper engagement with the epistemological foundations of science—what counts as evidence, how claims are justified, and why precision matters.
Building AI Fluency Across the Curriculum
To meet the demands of a world increasingly shaped by automation, higher education must move beyond treating AI as an external threat and begin cultivating AI fluency as a foundational component of 21st-century learning. AI fluency goes beyond knowing how to use tools like ChatGPT or Midjourney—it encompasses the ability to understand how AI systems work, critically evaluate their outputs, use them ethically, and reflect on their broader social and disciplinary implications (Long & Magerko, 2020; Luckin et al., 2016).
Integrating AI fluency into the curriculum means designing assignments that treat AI not as a banned shortcut or unexamined assistant, but as a subject of inquiry. Students should be taught to interrogate the design of AI tools, including how they are trained, what kinds of data they rely on, and how they reflect or reinforce existing social biases. Courses across disciplines can include assignments that ask students to compare AI-generated outputs with human-created work, evaluate their quality and assumptions, or explore how different prompts lead to different results. These tasks cultivate what Holmes, Bialik, & Fadel (2019) describe as “cognitive partnership”—the ability to think with, through, and about technology, rather than simply accepting its outputs.
Building AI fluency also involves normalizing reflective practice. Students should be encouraged to articulate how and why they used AI tools in the completion of an assignment: What choices did they make? What did the AI get right—or wrong? How did it shape their thinking? This kind of metacognitive work not only deepens learning, but also demystifies AI as a black box and empowers students to use it responsibly (Mollick & Mollick, 2023).
Crucially, AI fluency should not be siloed within STEM or computer science. Every field—from history to healthcare, literature to law—will be transformed by AI in some way, and students in every discipline deserve the opportunity to engage critically with those transformations. Developing AI fluency as a form of literacy positions students to become not just competent users of technology, but thoughtful citizens capable of shaping its role in society.
The task before educators is not merely to regulate AI use, but to reimagine what it means to be educated in an age of intelligent machines. By embedding AI fluency into course outcomes, assessment practices, and institutional values, higher education can help students navigate this new landscape with agency, insight, and integrity.
This text was generated in collaboration with Gemini (2.5 Flash), Google’s large-scale language-generation model. The author reviewed, edited, and revised the drafted language and takes ultimate responsibility for the content of this publication.
References
Ambrose, Susan A., Michael W. Bridges, Michele DiPietro, Marsha C. Lovett, and Marie K. Norman. How Learning Works: Seven Research-Based Principles for Smart Teaching. San Francisco: Jossey-Bass, 2010.
Bertram Gallant, Tricia. Academic Integrity in the Twenty-First Century: A Teaching and Learning Imperative. San Francisco: Jossey-Bass, 2008.
Farrell, Paul. “Integrating AI into Public Administration Education: Risks and Rewards.” Journal of Public Affairs Education 29, no. 1 (2023): 45–58.
Holmes, Wayne, Maya Bialik, and Charles Fadel. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston: Center for Curriculum Redesign, 2019.
Holmes, N. G., Carl E. Wieman, and Doug A. Bonn. “Teaching Critical Thinking.” Proceedings of the National Academy of Sciences 112, no. 36 (2015): 11199–204.
Long, Duri, and Brian Magerko. “What Is AI Literacy? Competencies and Design Considerations.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. New York: ACM, 2020.
Luckin, Rose, Wayne Holmes, Mark Griffiths, and Laurie B. Forcier. Intelligence Unleashed: An Argument for AI in Education. London: Pearson, 2016.
Margolis, Eric, ed. The Hidden Curriculum in Higher Education. New York: Routledge, 2001.
McArthur, Janine A., and Betty White. “Redesigning Creativity: AI, Agency, and Reflective Practice in the Visual Arts.” International Journal of Art & Design Education 40, no. 2 (2021): 437–51.
McCabe, Donald L., Linda Klebe Treviño, and Kenneth D. Butterfield. “Cheating in Academic Institutions: A Decade of Research.” Ethics & Behavior 11, no. 3 (2001): 219–32.
Mollick, Ethan, and Lilach Mollick. “Assigning AI: Seven Approaches for Students with Prompts.” Working paper, 2023. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321 (accessed July 8, 2025).
National Academies of Sciences, Engineering, and Medicine. How People Learn II: Learners, Contexts, and Cultures. Washington, D.C.: The National Academies Press, 2018.
Pascoe, Michaela C., Sarah E. Hetrick, and Alexandra G. Parker. “The Impact of Stress on Students in Secondary School and Higher Education.” International Journal of Adolescence and Youth 25, no. 1 (2020): 104–12.
Popenici, Stefan C., and Sharon Kerr. “Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education.” Research and Practice in Technology Enhanced Learning 12, no. 1 (2017): 1–13.
Sullivan, Graeme. Art Practice as Research: Inquiry in the Visual Arts. Thousand Oaks, CA: SAGE Publications, 2010.
Wieman, Carl E. “The Similarities between Research in Education and Research in the Hard Sciences.” Educational Researcher 46, no. 6 (2017): 319–29.
Wiggins, Grant. “A True Test: Toward More Authentic and Equitable Assessment.” Educational Leadership 70, no. 6 (2016): 28–33.