Artificial intelligence (AI) tools such as ChatGPT, Claude, and Gemini are increasingly prevalent in higher education, raising questions about how students can engage with AI meaningfully. Rather than using AI as a shortcut to complete assignments, students must develop strong prompting skills that enhance their analytical and problem-solving abilities. AI prompting encourages critical thinking by fostering problem-solving, analysis, and synthesis while promoting ethical reasoning (Mollick and Mollick 2023). By guiding students in crafting effective prompts and critically evaluating AI-generated content, educators can help them leverage AI as a thinking partner rather than a content generator.
Why AI Prompting Matters for Critical Thinking
Engaging AI effectively requires intentionality. Students who learn to craft precise, open-ended, and iterative prompts can use AI to deepen, rather than bypass, their own thinking. AI prompting helps students refine their problem-solving skills by encouraging them to iterate on their inquiries and seek deeper insights. It also strengthens analysis and synthesis by requiring students to compare viewpoints, justify responses, and critically engage with information. Moreover, effective prompting can highlight ethical concerns, such as AI bias and misinformation, which promotes digital literacy (Kasneci et al. 2023). Instead of resisting AI, educators should focus on integrating it into coursework in ways that support critical thinking and intellectual engagement.
Teaching Students How to Interact with AI
Teaching students how to interact with AI begins with understanding prompt engineering. A strong AI prompt is clear, specific, and iteratively refined. Faculty can guide students by demonstrating the difference between open-ended and closed-ended prompts. For example, rather than asking, “What is photosynthesis?” which may yield a generic response, students can ask, “Explain how photosynthesis affects global climate patterns.” Additionally, layering prompts can deepen AI engagement. By refining responses through additional constraints, such as “Explain photosynthesis in the context of deforestation and its impact on atmospheric carbon levels,” students gain a more nuanced understanding of the topic. Encouraging iteration is also key; students should adjust their prompts based on AI responses and reflect on how modifications shape their learning (Mollick 2024).
Another essential aspect of AI literacy is the critical evaluation of AI-generated content. Since AI outputs are not always reliable, students should be trained to fact-check information and compare AI-generated responses with scholarly sources. Educators can incorporate assignments that require students to verify AI responses, identify bias, and analyze conflicting perspectives. For example, students can ask AI, “What are the benefits and drawbacks of AI in hiring?” and then assess whether the response reflects any biases (Bender et al. 2021). By engaging students in these exercises, educators can help them develop the skills necessary to evaluate AI-generated information critically.
Designing AI-Driven Assignments for Active Learning
Faculty can also integrate scaffolded AI-driven assignments into coursework to encourage critical thinking. AI-assisted Socratic questioning can be a valuable tool in this regard. For example, students can use AI to generate counterarguments for their thesis statements and then evaluate the quality of AI-generated responses. Similarly, debate preparation can be enhanced by prompting AI to act as an opponent, challenging students’ positions on a topic. AI-generated case studies offer another useful application, allowing students to analyze and refine AI-created scenarios to improve their critical reasoning skills. Additionally, using AI as a brainstorming partner—while requiring students to justify their acceptance, modification, or rejection of AI suggestions—promotes deeper engagement with the material.
Ethical Considerations for AI Use
Ethical considerations must also be addressed beyond the practical applications of AI prompting. Avoiding over-reliance on AI is essential: faculty should ensure that AI complements rather than replaces student thinking (Cotton, Cotton, and Shipway 2023). Transparency is another crucial element of responsible AI use; students should be encouraged to disclose when they use AI and reflect on how it influences their learning. Academic integrity policies should provide clear guidelines on the ethical use of AI in coursework and foster an environment that emphasizes learning over punishment.
Best Practices for Faculty Implementation
To successfully integrate AI prompting skills into higher education, faculty should provide structured AI engagement opportunities. Assignments should be designed with specific AI-driven tasks that align with learning objectives. Additionally, educators can use AI for formative feedback, allowing students to refine their work before submitting final assignments. Assessing students’ ability to critically engage with AI is also crucial. Rather than focusing on AI detection, faculty should evaluate how well students interpret, critique, and refine AI-generated content.
Conclusion
As AI continues to reshape education, students must develop the skills necessary to engage with it in meaningful ways. By fostering effective prompting strategies, promoting critical evaluation, and designing assignments that require intellectual interaction with AI, educators can transform AI from a passive tool into an active partner in learning. The goal should not be to resist AI but to harness its potential to enhance critical thinking and prepare students for an AI-driven future.
Rick Holbeck, EdS, is the Executive Director of the Department of Online Teaching and Learning at Grand Canyon University. With extensive experience in instructional technology, faculty development, and online education, he specializes in integrating emerging technologies into teaching and learning. His expertise includes AI-driven pedagogy, instructional design, and strategies for engaging online learners. Rick regularly presents on topics such as AI literacy, academic integrity in the digital age, and best practices for faculty in online education.
References
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. https://doi.org/10.1145/3442188.3445922.
Cotton, Debby R. E., Peter A. Cotton, and J. Reuben Shipway. 2023. “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT.” Innovations in Education and Teaching International. https://doi.org/10.1080/14703297.2023.2190148.
Kasneci, Enkelejda, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, et al. 2023. “ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education.” Learning and Individual Differences 103: 102274. https://doi.org/10.1016/j.lindif.2023.102274.
Mollick, Ethan. 2024. Co-Intelligence: Living and Working with AI. New York: Portfolio/Penguin.
Mollick, Ethan, and Lilach Mollick. 2023. “Assigning AI: Seven Approaches for Students, with Prompts.” SSRN working paper.