Artificial Intelligence (AI) is driving one of the most significant transformations in academic publishing since the advent of peer review. The use of AI in research manuscripts has risen steadily since OpenAI launched ChatGPT in late 2022. Recent data indicate that up to 22% of computer science papers show signs of large language model (LLM) use, and another study of LLM use in scientific publishing found that 16.9% of peer review texts contained AI-generated content.
The integration of AI tools at every stage of publication has been met with appreciation and criticism in equal measure. Proponents view AI as a catalyst that boosts efficiency by automating mundane tasks such as grammar editing, reference formatting, and initial screening. Critics, however, warn of possible pitfalls in quality and ethics. The question remains: is AI disrupting traditional publishing models, or is it fostering a natural evolution within the academic enterprise?
Rise of AI in Research and Publication
AI has become embedded throughout the research lifecycle, from idea generation to final manuscript submission. This pervasive adoption is reshaping how researchers approach their work: they use applications for grammar checks, plagiarism detection, format compliance, and even assessment of their research’s significance.
Recent studies show that LLMs are remarkably adept at generating prose and summarizing content, enabling researchers to write literature reviews and experimental descriptions more efficiently. Many researchers also use LLMs to brainstorm, rephrase, and clarify their arguments.
Beyond writing assistance, AI-driven platforms can streamline other laborious tasks, such as searching massive databases for relevant citations and assessing the context and reliability of references. Likewise, text translation and summarization tools now support more than 30 languages, a capability that breaks down linguistic barriers and enables international research collaboration.
One perspective piece published in PLOS Biology notes: “A primary reason that science has not yet become fully multilingual is that translation can be slow and costly. Artificial intelligence (AI), however, may finally allow us to overcome this problem, as it can provide useful, often free or affordable, support in language editing and translation.”
The integration of AI tools into research and publication has fundamentally shifted research workflows. Automation has quickened the pace of writing and made academic publishing more accessible.
Ethical Implications and Risks
These efficiency gains and reshaped workflows also come with significant ethical concerns. AI systems operate by learning patterns in existing data and can amplify hidden biases; if an AI tool is trained on past editorial outcomes, for example, it may score submissions from well-known institutions or English-speaking authors more highly.
Stanford researchers found that even when non-native English researchers use LLMs to refine their submissions, peer reviewers still show bias against them. Another study revealed that AI text-detection tools often misidentify non-native English writing as machine-generated. In other words, two authors of equal merit may face unequal scrutiny simply because one uses slightly different phrasing.
Beyond systemic biases, there is concern that integrating AI into the peer review process can lead to over-reliance on its functionalities. An analysis of peer review highlighted that editors’ and reviewers’ overdependence on AI-generated suggestions can degrade review quality, not to mention allow factual mistakes to slip through evaluation.
Even when AI is used only for initial screening, the recommendations such tools make about a manuscript’s quality or reviewer selection may be opaque or even rest on hallucinated reasoning. In the absence of transparency, it is difficult to identify and correct misjudgments, and this accountability challenge can seriously undermine trust in the editorial process.
Perhaps the most concerning risk, however, is AI’s potential to create a feedback loop that entrenches the status quo. AI systems are trained on existing published literature and evaluate new submissions against the patterns in that data, so they may inadvertently suppress new and innovative ideas that do not conform to those patterns.
The Irreplaceable Role of Human Editorial Judgment
Despite these technological advances, the fundamental responsibility for maintaining scientific integrity ultimately rests with human editors and reviewers. Academic publishers serve as gatekeepers of knowledge, shaping what research reaches the broader scientific community and, by extension, informing public understanding and policy decisions. This role carries immense responsibility. Editorial decisions can accelerate breakthrough discoveries or inadvertently stifle groundbreaking research that challenges conventional thinking.
Rather than delegating these critical judgments to algorithms, the academic publishing community must recognize this moment as a call to elevate editorial standards and practices. Editors must recommit to rigorous, nuanced evaluation that prioritizes scientific merit over efficiency metrics. The stakes, the advancement of human knowledge and the credibility of the scientific enterprise itself, are too high to entrust these decisions to systems that, however sophisticated, lack the contextual understanding, ethical reasoning, and innovative thinking that human expertise provides.
Dr. Su Yeong Kim is a Professor of Human Development and Family Sciences at the University of Texas at Austin. A leading figure in research on immigrant families and adolescent development, she is a Fellow of multiple national psychology associations and the Editor of the Journal of Research on Adolescence. Dr. Kim has authored more than 160 publications and received the Distinguished Career Contributions to Research Award from Division 45 of the American Psychological Association. Her research, funded by the National Institutes of Health, the National Science Foundation, and other bodies, covers bilingualism, language brokering, and cultural stressors among immigrant-origin youth. Dr. Kim is an enthusiastic mentor and community advocate, as well as a member of UT’s Provost’s Distinguished Leadership Service Academy.
References
Liang, Weixin, et al. “Quantifying Large Language Model Usage in Scientific Papers.” Nature Human Behaviour (2025). https://doi.org/10.1038/s41562-025-02273-8.
“How Much Research Is Being Written by Large Language Models?” Stanford Institute for Human-Centered Artificial Intelligence (HAI). https://hai.stanford.edu/news/how-much-research-being-written-large-language-models.
Doskaliuk, B., et al. “Artificial Intelligence in Peer Review: Enhancing Efficiency …” Journal of Korean Medical Science 40, no. 7 (2025). https://pubmed.ncbi.nlm.nih.gov/39995259/.
Kim, H., J. C. Little, J. Li, B. Patel, and D. Kalderon. “Hedgehog-Stimulated Phosphorylation at Multiple Sites Activates Ci by Altering Ci–Ci Interfaces without Full Suppressor of Fused Dissociation.” PLOS Biology 23, no. 4 (2025): e3003105. https://doi.org/10.1371/journal.pbio.3003105.
“How Language Bias Persists in Scientific Publishing Despite AI Tools.” Stanford Institute for Human-Centered Artificial Intelligence (HAI). https://hai.stanford.edu/news/how-language-bias-persists-in-scientific-publishing-despite-ai-tools.
“What Are AI Hallucinations?” Google Cloud. https://cloud.google.com/discover/what-are-ai-hallucinations.