The integration of artificial intelligence (AI) into academic peer review processes has raised significant ethical and procedural concerns. While AI tools like ChatGPT and other large language models (LLMs) offer efficiency and consistency, their application in reviewing scholarly manuscripts poses risks to the integrity of the peer review system.
Leading academic institutions and publishers have explicitly prohibited the use of AI in peer review due to confidentiality concerns. The National Institutes of Health (NIH) mandates that reviewers must not use AI tools to analyze or critique grant applications, as doing so could expose unpublished data and compromise the review process. Similarly, Elsevier advises against uploading manuscripts or peer review reports into AI tools, citing potential violations of authors' confidentiality and data privacy rights.
Studies have shown that AI-generated reviews can be manipulated by authors to inflate ratings or downplay weaknesses in their manuscripts. One simulation demonstrated that subtle alterations to a manuscript's text could produce significant changes in peer review outcomes, highlighting AI's susceptibility to strategic manipulation. Additionally, AI tools have been found to favor well-known authors and may not adequately assess the novelty or significance of research, potentially introducing biases into the review process.
AI-assisted peer reviews have also been associated with higher acceptance rates. A study of the 2024 International Conference on Learning Representations (ICLR) found that papers receiving AI-assisted reviews were more likely to be accepted than those evaluated by human reviewers alone. This trend raises concerns about the objectivity and reliability of AI-generated evaluations.
In response to these challenges, many academic journals and conferences are revising their policies to address AI's role in peer review. For instance, ICLR has updated its ethics code to clarify that LLMs are not eligible for authorship, a restriction that extends to peer review comments. These measures aim to preserve the integrity and confidentiality of the peer review process.
While AI has the potential to make academic peer review more efficient, its current application raises significant ethical and procedural concerns. The risks of confidentiality breaches, manipulation, and bias call for cautious and transparent integration of AI tools into scholarly evaluation. Academic institutions and publishers must continue to develop and enforce policies that safeguard the integrity of peer review while exploring how AI can responsibly support research evaluation.