Last year, Mohammad Hosseini, an artificial intelligence ethics researcher at Northwestern University, evaluated around 500 article submissions as an editor for the journal Accountability in Research. Some papers, he noted, had clearly been generated by AI, betrayed by incoherence, excessive em-dash use, abrupt logical jumps, and disjointed text. Yet as AI capabilities advance, these tools are becoming increasingly integrated into scientific publishing, both in manuscript writing and in the peer review process.
AI tools like ChatGPT are helping researchers, particularly non-native English speakers, navigate large bodies of literature, streamline writing, and improve readability. Roy Perlis, editor in chief of JAMA+AI, called this a potential “game changer” for some scientists. Experts in science publishing agree that with proper human oversight, AI can enhance the quality, efficiency, and inclusivity of scholarly communication. Yet AI also introduces risks, including compromised quality, confidentiality breaches in peer review, and fraudulent activity such as fabricated articles or datasets.
Surveys show that AI is already widely used in scientific publishing. A Nature survey of 5,000 academics found that 8% used AI to draft, translate, or summarize articles, while 28% used AI to edit their work. A study in Science Advances analyzing over 15 million biomedical abstracts found that 13.5% of 2024 abstracts likely involved language models. Researchers report that AI helps them identify overlooked literature, translate drafts, and correct grammar, particularly benefiting those disadvantaged by language barriers.
Despite its advantages, AI carries risks such as hallucinations, inadvertent plagiarism, and misrepresentation of studies. Low-quality papers generated with AI have already burdened preprint servers, contributing to systemic overload. In peer review, AI can expand the reviewer pool and reduce human biases, but it may also reinforce historical biases present in training data. Studies show AI models favor authors from prominent institutions, though careful prompting can mitigate these biases. Editors remain cautious, emphasizing that human judgment is still crucial for assessing novelty and contribution.
In response, journals are establishing guidelines to regulate AI use. Among the top 100 scientific journals, 87% now provide guidance on AI use. Most permit AI for language editing and data analysis, provided authors disclose usage and validate outputs, while prohibiting AI co-authorship, manipulation of images, or submission of unpublished material to generative AI tools. Human oversight remains essential, with ultimate responsibility for content resting with the authors.
AI detection tools are also being employed, though their accuracy is limited. Overall, AI is reshaping scientific publishing by forcing journals, editors, and researchers to reconsider every aspect of manuscript preparation, peer review, and publication. While it presents both opportunities and challenges, its integration underscores the need for transparency, careful monitoring, and continuous adaptation in the scientific publishing landscape.






