Scholarly publishers, authors, and the use of AI

Subscribe to the weekly posts via email here

Vintage toy robots. Photo by Townsend Walton on Unsplash.

Artificial intelligence isn’t new, but the release of widely available, inexpensive tools such as ChatGPT, Gemini, and Claude has made it easier to integrate generative AI into scholars’ research workflows. Researchers, already overstretched and under pressure to publish, are naturally drawn to tools that promise to streamline the process. What once felt like pure Star Trek fantasy now hums quietly in every researcher’s browser.

This technological leap also arrives at a delicate moment, when the scholarly record is showing signs of strain. The academic publishing ecosystem faces challenges such as paper mills (see, e.g., AI tools combat paper mill fraud), data falsification, image manipulation, and hijacked journals. These challenges have chipped away at the trust we place in the scholarly record. A 2024 study from Northwestern University uncovered a global marketplace selling authorship slots and fake papers to those desperate to meet performance metrics. The Committee on Publication Ethics (COPE) and the STM Association summarized the problem bluntly: “Paper mills produce fake manuscripts for researchers to easily publish or buy authorship, exploiting the pressure to publish in academia” (COPE and STM Association, 2022).

Against this backdrop of limited, though damaging, cases of misconduct, generative AI complicates the picture further.  On one hand, AI tools can be misused to generate convincing fake images or churn out text at an industrial scale. On the other hand, the very same technologies are being harnessed to defend research integrity by detecting manipulation, verifying authenticity, and strengthening editorial oversight.

In response to these growing pressures on the scholarly record, a new field is taking shape—Forensic Scientometrics (FoSci).  This emerging discipline brings together data science, bibliometrics (the quantitative analysis of scholarly publications and their citations, applying statistical methods to describe and evaluate research outputs), and research integrity to detect and prevent manipulation in academic publishing. I first encountered this concept through a presentation by Dr. Leslie D. McIntosh at the BRIC conference (similar slides are available here) and in her article for The Scholarly Kitchen (McIntosh, 2024). FoSci uses machine learning, image forensics, and metadata analysis to identify irregularities: flagging duplicated images, improbable statistics, recycled text, and suspicious authorship networks before publication. 
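
To make one of those signals concrete, here is a minimal, illustrative sketch (not a FoSci tool, and not a method from McIntosh’s work) that flags suspiciously similar sentences between two manuscripts using only Python’s standard library. Real integrity pipelines combine far richer text models with image forensics and metadata analysis; this toy check only shows the general shape of an anomaly detector that surfaces candidates for human review.

```python
# Toy example: flag possibly recycled text between two manuscripts.
# Illustrative sketch only; real FoSci pipelines use far richer methods.
from difflib import SequenceMatcher


def split_sentences(text: str) -> list[str]:
    """Very rough sentence splitter, good enough for a demonstration."""
    cleaned = text.replace("?", ".").replace("!", ".")
    return [s.strip() for s in cleaned.split(".") if s.strip()]


def flag_recycled_text(doc_a: str, doc_b: str, threshold: float = 0.9):
    """Return sentence pairs whose similarity ratio meets or exceeds the threshold."""
    flags = []
    for a in split_sentences(doc_a):
        for b in split_sentences(doc_b):
            ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if ratio >= threshold:
                flags.append((a, b, round(ratio, 3)))
    return flags


if __name__ == "__main__":
    paper_1 = "We observed a significant increase in yield. The effect was robust across trials."
    paper_2 = "We observed a significant increase in yield! Sample size was small."
    for a, b, score in flag_recycled_text(paper_1, paper_2):
        print(f"{score}: '{a}' ~ '{b}'")
```

As in FoSci practice, a check like this only surfaces candidates; a human still decides what, if anything, the overlap means.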

What’s notable about the emerging field of Forensic Scientometrics is its ethical stance: it doesn’t assume wrongdoing. Instead, it treats anomalies as signals worth understanding, not proof of misconduct. When irregularities appear, such as duplicate images, unusual citation patterns, or improbable authorship networks, FoSci methods flag them for follow-up. The goal is to contact the appropriate editors, reviewers, or institutions to determine whether these patterns arise from error, misunderstanding, or, in rare cases, coercion or fraud. It’s an evidence-based approach grounded in fairness and transparency rather than accusation.

The goal isn’t to replace human judgment, but to support it. AI can uncover anomalies, but trained editors, reviewers, or integrity officers must then decide whether these patterns indicate misconduct or innocent error. As McIntosh notes, “Recent high-profile cases of research misconduct and security breaches highlight the need for established, evidence-based policies, tools, and common practices for ensuring research integrity.”

The Committee on Publication Ethics (COPE) is the leading international body that guides journals, publishers, and editors in maintaining integrity across the scholarly record. Founded in 1997, COPE now represents more than 13,000 member journals and organizations worldwide. Its role is to help the research community navigate ethical challenges: authorship disputes, data falsification, plagiarism, peer-review misconduct, and now, the responsible use of AI.

COPE’s Principles of Transparency and Best Practice ask journals to make peer-review processes, policies, access terms, and correction mechanics public, so that readers can scrutinize not only findings, but the editorial pathway behind them. Openness about data (via robust availability statements and reproducibility expectations) widens that scrutiny, while clear descriptions of peer-review models help readers assess how claims were vetted.

Three persistent identifiers form the backbone of a transparent scholarly record:

  • A Digital Object Identifier (DOI) is a permanent link that remains constant even if the journal changes platforms or services, keeping versions and corrections connected;
  • an ORCID iD (Open Researcher and Contributor ID) is a persistent, trusted, and unique digital identifier for each researcher, helping to tell people with the same name apart;
  • and a ROR ID, from the Research Organization Registry, is an open persistent identifier for a research or funding organization, drawn from a global, community-led registry, helping to clarify institutional provenance.

Together, openness and PIDs make legitimate authorship easier to verify, and manipulation harder to hide. 
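
As a small illustration of how these identifiers support verification, the sketch below (a hedged example, not any publisher’s tooling) checks the ISO 7064 MOD 11-2 check digit that ends every ORCID iD and asks doi.org whether a DOI resolves. The ORCID shown is the sample identifier from ORCID’s own documentation, and the DOI is the COPE & STM report cited above; production systems would query the Crossref, ORCID, and ROR APIs rather than rely on this minimal check.

```python
# Minimal sketch: sanity-check two persistent identifiers.
import urllib.request


def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the ISO 7064 MOD 11-2 check digit that ends every ORCID iD."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    check = (12 - (total % 11)) % 11
    expected = "X" if check == 10 else str(check)
    return digits[-1].upper() == expected


def doi_resolves(doi: str) -> bool:
    """Return True if doi.org can resolve the DOI (HEAD request, redirects followed)."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status < 400
    except Exception:
        return False


if __name__ == "__main__":
    print(orcid_checksum_ok("0000-0002-1825-0097"))  # sample iD from ORCID's documentation
    print(doi_resolves("10.24318/jtbG8IHL"))         # the COPE & STM report cited above
```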

COPE’s recommendations are voluntary but influential. Most reputable publishers align their editorial policies with COPE’s principles. When a journal says it “follows COPE guidelines,” it signals that it subscribes to internationally recognized standards of editorial integrity, transparency, and fairness. COPE’s stance on AI use allows helpful automation, forbids AI authorship, and keeps humans answerable. 

Across the scholarly publishing landscape, the Big Five—Elsevier, Springer Nature, Taylor & Francis, SAGE, and Wiley—have each released detailed policies to guide authors, reviewers, and editors in the responsible use of artificial intelligence. While their tone and depth vary, their principles align: humans remain accountable, transparency is required, and AI must never replace scholarly judgement.

AI as author. All five publishers categorically reject the idea that generative AI tools can be credited as authors. “LLMs do not currently satisfy our authorship criteria,” declares Springer Nature (Springer Nature, 2024), emphasizing that authorship implies accountability. Taylor & Francis echoes this: “Generative AI tools must not be listed as an author,” since they cannot consent to publication or assume responsibility (Taylor & Francis, 2024). Elsevier, too, forbids listing AI tools as authors or co-authors (Elsevier, 2025). SAGE takes a slightly different tone, recognizing AI’s potential but warning it cannot replace “human creative and critical thinking” (Sage, 2025). Wiley frames AI as a “companion to the writing process, not a replacement,” preserving the central role of the author’s voice (Wiley, 2025).

Disclosure. All five stress transparency, though they differ in what must be declared. Elsevier and Springer Nature exempt simple grammar or style corrections but require statements for any substantive AI use—Elsevier through an AI declaration statement and Springer via disclosure in the Methods section. Taylor & Francis takes the strictest stance: every use must be declared, including the tool’s name, version, purpose, and rationale. SAGE requires acknowledgment whenever AI contributes to text, images, or references. Wiley goes furthest in practice guidance, recommending that authors keep an AI use log and provide explicit disclosure at submission and within the work itself.

Permitted and prohibited uses. All five accept limited, supervised use of AI for idea generation, organization, or language improvement, but they prohibit substitution of human intellect or unverified content. Elsevier cautions that AI-generated references “can be incorrect or fabricated.” Taylor & Francis forbids using AI for synthetic data or unrevised text and code, while Wiley offers the most detailed “how-to” framework—encouraging experimentation but requiring that authors substantially revise AI output to preserve originality and avoid IP risks.

Images, figures, and data. Springer Nature bans almost all generative AI images and videos, except in rare, clearly labeled cases such as research about AI. Taylor & Francis and Elsevier both also prohibit AI-manipulated images, though Elsevier allows it if AI is part of the study’s methodology and the process is fully documented. Wiley offers a tiered policy: explanatory or conceptual diagrams may be AI-assisted, artistic cover designs are acceptable with proper rights, but factual or clinical images are strictly off-limits. SAGE aligns generally with Taylor & Francis, requiring disclosure and adherence to established image-integrity standards.

Peer review and editorial use. All publishers forbid uploading manuscripts into public AI systems. Elsevier is the most restrictive—reviewers and editors may not even use AI to improve phrasing. Taylor & Francis allows reviewers to polish language but not analyze content. Springer Nature asks reviewers to declare any AI support, while SAGE bars AI-generated review reports outright. Across all, human accountability and confidentiality remain non-negotiable.

In short, the Big Five converge on the essentials: no AI authorship, full transparency, human accountability, and strict protection of research integrity. The main differences lie in disclosure thresholds, image exceptions, and how tightly peer review is controlled. Each publisher is adapting policy as technology evolves, but their shared message is clear: AI may assist scholarship, but it may not replace it.


The Challenge

As a Reader: Spotting Red Flags — and What You Can Do

  1. Something feels off in the paper
    If figures repeat, statistics seem too perfect, or phrasing feels recycled, trust your instincts.
    Try this: do a quick Google Image search or check PubPeer to see if others have noticed the same issue. If it still looks suspicious, politely alert the journal editor with the DOI and your observations.
  2. The journal looks “almost right”
    Scam or hijacked journals often mimic real ones: look for odd URLs, missing editor information, or promises of lightning-fast peer review.
    Try this: verify the title on the publisher’s website or COPE’s member list. If it’s not there, flag it for your librarians or check Retraction Watch’s hijacked journal list.
  3. Authorship or affiliations don’t add up
    When the same authors publish in unrelated fields—or use generic email addresses—it can suggest paper-mill activity.
    Try this: look them up in ORCID to confirm that profiles and institutional links are real. If something still seems off, document what you’ve found and contact the journal’s editor with specific details, such as author names, article titles, and DOIs. (A small lookup sketch follows this list.)
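
For readers comfortable with a little scripting, here is a minimal sketch of how parts of that lookup might be automated: it pulls a DOI’s registered metadata from Crossref’s public REST API and the public name on an ORCID record, so you can compare what a paper claims against the registered record. The endpoints are the public, unauthenticated ones at the time of writing, and the identifiers in the example are stand-ins to replace with the item you are checking.

```python
# Sketch: compare a paper's claimed authorship and venue against public records.
# Uses Crossref's and ORCID's public APIs; the identifiers below are placeholders.
import json
import urllib.request


def fetch_json(url: str, headers: dict | None = None) -> dict:
    """GET a URL and parse the JSON response."""
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.loads(resp.read().decode("utf-8"))


def crossref_record(doi: str) -> dict:
    """Title, journal, and author list as registered with Crossref."""
    msg = fetch_json(f"https://api.crossref.org/works/{doi}")["message"]
    return {
        "title": msg.get("title", []),
        "journal": msg.get("container-title", []),
        "authors": [f"{a.get('given', '')} {a.get('family', '')}".strip()
                    for a in msg.get("author", [])],
    }


def orcid_name(orcid_id: str) -> str:
    """Public name on an ORCID record."""
    data = fetch_json(f"https://pub.orcid.org/v3.0/{orcid_id}",
                      headers={"Accept": "application/json"})
    name = data.get("person", {}).get("name", {}) or {}
    given = (name.get("given-names") or {}).get("value", "")
    family = (name.get("family-name") or {}).get("value", "")
    return f"{given} {family}".strip()


if __name__ == "__main__":
    print(crossref_record("10.1136/bmj-2023-077192"))  # a DOI cited in this post
    print(orcid_name("0000-0002-1825-0097"))           # ORCID's documentation example
```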

Librarians can also help. If you’re a member of the Western University Community, contact RSCLib@uwo.ca if you have concerns but aren’t sure where to send them.

As an author: looking ahead to your next publication. As AI tools continue to reshape the research landscape, it’s worth pausing before you submit your next manuscript. Take a moment to look at your target journal’s policy:

  • Does it mention AI use directly?
  • What must you disclose — and where?
  • Are AI-generated figures, data, or translations allowed?
  • How will you ensure that your use of these tools stays within both the letter and the spirit of the policy?

The publisher Wiley suggests keeping an AI log to help you keep track of where and how AI was used. They even offer a table template to get you started. Is this something you might use or modify to help you be accountable and transparent?

See “How can I track my AI use?” at this link.
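
If you want something lighter than a spreadsheet, the sketch below keeps such a log as a plain CSV file. The column names are illustrative rather than Wiley’s template, so adapt them to whatever your target journal asks you to disclose.

```python
# Sketch: append entries to a personal AI-use log.
# Columns are illustrative, not Wiley's template; adjust to your journal's policy.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_use_log.csv")
COLUMNS = ["date", "tool_and_version", "purpose", "where_used", "human_verification"]


def log_ai_use(tool_and_version: str, purpose: str,
               where_used: str, human_verification: str) -> None:
    """Add one row to the AI-use log, creating the file and header if needed."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)
        writer.writerow([date.today().isoformat(), tool_and_version,
                         purpose, where_used, human_verification])


if __name__ == "__main__":
    log_ai_use(
        tool_and_version="ChatGPT 5",
        purpose="Copy editing and language clarity",
        where_used="Introduction, paragraphs 1-3",
        human_verification="All suggestions reviewed and revised by the author",
    )
```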

Try it out!



References

  • Colbran, R., & Toker, A. (2023). Generative artificial intelligence in the Journal of Biological Chemistry. Journal of Biological Chemistry. https://doi.org/10.1016/j.jbc.2023.105008 
  • COPE and STM Association. (2022). Paper mills: Research report from COPE & STM. https://doi.org/10.24318/jtbG8IHL
  • DeVilbiss, M. B., & Roberts, L. W. (2023). Artificial intelligence tools in scholarly publishing: Guidance for Academic Medicine authors. Academic Medicine, 98(8), 865-866. https://doi.org/10.1097/ACM.0000000000005261
  • Ganjavi, C., Eppler, M., Pekcan, A., Biedermann, B., Abreu, A., Collins, G., Gill, I., & Cacciamani, G. (2023). Bibliometric analysis of publisher and journal instructions to authors on generative AI in academic and scientific publishing. arXiv:2307.11918 https://doi.org/10.48550/arXiv.2307.11918
  • Ganjavi, C., Eppler, M., Pekcan, A., Biedermann, B., Abreu, A., Collins, G., Gill, I., & Cacciamani, G. (2024). Publishers’ and journals’ instructions to authors on use of generative AI in academic and scientific publishing: Bibliometric analysis. British Medical Journal, 384. https://doi.org/10.1136/bmj-2023-077192
  • Gibney, E. (2025, October 14). AI bots wrote and reviewed all papers at this conference. Nature. https://doi.org/10.1038/d41586-025-03363-3
  • Inam, M., Sheikh, S., Minhas, A. M. K., Vaughan, E. M., Krittanawong, C., Samad, Z., Lavie, C. J., Khoja, A., D'Cruze, M., Slipczuk, L., Alarakhiya, F., Naseem, A., Haider, A. H., & Virani, S. S. (2024). A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing. Current Problems in Cardiology, 49(3), 102387. https://doi.org/10.1016/j.cpcardiol.2024.102387
  • Lee, T., Ding, J., Trivedi, H., Gichoya, J., Moon, J., & Li, H. (2024). Understanding radiological journal views and policies on large-language-model use in academic writing. Journal of the American College of Radiology, 21(4), 678-682. https://doi.org/10.1016/j.jacr.2023.08.001
  • McIntosh, L. (2024, April 2). FoSci — The emerging field of forensic scientometrics. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2024/04/02/guest-post-fosci-the-emerging-field-of-forensic-scientometrics/
  • Mondal, H., Mondal, S., & Behera, J. (2025). Artificial intelligence in academic writing: Insights from journal publishers’ guidelines. Perspectives in Clinical Research, 16(1), 56-57. https://doi.org/10.4103/picr.picr_67_24
  • Richardson, R., Hong, S., Byrne, J., Stoeger, T., & Amaral, L. (2025). The entities enabling scientific fraud at scale are large, resilient, and growing rapidly. Proceedings of the National Academy of Sciences, 122(32), e2420092122. https://doi.org/10.1073/pnas.2420092122
  • Tang, A., Li, K., Kin, O., Cao, L., Luong, S., & Tam, W. (2023). The importance of transparency: Declaring the use of generative AI in academic writing. Journal of Nursing Scholarship, 56(2), 314-318. https://doi.org/10.1111/jnu.12938
  • Yoo, J. H. (2025). Defining the boundaries of AI use in scientific writing: Comparative review of editorial policies. Journal of Korean Medical Science, 40(23), e187. https://doi.org/10.3346/jkms.2025.40.e187

Publisher Policies

Disclosure

ChatGPT 5 was a writing partner in this blog post, especially for copy editing and language clarity. The free version of Elicit.com was used to help generate the resources list. All errors, misinterpretations, and omissions are mine.

Your Challenger: Joanne Paterson