ChatGPT-Generated Content

The academic community is stirred by the consequences of potential uses of ChatGPT-generated content (Tang, 2023). Because texts produced by AI can successfully follow the language patterns typical of academic writing and mimic research articles, there is growing concern that unscrupulous researchers may be tempted to generate texts, in part or in full, using ChatGPT. The authorship of such texts, when submitted for publication, raises doubts.


No doubt, these technologies are advantageous for non-native English-speaking authors, and even for native speakers, as they help avoid language-quality weaknesses in submissions. But can such a text be wholly attributed to the researcher?


Plagiarism-detection tools can often distinguish ChatGPT-generated texts from human writing, even though ChatGPT-produced texts are technically original, newly generated material. AI presence is detected through the regular statistical patterns that underlie any AI-generated text. Researchers may choose to rely on ChatGPT's help throughout the writing process, or only in certain portions of an article.
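
To illustrate the underlying idea only (a minimal sketch, not any vendor's actual method): detectors of this kind often score how statistically predictable a text is under a reference language model, since machine-generated prose tends to show lower perplexity than human writing. The model choice ("gpt2") and the flagging threshold below are illustrative assumptions.

    # Minimal sketch of perplexity-based AI-text screening.
    # Assumptions: the torch and transformers packages are installed;
    # "gpt2" and the threshold of 40 are illustrative, not a real detector.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    def perplexity(text, model, tokenizer):
        """Lower perplexity means the text closely follows the model's own patterns."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing labels makes the model return the mean cross-entropy loss.
            out = model(enc.input_ids, labels=enc.input_ids)
        return torch.exp(out.loss).item()

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    ppl = perplexity("Results indicate a statistically significant effect.", lm, tok)
    print(f"perplexity={ppl:.1f}:", "flag for review" if ppl < 40 else "no flag")

Real detectors calibrate such scores on large corpora and combine several signals; a single fixed threshold like this would be far too crude in practice.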


Many journals are revising their editorial policies on authors' use of AI in submissions. They tend to disallow crediting ChatGPT or other artificial-intelligence language models as co-authors. In early 2023, a few preprints and submissions, mainly in medicine, turned out to list AI as an author, which launched a discussion on the possibility of AI authorship. Elsevier was among the publishers that pioneered new policies on AI-assisted tools: its journals expect authors to declare any use of such tools. At other publishing houses and journals, researchers should seek permission from the publisher or editor before using AI in any part of a submission, or specify the sections where AI was used.


Elsevier’s Practices
In February 2023, updates on the use of artificial intelligence tools in submissions were introduced into Elsevier's authorship policy (Hufton, 2023). According to Elsevier's policies and guidelines, authors, editors, and reviewers are to follow its Publishing Ethics page (https://beta.elsevier.com/about/policies-and-standards/publishing-ethics?trial=true), which describes the use of generative AI and AI-assisted technologies (ChatGPT, NovelAI, Jasper AI, Rytr AI, DALL-E, etc.) by all parties in scientific writing and in the journal peer-review and editorial process.

For Elsevier authors: The policy on AI-based technologies refers exclusively to the writing process, not the research process. Authors may use such tools to improve the readability and language of their submission without reservation, but general oversight and editing remain the author's responsibility. If AI is applied, the author must disclose it in a statement. "Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author" (https://beta.elsevier.com/about/policies-and-standards/publishing-ethics?trial=true).

For Elsevier editors and reviewers: Since any submitted manuscript is confidential, no part of it may be uploaded into a generative AI tool; doing so may infringe the author's confidentiality and data-privacy rights. Because correspondence with authors contains personal data, editors must not upload it into a generative AI tool either. Reviewers should not use AI-assisted tools in the scientific review, as peer review depends on critical thinking that such tools lack; moreover, generative AI technologies may produce incorrect or biased conclusions.

JLE editors share the stance of Elsevier and other publishers on AI-related publishing ethics.


References

Tang, G. (2023). Academic journals cannot simply require authors to declare that they used ChatGPT. Irish Journal of Medical Science. https://doi.org/10.1007/s11845-023-03374-x