ChatGPT listed as author on research papers: many scientists … – Nature.com


Chris Stokel-Walker is a freelance journalist in Newcastle, UK.

The artificial-intelligence chatbot ChatGPT is disrupting many industries, including academia. Credit: CHUAN CHUAN/Shutterstock
The artificial-intelligence (AI) chatbot ChatGPT that has taken the world by storm has made its formal debut in the scientific literature — racking up at least four authorship credits on published papers and preprints.
Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it’s appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California.

ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet. The bot is already disrupting sectors including academia: in particular, it is raising questions about the future of university essays and research production.
Publishers and preprint servers contacted by Nature’s news team agree that AIs such as ChatGPT do not fulfil the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI’s contribution to writing papers can be acknowledged in sections other than the author list. (Nature’s news team is editorially independent of its journal team and its publisher, Springer Nature.)
In one case, an editor told Nature that ChatGPT had been cited as a co-author in error, and that the journal would correct this.
ChatGPT is one of 12 authors on a preprint [1] about using the tool for medical education, posted on the medical repository medRxiv in December last year.
The team behind the repository and its sister site, bioRxiv, are discussing whether it’s appropriate to use and credit AI tools such as ChatGPT when writing studies, says co-founder Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York. Conventions might change, he adds.
“We need to distinguish the formal role of an author of a scholarly manuscript from the more general notion of an author as the writer of a document,” says Sever. Authors take on legal responsibility for their work, so only people should be listed, he says. “Of course, people may try to sneak it in — this already happened at medRxiv — much as people have listed pets, fictional people, etc. as authors on journal articles in the past, but that’s a checking issue rather than a policy issue.” (Victor Tseng, the preprint’s corresponding author and medical director of Ansible Health in Mountain View, California, did not respond to a request for comment.)
An editorial [2] in the journal Nurse Education in Practice this month credits the AI as a co-author, alongside Siobhan O’Connor, a health-technology researcher at the University of Manchester, UK. Roger Watson, the journal’s editor-in-chief, says that this credit slipped through in error and will soon be corrected. “That was an oversight on my part,” he says, because editorials go through a different management system from research papers.
And Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company in Hong Kong, credited ChatGPT as a co-author of a perspective article [3] in the journal Oncoscience last month. He says that his company has published more than 80 papers produced by generative AI tools. “We are not new to this field,” he says. The latest paper discusses the pros and cons of taking the drug rapamycin, in the context of a philosophical argument called Pascal’s wager. ChatGPT wrote a much better article than previous generations of generative AI tools had, says Zhavoronkov.
He says that Oncoscience peer reviewed this paper after he asked its editor to do so. The journal did not respond to Nature’s request for comment.
A fourth article [4], co-written by an earlier chatbot called GPT-3 and posted on French preprint server HAL in June 2022, will soon be published in a peer-reviewed journal, says co-author Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden. She says one journal rejected the paper after review, but a second accepted it with GPT-3 as an author after she rewrote the article in response to reviewer requests.
The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.
“We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism,” says Holden Thorp, editor-in-chief of the Science family of journals in Washington DC.
The publisher Taylor & Francis in London is reviewing its policy, says director of publishing ethics and integrity Sabina Alam. She agrees that authors are responsible for the validity and integrity of their work, and should cite any use of LLMs in the acknowledgements section. Taylor & Francis hasn’t yet received any submissions that credit ChatGPT as a co-author.
The board of the physical-sciences preprint server arXiv has had internal discussions and is beginning to converge on an approach to the use of generative AIs, says scientific director Steinn Sigurdsson, an astronomer at Pennsylvania State University in University Park. He agrees that a software tool cannot be an author of a submission, in part because it cannot consent to terms of use and the right to distribute content. Sigurdsson isn’t aware of any arXiv preprints that list ChatGPT as a co-author, and says guidance for authors is coming soon.
There are already clear authorship guidelines that mean ChatGPT shouldn’t be credited as a co-author, says Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, speaking in a personal capacity. One guideline is that a co-author needs to make a “significant scholarly contribution” to the article — which might be possible with tools such as ChatGPT, he says. But it must also have the capacity to agree to be a co-author, and to take responsibility for a study — or, at least, the part it contributed to. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” he says.
Zhavoronkov says that when he tried to get ChatGPT to write papers more technical than the perspective he published, it failed. “It does very often return the statements that are not necessarily true, and if you ask it several times the same question, it will give you different answers,” he says. “So I will definitely be worried about the misuse of the system in academia, because now, people without domain expertise would be able to try and write scientific papers.”
doi: https://doi.org/10.1038/d41586-023-00107-z
References:
1. Kung, T. H. et al. Preprint at medRxiv https://doi.org/10.1101/2022.12.19.22283643 (2022).
2. O’Connor, S. & ChatGPT. Nurse Educ. Pract. 66, 103537 (2023).
3. ChatGPT & Zhavoronkov, A. Oncoscience 9, 82–84 (2022).
4. GPT, Osmanovic Thunström, A. & Steingrimsson, S. Preprint at HAL https://hal.science/hal-03701250 (2022).
Nature (Nature) ISSN 1476-4687 (online) ISSN 0028-0836 (print)
© 2023 Springer Nature Limited
