Artificial intelligence (AI) is influencing the way we conduct and share scientific research, enabling machines to perform tasks that traditionally require human intelligence. However, as with any new technology, there are ethical questions we must consider when using these tools.
AI is excellent at analysing complex datasets and identifying patterns that may be difficult to detect when reviewing the information manually. By applying machine learning algorithms, AI software can learn from this data and be trained to make predictions based on these patterns.
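As a loose illustration of that idea (not a description of any specific product), the Python sketch below trains an off-the-shelf classifier on a public example dataset and uses it to make predictions on held-out data; the dataset and model choice are assumptions made purely for demonstration.

```python
# Minimal sketch: a model "learns" patterns from labelled data and then
# makes predictions on unseen samples. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Public example dataset: measurements labelled with a diagnostic outcome.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a classifier so it learns patterns linking measurements to outcomes.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate predictions on data the model has never seen.
print("Held-out accuracy:", model.score(X_test, y_test))
```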
Meanwhile, natural language processing software enables researchers to quickly scan volumes of papers to understand a new topic, and other tools exist to both generate and review written and visual content.
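To make the idea of machine-assisted literature scanning concrete, here is a minimal, hedged sketch that ranks a few placeholder abstracts against a query topic using TF-IDF, one of the simpler natural language processing techniques; the abstracts, query and scoring choices are illustrative assumptions, not how any particular tool works.

```python
# Hedged sketch: rank abstracts by relevance to a topic so a reader can
# triage a large pile of papers quickly. All texts here are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning methods for detecting duplicated western blot images.",
    "A survey of large language models in biomedical text mining.",
    "Statistical analysis of crop yields under drought conditions.",
]
query = "image duplication detection in research figures"

# Vectorise the abstracts, project the query into the same TF-IDF space,
# then score each abstract by cosine similarity to the query.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Print the abstracts from most to least relevant.
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```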
As we explore these innovative technologies, we must also consider how and when to use them so that scientific publishing retains its integrity. To remain ethical and continue building trust in scientific research, the community must continue to demonstrate transparency, accountability, and credibility.
Preventing misinformation
Two months after its launch in 2022, the artificial intelligence chatbot ChatGPT reached 100 million users.[1] When exploring its capabilities, some used the tool for fun, writing poems or asking for advice, while others used it to gain insights and generate content. In the scientific community, for example, some have used the tool to help generate parts of their research papers.
While innovative, AI has its limitations: these systems are only as good as the data they are trained on,[2] so any biases or inaccuracies in that data will surface in the generated content. For example, the first AI-generated article in Men’s Health[3] was found to contain numerous inaccuracies and falsehoods, despite the fact that the content appeared to carry academic-looking citations. In response, many publishers, such as Nature,[4] updated their editorial policies to restrict the use of large language models (LLMs), such as ChatGPT, for generating content in scientific manuscripts.
Preventing misuse and AI ethics
The limitations of AI’s performance and transparency mean that humans cannot rely solely on this technology. There is a responsibility on the part of researchers, editors and publishing houses to verify the facts.
Paper mills, organisations that produce fabricated content, are a common example of AI misuse. While the exact proportion of paper-mill articles in circulation is not known, publishers are significantly concerned that this difficult-to-detect phenomenon undermines the credibility of scientific publications.
However, reviewing scientific papers for suspected image manipulation linked to paper mills can be time-consuming and error-prone, particularly because a single paper may include hundreds of subimages.
AI can automate this process to detect instances of misuse or unintentional duplications before publication. Image integrity proofing software, for example, uses computer vision and AI to scan a manuscript and compare images in minutes, flagging any potential issues. Forensic editors can then investigate further, using the tool to find instances of cut and paste, deletions, or other forms of manipulation.
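As a simplified, hedged sketch of the general approach (not the actual method of any commercial tool such as Proofig), the Python example below compares figure files using perceptual hashing and flags visually similar pairs for human review; the file names and distance threshold are assumptions.

```python
# Hedged sketch: flag near-duplicate images in a manuscript with perceptual
# hashing. A stand-in for dedicated image integrity tools, not their method.
from itertools import combinations

import imagehash
from PIL import Image

figure_paths = ["fig1a.png", "fig1b.png", "fig2.png"]  # hypothetical figure files

# Compute a perceptual hash per image; visually similar images yield
# hashes with a small Hamming distance.
hashes = {path: imagehash.phash(Image.open(path)) for path in figure_paths}

# Compare every pair and flag those below an assumed distance threshold
# so a forensic editor can investigate further.
THRESHOLD = 8
for a, b in combinations(figure_paths, 2):
    distance = hashes[a] - hashes[b]
    if distance <= THRESHOLD:
        print(f"Possible duplication: {a} vs {b} (distance {distance})")
```

A sketch like this only surfaces candidates; as noted above, a human reviewer still has to judge whether a flagged pair reflects manipulation or a legitimate reuse.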
Conclusion
AI has many capabilities and will continue to improve, but we cannot rely on the technology to act ethically of its own accord. As the scientific community deepens its understanding of AI and its applications, integrity experts should collaborate to establish clear guidelines and standards for its use in content generation. Yet, despite these efforts, paper mills will persist. Publishers should therefore continue to invest in and adopt the most suitable technological solutions available for reviewing manuscripts prior to publication, complemented by a wider effort to develop additional methods to prevent paper mills from flourishing.
Tip
Before using AI tools, consult guidelines from editors, publishers and ethics committees on how to use them to your benefit while maintaining integrity. Publishers should also explore how AI tools, such as Proofig, can help them detect content produced by paper mills more effectively.
References
[2] Data centric AI models are only as good as their data pipeline
[3] Magazine publishes serious errors in first AI-generated health article
[4] Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
Tags
Artificial Intelligence, AI, Image integrity, Paper mills, Image verification, Image manipulation