May 13, 2020
Publishers, including Elsevier and Springer Nature, have launched an initiative to tackle image manipulation in research papers.
A New Initiative to Standardize Image Integrity Checks
Leading academic publishers are joining forces to develop automated tools for detecting altered and duplicated images in research papers. This newly formed working group—marking the first coordinated cross-industry effort—aims to establish standardized software solutions to identify problematic images during peer review.
"The ultimate objective is to create an automated system capable of detecting image modifications," stated IJsbrand Jan Aalbersberg, head of research integrity at Elsevier and chair of the initiative led by the STM association, a global trade organization for publishers. This group, which convened in April, includes representatives from major publishers such as Elsevier, Wiley, Springer Nature, and Taylor & Francis.
Academic journal editors have long struggled with identifying image alterations, which can arise from honest mistakes, attempts to enhance visuals (e.g., adjusting contrast or color balance), or deliberate fraud. In a 2016 study, microbiologist Elisabeth Bik manually examined over 20,000 biomedical papers and found that nearly 4% contained suspicious image duplications. Despite the significance of this issue, most journals have not implemented systematic image checks due to resource constraints and the absence of large-scale automated solutions.
The working group seeks to define minimum requirements for image-checking software and explore ways to integrate it into the peer review process for large volumes of submissions. Additionally, they aim to classify different types of image-related concerns and establish guidelines for acceptable image modifications, ensuring transparency in research reporting. Aalbersberg emphasized that while some publishers are already testing software independently, achieving industry-wide standardization could take at least a year.
Testing AI for Image Screening
In recent years, several publishers have begun trialing AI-powered tools to detect manipulated images. Companies such as LPIXEL (Japan) and Proofig (Israel) have developed software that lets publishers and institutions upload research papers and receive, within minutes, a report flagging image duplications and alterations. These tools can identify instances where images have been rotated, flipped, stretched, or filtered.
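Checks of this kind are commonly built on perceptual hashing, which reduces each image to a compact fingerprint that survives small edits such as compression or mild filtering. The sketch below illustrates the general idea only, not any vendor's actual pipeline; it uses the open-source Pillow and imagehash libraries, and the set of geometric variants and the Hamming-distance threshold are assumptions.

```python
# Illustrative sketch of transformation-aware duplicate detection via
# perceptual hashing (Pillow + imagehash). The threshold and the list of
# geometric variants are assumptions, not any vendor's real pipeline.
from PIL import Image, ImageOps
import imagehash

HAMMING_THRESHOLD = 6  # assumed cutoff; production tools tune this empirically

def orientation_hashes(img):
    """Hash the image plus flipped/rotated variants, so a duplicate
    still matches after simple geometric edits."""
    variants = [
        img,
        ImageOps.mirror(img),           # left-right flip
        ImageOps.flip(img),             # top-bottom flip
        img.rotate(90, expand=True),
        img.rotate(180, expand=True),
        img.rotate(270, expand=True),
    ]
    return [imagehash.phash(v) for v in variants]

def likely_duplicate(path_a, path_b):
    """True if any orientation of image A falls within the Hamming
    threshold of image B's perceptual hash."""
    hash_b = imagehash.phash(Image.open(path_b))
    return any(h - hash_b <= HAMMING_THRESHOLD
               for h in orientation_hashes(Image.open(path_a)))
```

Because perceptual hashes tolerate mild contrast and compression changes, rotated or lightly filtered reuse can still register as a match; heavily cropped or spliced panels require more elaborate feature matching.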
Another organization, Resis (Italy), is also developing image verification software, while a research team led by Daniel Acuna at Syracuse University is working on a system that can compare images across multiple papers—an effort being tested by publishers and academic institutions.
For major publishers, the challenge lies in implementing software that can efficiently process high volumes of submissions while seamlessly integrating with peer-review workflows. Ideally, these tools should be capable of scanning large datasets to detect duplicate images across multiple papers, a computationally demanding task that current technology is still working to scale.
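One way such cross-corpus comparison can be made tractable, in principle, is multi-index hashing: splitting each image fingerprint into short bands and bucketing images by band value, so that near-duplicates collide in at least one bucket rather than requiring every pairwise comparison. The sketch below is a simplified illustration of that idea under assumed parameters (64-bit hashes, four bands), not a description of any named group's system.

```python
# Simplified multi-index hashing sketch for corpus-scale duplicate search.
# By the pigeonhole principle, two 64-bit hashes within Hamming distance
# BANDS - 1 must agree exactly on at least one 16-bit band, so checking
# bucket collisions finds all such pairs without O(n^2) comparisons.
from collections import defaultdict

BANDS = 4  # four 16-bit bands per 64-bit hash (assumed layout)

def bands(hash_int):
    """Yield (band_index, 16-bit slice) keys for a 64-bit hash."""
    for i in range(BANDS):
        yield i, (hash_int >> (16 * i)) & 0xFFFF

class HashIndex:
    """Buckets image hashes by band; candidates that share a band are
    then verified with the full Hamming distance."""

    def __init__(self):
        self.buckets = defaultdict(list)  # (band, value) -> [(paper_id, hash)]

    def add(self, paper_id, hash_int):
        for key in bands(hash_int):
            self.buckets[key].append((paper_id, hash_int))

    def near_matches(self, hash_int, max_distance=BANDS - 1):
        seen = set()
        for key in bands(hash_int):
            for paper_id, other in self.buckets[key]:
                distance = bin(hash_int ^ other).count("1")
                if distance <= max_distance and paper_id not in seen:
                    seen.add(paper_id)
                    yield paper_id, distance
```

Indexing of this sort turns an all-pairs problem into one lookup per new submission, which is the scaling property a high-volume screening workflow would need.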
Addressing Systematic Research Misconduct
Elsevier's publishing services lead, Catriona Fennell, noted growing concerns about what she termed "industrialized cheating," referring to research fraud involving mass-produced papers. This issue includes clusters of papers containing nearly identical images and text, raising suspicions that they originate from commercial "paper mills" that generate fake research on demand.
In a recent case, Bik and other experts flagged over 400 suspicious papers published across multiple journals, exhibiting unusual similarities suggesting they may have come from a single fraudulent source. Identifying such cases during peer review is challenging, as reviewers may not be trained to spot manipulated images, and simultaneous submissions to different journals further complicate detection.
To address this, publishers are considering a collaborative approach similar to CrossCheck, a shared plagiarism detection service launched in 2010 that lets journals compare the text of submissions against a database of published articles. Establishing a similar database for scientific images could greatly enhance the ability to detect image reuse across papers.
Aalbersberg expressed confidence in the future adoption of such a system, stating, "Once the technology is ready, I have no doubt that industry-wide collaboration will follow."
Bik, who continues to uncover manipulated images in published papers, welcomed the initiative. "If we can implement image screening during peer review, it would be a major step forward. Hopefully, it will reduce the amount of problematic research making its way into publication," she said.