Wednesday, October 16, 2024

Challenges in Identifying Retracted Publications: Ensuring Integrity in Academic Databases

The reliability of scientific knowledge is at stake, and improving the identification of retracted publications in academic databases is urgent. Studies reveal significant deficiencies in how various academic databases identify and flag retracted scientific publications.

Information literacy, the ability to search for and critically evaluate information, is fundamental to scientific research, where precision is essential. In this context, clear notification of article retractions is vitally important for scientific progress and for society at large.

Despite this, inconsistencies in indexing, variability in coverage, and inaccurate labeling in academic databases highlight the urgency of improving these practices. Collaboration among editors, libraries, repositories, researchers, and other stakeholders is necessary to develop consistent policies and tools that ensure the integrity of scientific research and the reliability of evidence-based decisions.

A recent study, which reviewed 441 retracted publications in the field of public health across 11 different databases such as PubMed, Web of Science, and Scopus, revealed alarming inconsistency in the marking of these articles. Of the more than 2,800 records examined, less than 50% clearly indicated that the publication had been retracted, and less than 5% were marked as such in all the databases in which they appeared.
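To make those two figures concrete: the first is a record-level rate, in which every database entry counts separately, while the second is an article-level rate, in which an article counts only if every database indexing it carries the retraction flag. A minimal sketch, using invented sample data rather than the study’s, illustrates the difference:

```python
# Illustrative only: invented (article_id, database, flagged) records.
records = [
    ("art1", "PubMed", True), ("art1", "Scopus", False),
    ("art2", "PubMed", True), ("art2", "Web of Science", True),
    ("art3", "Scopus", False), ("art3", "Web of Science", False),
]

# Record-level rate: fraction of individual records flagging the retraction.
record_rate = sum(flagged for _, _, flagged in records) / len(records)

# Article-level rate: fraction of articles flagged in *every* database
# that indexes them.
articles = {art for art, _, _ in records}
fully_flagged = sum(
    all(f for art, _, f in records if art == a) for a in articles
)
article_rate = fully_flagged / len(articles)

print(f"Records flagging the retraction: {record_rate:.0%}")   # 50%
print(f"Articles flagged everywhere: {article_rate:.0%}")      # 33%
```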

Why is this a problem?

This lack of uniformity and precision in identifying retracted articles raises serious concerns about the integrity of the scientific information available to researchers, practitioners, and other interested readers.

Inadequately marked retractions significantly impair researchers’ ability to identify and avoid citing articles whose findings can no longer be trusted.

This not only perpetuates misinformation but also undermines confidence in scientific literature. Public health decisions and policies, in general, must be based on data and scientific evidence, so accuracy in reporting retractions is more important than ever.

The Causes of Retractions

A related systematic review investigated the relationship between the reasons for retraction and the methodological quality of retracted non-Cochrane systematic reviews.

Researchers from the School of Allied Medical Sciences at Zahedan University of Medical Sciences, Iran, searched the PubMed, Web of Science, and Scopus databases using keywords such as “systematic review,” “meta-analysis,” and “retraction” or “retracted” up to September 2023.

No restrictions were imposed on time or language. The study included non-Cochrane medical systematic reviews that had been retracted. Data on the retraction status of the articles were obtained from retraction notices and Retraction Watch. Two independent researchers assessed methodological quality using the AMSTAR-2 checklist.
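For readers who want to reproduce this kind of search programmatically, PubMed exposes its index through NCBI’s E-utilities API, and retracted items carry the “Retracted Publication” publication type. The sketch below is a plausible reconstruction of such a query, not the authors’ exact search string:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative search term approximating the study's strategy.
term = (
    '("systematic review"[Title/Abstract] OR "meta-analysis"[Title/Abstract]) '
    'AND "retracted publication"[Publication Type]'
)

resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 100},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print(f"Matching records: {result['count']}")
print("First PMIDs:", ", ".join(result["idlist"][:10]))
```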

Across the 282 systematic reviews analyzed, the average time between publication and retraction was approximately 23 months, and nearly half had been retracted within the past four years. The most common reasons for retraction were falsification of peer review and the use of unreliable data.

Editors and publication officials were the most frequent retractors or requesters of retractions. More than 86% of the retracted non-Cochrane systematic reviews were published in journals with an impact factor above two and had critically low methodological quality.

Dr. Leila Keikha and her colleagues found a significant relationship between the reasons for retraction and methodological quality (p < 0.05). According to the results, the use of anti-plagiarism software and adherence to COPE guidelines can shorten the time to retraction.

In some countries, strict standards for researcher promotion increase the risk of misconduct. To avoid scientific errors and improve the quality of systematic reviews and meta-analyses (SRs/MAs), it is recommended that a protocol registry and retraction guidelines be created in each journal that publishes SRs/MAs.

Disparities in Retracted Literature Coverage

The study conducted by José Luis Ortega from the Institute of Advanced Social Studies (IESA-CSIC) and Lorena Delgado-Quirós from the Joint Research Unit on Knowledge Transfer and Innovation (UCO-CSIC) in Córdoba, Spain, compared the coverage and overlap of retracted publications, retraction notices, and withdrawals across seven major academic databases.

The objective was to identify discrepancies, determine their causes, and select the best database for accurately capturing retracted literature. To this end, they searched the seven databases for all retracted publications, retraction notices, and withdrawals published since 2000, using each product’s web search interface exclusively, except in the cases of OpenAlex and Scilit.

The researchers’ findings show that non-selective databases, such as Dimensions, OpenAlex, Scilit, and The Lens, index more retracted literature than databases that rely on selective indexing of publications, such as PubMed, Scopus, and Web of Science (WoS).

The main factors explaining these discrepancies are the indexing of withdrawals and conference papers. Additionally, the high coverage of OpenAlex and Scilit could be due to the incorrect labeling of retracted documents in Scopus, Dimensions, and The Lens. OpenAlex, Scilit, and WoS jointly cover 99% of the sample.

The study suggests that research on retracted literature should consult more than one source, and that academic databases would do well to identify and label this literature accurately.

Several conclusions can be drawn:

Significant disparities exist in the coverage and identification of retracted articles among databases. OpenAlex and Scilit retrieve the most retracted and withdrawn literature, while PubMed, Scopus, and Web of Science (WoS) collect the lowest percentage. According to the researchers, these differences can be attributed to two leading causes.

Firstly, how these products obtain their bibliographic metadata influences their coverage of retracted literature. The incomplete inclusion of withdrawals in PubMed, Scopus, and WoS largely explains the coverage discrepancies between publication selection-based databases and third-party source-based databases like Dimensions, OpenAlex, Scilit, and The Lens. Another crucial factor explaining these disparities is the indexing of conference papers.

Secondly, the way each database labels these documents affects coverage. The significant discrepancies among OpenAlex, Scilit, Dimensions, Scopus, and The Lens are thus the result of inadequate identification of retracted publications, which hindered proper retrieval.

Ortega and Delgado-Quirós conclude that any study on retracted documents needs to use more than one source to obtain a reliable picture of these publications due to the coverage gaps among databases. The findings indicate that 99% of the sample could be recovered using only three databases (OpenAlex, Scilit, and WoS) and 91% if only OpenAlex and WoS were combined.
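Acting on that advice is straightforward, since several of these databases offer free public APIs. The minimal sketch below, which assumes both services’ current response formats, cross-checks a single DOI against two of them: OpenAlex, whose work records carry an is_retracted flag, and Crossref, where retraction notices are deposited as “update” records pointing at the original DOI. The DOI shown is a placeholder.

```python
import requests

def openalex_retracted(doi: str) -> bool:
    """True if OpenAlex's record for this DOI carries is_retracted."""
    r = requests.get(f"https://api.openalex.org/works/doi:{doi}", timeout=30)
    if r.status_code == 404:
        return False  # DOI not indexed by OpenAlex
    r.raise_for_status()
    return bool(r.json().get("is_retracted"))

def crossref_retracted(doi: str) -> bool:
    """True if Crossref holds a retraction notice updating this DOI."""
    r = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"update-type:retraction,updates:{doi}", "rows": 0},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["message"]["total-results"] > 0

doi = "10.1234/placeholder"  # hypothetical DOI; substitute a real one
flags = {"OpenAlex": openalex_retracted(doi), "Crossref": crossref_retracted(doi)}
print(flags)
print("Flagged by at least one source:", any(flags.values()))
```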

Strategies to Verify Retractions

One of the fundamental problems in identifying and excluding retracted articles is inconsistency in retraction-related metadata. Even within a single database, an article may be marked as retracted in a wide variety of ways, and some databases lack consistent policies for marking these articles at all, leading to confusion and the spread of incorrect information.

As a result, it is essential for data providers to increase transparency and consistency in metadata related to retractions to improve the reliability of scientific information.

There is a consensus that adherence to the Committee on Publication Ethics (COPE) guidelines is fundamental to ensuring the integrity of retraction notices. However, less than half of retraction notices appear to comply with COPE requirements, and even fewer meet the criteria proposed by Retraction Watch. This lack of compliance is often attributed to barriers editors face that remain poorly understood, suggesting the need for more education and resources to help them follow the guidelines more effectively.

Fortunately, there are resources that researchers and practitioners alike can use to verify the retraction status of articles. Tools like BrowZine and LibKey alert users to an article’s retraction status, while citation managers like EndNote and Zotero can also flag retracted articles. However, no single strategy is foolproof, so it is recommended to use multiple tools to ensure that the information obtained is accurate and up to date.
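For screening an entire bibliography rather than a single article, the same kind of lookup can be scripted. A minimal batch sketch, assuming a plain-text file of DOIs (one per line, the hypothetical doi_list.txt below) and relying on OpenAlex’s is_retracted flag; since it consults a single source, it should complement, not replace, the tools above:

```python
import requests

# Hypothetical input: one DOI per line, e.g. exported from a citation manager.
with open("doi_list.txt") as fh:
    dois = [line.strip() for line in fh if line.strip()]

for doi in dois:
    r = requests.get(f"https://api.openalex.org/works/doi:{doi}", timeout=30)
    if r.status_code == 404:
        print(f"{doi}: not found in OpenAlex")
        continue
    r.raise_for_status()
    status = "RETRACTED" if r.json().get("is_retracted") else "ok"
    print(f"{doi}: {status}")
```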

Some Conclusions

Despite the challenges ahead, interest in addressing the problem of improperly marked retractions has increased. Solutions span the entire academic publishing industry, including collaboration among libraries, repositories, researchers, and other related stakeholders.

This collaborative approach seems essential for developing and adopting consistent policies and tools that improve the quality of retraction notifications.

Studies on the topic highlight the increasingly urgent need to improve the identification and notification of retracted publications to prevent the spread of incorrect information.

Inconsistencies in indexing, variability in coverage, inaccurate labeling, incomplete inclusion of retractions, divergent retraction procedures, reliance on third-party sources, and the lack of shared standards all hinder the effective identification of these publications. This points to the need to improve identification and labeling practices in academic databases.

Collaboration among all stakeholders and the adoption of consistent policies and tools are essential to improving the quality of retraction notifications in the scientific literature. Only through a concerted effort can the integrity of scientific research be maintained and evidence-based decisions rest on sound rather than flawed science.


Editor’s Note

In the field of tobacco harm reduction, a notable example of flawed science that has negatively influenced vaping-related policies, both in data handling and in the interpretation and dissemination of results, is the work of Dr. Stanton Glantz.
Glantz, a prominent American researcher in the field of tobacco control, has had several articles retracted.
One of the most notorious cases is the 2019 study published in the Journal of the American Heart Association (JAHA), which linked the use of electronic cigarettes to heart attacks. This article was retracted in 2020 after other scientists pointed out that some heart attacks in the analysis occurred before the participants began using electronic cigarettes. The editors considered the study’s conclusions unreliable due to methodological issues and improper data handling.
Another retracted article was a 2018 study that also linked the use of electronic cigarettes to heart attacks. A follow-up analysis published in the American Journal of Preventive Medicine refuted this, finding flaws in the original methodology and data interpretation.
