“GHOULING” (TARGETED STUDY RETRACTION) IS BIASING SCIENTIFIC LITERATURE


I offer this explanation of the process of “Ghouling” from the perspective of an experienced Editor-in-Chief of two journals (Cancer Informatics; Science, Public Health Policy & the Law), Associate Editor of one (Applied Bioinformatics), member of a number of Editorial Boards (including Cancer Research until last year), and peer reviewer of countless studies and reviews in many journals, some in which I have published and some in which I have not.

FROM TIME TO TIME, imposters posing as scientists publish fraudulent and faked studies. From time to time, imposters posing as well-meaning unsolicited reviewers of published studies visit, or rather, haunt journals to specifically target studies whose results they wish had not been published. The process, accidentally called “ghouling” in a recent Unbreaking Science podcast episode (see below), is a corruption of the processes involved in rational discourse in Science.

Fraud in Science is a destructive process: it pollutes the scientific literature with results that are unfounded. It also leads to a loss of trust in Science as a discipline. The antidote, of course, is retraction of outright fraud, such as the retraction of the study that showed that the antiviral hydroxychloroquine was not effective against SARS-CoV-2 infection. Three of the authors retracted the study after it was realized that the reported results were not supported by the data. Other studies were found by Dr. Meryl Nass to have used lethal doses of HCQ. Those studies, along with a misinterpretation of the available knowledge base by Dr. Anthony Fauci, helped reduce enthusiasm for prophylactic and therapeutic use of hydroxychloroquine. Fauci should have celebrated the Henry Ford study that found that HCQ plus corticosteroids reduced the fatality rate from coronavirus infection by 50%; instead, he complained that because patients in the study received both hydroxychloroquine and corticosteroids, one could not know which factor was responsible. Along with the rest of the studies on HCQ, we now know that the balance of evidence strongly and overwhelmingly supports early treatment with HCQ (see: c19study.com). Fauci also cynically dismissed the study as merely “peer-reviewed”, because, he implied, “anything” can pass peer review.

Taken at face value, Fauci’s position is a stunning indictment of the process of peer review. The process of scientific peer review provides a form of double, triple, or even further jeopardy for scientific studies. When conducted appropriately, it is a blinded review process, wherein a scientist or team of researchers submits their manuscript to a journal. The journal’s Editor-in-Chief or submissions managers then select reviewers from among the peers of the authors to review the paper. These peers should not have any conflicts of interest – they should not have collaborated with any of the authors of the study, and they should not have any financial conflicts of interest that would cause them to want to “turf” the study. Usually, two or three reviewers are contacted by the journal, and the reviewers agree to read and critique the manuscript. Peer reviewers are not compensated for their efforts by the journal, although some open access journals offer discounts on publication fees to their reviewers in proportion to their reviewing effort. Other publishers offer reviewers access to their content for a period of time related to the number of studies they have reviewed – regardless of their recommendation that the journal publish, or not publish, the studies they review.

After the reviewers have provided their recommendation (usually “Accept”, “Accept w/minor revision”, “Accept w/major revision”, or “Reject”), the first round of review is completed, and pending the Editor’s or Editorial Board members’ decision, the manuscript may be published, sent back to the author for revisions, or rejected.

The Greater Scientific Community Reacts

Once the manuscript is published, the study is subjected to the scrutiny of an unlimited number of readers, all of whom may agree or disagree with the results of the study or with their interpretation. Since replication is key to science, skeptics can conduct a replicate of the study to determine whether their own results match. That’s the ideal response – i.e., “Science Begets Science”. Then, a body of results from independent scientists can be assessed with systematic reviews and meta-analyses to determine whether an overall trend in the studies supports a particular conclusion.

If suspected fraud is spotted in a scientific publication, those spotting it sometimes contact the journal, and, usually, the journal will contact the author(s) with the concerns. If the author(s) cannot provide a satisfactory explanation (mistakes do occur!), the author may issue an Erratum or a Corrigendum. If the error is fatal, they may even pull their own study. An author-retracted study is an admission of error; it can even be celebrated as a mark of objectivity. If, however, mistakes cannot explain the result(s) in question, and especially if the evidence lines up in a way that is only consistent with deception (such as made-up data, digitally altered images of gels, or plagiarism), the journal may retract the study in question. In many cases, all of the perpetrator’s work naturally comes under scrutiny, and repeat offenders have been found.

It’s generally understood that the interpretation should stem logically from the results, and the results from the methods… but this is not always so. Differing interpretations of the results are not unusual, so authors are usually careful not to “overdraw” their conclusions. In the rare event that the authors offer a clearly unwarranted interpretation missed by peer review, it is not unusual for a reader of the study to send a communication to the Editor-in-Chief. What happens next determines the integrity of a journal and its editorial board.

If the journal supports rational discourse, the EIC can publish the communication or statement of concern from the reader, and then invite the author(s) to reply. In every Western body of law, the accused has the right to know who their accuser is. Otherwise, unfounded accusations from anonymous sources could lead to a never-ending battle of take-downs by factions in society.

In the even rarer event that a reader spots a methodological flaw, the study can also be further scrutinized by the authors themselves. Sometimes, the methodological flaw, if rectified, does not alter the conclusions of the study. However, if the error cannot be rectified and is fatal to the conclusions previously drawn, the authors might then choose to retract their study. An example of this was the retraction of a study from Stanford University that reported auto-antibodies in people who developed narcolepsy following receipt of the H1N1 flu vaccine in Europe, implying that the vaccine caused autoimmunity against the orexin protein. The methodological issue had been communicated in a published Note of Concern; the authors agreed and retracted their study, and the authors of the Note of Concern then retracted their Note of Concern in kind. The study authors subsequently published a new study reporting autoantibodies not to the orexin protein but, instead, to the orexin protein receptor, demonstrating the high plausibility of molecular mimicry as an explanation for the post-vaccination onset of narcolepsy.

If the methodological flaw cannot be reconciled, and the author does not retract, or provides a counter-argument that is not acceptable to the EIC, the EIC can then decide whether the issue is significant enough to require that action be taken by the journal. This is a serious step not to be taken lightly – such retractions have implications for the authors’ reputations and, as was seen in the case of Dr. Andrew Wakefield, can be career-terminating.

https://www.tandfonline.com/doi/full/10.1080/09581596.2021.1878109

The penalties for retraction do not merely involve the loss of a single study; the stain on one’s reputation can also lead to professional “shunning” in a manner that could only work to prevent that author from being successfully funded to conduct research in the same area, from being invited to collaborations, or from successfully recruiting collaborators. Journal-enforced retractions create a spectator sport both within and outside of science, and have even spawned an industry (e.g., the website “Retraction Watch” provides all of the scurrilous and juicy details of retractions), and the mainstream media of course publish the names of scientists found “guilty” of fraud by the act of journal retraction. Given that replication and meta-analyses are important for the assessment of the state of a particular hypothesis, lack of replication of the results by another, independent research team is not generally considered to be grounds for retraction, or evidence of fraud. As discussed, however, outright and obvious fraud is grounds for retraction; given the consequences of retraction, the level of evidence needed to prove fraud should be very high.

But what if the fraud is the targeted retraction effort?

Enter the Ghouls

One thing I’ve learned in a half-century of living and learning is that if society establishes an organized process, cheats will enter and game the system. The legitimate process of rational discourse is not only tried and true; the product of adversarial but rational discourse is knowledge itself. The high value of open debate toward achieving the goal of learning was mastered by the ancients, and is now all but forgotten by some struggling to survive in an academic world filled with competition, spite, and institution-originated perverse incentives. All, however, is not lost; for an excellent counterexample, see “Adversarial alignment enables competing models to engage in cooperative theory building toward cumulative science” (Ellemers et al., 2020, PNAS).

In this mix of the good, the bad, and the ugly enters the Ghoul. The Ghoul is a professional knowledge hit-man: one who travels from journal to journal, targeting only studies (and study after study) that report results the Ghoul wishes had not been published. The Ghoul haunts Editors-in-Chief with notices of concern with no intention of ever attaching their name to the statements of concern; they feign fear of reprisal from the presumptively evil author or authors of the journal articles they target; they usually have no interest in the content of the journal before or after the delivery of their statement of concern; they perseverate in working to effect retraction of studies only in specific areas that may not be in their own area of expertise; they do not concern themselves with clear instances of fraud in studies that support their particular position on a topic; and, finally, they refuse the Editor-in-Chief’s requests to publish their concern and allow the author(s) to provide a response, which would let the rest of the greater Scientific community see both sides of any argument and make up their own minds on the matter discussed.

To do otherwise is to bias future meta-analyses.

How to Prevent Ghouling

Authors who publish on controversial topics are in total control of their manuscript until the moment it is accepted. They can pull their submitted manuscript at any time – something they should not do merely because the review process is going poorly (but I’m sure some do). In their submission letter, they can inform the Editor-in-Chief of the identity or likely identity of their own particular Ghoul; they can even predict that their article will be targeted – post-publication – with uncharacteristically intense and unwarranted scrutiny, and inform the EIC that, should such events transpire, the author(s) would like to challenge the EIC and the Editorial Board to invite the assailant to submit their concerns for peer review in the journal (or, at least, Editorial Board review), and to inform them that the victims of the assault will be invited to respond to the concerns in their own published response.

If the EIC then does receive a nastygram from a Ghoul, they, and their editorial board, are better able to interpret the attack in the context of a conflict of interest, harassment, or, as I accidentally quipped, the “Ghouling” behavior of someone bent on biasing the scientific literature in a particular field of inquiry in their favor. Ghouling begets retraction bias. Preventing Ghouling is therefore very important. Ghouls are the book-burners of modern times; their goal is to discredit individuals who may well be among the most objective scientists of our time.

This particular tactic – appealing to the higher mind of the EIC and the Editorial Board toward rational discourse – won’t make an article retraction-proof, but it will at least allow the Greater Scientific Community to witness the same small group of people – or person – targeting studies in journal after journal, and thus gauge whether that person has any agendas or conflicts of interest.

After all, the light of day is the best way to scare off the Ghouls.

RETRACTION WARS ON “UNBREAKING SCIENCE”
Photo by Katerina Holmes on Pexels.com

This article first appeared on jameslyonsweiler.com.
