‘People don’t want to talk’: The taboo ‘zombie’ problem in medical science

By Angus Dalton

Examine, a free weekly newsletter covering science with a sceptical, evidence-based eye, is sent every Tuesday. You’re reading an excerpt – sign up to get the whole newsletter in your inbox.

Medical research has a major problem: an alarmingly high number of trials are based on fake, fraudulent or misinterpreted data.

Professor Ben Mol from Melbourne’s Monash University is a world-leading figure in busting dodgy research findings. Credit: Wayne Taylor

Research misconduct sleuths call them “zombie” studies. They look like real research papers but they’re rotten to the core. And when these studies go on to influence clinical guidelines, that is, how patients are treated in hospitals and doctors’ rooms, they can be dangerous.

Professor Ben Mol, head of the Evidence-based Women’s Health Care Research Group at Monash University, is a professional zombie hunter. For years, he has warned that between 20 and 30 per cent of medical trials that inform clinical guidelines aren’t trustworthy.

“I’m surprised by the limited response from people in my field on this issue,” he says. “It’s a topic people don’t want to talk about.”

The peer review process is designed to ensure the validity and quality of findings, but it’s built on the assumption that data is legitimate.

Science relies on an honour system whereby researchers trust that colleagues have actually carried out the trials they describe in papers, and that the resulting data was collected with rigorous attention to detail.

But too often, once findings are queried, researchers can’t defend their conclusions. Figures such as former BMJ editor Richard Smith and Anaesthesia editor John Carlisle argue it’s time to assume all papers are flawed or fraudulent until proven otherwise. The trust has run out.

“I think we have been naive for many years on this,” Mol says. “We are the Olympic Games without any doping checks.”

Now, however, Mol has presented a compelling solution.

How bad science gets into the clinic

Untrustworthy papers may be the result of scientists misinterpreting their data or deliberately faking or plagiarising their numbers. Many of these “zombie” papers emerge from Egypt, Iran, India and China and usually crop up in lower-quality journals.

The problem escalates when these poor-quality papers are laundered through systematic reviews or meta-analyses in prestigious journals. These studies aggregate hundreds of papers to produce gold-standard scientific evidence for whether a particular treatment works.

Often papers with dodgy data are excluded from systematic reviews. But many slip through and go on to inform clinical guidelines.

My colleague Liam Mannix has written about an example of this with the hormone progesterone. Official guidelines held that the hormone could reduce the risk of pre-term birth in women with a shortened cervix.

But those guidelines were based on a meta-analysis largely informed by a paper from Egypt that was eventually retracted due to concerns about the underlying data. When this paper was struck from the meta-analysis, the results reversed to suggest progesterone had no preventative effect.
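
To see how a single dominant study can flip a pooled result, here is a rough, purely illustrative sketch of inverse-variance (fixed-effect) pooling in Python. The trial names and effect sizes are invented for this example; they are not the progesterone data or the analysis described above.

```python
import math

# Illustrative only: an inverse-variance (fixed-effect) pooling sketch.
# The trial names and numbers below are invented to show the mechanism;
# they are NOT the progesterone data or the RIGID authors' analysis.
studies = {
    "trial_A": (-0.05, 0.10),        # (log risk ratio, standard error)
    "trial_B": (0.02, 0.12),
    "trial_C": (0.04, 0.15),
    "suspect_trial": (-0.60, 0.08),  # implausibly large effect, small SE
}

def pooled_effect(data):
    """Inverse-variance weighted average of the study effect sizes."""
    weights = {name: 1 / se ** 2 for name, (_, se) in data.items()}
    total = sum(weights.values())
    estimate = sum(weights[n] * eff for n, (eff, _) in data.items()) / total
    return estimate, math.sqrt(1 / total)

with_suspect, _ = pooled_effect(studies)
without_suspect, _ = pooled_effect(
    {name: v for name, v in studies.items() if name != "suspect_trial"}
)

# A strongly negative pooled log risk ratio looks like a real benefit;
# dropping the single suspect trial pulls it back towards zero.
print(f"Pooled effect with the suspect trial:    {with_suspect:+.3f}")
print(f"Pooled effect without the suspect trial: {without_suspect:+.3f}")
```

In this toy example, the pooled effect swings from a clear apparent benefit to essentially nothing once the suspect trial is removed – the same kind of reversal described above.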

There’s a litany of other examples where discounting dodgy data can fundamentally alter the evidence that shapes clinical guidelines. That’s why, in the Lancet journal eClinicalMedicine, Mol and his colleagues have reported a new way to weed out bad science before it makes it to the clinic.

Holding back the horde

The new tool is called the Research Integrity in Guidelines and evIDence synthesis (RIGID) framework. It mightn’t sound sexy, but it’s like a barbed-wire fence that can hold back the zombie horde.

The world-first framework lays out a series of steps researchers can take when conducting a meta-analysis or writing medical guidelines to exclude dodgy data and untrustworthy findings. It involves two researchers screening articles for red flags.

“You can look at biologically implausible findings like very high success rates of treatments, very big differences between treatments, unfeasible birth weights. You can look at statistical errors,” says Mol.

“You can look at strange features in the data, only using rounded numbers, only using even numbers. There are studies where out of dozens of pairs of numbers, everything is even. That doesn’t happen by chance.”
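
To put a number on that intuition, here is a minimal, purely illustrative sketch in Python. The figures are invented, and this is not the RIGID screening code; it simply shows why a long run of all-even values is so unlikely to arise by chance.

```python
# Illustrative only: the kind of "strange features" check Mol describes.
# Assumption: genuine measurements should end in even and odd digits
# about equally often, so a long all-even run is a statistical red flag.
# The values below are invented purely to show the arithmetic.
reported_values = [124, 86, 212, 190, 78, 64, 148, 230, 102, 96,
                   58, 174, 220, 88, 146, 118, 204, 162, 70, 134]

all_even = all(value % 2 == 0 for value in reported_values)

# If even and odd endings were equally likely, the chance that every one
# of n independent values is even is 0.5 ** n -- about 1 in a million
# for 20 values, and vanishingly small for dozens of pairs.
chance = 0.5 ** len(reported_values)

print(f"All {len(reported_values)} values even: {all_even}")
print(f"Probability of that happening by chance: {chance:.1e}")
```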

A panel decides if a paper has a medium to high risk of being untrustworthy. If that’s the case, the RIGID reviewers put their concerns to the paper’s authors. They’re often met with stony silence. If authors cannot address the concerns or provide their raw data, the paper is scrapped from informing guidelines.

The RIGID framework has already been put to use, and the results are shocking.

In 2023, researchers applied RIGID to the International Evidence-based Guidelines for Polycystic Ovary Syndrome (PCOS), a long-misunderstood and misdiagnosed syndrome that affects more than 1 in 10 women. Given how often the condition has been dismissed and mismanaged, it was critical the guidelines were based on the best possible evidence.

In that case, RIGID discounted 45 per cent of papers used to inform the health guidelines.

That’s a shockingly high number. Those potentially untrustworthy papers might have completely skewed the guidelines.

Imagine, Mol says, if it emerged that almost half of the maintenance reports of a major airline were faked. No one would sit around waiting for a plane to crash; there would be swift action, and the airline’s leadership would be sacked.

Breaking the taboo

With the publication of the RIGID guidelines in a high-impact journal and the PCOS example, Mol hopes this “taboo” subject will be talked about more in scientific circles. Many scientists are reluctant to speak up.

“It brings a lot of negative energy. It brings risk. It doesn’t bring you any positivity in terms of extra credits for your academic career like high-impact publications,” Mol says of calling out dodgy research.

But the honour system of science is broken, and it’s time to clean it up. Mol hopes many other medical researchers and journals take up the RIGID tool.

“There are many systems in society where trust is not enough. Paying tax, sports, traffic checks, speed cameras. If you don’t follow the rules there, then you’re at risk of being caught.

“That never happens in medicine or in scientific research.”
