From The Atlantic website
In 2001, rumors were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis.
At the University of Ioannina medical school’s teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she’d like to try to prove whether they were true - he seemed to be almost daring her.
She accepted the challenge and, with the professor’s and other colleagues’ help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names.
Good thing she accepted, because the study had actually been a sort of audition.
The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.
The group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama.
The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.
One of the researchers noted that Salanti’s study - an analysis of drug-company trials that Georgia Salanti, a biostatistician on the team, had presented earlier in the meeting - didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”).
Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.
Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce.
Everyone nodded.
Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?
He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies - conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain - is misleading, exaggerated, and often flat-out wrong.
He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences.
Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change - or even to publicly admitting that there’s a problem.
The oracle at nearby Dodona was said to have issued pronouncements to priests through the rustling of a sacred oak tree.
Today, a different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy.
Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack.
And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.
Where were the hard data that would back up physicians’ treatment decisions?
There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases. A new “evidence-based medicine” movement was just starting to gather force, and Ioannidis decided to throw himself into it, working first with prominent researchers at Tufts University and then taking positions at Johns Hopkins University and the National Institutes of Health.
He was unusually well armed: he had been a math prodigy of near-celebrity status in high school in Greece, and had followed his parents, who were both physician-researchers, into medicine.
Now he’d have a chance to combine math and medicine by applying rigorous statistical analysis to what seemed a surprisingly sloppy field.
It didn’t turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings.
Of course, medical-science “never minds” are hardly secret.
And they sometimes make headlines, as when peer-reviewed studies come to opposite conclusions on the same question.
But beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research.
“Randomized controlled trials,” which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time.
Baffled, he started looking for the specific ways in which studies were going wrong.
And before long he discovered that the range of errors being committed was astonishing. The array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was.
Researchers headed into their studies wanting certain results - and, lo and behold, they were getting them.
We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously.
Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research.
To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings.
But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter.
The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data.
Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news?
Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises - after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.
He pulled together his team, which remains largely intact today, and started chipping away at the problem in a series of papers that pointed out specific ways certain studies were getting misleading results. Other meta-researchers were also starting to spotlight disturbingly high rates of error in the medical literature.
But Ioannidis wanted to get the big picture across, and to do so with solid data, clear reasoning, and good statistical analysis.
The project dragged on, until finally he retreated to the tiny island of Sikinos in the Aegean Sea, where he drew inspiration from the relatively primitive surroundings and the intellectual traditions they recalled.
In 2005, he unleashed two papers that challenged the foundations of medical research.
In the first of the two papers, published in the journal PLoS Medicine, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time.
Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right.
His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
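The arithmetic behind estimates like these can be made concrete. Below is a rough sketch of the positive-predictive-value calculation at the heart of the PLoS Medicine paper - the estimated probability that a claimed finding is true, given the pre-study odds that the hypothesis is real, the study’s statistical power, the significance threshold, and the amount of bias. The specific parameter values used here are purely illustrative assumptions, not figures taken from the paper.

```python
# Sketch of the positive-predictive-value calculation described in
# Ioannidis's 2005 PLoS Medicine paper; parameter values below are
# illustrative assumptions, not numbers from the paper itself.

def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Estimated probability that a claimed (statistically significant) finding is true.

    R     -- pre-study odds that the tested relationship is real
    alpha -- significance threshold (type I error rate)
    beta  -- type II error rate (1 minus statistical power)
    u     -- fraction of analyses whose results are distorted by bias
    """
    true_positives = (1 - beta) * R + u * beta * R
    all_claimed_positives = R + alpha - beta * R + u * (1 - alpha + beta * R)
    return true_positives / all_claimed_positives

# A well-powered trial of a plausible hypothesis with modest bias...
print(round(ppv(R=1.0, beta=0.20, u=0.10), 2))   # ~0.85: mostly true
# ...versus an underpowered, exploratory study of a long-shot idea with more bias.
print(round(ppv(R=0.05, beta=0.50, u=0.30), 2))  # ~0.09: mostly false
```

Under assumptions like these, the qualitative message matches the rates quoted above: the less plausible the hypothesis, the weaker the study design, and the greater the bias, the more likely a published "positive" finding is to be wrong.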
The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process - in which journals ask researchers to help decide which studies to publish - to suppress opposing views.
Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal?
The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals.
These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes.
Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid.
Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.
That article was published in the Journal of the American Medical Association.
Considering his willingness, even eagerness, to slap the face of the medical-research community, Ioannidis comes off as thoughtful, upbeat, and deeply civil.
He’s a careful listener, and his frequent grin and semi-apologetic chuckle can make the sharp prodding of his arguments seem almost good-natured. He is as quick, if not quicker, to question his own motives and competence as anyone else’s.
A neat and compact 45-year-old with a trim mustache, he presents as a sort of dashing nerd - Giancarlo Giannini with a bit of Mr. Bean.
But Ioannidis points out that obviously questionable findings cram the pages of top medical journals, not to mention the morning headlines.
Consider, he says, the endless stream of results from nutritional studies in which researchers follow thousands of people for some number of years, tracking what they eat and what supplements they take, and how their health changes over the course of the study.
In a single week this fall, Google’s news page offered dozens of headlines touting findings from just such studies.
But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings?
Ioannidis suggests a simple approach: ignore them all.
But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you.
Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
(For example, though the vast majority of studies of overweight individuals link excess weight to ill health, the longest of them haven’t convincingly shown that overweight people are likely to die sooner, and a few of them have seemingly demonstrated that moderately overweight people are likely to live longer.)
And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge).
Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller.
And so it goes for all medical studies, he says.
Indeed, nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest.
The exciting links between genes and various diseases and traits that are relentlessly hyped in the press for heralding miraculous around-the-corner treatments for everything from colon cancer to schizophrenia have in the past proved so vulnerable to error and distortion, Ioannidis has found, that in some cases you’d have done about as well by throwing darts at a chart of the genome.
(These studies seem to have improved somewhat in recent years, but whether they will hold up or be useful in treatment are still open questions.)
Vioxx, Zelnorm, and Baycol were among the widely prescribed drugs found to be safe and effective in large randomized controlled trials before the drugs were yanked from the market as unsafe or not so effective, or both.
But of course it’s that very extravagance of claim (one large randomized controlled trial even proved that secret prayer by unknown parties can save the lives of heart-surgery patients, while another proved that secret prayer can harm them) that helps get these findings into journals and then into our treatments and lifestyles, especially when the claim builds on impressive-sounding evidence.
Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it.
Nature, the grande dame of science journals, acknowledged as much in a 2006 editorial.
What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs - as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.
The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results - except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed - in one case for at least 12 years after the results were discredited.
Yet much, perhaps even most, of what doctors do has never been formally put to the test in credible studies, given that the need to do so became obvious to the field only in the 1990s, leaving it playing catch-up with a century or more of non-evidence-based medicine, and contributing to Ioannidis’s shockingly high estimate of the degree to which medical knowledge is flawed.
That we’re not routinely made seriously ill by this shortfall, he argues, is due largely to the fact that most medical interventions and advice don’t address life-and-death situations, but rather aim to leave us marginally healthier or less unhealthy, so we usually neither gain nor risk all that much.
Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).
And needless to say, things only get worse when it comes to the pop expertise that endlessly spews at us from diet, relationship, investment, and parenting gurus and pundits. But we expect more of scientists, and especially of medical scientists, given that we believe we are staking our lives on their results. The public hardly recognizes how bad a bet this is.
The medical community itself might still be largely oblivious to the scope of the problem if Ioannidis hadn’t forced a confrontation when he published his studies in 2005.
David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, few of his colleagues had even heard of it.
Ioannidis offers a theory for the relatively calm reception.
In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it - it was something everyone else did.
Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.
Even so, in the weeks before I visited him he had addressed an AIDS conference in San Francisco, the European Society for Clinical Investigation, Harvard’s School of Public Health, and the medical schools at Stanford and Tufts.
But his bigger worry, he says, is that while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job.
He fears he won’t in the end have done much to improve anyone’s health.
As helter-skelter as the University of Ioannina medical school campus looks, the hospital abutting it seems reassuringly stolid.
Athina Tatsioni has offered to take me on a tour of the facility, but we make it only as far as the entrance when she is greeted - accosted, really - by a worried-looking older woman.
Tatsioni, normally a bit reserved, is warm and animated with the woman, and the two have a brief but intense conversation before embracing and saying goodbye.
Tatsioni explains to me that the woman and her husband were patients of hers years ago; now the husband has been admitted to the hospital with abdominal pains, and Tatsioni has promised she’ll stop by his room later to say hello. Recalling the appendicitis story, I prod a bit, and she confesses she plans to do her own exam.
She needs to be circumspect, though, so she won’t appear to be second-guessing the other doctors. Tatsioni doesn’t so much fear that someone will carve out the man’s healthy appendix.
Rather, she’s concerned that, like many patients, he’ll end up with prescriptions for multiple drugs that will do little to help him, and may well harm him.
Of course, the doctors have all been trained to order these tests, she notes, and doing so is a lot quicker than a long bedside chat.
They’re also trained to ply the patient with whatever drugs might help whack any errant test numbers back into line. What they’re not trained to do is to go back and look at the research papers that helped make these drugs the standard of care.
But not only is checking out the research another time-consuming task, patients often don’t even like it when they’re taken off their drugs, she explains; they find their prescriptions reassuring.
Knowing that some of his researchers are spending more than half their time seeing patients makes Ioannidis feel the team is better positioned to bridge the gap between research and clinical practice; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians. It’s not that he envisions doctors making all their decisions based solely on solid evidence - there’s simply too much complexity in patient treatment to pin down every situation with a great study.
In fact, the question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community.
Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do - not to mention how public disenchantment with medicine could affect research funding.
Ioannidis dismisses these concerns.
We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right.
That’s because being wrong in science is fine, and even necessary - as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough.
But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.