The famous study that convinced us fat is good wasn’t very good science

Journals are mostly interested in studies with new and striking results—results that go against the conventional wisdom, even if that wisdom is correct.

In the world of research, the gold standard for reliability is the meta-analysis—a study that brings together multiple smaller studies into something that, in theory at least, has lots more statistical power than the components it was built from. Journalists and policy makers gravitate to them because they seem more reliable, drawing as they do on all the research, not just a single experiment.
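
To make the pooling concrete, here is a minimal sketch in Python of the standard fixed-effect, inverse-variance approach. The study names, effect sizes, and standard errors below are invented for illustration, not drawn from any real meta-analysis:

```python
import math

# Invented studies: each reports an effect estimate (say, a log
# relative risk) and its standard error.
studies = [
    ("Study A", 0.45, 0.20),
    ("Study B", 0.30, 0.15),
    ("Study C", 0.52, 0.25),
]

# Fixed-effect pooling: weight each study by the inverse of its
# variance, so more precise studies count for more.
weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
# The pooled standard error (about 0.108) is smaller than any single
# study's, which is exactly the extra statistical power on offer.
```

That smaller standard error is the whole appeal. It is also the danger: the formula happily pools whatever you feed it.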

But there’s a problem. As a provocative article recently published in the Journal of the American Medical Association (JAMA) points out, many meta-analyses of nutrition are falling far short of their promise. “When applied to studies conducted with similar populations and methods, meta-analyses can be useful,” the authors write. “However, many published meta-analyses have combined the findings of studies that differ in important ways, prompting [psychologist Hans] Eysenck to complain that they have mixed apples and oranges—and sometimes ‘apples, lice, and killer whales’—yielding meaningless conclusions.”

The result: Far from increasing statistical power, these studies are reducing it or causing real correlations to disappear.

One of the meta-analyses they discuss was a 2014 study examining the connection between saturated fat and coronary artery disease. That study was important: It was cited in Mark Bittman’s “Butter Is Back” column for The New York Times, Time magazine’s “Eat Butter” cover story, and the report of the 2015 National Dietary Guidelines Advisory Committee. It helped reshape the discussion over fat in our diets.

But the analysis had some big problems, says Dr. Neal Barnard, adjunct associate professor of medicine at the George Washington University School of Medicine and Health Sciences in Washington, D.C., and an author of the JAMA article.

One had to do with the mix of studies selected. One of the studies used in the meta-analysis was the Oxford Vegetarian Study, which included vegans, ovo-lacto vegetarians, fish eaters, and meat eaters. That broad range of participants gave the study a broad range of saturated fat intake: from 6 or 7 percent of total calories among the vegans to approximately twice that much in the other groups. The findings were striking: People in the top third for saturated fat intake were almost three times as likely as those in the bottom third to die of ischemic heart disease (the term for heart problems caused by narrowed heart arteries).

But the meta-analysis combined those data with data from the Malmö Diet and Cancer Study, conducted in Sweden, which included no one at the lowest levels of saturated fat intake. Instead, the range of saturated fat consumed ran from 13 percent to more than 22 percent of total calories—that is, it began above Sweden’s recommended maximum daily intake. That study found no association between saturated fat intake and cardiovascular disease.

Why not? Well, maybe Swedes are invulnerable to saturated fat. More likely, though, it’s because almost everyone in the Malmö study would have ranked in the top third of saturated fat consumption in the Oxford study, and almost everybody in it was experiencing elevated levels of cardiac disease. In other words, there was no healthier group to compare them to. So it’s not that there’s no association; it’s that everybody was affected. The Malmö authors themselves acknowledged that they didn’t have the evidence to test the connection.

And yet the Malmö study was folded in with the Oxford study, diluting its results with what probably amounts to a big false negative—people who had more cardiovascular disease but looked as if they didn’t, because there was no comparison group.
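
You can watch that dilution happen in a toy simulation. In the sketch below, the two intake ranges come from the studies as described above, but the risk model and every coefficient are my own invention: risk is assumed to rise with saturated fat intake up to about 13 percent of calories and then plateau, so a cohort that starts at 13 percent has no lower-risk members to serve as a comparison group.

```python
import numpy as np

rng = np.random.default_rng(0)

def cohort_correlation(lo, hi, n=5000):
    """Simulate one cohort with saturated fat intake uniform in [lo, hi]
    (percent of calories) and return the within-cohort correlation
    between intake and disease. Toy risk model: risk rises with intake
    up to 13 percent of calories, then plateaus."""
    intake = rng.uniform(lo, hi, n)
    risk = 0.05 + 0.02 * np.clip(intake, None, 13.0)
    disease = (rng.random(n) < risk).astype(float)
    return np.corrcoef(intake, disease)[0, 1]

oxford_like = cohort_correlation(6, 14)   # wide range, down to vegan intakes
malmo_like = cohort_correlation(13, 22)   # no one below 13 percent of calories

print(f"Oxford-like cohort:   r = {oxford_like:+.3f}")
print(f"Malmo-like cohort:    r = {malmo_like:+.3f}")  # ~0: a false negative
print(f"Naive pooled average: r = {(oxford_like + malmo_like) / 2:+.3f}")
```

The Oxford-like cohort shows a clear positive correlation; the Malmö-like cohort shows essentially none, not because the risk is absent but because everyone in it shares it. Average the two and the real signal is cut roughly in half.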

Choosing what studies to include in a meta-analysis is only part of the problem. There’s also the question of how to weight them properly, and how to treat studies that use different endpoints or methodologies or adjust the data in inconsistent ways.

Barnard provides an example: “Let’s say that when I’m doing my analysis I decide to put in the original studies, but I use the lowest level of control or adjustment. For instance, in the Malmö study, they reported their data in different ways. In one model, if I remember correctly, they adjusted only for age. You could see that saturated fat increased the risk of heart disease, though maybe not significantly. But if you adjust for age plus cholesterol level plus blood pressure plus body weight . . . The more things you adjust for, the more the effect starts to disappear. The point is, if you’re doing a meta-analysis, you decide which of these datasets you are going to use. If I want to make butter look safe, then I’m going to choose the most highly adjusted dataset, because if I adjust for cholesterol levels I’m going to make the butter effect go away, because butter’s effect is through raising cholesterol.”
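
Barnard’s over-adjustment point is easy to demonstrate with another toy simulation. Here the causal chain and all coefficients are assumed for illustration: saturated fat intake raises blood cholesterol, and cholesterol in turn raises disease risk. Regress risk on intake alone and the effect is plain; “adjust” by adding cholesterol to the model and the intake effect vanishes, because cholesterol is the pathway, not a confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed causal chain (all coefficients invented): saturated fat
# intake raises blood cholesterol; cholesterol raises disease risk.
intake = rng.normal(0.0, 1.0, n)
cholesterol = 0.8 * intake + rng.normal(0.0, 1.0, n)
risk = 0.5 * cholesterol + rng.normal(0.0, 1.0, n)

def ols(y, *predictors):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

unadjusted = ols(risk, intake)             # intake coefficient ~ +0.40
adjusted = ols(risk, intake, cholesterol)  # intake coefficient ~ 0

print(f"Intake effect, unadjusted:               {unadjusted[1]:+.3f}")
print(f"Intake effect, adjusted for cholesterol: {adjusted[1]:+.3f}")
```

Both regressions are arithmetically correct; they just answer different questions. A meta-analyst who quietly picks the adjusted numbers is choosing the answer to the second question while appearing to answer the first.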

Then there’s the “file drawer problem.” Not every piece of research makes it into the published record. Journals are mostly interested in studies with new and striking results—results that go against the conventional wisdom, even if that wisdom is correct. Add in the influence of industry, Barnard says, and you get a situation where the published research turns one-sided. “When you look at the early studies on the effect of dietary cholesterol on blood cholesterol they show fairly convincingly that the more cholesterol you eat, the higher your blood cholesterol will be,” he says. “But then, once that was clearly established, the government wasn’t really interested in finding it anymore, and the only people funding research on the effect of dietary cholesterol was the egg industry, and so in recent years that’s kind of been the predominant funder.”

How dominant? A lawsuit filed in early 2016 against the secretaries of agriculture and health and human services, which argued that they had permitted inappropriate industry influence on the Dietary Guidelines for Americans, claimed that the egg industry funded 29 percent of studies on dietary cholesterol in 1992—but 92 percent in 2013. And you don’t have to believe that scientists are easily bought to regard that as a problem.

Two points of interest. First, the lawsuit failed for a really depressing reason: The court found that, as a matter of law, there’s no way to determine what “inappropriate” influence means. Second, Neal Barnard is the founder and president of the organization that brought the suit, the Physicians Committee for Responsible Medicine. He’s by no means a disinterested bystander in the food wars; rather, he’s an outspoken advocate for the vegan diet and preventive medicine, and an opponent of animal testing. (You can get a sample of his opinions in this Salon interview.)

I bring this up not to discredit his conclusions about saturated fat and other dietary issues (though they are much in the minority these days), but to highlight the fact that in the JAMA article, Barnard and his coauthors have done what I believe scientists ought to do: They’ve argued not for their beliefs (though they certainly do that elsewhere) but for the integrity of the science they want to be able to use to support them. That’s too rare an approach in our “post-truth” era.

This was, of course, the week when Richard Thaler won the Nobel Prize in economics for demonstrating that our beliefs and actions are not as rational as we would like to think. We have a complicated relationship with reality that leads us to make a lot of bad decisions. We can’t do much to rejigger the neurological minefield of our brains, but we can at least make the effort to get outside our own heads and let ourselves be guided by fact. We won’t always succeed—that’s kind of what it means to be human, sadly enough. But we can try. And standing up for good science is a great place to start.

Patrick Clinton is The Counter's contributing editor. He's also a long-time journalist and educator. He edited the Chicago Reader during the politically exciting years that surrounded the election of the city’s first black mayor, Harold Washington; University Business during the early days of for-profit universities and online instruction; and Pharmaceutical Executive during a period that saw the Vioxx scandal and the ascendancy of biotech. He has written and worked as a staff editor for a variety of publications, including Chicago, Men’s Journal, and Outside (for which he ran down the answer to everyone’s most burning question about porcupines). For seven years, he taught magazine writing and editing at Northwestern University's Medill School of Journalism.