Hillary Bonhomme
Late last year, without much fanfare, the National Academies of Sciences, Engineering, and Medicine published a document entitled “Redesigning the Process for Establishing the Dietary Guidelines for Americans.” It was the kind of thing only a bureaucrat could love: more than 250 pages on roles, workflow, and analytic standards. Here’s a sample, chosen at random:
“Employing and modeling different standards of ‘typical consumption’—operationalized by composite nutrient profiles weighted to reflect population averages—are critical, as they help evaluate what the population’s average nutrient intake would be if they followed the recommendations under varying circumstances. The approach taken by [the committee] is in contrast with others that rely on especially nutrient-dense foods (such as salmon, apricots, or almonds), which might result in insufficient nutrient intakes when the patterns are put into practice with more typically consumed foods. However, the range of expected nutrient intakes, as well as the average, could be obtained if the variability in intakes were accounted for.”
You get the flavor of the thing: soporific.
But while some of us were dozing off, other people were getting worked up. Among them was a researcher named Edward Archer, who coauthored an open letter to the Academies calling for the report to be retracted. The letter labeled the report “extremely misleading,” saying it “contained errors of fact and omission, and failed to address, or even acknowledge a large body of rigorous research that is explicitly contrary to the authors’ conclusions.”
So what’s so wrong about the process that leads to the dietary guidelines? When you get down to it, just one thing. Unfortunately, it’s kind of the most important thing.
Let’s back up a moment. In my February 15th column, we looked at the many problems in nutritional research: The studies tend to be small and speculative; the effects of any given food or food component tend to be small; research designs are often faulty; and researcher bias is somewhere between rife and universal. All of this contributes to the likelihood that the conclusions of a lot of nutrition research, possibly even most nutrition research, are wrong.
What we didn’t talk about last time is the data problem.
You see, most nutrition researchers are forced to collect their data using a notoriously unreliable scientific instrument: the human brain. It’s hard and expensive to conduct rigorous nutritional experiments where you know through direct observation and measurement exactly what people are eating. Instead, most studies are conducted by asking people what they ate.
And that, says Edward Archer, is a huge problem. “Now, human memory has been demonstrated to be flawed for hundreds of years. Memory is not like a video recording. It’s a reconstructive process and every time you remember something you change it and more importantly you have other memories getting in the way of your current memory. And not only do we have mis-estimation and false memories, we also have lying. I have a paper under review right now demonstrating that about 60 percent of people will admit to lying about the foods that they eat.”
Now, there’s wrong and there’s wrong. If you have an otherwise functional clock that just happens to be set ten minutes slow, it’s no problem to adjust the incorrect data it provides you at any given moment. The question is whether faulty food data can be similarly manipulated to make it useful.
And there’s a reason flawed data have such staying power. I mentioned last time that nutrition research suffers from conflicts of interest. Sometimes that means corporate sponsorship, of course. But almost always it involves preserving nutrition research itself. What happens if survey-based data suddenly become unacceptable? A lot of nutrition researchers will quickly discover it’s a lot harder to find funding, conduct studies, and publish the kinds of articles that provide tenure, job security, and prestige. And these endangered scholars are the peers who pass judgment on the articles that appear in peer-reviewed journals. They have a powerful incentive not to rock the boat.
You could look at food’s data problem and say that it means we should be a lot more circumspect about believing the latest bit of hype coming out of universities (especially if the university in question is Cornell and the researcher is working in the lab of Brian Wansink). But Archer goes considerably further. He doesn’t just say that most nutritional studies are questionable. He says they’re wrong.
Dietary cholesterol? There was never evidence that it caused harm—something that even the scientific committee of the Dietary Guidelines for Americans conceded in 2015.
The Mediterranean diet? Reducing sodium? The evidence is actually against the Mediterranean diet, Archer says. And for many people, reducing sodium might actually do more harm than good.
(How did we travel down this path in the first place? That’s a question for another day. But if you’re impatient, Archer maps it out in great detail in his published work.)
Let’s pause and take a deep breath.
OK, there are cranks out there. Could Archer be one? I suppose. He’s got that whole voice-of-one-crying-in-the-wilderness thing going on, which is worrisome. He was recently let go by the University of Alabama, where he worked as a nutrition researcher. (Don’t feel too sorry for him. He’s now chief science officer at a startup focused on using data to improve health.) But his articles are showing up in the right places, and even the people who disagree with him seem to treat him with respect. And he’s not the only scientist who thinks the way he does. For those of us who haven’t got the scientific expertise to sort things out for ourselves, it’s always hard to know what to think when faced with this sort of controversy. Here’s how I approach it:
Above all, the question about data is truly bothersome. I ask, “Would I lie on a food questionnaire?” and the answer is “almost certainly.” I’m pretty sure I have lied to my doctor about my diet, even though I think it’s a terrible idea. Fortunately, I’m not sure he actually listens to me. If there’s no data, I don’t want to hear the conclusions.
So I’m inclined to think that Archer may well be right. At least he’s given me a tool for chipping away at the quasi-scientific garbage that I encounter on a daily basis. And if there’s no evidence of connections between specific foods and diets and diseases, does that mean that we just haven’t proven them yet, or that there aren’t any to be discovered?
At this point, you may have written me off as being a crank myself. Fair enough, I suppose. But stick around for one last question: If it is in fact true, as it might be, that diet has little or no impact on diseases, and if that becomes the accepted wisdom, what happens to the “good food” movement? Would it be a disaster, or maybe the beginning of a truer, kinder approach to the vexing problems we all face?
That’s what we’ll talk about next time. See you then.