You’ve probably already made your New Year’s resolutions, so this may be coming a bit late. But come on—you know you’re not actually going to practice the cello two hours a day or learn Mandarin by listening to podcasts while you train for that half-marathon in March. And the salted caramel gelato is just going to stare at you mournfully from the freezer case until you succumb.
Why not resolve this instead: Next time you read about an absolutely outrageous scientific study—even if the story is in an unimpeachable real-news source—refrain from being outraged until you’ve checked it out for yourself. You don’t have to do this with every single outrageous story that comes along; life’s too short for that. But once in a while, just enough to give your bullshit detector a good dusting off. That’s going to be important in the days ahead.
Here’s my favorite recent example: In December, the Annals of Internal Medicine published “The Scientific Basis of Guideline Recommendations on Sugar Intake,” an evaluation of nine public health guidelines. The study, which was paid for by the International Life Sciences Institute, an organization largely funded by the food industry, was rather critical of various aspects of the guidelines.
That fact didn’t go down well. A story in the New York Times entitled “Study Tied to Food Industry Tries to Discredit Sugar Guidelines” described the situation like this: “A prominent medical journal on Monday published a scathing attack on global health advice to eat less sugar. Warnings to cut sugar, the study argued, are based on weak evidence and cannot be trusted.” (That’s actually pretty thoroughly inaccurate, for reasons we’ll get to in a minute.) Marion Nestle, a vigorous opponent of industry-sponsored research, wrote, “This one doesn’t pass the laugh test.” We took a swipe or two at it ourselves.
The sugar industry using a significant medical journal to undercut important health guidelines. Who wouldn’t be outraged?
Well, us. We’ve now made a resolution, and we’re sticking to it. We’re going to look at the article.
The first thing you notice in reading the article is that it’s a lot more limited in scope than the coverage suggests. It’s not really about the recommendations contained in the guidelines. It’s grading the guidelines themselves, evaluating the way they were prepared and written using a scale called the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. GRADE, which has been around since 2000, isn’t meant just as an assessment; it intends to improve guidelines by getting developers to follow a standard approach—transparent, evidence-based, and suited to the goals the guidelines are advancing and the audience they hope to serve. Yes, the article calls the evidence base into question, but first, it spends much of its time addressing other questions: Were stakeholders appropriately consulted? Are the guidelines clear? Is methodology stated? Were steps taken to ensure editorial independence?
Let’s take one example the authors scrutinize: the U.S. 2015–2020 Dietary Guidelines for Americans. Overall, this document was one of the most highly rated. It had the top scores in clarity and stakeholder involvement. It took a hit in applicability, presumably because it fails to address metrics and specific techniques for encouraging adherence, though it is pretty good at discussing the varying needs of different constituencies. Its lowest score was for editorial independence—though that is almost certainly because it fails to state how the project protected its authors from inappropriate influences, and because there’s no section stating potential conflicts of interest of the authors.
Then there’s that pesky little question of evidence. Interestingly, the article doesn’t have much to say about the evidence behind the U.S. guidelines, because they’re based on modeling rather than experimental or observational data.
Here’s what that means: The U.S. authors looked at what a normal person needs to eat to be properly nourished. Add up all the calories in that diet, and you get a number that falls about 10 percent below the maximum number of calories you ought to eat if you don’t want to gain weight. That remaining 10 percent or so of your calorie budget is free to be spent on empty calories—such as sugar. Eat more empty calories than that and you’ll either be displacing more nutritious food or taking on too many calories.
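The modeling logic here is just budget arithmetic, and a short sketch may make it concrete. The numbers below are hypothetical illustrations of the approach, not figures taken from the guidelines themselves:

```python
# A toy sketch of the guidelines' "modeling" approach: whatever calories
# a nutritionally adequate diet leaves unspent become the allowance for
# empty calories such as added sugar. All numbers here are hypothetical.

def discretionary_calories(calorie_limit, nutrient_dense_calories):
    """Calories left over for 'empty' sources once nutrition needs are met."""
    return calorie_limit - nutrient_dense_calories

# Suppose a 2,000-calorie daily limit, with an adequate diet
# accounting for roughly 1,800 of those calories.
leftover = discretionary_calories(2000, 1800)
share = leftover / 2000  # fraction of the budget free for empty calories

print(leftover)  # calories available for sugar and the like
print(share)     # about 10 percent, matching the modeling described above
```

Note that nothing in this arithmetic says anything about sugar’s metabolic effects; it treats a calorie as a calorie, which is exactly the limitation the next paragraphs turn to.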
This makes sense, but if you’re not overwhelmed by its scientific sophistication, you’re not alone. There’s a lot of talk these days about the role of sugar and starch in Type 2 diabetes, a role that involves things like insulin resistance. You wouldn’t know it to read the U.S. guidelines. It’s not that the authors were unaware. The research report they worked from in creating the guidelines specifically mentions that there is credible evidence that sugar’s impact is not just a matter of calories. That mention didn’t make it into the guidelines.
So how do we evaluate that omission? It probably doesn’t have much impact on the specific recommendation—we just don’t know enough. It makes the guidelines a bit less useful to some physicians and others. And it means that the guidelines don’t fully represent the science. As a result, if you were grading the guidelines, you wouldn’t give them an “A.” But that omission doesn’t undercut them entirely.
Same thing with the “weak evidence” that the Times highlighted. We all know what strong evidence looks like: Large, randomized clinical trials with sophisticated analytics that let us draw conclusions about the effects of different doses and differences between different populations. By that standard, there isn’t all that much strong food science out there. Again, we all already knew that.
Marion Nestle put it beautifully: “What are dietary guidelines supposed to do?” she wrote. “We cannot lock up large numbers of people and feed them controlled amounts of sugar for decades and see what happens. Short of that, we have to do the best we can with observational and intervention studies, none of which can ever meet rigorous standards for proof. So this review is stating the obvious.”
Well, it would be, if the review were primarily concerned with whether the guidelines are accurate. But its stated goal is pretty clearly to determine whether the authors of the various documents did as good a job as they could within the limitations they faced. And the study doesn’t actually grade them all that low. The overall rating for the group: “recommended with modifications.” That’s hardly a blazing attack.
All right, we’ve kept our resolution. We waited to get outraged, but that doesn’t mean we’re not allowed to indulge at all. There’s still lots to get lathered up about:
The misreading of the article as an attack on the recommendations is quite predictable, and yet the authors and editors did nothing to fend it off. The headline in particular is actively misleading. Maybe the body of the article doesn’t do the bidding of evil sugarmeisters, but the package stinks. Yes, journals are read by technically sophisticated audiences, but some articles are guaranteed to circulate widely. This was one, and it should have been written and edited to fill that role better.
Speaking of misreading, what the hell is wrong with the New York Times? This was a simple story to report, yet it managed to get almost nothing right. Come on, guys.
And the real outrage, as far as I’m concerned: Why is the science taking so long? Marion Nestle is right: the easy research paths aren’t going to work well. But we face an epidemic of diabetes that constitutes an authentic public crisis. Where’s the commitment, where’s the creativity? Science can do better.
The point of the resolution, you see, isn’t to make you calmer. It’s to improve the quality of your anger, to make it more focused and accurate. Try it. You’ll like it. And happy New Year.