Research and the Media - Reading Between the Lines

Recently, I read an article about the promise of a nutritional drink called Souvenaid for Alzheimer’s treatment. (My co-worker wrote a great blog post about it a couple of weeks ago.) As CNN reported, early studies showed the drink had the potential to improve certain types of memory in Alzheimer’s patients.

Then I found another article, this time in ABC News, titled “Anti-Alzheimer’s ‘Cocktail’ Meets with Disdain.” The scientists quoted in this article suggest that the positive results of the Souvenaid study were overstated, very short-term, and possibly tarnished by the deep involvement of Souvenaid’s maker, Dannon (of Dannon yogurt). Dr. Murali Doraiswamy, a highly respected researcher from Duke, summed up the general consensus: “At this point I would not recommend that anyone get excited by this or start medicating themselves with these ingredients till we have results from more definitive studies. At best, this study offers some grounds for further testing.”

These contradictory articles are confusing. How are we, as consumers, supposed to interpret them? How can we understand the “real” results of this study when presented with such conflicting information?

Here are a couple of tips to give you context for understanding contradictory opinions on study results like these:

1) When scientists are trying to prove or disprove a hypothesis, they typically start with a small “pilot” study. Why? Big studies involving hundreds or thousands of people are very expensive to run. A small study might not show definitive results, but it can show trend data that indicate a larger study is worth the effort. But often the media pick up on the results of even the smallest studies. (You might have noticed that you often read about a “breakthrough” Alzheimer’s cure that you never hear about again. Probably the reported results were based on a small pilot study and didn’t hold up in larger studies.) So: when reading about the results of a scientific study, take a look at how many people were included in the study. If it’s under a couple of hundred, it’s likely still an early-stage study (though it depends a little on the topic and on the number of studies that preceded it). That doesn’t mean that you should discount the results, just that you should be aware they are not definitive. (The simulation sketch after the tips below shows how easily a small study can look promising by chance.)

2) It’s extremely hard to build consensus among scientists, especially in the early days of a new treatment or discovery. For every scientist who says one thing works, there are often others who say it doesn’t. This means that you shouldn’t automatically reject a finding just because another scientist questions it. There are many reasons that it’s hard to get a single, unified answer from the scientific community.

  • It may have to do with the questioning nature of science: many scientists regard skepticism as a job requirement, and require every i to be dotted (or triple dotted) and every t crossed (five times) before they feel confident backing a result. It took decades for scientists to build a general consensus that adult brain plasticity (the basis of our brain fitness programs) is real!
  • It may have to do with personal research biases: if one scientist studying Alzheimer’s is convinced the path to success lies in a drug that changes brain chemistry, he or she may be inherently prejudiced against a nutritional drink, and vice versa.
  • It may be because some scientists do misrepresent results, intentionally or not. And even if the scientists involved in a study are perfectly fair, reporters often misrepresent results (overemphasizing the positive and omitting the open questions, for example), either because it makes for a better story or because they don’t understand the nuances of the results.
  • It may be because of how results are measured. In the case of the Souvenaid study, CNN reported that the drink improved delayed verbal recall, one form of memory. The ABC News article didn’t mention the positive result on delayed verbal recall; the scientists it cited focused on the lack of positive results in all the other measures. Using standardized measures, tests that are widely known and trusted in the scientific community, is important to building consensus around results. In one way, this makes perfect sense: if a study does not use standardized measures, the group running the study can claim almost any result by measuring in a way that casts their data positively (the simulation sketch after this list illustrates how easily that happens). On the other hand, standardized measures can be limiting if they aren’t designed for what you’re trying to study. In the case of brain fitness, we have often used standardized measures designed to assess people with pre-dementia and dementia: good because they are well-accepted as trustworthy measures, but also problematic because they aren’t designed to detect the more subtle differences in normal age-related cognitive decline.
  • It may also be related to who is driving and funding the study. Some scientists feel strongly that the results of any study that isn’t “independent” (that is, any study designed and/or funded by a corporation with a vested interest in the outcome) are suspect. If a study shows that pineapple juice helps people live longer, but it was paid for by Dole, are the results to be trusted? We have come up against this kind of thinking at Posit, since we have funded some of our own studies (though many others have been done independently, without input from Posit). In some cases, such study results are trustworthy. In others, probably not. Knowing who drove and funded a study can give you more context for interpreting the results.
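If you like to see numbers rather than take my word for it, here is a minimal Python sketch that makes the sample-size and measurement points above concrete. Everything in it is made up for illustration: the group sizes, the number of outcome measures, and the “promising” threshold are arbitrary choices of mine, and none of it reflects the actual Souvenaid data. It simulates small pilot studies of a treatment that truly does nothing, measures eight different outcomes in each study, and counts how often at least one outcome looks good purely by chance.

    import random
    import statistics

    random.seed(42)

    def pilot_looks_promising(n_per_group=20, n_measures=8, threshold=0.5):
        # Simulate one small pilot study in which the treatment does nothing:
        # both groups are drawn from the same distribution, so any gap in
        # group means is pure noise. Return True if at least one of the
        # measured outcomes shows a "promising" difference anyway.
        for _ in range(n_measures):
            treated = [random.gauss(0, 1) for _ in range(n_per_group)]
            control = [random.gauss(0, 1) for _ in range(n_per_group)]
            if statistics.mean(treated) - statistics.mean(control) > threshold:
                return True
        return False

    trials = 10_000
    hits = sum(pilot_looks_promising() for _ in range(trials))
    print(f"Do-nothing pilots that still looked promising: {hits / trials:.0%}")

In this toy setup, a sizable share of do-nothing pilots still clear the bar on at least one measure. That is exactly why researchers want larger follow-up studies with pre-specified, standardized outcomes before anyone draws conclusions.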

This post has gotten quite long, so I’ll stop here. The point is this: Study results are often misrepresented and misinterpreted. But the people challenging those results aren’t always right, either. Paying attention to the details of the study–and who is talking about the study–can help you wade through the morass and get closer to the truth.