In my last post, I wrote about the #overlyhonestmethods discussion on Twitter and its insights into the creative (and funny) ways that researchers deal with unexpected problems in their work. While I was following #overlyhonestmethods, I came across a mention of a creative-sounding study reviewing the research on whether specific foods can cause cancer. Since the media regularly cover cancer research – and often report on it inaccurately or misleadingly – I tracked down the full text of the article to see what it had to say. When I saw that the article had the awesome title of Is Everything We Eat Associated With Cancer? A Systematic Cookbook Review, I knew I had to write something about it.
I’ll start by saying that I am not an epidemiologist, but one of the great things about the article is that the methodology is explained clearly enough that a non-epidemiologist like me can follow it. Also, I want to emphasize that by calling this study’s methodology “creative”, I am not implying that it’s wrong or weak – quite the opposite. The authors came up with a very smart way to explore a potentially complex question.
Just like any essay for school needs to have a topic, a research study has to have a research question, so that the research itself can be structured to produce an answer to that specific question. In this study, the authors set up their research question like this [my paraphrasing]:
- Every year, thousands of studies investigate whether different dietary factors are linked to cancer. The results of these studies get lots of media attention.
- But while these studies may report linkages between certain foods and cancer, those linkages may be weak or meaningless. And other research indicates that weak or meaningless linkages tend to be underreported or misinterpreted.
- So, given these problems, how can we tell whether a specific food is actually a risk factor for cancer?
This question establishes a potentially huge scope for investigation, because of the sheer number of studies in this area and the wide range of foods that are studied. I’m guessing that the authors, like most researchers, didn’t have the resources to locate and analyze the results of every single study in this area – so they had to structure their investigation to get a reliable representation of the results of all of these studies.
This is the point where creativity comes into the research design. The best way to get a set of information that can reasonably be expected to accurately represent a larger set of information is through random sampling. In random sampling, you choose items from a larger group in a way that gives every item the same chance of being chosen as any other.
So the authors had to select a “large random sample of food ingredients” (p. 127) – and they decided to do this by using a source that lists a lot of different food ingredients: namely, a cookbook. They chose this cookbook – the best-selling cookbook of its time, still in print in its 13th edition – and paired it with a random number generator. They drew a number, turned to the cookbook page matching that number, and recorded the ingredients in the first recipe on that page, repeating the process until they had a list of 50 different ingredients.
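Out of curiosity, here’s roughly what that procedure looks like if you write it down in Python. To be clear, everything in this sketch is a stand-in: the first_recipe_ingredients helper, the 800-page count, and the fake ingredient pool are my assumptions, not anything from the paper – in the real study this step was done by hand with an actual cookbook.

```python
import random

# Hypothetical stand-in for turning to a page and reading the first
# recipe's ingredient list; the 200-item pool is entirely made up.
def first_recipe_ingredients(page):
    rng = random.Random(page)  # same page always yields the same "recipe"
    pool = [f"ingredient_{i}" for i in range(200)]
    return rng.sample(pool, 8)  # pretend every recipe lists 8 ingredients

NUM_PAGES = 800  # assumption: I don't know the cookbook's actual page count
TARGET = 50      # the authors stopped at 50 different ingredients

ingredients = set()
while len(ingredients) < TARGET:
    page = random.randint(1, NUM_PAGES)  # the "random number generator" step
    for item in first_recipe_ingredients(page):
        if len(ingredients) == TARGET:
            break
        ingredients.add(item)

print(f"Sampled {len(ingredients)} distinct ingredients")
```

The key property is in that randint call: every page, and therefore every first recipe, has the same chance of being drawn on each pull, which is exactly what makes the resulting ingredient list a fair stand-in for “foods people actually cook with”.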
The authors then searched an online database of published studies for research investigating linkages between cancer risk and each of the 50 ingredients. They found at least one study for 40 of their chosen ingredients. Where an ingredient had been tested in multiple studies, the authors kept at most the 10 most recent studies involving that ingredient.
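In code terms, that filter is just “sort by publication date, keep the newest 10”. A quick sketch, with made-up study records (the ingredient names, years, and findings below are invented for illustration):

```python
# Made-up study records: (ingredient, publication_year, reported_finding).
studies = [
    ("coffee", 1998, "no association"),
    ("coffee", 2004, "increased risk"),
    ("coffee", 2011, "decreased risk"),
    # ...imagine many more rows from the literature search
]

def most_recent(studies, ingredient, limit=10):
    """Keep at most the `limit` most recently published studies of an ingredient."""
    matches = [s for s in studies if s[0] == ingredient]
    return sorted(matches, key=lambda s: s[1], reverse=True)[:limit]

print(most_recent(studies, "coffee"))
```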
The details of the analysis are outlined in the paper, but to summarize, the authors looked at the results of each individual study, and also conducted several types of meta-analysis to test the combined significance of the linkages reported across all the studies. “Significant” in this context usually means that, if there were really no link, results at least as strong as the ones reported would occur by chance less than 5% of the time. And the authors also looked at how these studies were conducted, such as their “exposure contrasts”: for example, how often people had consumed the food being tested, or how much of it they had eaten.
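The paper’s actual analyses are more involved than I can do justice to here, but to make “combined significance” concrete, here is a minimal sketch of one standard approach – fixed-effect inverse-variance pooling of log relative risks. The three studies’ numbers are invented, and I’m not claiming this is the exact method the authors used:

```python
import math

def pooled_effect(log_rrs, std_errs):
    """Fixed-effect (inverse-variance) pooling of log relative risks.

    Studies with smaller standard errors (usually bigger studies)
    get proportionally more weight in the combined estimate.
    """
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled / pooled_se
    # Two-sided p-value from the standard normal distribution.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return math.exp(pooled), p  # combined relative risk and its p-value

# Three invented studies of one ingredient: log(RR) and standard error.
rr, p = pooled_effect([0.10, -0.05, 0.20], [0.08, 0.12, 0.15])
print(f"pooled RR = {rr:.2f}, p = {p:.3f}")  # "significant" if p < 0.05
```

In this invented example the pooled relative risk comes out close to 1 with p around 0.2 – in other words, three individually noisy studies, combined, still don’t clear the 5% bar. That’s the kind of picture the authors found over and over.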
Here’s the authors’ summary of their findings:
We found that 80% of ingredients from randomly selected recipes had been studied in relation to malignancy, and the large majority of those studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence… (and) we found great variability in the types of exposure contrasts. (pp. 131-132)
The authors concluded that because of the variations in the studies’ results, the differences in the ways the studies were conducted, and “a climate in which ‘negative’ results are undervalued and not reported” (p. 132), it’s very difficult to draw accurate conclusions about whether any of these foods actually can be linked to cancer. And this is a really important point. The media really do a disservice by running “Eat broccoli five times a day and you won’t get acne”-type stories, when that’s not what the research says at all.
(If you’re interested in reading more about these sorts of issues, this website does a great job of calling out inaccurate health reporting. This particular example from that website illustrates the problems that happen from not understanding, or not accurately reporting, how study design and methodology affect a study’s results.)
So maybe the media are creative about research in ways that aren’t so good for advancing accurate scientific knowledge. But the cookbook-based research in this study is the sort of research creativity that should be commended – especially as a cookbook is not something you would expect to find lying around a research lab (unless there are some other #overlyhonestmethods that haven’t come to light yet). I hope that other researchers will be inspired by these authors’ creativity, as I was, to think of different or new ways to approach their own research.
So, I guess I’m having distilled water for dinner… again. I’ll be interested to read, in a few years, whether our common knowledge is affected at all by the newer journal – PLoS ONE, I think it is – that facilitates the publication of negative results. As I understand it, there’s a huge bias in our public (published) understanding because mostly only “positive” results appear in the academic literature.