LEEDS, England — It’s no secret that nutrition research contradicts itself pretty often. One day eggs are healthy, the next day some study says you should never touch them again. According to the American Journal of Clinical Nutrition, the discrepancies in these inconsistent nutrition studies may come down to the use of statistics.
In a study led by a team of scientists from the University of Leeds and The Alan Turing Institute, the national institute for data science and artificial intelligence, researchers discovered that the most common way of studying the health effects of foods yields misleading and often confusing results.
“These findings are relevant to everything we think we know about the effect of food on health. It is well known that different nutritional studies tend to find different results. One week a food is apparently harmful and the next week it is apparently good for you,” says lead author Georgia Tomova, a PhD researcher in the University of Leeds’ Institute for Data Analytics and The Alan Turing Institute, in a statement.
Whether or not someone’s total energy consumption is statistically controlled for is one factor that can lead to significant differences in how results are interpreted. Controlling for the other foods participants eat can skew findings even more, so that a food that is actually unhealthy appears healthy, or vice versa. These and other variations in methodology lead many researchers to rely on review articles that look at the big picture instead of individual studies.
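A short simulation can illustrate how this kind of adjustment flips a result. This is a minimal sketch with made-up numbers, not data or code from the study: both foods genuinely raise the outcome, yet once total energy intake is held fixed, eating more of the studied food mathematically means eating less of the (more harmful) other food, so its estimated effect reverses sign.

```python
# Hypothetical simulation: adjusting for total energy intake can flip the
# sign of a food's estimated effect on a health outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
food = rng.normal(size=n)    # intake of the food under study
other = rng.normal(size=n)   # intake of all other foods
total = food + other         # total energy consumption
# True model: BOTH foods increase the outcome; 'other' more strongly.
outcome = 0.5 * food + 1.0 * other + rng.normal(size=n)

def ols_coef(y, *cols):
    """Least-squares coefficient on the first predictor, with an intercept."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

unadjusted = ols_coef(outcome, food)        # ~ +0.5: correctly looks harmful
adjusted = ols_coef(outcome, food, total)   # ~ -0.5: now looks protective
print(f"unadjusted: {unadjusted:+.2f}, "
      f"adjusted for total energy: {adjusted:+.2f}")
```

The adjusted estimate answers a different question, namely what happens when the studied food is substituted for other foods at fixed energy intake, which is why two studies using the two approaches can reach opposite conclusions about the same food.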
“Unfortunately, because most studies have different approaches to controlling for the rest of the diet, it is likely that each study is estimating a very different quantity, making the ‘average’ rather meaningless,” adds Tomova.
New “causal inference” methods, which infer causes from explicit assumptions, study designs, and estimation strategies, helped the team better identify these inconsistencies. The approach not only helped them identify which analyses support causal conclusions, but also revealed areas where greater understanding is still needed.
“Different studies can provide different estimates for a range of reasons but we think that this one statistical issue may explain a lot of the disagreement. Fortunately, this can be easily avoided in the future,” says Dr. Peter Tennant, Associate Professor of Health Data Science in Leeds’ School of Medicine.
The team hopes their findings will help future nutrition research efforts avoid inappropriately designed and controlled studies, resulting in more reliable health studies.
The findings are published in the American Journal of Clinical Nutrition.