A recent “study” of the risk of eating eggs is circulating widely in the news and on social media.
It provides an excellent example for learning how to critically analyze the quality of “scientific” studies reported in the media. This one offers an observation so weak that it tells us no more about the safety of eating eggs than a sprinter’s stride tells us about the seemingly flat shape of the earth.
Good science measures phenomena under conditions that eliminate as many variables as possible, so that the results reflect only the effect of the one variable being studied. If a test (study) does not do that, it’s bad science. And a famous physicist I know once said to me, “bad science isn’t even science.”
While I could argue at length about the weakness of the statistical results in this case, one need only look at the study’s methods to see that it was not even science.
First, the authors used an inaccurate measure of egg consumption: a survey that asks people what they ate. Dietary recall surveys are inaccurate because memory is fallible and because emotional overlays skew the answers people give. Worse, the study measured egg consumption only once, in a set of surveys at the beginning, and never checked again over the ensuing 17.5 years to see whether the answers had changed. Do you think eating habits never change?
Second, although the authors used statistical methods to account for variables that might have affected the subjects, like weight, blood pressure, or diabetes, they did not eliminate, or even account for, many important variables that could have been linked to egg consumption, such as other unhealthy behaviors like high salt consumption or lack of exercise.
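To see why an unmeasured variable matters so much, here is a minimal sketch in Python, using invented numbers rather than anything from the study: an unmeasured habit (salt intake, purely as a stand-in) drives both egg consumption and the health outcome, and eggs appear harmful even though they have no effect at all in the model.

```python
# Minimal sketch with invented numbers (not the study's data): an unmeasured
# habit drives both egg intake and risk, so eggs *look* harmful despite
# having zero effect in the model.
import random
import statistics

random.seed(0)
N = 100_000

salt = [random.gauss(0, 1) for _ in range(N)]        # unmeasured habit
eggs = [s + random.gauss(0, 1) for s in salt]        # egg intake tracks salt
risk = [0.5 * s + random.gauss(0, 1) for s in salt]  # risk depends on salt only

high_eggs = [r for e, r in zip(eggs, risk) if e > 0]
low_eggs = [r for e, r in zip(eggs, risk) if e <= 0]

print(f"mean risk, high-egg group: {statistics.mean(high_eggs):+.3f}")
print(f"mean risk, low-egg group:  {statistics.mean(low_eggs):+.3f}")
# The high-egg group shows markedly higher mean risk, yet eggs never
# appear in the risk formula: the "association" is salt in disguise.
```

A statistical adjustment can only remove this kind of distortion for the variables a study actually measured; salt here stands in for everything this one did not.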
Think of it this way: a good way to eliminate variables is to take the thousands of people and divide them randomly into two groups. Randomization makes each group likely to have the same distribution of the important unmeasured variables, preventing those variables from having a different effect in one group than in the other. This study did not do that.
(And even in studies that do try to randomize out variables, the more possible variables there are, the more subjects are needed to ensure the variables are distributed evenly between the two groups.)
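As a rough illustration of both points, the sketch below (again Python with invented numbers) randomly splits subjects into two groups and measures how far apart the groups end up on an unmeasured variable; the leftover gap shrinks as the number of subjects grows.

```python
# Sketch: a random 50/50 split balances an unmeasured variable, and the
# leftover imbalance shrinks as the number of subjects grows.
import random
import statistics

random.seed(1)

def group_gap(n_subjects: int) -> float:
    """Randomly split subjects; return the between-group gap in mean 'salt'."""
    salt = [random.gauss(0, 1) for _ in range(n_subjects)]
    in_group_a = [random.random() < 0.5 for _ in range(n_subjects)]
    group_a = [s for s, a in zip(salt, in_group_a) if a]
    group_b = [s for s, a in zip(salt, in_group_a) if not a]
    return abs(statistics.mean(group_a) - statistics.mean(group_b))

for n in (100, 1_000, 10_000, 100_000):
    gaps = [group_gap(n) for _ in range(20)]   # average over 20 random splits
    print(f"n = {n:>7,}: typical between-group gap {statistics.mean(gaps):.4f}")
# The gap falls roughly as 1/sqrt(n); with many unmeasured variables, each
# one needs to be this well balanced, which is why big trials need big n.
```

The same arithmetic is also why a study that never randomizes at all, like this one, has no such guarantee at any sample size.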
The problems limiting this study are similar to the ones that lead us to discount a sprinter’s observations about the shape of the earth. First, the sprinter is measuring the ground only with her feet, not with a carefully calibrated three-dimensional instrument. Second, if the sprinter kept going in a straight line long enough, she would return to the start from the opposite side, leading to the correct conclusion that the world could not be flat, as it appeared to be over the first few hundred yards. Her observations were too limited to be useful as science.
This study did not randomly divide the participants; it divided them based on egg consumption. Nothing measured after that point can be considered factually valid, because the groups were not truly equal in the distribution of the other, unmeasured variables. And even if the authors had randomized, they would have needed hundreds of thousands of subjects to “randomize out” the large number of unmeasured variables. They would also have needed to measure the one important variable, egg consumption, more rigorously over time.
Don’t be fooled by catchy headlines based on the conclusions of scientists trying to inflate the importance of their work. If the work was bad science, it wasn’t even science, and its authors weren’t acting as scientists. The media need them to look smart so they can sell the news. You don’t need to listen.
If you want more guidance on how to critically analyze scientific reports in the news, we can help coach you on effective practices.
Call or email us for a free consultation.