In my last post for this blog I discussed the fact that individual scientific studies are insufficient to establish a claim or phenomenon, and yet people often cite a single study as if it offers proof of their position. Rather, to really understand the status of a scientific claim, you need to have some sense of the totality of relevant scientific research - the so-called scientific "literature."

In this post, as promised, I will cover some basics of how to interpret the published literature. This topic, however, requires some background discussion on the nature of expertise. Many scientific areas are highly specialized and require specific technical knowledge. This becomes apparent when reading the technical literature - publications intended for the expert community, not the general public. Such publications can often be incomprehensible to non-experts, and not by design or out of elitism. Very specific concepts require very specific language to discuss properly, and so a technical jargon typically develops in scientific fields.

I often encounter this myself. Even as a physician I can find papers in medical specialties far from my own difficult to follow. In advanced fields outside of medicine I can be totally lost. This is, in fact, a good way to estimate your own knowledge in a field and your ability to grapple with the primary sources - papers discussing original research published in technical detail. If you are not fluent in the jargon, then don't assume you are fluent in the concepts.  

For this reason most people most of the time will rely upon secondary or popular sources of information about science. These are books, articles, and other media in which scientists who are experts digest the current scientific information on a topic and translate it into concepts and language that non-experts can understand. This is a difficult craft - making highly technical science accessible to a general audience without dumbing down the science. Few do it very well.

If you are well read in the secondary sources you can become well versed in a scientific topic. I have read voraciously about evolution, for example, and consider myself a knowledgeable lay person on this topic. But this should never be confused with actual technical expertise. I am still relying on experts to interpret the literature and tell me what it says.  

Most people, therefore, do not need to know how to interpret the primary literature. If you have a field of expertise and can read the technical literature, then this skill becomes important. It is still useful, however, to understand the basics, because then you are better able to judge when an expert is truly an expert and how solid the scientific consensus is on a topic. Further, when media experts are interviewed or write about a topic, your BS detector will be more finely honed if you know something about how scientific evidence works.

This applies to many of the topics that skeptics find interesting. Let's take homeopathy, for example (always a great go-to example of pseudoscience). You will find that proponents often cherry pick individual studies when making their case, often referring to small or preliminary studies. Another tactic is to point to the sheer number of studies of homeopathy, or the number that allegedly show a positive result. This allows them to cite the research and appear authoritative, even when they are not accurately reflecting it.

In order to understand what the literature on a given topic actually says, you need to perform a systematic review (or rely upon someone else who has done it for you). A systematic review will comb through all the published literature for possibly relevant studies (still no easy task, but one made much easier by searchable databases of published studies), and then apply inclusion and exclusion criteria to narrow the search down to the most relevant studies.
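
To make the filtering step concrete, here is a minimal sketch in Python of what applying inclusion and exclusion criteria to a batch of search results might look like. The study records, fields, and thresholds here are all hypothetical - real reviews define their criteria in a formal protocol before the search begins.

```python
# Hypothetical sketch: filtering search results against inclusion/exclusion
# criteria. The study records and the criteria themselves are invented.

studies = [
    {"id": 1, "randomized": True,  "n": 120, "language": "en"},
    {"id": 2, "randomized": False, "n": 35,  "language": "en"},
    {"id": 3, "randomized": True,  "n": 15,  "language": "de"},
]

def meets_criteria(study):
    # Inclusion: randomized design with at least 30 participants.
    # Exclusion: non-English reports (a common, if debated, practical criterion).
    return (
        study["randomized"]
        and study["n"] >= 30
        and study["language"] == "en"
    )

included = [s for s in studies if meets_criteria(s)]
print(f"{len(included)} of {len(studies)} studies included:",
      [s["id"] for s in included])
```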

Once you have the studies in hand, you then need to read through all of them and grade them on their methodological quality. You then, of course, need to look at the results - are they negative or positive and how robust are the findings?  

Then comes the really tricky part - looking for the overall pattern in the studies. Do the results across studies agree with each other, not only in whether or not they are positive, but in what way they are positive? In other words, are the same outcomes showing positive results? A medical study, for example, might use many different outcome measures (incidence of disease, mortality, quality of life, biological markers, etc.), and so a good systematic review will ask whether the same outcome is improved in most studies, or whether the positive outcomes are scattered across different measures, suggesting random results.
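
As a toy illustration of that check, one can simply tally which outcomes come up positive across studies; if no single outcome repeats, the "wins" look like noise. The studies and outcome names below are invented for the example.

```python
# Illustrative only: count which outcome measures reached significance
# across a set of hypothetical studies.
from collections import Counter

positive_outcomes = [
    ["mortality"],                       # study 1
    ["quality_of_life"],                 # study 2
    ["biomarker_x"],                     # study 3
    ["quality_of_life", "biomarker_y"],  # study 4
]

counts = Counter(o for study in positive_outcomes for o in study)
print(counts)  # no single outcome dominates -> pattern consistent with noise
```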

Another important pattern to look for is the relationship between the rigor of the study and the magnitude of the outcome. When a phenomenon is not real, what we see in the literature is an inverse correlation between study quality and positive outcome - effect sizes tend to shrink as studies become more rigorous, and the most rigorous studies (if they exist) tend to be negative. This, for example, is the pattern we see in the homeopathy research.
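
Here is a small illustration of that pattern using fabricated numbers: as a hypothetical quality score rises, the reported effect shrinks, producing a strongly negative correlation.

```python
# Illustrative only: invented quality scores and effect sizes showing the
# telltale inverse relationship (effects shrink as rigor rises).
import numpy as np

quality = np.array([1, 2, 3, 4, 5, 6, 7, 8])                  # hypothetical rigor scores
effect = np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1, 0.0])   # standardized effects

r = np.corrcoef(quality, effect)[0, 1]
print(f"correlation between quality and effect size: {r:.2f}")  # strongly negative
```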

There are also ways to look for publication bias - the tendency to publish positive studies and not negative studies (because positive results are more interesting and get more press). One way to do this is the funnel plot - plotting the outcome of studies on one axis and the rigor of those studies on another. As studies get more rigorous, the variation in their outcomes should diminish until they zero in on the presumed real effect. For phenomena that are not real, the results will converge on a null effect. But even studies of real effects will show this pattern - a funnel shape with the tip pointing at the presumed real effect size.

If, however, there is publication bias toward positive studies then the negative half of the funnel will be diminished or missing.  
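
For readers who want to see what this looks like, below is a minimal simulation of a funnel plot (assuming numpy and matplotlib are available). The data are fabricated: the true effect is set to zero, and small "negative" studies are preferentially dropped to mimic publication bias, thinning one half of the funnel.

```python
# Simulated funnel plot with publication bias; all data are fabricated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.0                      # simulate a phenomenon that is not real
se = rng.uniform(0.05, 0.5, 200)       # standard errors: small = large/rigorous study
effect = rng.normal(true_effect, se)   # observed effects scatter more as SE grows

# Publication bias: small negative studies mostly go unpublished.
published = (effect > 0) | (se < 0.15) | (rng.random(200) < 0.3)

fig, ax = plt.subplots()
ax.scatter(effect[published], se[published], s=10)
ax.invert_yaxis()                      # convention: most precise studies at the top
ax.axvline(true_effect, linestyle="--")
ax.set_xlabel("observed effect size")
ax.set_ylabel("standard error (smaller = more rigorous)")
ax.set_title("Simulated funnel plot: negative half thinned by publication bias")
plt.show()
```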

Related to the systematic review is the meta-analysis. In order to do a meta-analysis you need to first do a systematic review to identify the relevant studies, then you combine the data from multiple studies and analyze the combined data as if it were one large study.  
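
As a sketch of the arithmetic involved, here is the standard fixed-effect, inverse-variance approach - more precise studies get more weight in the pooled estimate. The study effects and standard errors are invented; real meta-analyses must also grapple with heterogeneity, random-effects models, and much more.

```python
# Minimal fixed-effect meta-analysis via inverse-variance weighting.
# The per-study numbers below are invented for illustration.
import numpy as np

effects = np.array([0.30, 0.10, 0.25, 0.05])   # per-study effect estimates
se = np.array([0.15, 0.10, 0.20, 0.08])        # per-study standard errors

weights = 1.0 / se**2                          # more precise studies count more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```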

Meta-analyses are difficult to do well and generally follow the "garbage in, garbage out" rule. It turns out that you can't simply combine a bunch of small studies and get one large rigorous study. It is a legitimate way to analyze the data, but it is no substitute for a large, rigorous, definitive study. It further should not replace other aspects of the systematic review - such as looking at the patterns in the data. Those patterns disappear when you lump all the data together.

Better than any single systematic review or meta-analysis is a review of systematic reviews. You can either do this yourself or, again, rely on experts to do so. There are published systematic reviews of systematic reviews. I find it useful to look at multiple systematic reviews by different reviewers. If there is a consensus among various systematic reviews on the same question, that is likely to be a highly reliable conclusion.  

But you also need to pay attention not just to whether the outcome is positive or negative, but to what the reviewers are saying about the quality of the evidence. Sometimes their conclusion is simply that more study is needed.

Conclusion  

I think the most important take home message is that the scientific literature on a question needs to be interpreted as a whole, and that there are many nuances to doing so. Digging into the primary literature and trying to tease out a conclusion is not an easy task, and really should only be done by experts.  

We are all non-experts on most topics (even if you are an expert in something), and so for most scientific questions we are going to rely upon experts to translate the research for us. Hopefully this article will help the informed skeptic speak the language of the scientific literature a little more fluently so as to better understand experts who are trying to communicate its findings. It should also help in sniffing out the scientific poser who is not a real expert, or those with an ideological agenda distorting the research.


Steven Novella, M.D. is the JREF’s Senior Fellow and Director of the JREF’s Science-Based Medicine project.

Dr. Novella is an academic clinical neurologist at Yale University School of Medicine. He is the president and co-founder of the New England Skeptical Society and the host and producer of the popular weekly science show, The Skeptics’ Guide to the Universe. He also authors the NeuroLogica Blog.