The following is a contribution to the JREF’s ongoing blog series on skepticism and education. If you are an educator and would like to contribute to this series, please contact Bob Blaskiewicz.

One of the most common impediments to learning is our seemingly innate tendency to overemphasize the usefulness of common sense. Although it may be quite useful for the mundane decisions of daily life, more complicated issues often contradict what seems intuitively likely. Thus, an important critical thinking exercise is convincing students that common sense can fail them.

In my experience, the three most instructive themes in this regard are: 1. Demonstrating the difference between correlation and causation; 2. Emphasizing the difference in value between anecdotal evidence and replicated, peer-reviewed research; and 3. Demonstrating the importance of control groups and placebos.

Correlation versus Causation

Some of the most glaring examples of the fallibility of common sense lie in the seductive nature of correlations. In many cases, it seems obvious that a phenomenon is being caused by the factor that occurs with it most frequently. In other words, if A and B always (or quite frequently) occur together, then one must be causing the other. Although a lot depends on the variables involved (e.g., none of my students believes that a rooster’s crow makes the sun come up), this error can be close to irresistible. But simply stating that correlation does not always indicate causation is not enough for most people to grasp our meaning. Instead, three avenues of reasoning must be fleshed out, and relatable examples should be used.

Correlation Does Not Imply Causation: Arguably one of the most widely cited examples of the “correlation means causation” error is the erroneous belief that vaccinations cause autism. The history and dangerous results of this belief (begun by medical fraud, and popularized by numerous celebrities) have been covered ad nauseam in various blogs and publications, but a frequent argument used by its proponents is that rates of autism have steadily increased with the introduction of (and increase in) childhood vaccinations. At least part of this correlation is quite real, but it certainly does not support the claim of a causal link between autism and vaccines. Instead, some childhood vaccinations are simply timed around the point in a child’s life when she or he is likely to begin showing signs of autism. Therefore, many parents of children with autism find the claim of a causal link quite compelling, since the timing of the vaccines and of their children’s autistic behaviors and diagnosis seems suspiciously close.

A more humorous example of this error can be found on the web site of the Church of the Flying Spaghetti Monster, in the form of an open letter to the Kansas School Board. In it, the writer satirically expresses a shared concern over students being exposed to only one theory (i.e., evolutionary theory), and offers her/his own competing theory to explain the phenomenon of Global Warming. The fact that this theory happens to be derived from a correlation with pirates makes it a ridiculous, yet poignant, example of the classic correlation/causation error.

Direction of Causality Problem (Reverse Causation): Unfortunately, many texts and web sites offer only ridiculous examples of the Direction of Causality Problem, also known as reverse causation (e.g., since more firefighters are called upon to fight larger fires, one might conclude from the association that firefighters cause large fires). However, many real-world examples exist in the research literature that can be used to instruct students. A 1997 study in the International Journal of Epidemiology investigating the correlation between breastfeeding and growth stunting in toddlers provides such a case. At the time of the study, a debate was growing in the nutritional sciences about the benefits of breastfeeding. A number of correlational studies had reported that toddlers breastfed beyond 12 months were significantly underweight and smaller than their age-matched counterparts who were not. In a study of 134 toddlers in Peru, Marquis, Habicht, Lanata, Black, and Rasmussen (1997) collected data not only on their feeding habits, but also on factors related to overall health and thriving. Their conclusion was that children with more health issues, and whose development was poorer, were breastfed more, and for longer periods of their lives, than those who were generally healthy and growing well. In other words, the earlier studies showing a negative relationship between breastfeeding and growth had reported the causal factor as an effect, and the effect as the causal factor: reverse causation.
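
For instructors who want to make this concrete, a short simulation can show how reverse causation produces exactly this kind of misleading correlation. The sketch below (in Python, using entirely invented numbers rather than the Marquis et al. data) builds a toy population in which poor health causes both slower growth and longer breastfeeding, then runs the naive comparison:

```python
import random
from statistics import mean

# Toy illustration of reverse causation. All numbers are invented;
# this is not the Marquis et al. (1997) data set.
random.seed(1)

toddlers = []
for _ in range(1000):
    sickly = random.random() < 0.3            # underlying health problems
    # Poor health causes BOTH slower growth and longer breastfeeding.
    growth_score = random.gauss(-1.0 if sickly else 0.0, 1.0)
    months_breastfed = random.gauss(18 if sickly else 12, 3)
    toddlers.append((months_breastfed, growth_score))

# The naive correlational analysis: compare growth in long- vs. short-breastfed groups.
long_bf = [g for m, g in toddlers if m >= 15]
short_bf = [g for m, g in toddlers if m < 15]
print(f"mean growth, breastfed 15+ months:      {mean(long_bf):.2f}")
print(f"mean growth, breastfed under 15 months: {mean(short_bf):.2f}")
# The long-breastfed group looks worse off, even though breastfeeding was
# never a cause of poor growth anywhere in this simulation.
```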

The Third Variable Problem: A good example of this problem emerged when numerous epidemiological studies reported that women receiving hormone replacement therapy (HRT) experienced a significantly lower incidence of coronary heart disease (CHD). This correlation was found in all but a few studies after the initial report, resulting in an increase in “preventive” HRT being prescribed for post-menopausal women. Re-analysis of the data from these studies, this time controlling for socio-economic status, showed that women in higher socio-economic groups were more likely to be taking HRT; these same women also had better diets and were more likely to exercise regularly, which accounted for their lower incidence of CHD. Later, in fact, scientists conducting randomized controlled studies demonstrated that HRT caused a small but statistically significant increase in the risk of CHD (Lawlor, Davey Smith, & Ebrahim, 2004).
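
The same kind of toy simulation works for the third variable problem. In the sketch below (again with invented numbers, loosely patterned on the HRT story rather than on any actual data set), socio-economic status drives both HRT use and heart health. The raw comparison makes HRT look protective, but stratifying by the confounder makes the apparent benefit vanish:

```python
import random

# Toy illustration of a third-variable (confounding) problem.
# All probabilities are invented for demonstration purposes.
random.seed(2)

women = []
for _ in range(10_000):
    high_ses = random.random() < 0.5
    # Socio-economic status drives BOTH variables; HRT itself does nothing here.
    on_hrt = random.random() < (0.6 if high_ses else 0.2)
    chd = random.random() < (0.05 if high_ses else 0.15)
    women.append((high_ses, on_hrt, chd))

def chd_rate(group):
    return sum(1 for _, _, chd in group if chd) / len(group)

on = [w for w in women if w[1]]
off = [w for w in women if not w[1]]
print(f"CHD rate on HRT:  {chd_rate(on):.3f}")    # looks protective...
print(f"CHD rate off HRT: {chd_rate(off):.3f}")

# Stratify by the confounder and the apparent "benefit" disappears.
for ses in (True, False):
    stratum = [w for w in women if w[0] == ses]
    h = [w for w in stratum if w[1]]
    nh = [w for w in stratum if not w[1]]
    label = "high" if ses else "low"
    print(f"SES={label}: HRT {chd_rate(h):.3f} vs. no HRT {chd_rate(nh):.3f}")
```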

Anecdotal Evidence vs. Replicated, Peer-Reviewed Research  

How often have we read statements such as, “Suzie J. from Richmond, VA used our product for just six months and reported a 50 lb. weight loss!”, or “How well does our reading program work? Just ask Timmy G. of Bakersfield, CA! His grades went from C’s to A’s and B’s in less than one year!” Such anecdotal evidence is used primarily to sell products and services to a public too busy or distracted to seek out scientific evidence of their value. Given the ubiquitous nature of these claims, they provide a great way to teach students to think critically using examples they can relate to.

For example, nearly all of my students are familiar with the phrase, “Hooked-on-Phonics worked for me!” Anyone alive in the 1980s or 1990s was inundated by television and radio commercials in which cute little kids sang the praises of the reading program, which was inspired by a father who wanted to help his struggling child improve his reading skills. The company’s subsequent marketing of a product that seemed to make up for years of incompetence in the public schools resulted in millions of dollars in sales, and a stern backlash by those in the educational community (and later the Federal Trade Commission) who demanded evidence of its effectiveness.

One of my favorite exercises for my students is to ask them to consider the effectiveness of such anecdotes and testimonials, and how they differ from peer-reviewed research. To emphasize this point, I ask them to consider how they might carry out a study in which they tested the effectiveness of a program like Hooked-on-Phonics, and require them to devise an experiment from start to finish, considering possible correlates (e.g., parental involvement) that might affect the outcome of studies that did not include randomized trials. After they complete this activity, my students often begin asking their own questions. Two of the most frequent are: 1.) “Why isn’t there a law requiring businesses to prove the effectiveness of their products before being allowed to make claims and sell them?” and, 2.) “Why doesn’t the general public demand that manufacturers show proof that their products work before they buy them?”
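
One way to drive the point home is to simulate the very study the students design. The following sketch is a purely hypothetical toy model (no real data, and no claim about the actual program): the reading program contributes nothing, parental involvement contributes everything, and only the randomized comparison reveals that fact:

```python
import random
from statistics import mean

# Hypothetical toy model: the reading program has zero real effect, but
# involved parents both buy the program and help their children improve.
random.seed(3)

def reading_gain(involved_parents, uses_program):
    gain = random.gauss(5, 2)
    if involved_parents:
        gain += 10        # the real cause of improvement in this toy model
    return gain           # note: uses_program adds nothing

# Observational "study": involved parents are more likely to buy the program.
obs_program, obs_none = [], []
for _ in range(2000):
    involved = random.random() < 0.5
    uses = random.random() < (0.7 if involved else 0.2)
    (obs_program if uses else obs_none).append(reading_gain(involved, uses))

# Randomized trial: program assignment is independent of parenting.
rct_program, rct_none = [], []
for _ in range(2000):
    involved = random.random() < 0.5
    uses = random.random() < 0.5
    (rct_program if uses else rct_none).append(reading_gain(involved, uses))

print(f"observational: program {mean(obs_program):.1f} vs. none {mean(obs_none):.1f}")
print(f"randomized:    program {mean(rct_program):.1f} vs. none {mean(rct_none):.1f}")
```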

The Importance of Control Groups and Placebos

Aside from the possibility of dishonesty, one of the reasons that anecdotes and testimonials are of little scientific value is that there is no basis for comparison of the person reporting success to the general public. In other words, if exposure to a variable is associated with a change in another variable, how do we know that the change wasn’t the result of chance? Would the change have occurred anyway, without exposure to the other variable? To answer these questions, we must consider the importance of control groups.

Particularly important in the social sciences, control groups allow researchers to eliminate and/or isolate specific variables in their studies. They allow researchers access to the behavior and/or condition of a group of people who are left in their “natural state,” never exposed to the causal agent or condition. In many experiments, a host of possible causes for phenomena must be narrowed down to one probable cause, which isn’t easy when you consider that many variables can be affected by such things as the testing environment, researcher bias, and even biological differences between subjects.

Although those of us who work in the sciences take for granted the importance of control groups, a surprising percentage of my students must be reminded to demand them when evaluating claims. For many, controls seem counter-intuitive. “If something works, it just works,” they seem to argue. But critical thinking requires us to redefine “works,” and to evaluate claims of effectiveness more closely by comparing treated to non-treated individuals.

A highly effective tool when using control groups is the placebo. From the Latin for “I shall please,” a placebo is a fake form of the suspected causal agent or condition. In medicine, the classic placebo is an inert substance in the form of a pill. In psychology, the placebo condition might be a fake form of therapy. In any case, its use reflects the knowledge that expectation on the part of the subjects can change their behaviors in ways that mask the effect of the suspected cause. When subjects know only that they might receive either the medication or a placebo, expectation affects both groups equally, and any difference between the groups can then be attributed to the medicine.
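
A short simulation can also illustrate why the placebo group matters. In the hypothetical sketch below, every subject improves somewhat through expectation and natural recovery, and an imaginary drug adds a small effect on top; only the comparison between groups isolates the drug’s true contribution:

```python
import random
from statistics import mean

# Hypothetical placebo-controlled comparison; all effect sizes are invented.
random.seed(4)

def symptom_improvement(gets_real_drug):
    change = random.gauss(3, 2)       # expectation + natural recovery: present in BOTH groups
    if gets_real_drug:
        change += random.gauss(2, 1)  # the true drug effect
    return change

placebo_group = [symptom_improvement(False) for _ in range(500)]
drug_group = [symptom_improvement(True) for _ in range(500)]

print(f"improvement with placebo: {mean(placebo_group):.1f}")
print(f"improvement with drug:    {mean(drug_group):.1f}")
# Without the placebo group, the entire improvement in the drug group would
# wrongly be credited to the medication.
```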

The placebo has also opened up inquiry into the fascinating world of the Placebo Effect, which Dr. Harriet Hall suggests we begin referring to as the Placebo Phenomenon, reflecting the fact that it is not a true treatment effect and that its roots are psychological. A recent series of articles in Psychology Today devoted to examining the science behind placebos addresses a number of findings, including some of the exaggerated claims of their effectiveness, as well as the controversial issue of doctors prescribing placebos to patients with real (albeit psychologically induced) complaints.

In closing, I would like to remind anyone (professional or otherwise) attempting to warn people against the over-use of common sense that scientific thinking is often quite unnatural and counter-intuitive. Therefore, a good measure of patience and repetition is usually required before people become comfortable giving up instinct, hunches, and personal experience in favor of the more artificial practice of critical thinking.

References

  • Lawlor, D. A., Davey Smith, G., & Ebrahim, S. (2004). Commentary: The hormone replacement-coronary heart disease conundrum: Is this the death of observational epidemiology? International Journal of Epidemiology, 33(3), 464–467.
  • Marquis, G. S., Habicht, J. P., Lanata, C. F., Black, R. E., & Rasmussen, K. M. (1997). Association of breastfeeding and stunting in Peruvian toddlers: an example of reverse causality. International Journal of Epidemiology, 26(2), 349-356.
  • Nathans, A. (1994, December 15). Hooked on Phonics settles with FTC on advertising claims. The Los Angeles Times. Retrieved from http://articles.latimes.com/1994-12-15/news/mn-9369_1_hooked-on-phonics-program.
  • Novella, S. (2007, November/December). The Anti-Vaccination Movement. Skeptical Inquirer, Volume 31.6. Retrieved from http://www.csicop.org/si/show/anti-vaccination_movement/.
  • The Placebo Effect. (2012, January). Psychology Today. Retrieved from http://www.psychologytoday.com/collections/201201/the-placebo-effect.

     

Sheldon W. Helms is an associate professor of psychology at Ohlone College in Fremont, CA. He has taught psychology for more than 16 years, and teaches a wide range of topics including Abnormal Psychology, Experimental Psychology, Social Psychology, and Human Sexuality. He serves on the Board of Directors for the Bay Area Skeptics and is the founder of the Ohlone Psychology Club Speaker Series, through which he regularly hosts top name speakers in science and skepticism.