Focus on Science: Expand Your Science Knowledge… Relate It to Autism Treatment… Becoming a Savvy Consumer

Written by Daniel Mruzek, PhD, BCBA-D

In science, “validity” refers to the degree to which a study supports a “hypothesis” (i.e., a proposed assertion made by a scientist) about how two or more events relate to one another (e.g., “Treatment X” and improved speech in children with autism). When the validity of a study is high, we can interpret the results with guarded confidence. Conversely, if validity is low, study results must be viewed with great caution or thrown out altogether. This is one of the reasons why the scientific method is such a powerful way of solving problems. In science, we are not beholden to the powers of persuasion beyond what we determine to be empirically valid. Expanding one’s science knowledge, therefore, is a very liberating exercise, especially when faced with countless treatment options proposed by practitioners and entrepreneurs. In this series, we will look at common threats to validity so that you can watch for these threats as you make decisions about autism treatments.

Regression to the Mean

Have you ever heard the expression “it’s darkest right before the dawn”? This expression, which refers to the belief that events are at their worst immediately prior to getting better, is reminiscent of a significant threat to the validity of treatment research: regression to the mean. In treatment research, regression to the mean is a statistical phenomenon in which: 1) a random “blip” in data occurs (e.g., an increase in tantrums); 2) a treatment is applied (e.g., a weighted vest); and 3) subsequently, the “blip” in data randomly returns to baseline (or “regresses to the mean”), as random fluctuations do all the time. In these cases, the treatment may look effective, though, in actuality, it was not.

Consider this example: I ask my 2nd grade son to pick a card out of a deck of 52 playing cards without looking, and he pulls out a queen of hearts. I then tell him, “If I say ‘abracadabra’, the next card you pick will be lower than a queen.” I say “abracadabra”, and he picks out a seven of clubs. Should my son be amazed at the power of my “magic”? Of course not! It was not the effectiveness of any magic that led to his selection of a card lower than a queen but, considering that only a king and an ace are higher, it was simply the statistical odds working in my favor.

A similar phenomenon can happen when we research potential autism treatments or apply them in “real life.” Researchers conducting studies on autism treatments often recruit participants who demonstrate problems (e.g., classroom inattention, sleep difficulty, self-injurious behaviors) when those problems are at extreme, or at least elevated, levels. Likewise, as parents and practitioners, we often employ new treatments for particular problems when those problems are particularly intense or frequent.
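The card trick’s odds can be checked directly. The following sketch (not part of the original column; the rank ordering and variable names are illustrative) counts, among the 51 cards remaining after the queen of hearts is drawn, how many are strictly lower than a queen:

```python
from fractions import Fraction

# Build a 52-card deck as (rank, suit) pairs; ranks ordered 2..10, J, Q, K, A.
ranks = list(range(2, 11)) + ["J", "Q", "K", "A"]
rank_value = {r: i for i, r in enumerate(ranks)}  # 2 is lowest, A is highest
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

# The first card drawn was the queen of hearts; remove it from the deck.
deck.remove(("Q", "hearts"))

# Count remaining cards strictly lower than a queen (2 through jack).
lower = [card for card in deck if rank_value[card[0]] < rank_value["Q"]]
p_lower = Fraction(len(lower), len(deck))
print(p_lower, float(p_lower))  # 40/51, about 0.78
```

With 40 of the 51 remaining cards below a queen, the “magic” succeeds roughly four times out of five by chance alone.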
But note that, just like our card example above, when a particular phenomenon is measured as elevated at a point in time, there is an increased probability that it will return to baseline (i.e., the average or the “mean”) in the future. And, if some “treatment” is applied in the meantime, the illusion of effectiveness may result. Researchers have a variety of methodological safeguards that they can employ to counter threats to validity like regression to the mean, including the use of randomized control groups (i.e., “no treatment” comparison groups) and treatment reversal designs (i.e., studies with planned, temporary discontinuations of the treatment to see if the problem reappears). These kinds of safeguards increase our confidence that the treatment in question is actually effective and not a game of statistical probabilities. Persons with autism, and their families, deserve nothing less. In the next issue, we will look at the limitations, and the potential for deception, found in the use of customer testimonials in the marketing of autism treatments.
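The recruitment-at-extremes effect can also be demonstrated with a small simulation. This hypothetical sketch (the numbers and names, such as `true_mean`, are invented for illustration, not drawn from any study) gives each child a stable average tantrum count plus random week-to-week noise, recruits only those who look worst in week one, applies an inert “treatment,” and measures again:

```python
import random

random.seed(0)

# Hypothetical population: each child's weekly tantrum count varies randomly
# around the same stable personal average (true_mean), with noise (sd).
N = 10_000
true_mean = 10.0
sd = 5.0

week1 = [random.gauss(true_mean, sd) for _ in range(N)]

# Recruit only the "extreme" cases: children with a high week-1 count.
recruited = [i for i in range(N) if week1[i] > true_mean + sd]

# Apply a sham treatment that does nothing at all, then measure week 2.
week2 = [random.gauss(true_mean, sd) for _ in range(N)]

avg_before = sum(week1[i] for i in recruited) / len(recruited)
avg_after = sum(week2[i] for i in recruited) / len(recruited)
print(f"recruited group, week 1: {avg_before:.1f} tantrums")
print(f"recruited group, week 2: {avg_after:.1f} tantrums")
# Week 2 falls back toward the true mean even though the treatment was inert.
```

The recruited group improves substantially on re-measurement purely because it was selected at a random high point, which is exactly the illusion that control groups and reversal designs are built to expose.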