Focus on Science: "Verification" and the Peer Review Process

Written by Daniel Mruzek, PhD, BCBA-D

If one wanted to market a scientifically unproven intervention, device, or pill as a valid autism treatment to families affected by autism, how would one go about it? Glossy pictures? Glowing testimonials? Miracle claims? Hyped social media pitches? Charming infomercials? Answer: All of the above.

And, here’s another marketing strategy: portraying one’s product as having scientific validation when, in fact, such validation does not exist. To do this, one might make references to “scientific evidence” in material that, upon systematic inspection, is less than convincing.

I was reminded of this when I recently reviewed a web page that boldly claims, "SCIENTIFIC RESEARCH Verifies The Son-Rise Program® WORKS!" and that "Findings support the efficacy of parent-delivered SRP intervention for promoting social-communicative behavior in children with autism spectrum disorders."

What are these findings? On the website, the Son-Rise marketers provide a link to a key source of their "verification": a paper entitled "Training Parents to Promote Communication and Social Behavior in Children with Autism: The Son-Rise Program." This paper, written by a trio of Northwestern University researchers, presents a study of the purported benefits of intervention delivered by 35 parents of children with autism who participated in a five-day parent-training course on Son-Rise Program methods, as well as an advanced follow-up course 3–12 months later. Parents completed the Autism Treatment Evaluation Checklist (ATEC), a rating scale that contains items on communication, sociability, cognition, physical status, and behavior.

The authors divided the 35 parents into three groups based on how many hours of intervention the parents reported giving their child each week during the interval between their first and second Son-Rise trainings (i.e., no intervention [11 parents], 1–19 hours [13 parents], 20 or more hours [11 parents]). The authors report statistically significantly higher ATEC scores at the second ATEC completion relative to the first, and they suggest that these higher scores reflect real improvements in communication, social skills, and sensory and cognitive awareness. They go on to point out that children with greater gains were more likely to have had more hours of parent-administered SRP.

As linked on the Son-Rise website, this study is typeset like a published research article, prompting me to try to determine which peer-reviewed journal had published it. Through email correspondence, the third author, Cynthia K. Thompson, reported that the study had not been published because the team had decided to collect additional data prior to submission for peer review. In other words, this study is a "work in progress" and certainly not a verification of treatment effectiveness. In fact, the practice of repeatedly analyzing results prior to the close of data collection is itself problematic from a scientific standpoint, because it involves conducting many analyses that often yield varying results but are never reported (see Simmons, Nelson, & Simonsohn, 2011).
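To make that statistical concern concrete, here is a minimal simulation sketch (not from the article, and purely illustrative) of the "repeated peeking" problem described by Simmons, Nelson, and Simonsohn (2011): when two groups are drawn from the same distribution (i.e., there is no true treatment effect) but are compared again after every new batch of participants, the chance of finding at least one "significant" difference climbs well above the nominal 5%. The group sizes, number of interim looks, and alpha level below are assumptions chosen only for illustration.

```python
# Illustrative simulation (assumed parameters, not taken from the study):
# repeatedly testing as data accumulate inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations = 2000
peeks = range(10, 101, 10)   # analyze after every 10 participants per group
alpha = 0.05

false_positives = 0
for _ in range(n_simulations):
    # Both "treatment" and "control" scores come from the same distribution,
    # so any significant difference is a false positive by construction.
    treatment = rng.normal(0, 1, 100)
    control = rng.normal(0, 1, 100)
    for n in peeks:
        p = stats.ttest_ind(treatment[:n], control[:n]).pvalue
        if p < alpha:        # stop and "report" as soon as p dips below .05
            false_positives += 1
            break

print(f"Nominal false-positive rate: {alpha:.0%}")
print(f"Observed rate with repeated peeking: {false_positives / n_simulations:.0%}")
```

Running this sketch typically yields an observed rate several times the nominal 5%, which is the core warning of the Simmons et al. paper.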

One of the mechanisms that make science such a powerful engine for progress is its reliance on the peer review process. The scientific method requires that, when scientists make an assertion (e.g., "This treatment works!"), they accept a responsibility to show other scientists how they arrived at their conclusions, with enough specificity that others can replicate the study. It is through this process of peer review that faulty assertions about the data are challenged and, hopefully, rejected in short order.

Typically, in the peer review process, an editor reviews a manuscript and, if it is deemed appropriate, shares the manuscript with a team of reviewers with demonstrated expertise in the relevant subject area. In many cases, these reviewers are "blind" to the identity of the authors and vice versa, so as to minimize personal biases (e.g., affiliations, personal grudges). The reviewers are charged with the task of evaluating the contents of the manuscript on the basis of scientific merit, including the methodology, the statistical analyses of the data, and the logic of the authors' conclusions. The reviewers then describe, in writing, their opinions regarding the strengths and weaknesses of the study and make a recommendation regarding publication. The editor synthesizes this feedback and provides a summary to the author(s). In many cases, the editor will reject the manuscript for publication altogether. In other cases, he or she may require the authors to revise the manuscript, acknowledge limitations, temper conclusions, or make other substantive changes prior to publication.

In the case of the manuscript written by Thompson and her colleagues, I suspect that, if it were submitted to a journal with a legitimate peer review process, a multitude of questions would be raised about it, including:

  • participant recruitment (35 self-selected parents out of a pool of 430 parents, many of whom participated in the first training but apparently did not return for more);
  • group distribution (non-random group assignment);
  • the outcome measure (one brief checklist completed by parents);
  • treatment fidelity (no way of verifying the quality or quantity of actual treatment), and
  • control for placebo effect, expectancy bias or any number of potential threats to the validity of responses.

And, despite some effort on the part of the authors to control for this, there is no real way of knowing what other interventions the children received during the interval between their parents' first and second Son-Rise trainings.

Of course, the scientists who make up peer review committees are vulnerable to the same human frailties as the rest of humanity (e.g., jealousy, ego, bias, profit motive); however, the communal nature of the process, along with another scientific safeguard, replication of results by others, helps to minimize the degree to which these frailties affect decisions regarding the quality of a study and our confidence in the results. The scientific method is far from perfect, but it is probably the best game in town for vetting new interventions.

Parents and other consumers of product pitches can watch for treatment claims that look as though they have been established through the scientific process but in actuality fall short. Discuss potential treatment options with licensed and/or board certified experts whom you trust. Practice skepticism, especially when fantastical claims are made. Use resources such as ASAT's Treatment Summaries for quick reference. Families affected by autism deserve honest, direct communication about the state of the science for treatment options. Accept nothing less.

References

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.

Citation for this article:

Mruzek, D. W. (2012). Focus on science: “Verification” and the peer review process. Science in Autism Treatment, 9(3), 18-19.
