To identify relevant scientific studies on the effects of each intervention to include on the ASAT website, a search was conducted of standard databases of published research in medicine (Medline), psychology (PsycInfo), and education (ERIC). In addition, previous reviews of research on autism interventions were consulted, including Educating Children with Autism (National Research Council, 2001) and reports from the Cochrane Database of Systematic Reviews. Practice guidelines developed by organizations such as the American Academy of Pediatrics and the American Speech-Language-Hearing Association were also examined. Finally, writings by the developers of each intervention, whether in print or on the World Wide Web, were reviewed.
Research studies were evaluated based on criteria developed by the clinical psychology division of the American Psychological Association (Chambless et al., 1996). Summaries of the research were sent to anonymous reviewers for feedback prior to being posted on this website. The summaries are intended to provide a brief overview of the evidence on an intervention, rather than to describe each individual study in detail or to list every individual report on an intervention.
The American Psychological Association criteria focus on studies that have been published in peer-reviewed journals. In such journals, the editor receives research reports from authors and sends each report to several experts, whose identities are usually withheld from the authors so that the reviewers can give honest feedback. The experts scrutinize the adequacy of the research methodology, the soundness of the conclusions, and the contribution to scientific knowledge. The editor uses the expert feedback to make a recommendation for or against publishing the report and passes along the comments and recommendation to the authors. While not a perfect process, peer review increases the likelihood that published reports are reliable and useful sources of information. Many autism treatments, however, do not receive this kind of review and are instead publicized in sources such as the popular media, websites, advertisements, and workshops. Peer-reviewed reports are generally much more trustworthy than reports in the popular media and are emphasized in the research reviews on the ASAT website.
The American Psychological Association criteria take into account two kinds of research designs: between-group and single-subject studies. In one between-group approach, called the randomized clinical trial (RCT), participants are randomly assigned to two or more groups. One group receives the treatment, and the other is untreated (or receives an alternate treatment); then the outcomes of each group are statistically compared. This design has yielded strong evidence for the efficacy of some interventions, such as the use of the medication risperidone to reduce aggression (McCracken et al., 2002), which is a problem for some (but not all) individuals with ASD. The design also has been used to show that some interventions are ineffective. For example, 14 RCTs evaluated injections of secretin, which is a hormone that aids with digestion and which was proposed as a possible cure for ASD. Because none of the studies detected a benefit from secretin (Williams, Wray, & Wheeler, 2005), researchers now consider this intervention to be discredited.
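The RCT logic described above, random assignment followed by a statistical comparison of group outcomes, can be sketched in a few lines of Python. All participant IDs, group sizes, and outcome scores below are invented for illustration; real trials use validated outcome measures and formal statistical analyses.

```python
import math
import random
import statistics

random.seed(0)  # reproducible illustration

# 20 hypothetical participants, randomly assigned to two groups of 10.
# Random assignment is the defining feature of an RCT.
ids = [f"P{i:02d}" for i in range(20)]
random.shuffle(ids)
treatment_ids, control_ids = ids[:10], ids[10:]

# Invented post-treatment outcome scores (e.g., aggression ratings;
# lower is better). These numbers are made up for this sketch.
treatment = [12, 9, 11, 8, 10, 7, 9, 11, 8, 10]
control = [15, 14, 12, 16, 13, 15, 14, 12, 16, 13]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(treatment, control)
print(f"treatment mean: {statistics.mean(treatment):.1f}")
print(f"control mean:   {statistics.mean(control):.1f}")
print(f"Welch's t:      {t:.2f}")
```

A large-magnitude t statistic (judged against the appropriate t distribution) would indicate that the difference between group means is unlikely to be due to chance alone, which is the sense in which outcomes "are statistically compared."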
Quasi-experimental designs, in which groups are matched on relevant pretreatment variables but participants are not randomly assigned, are often used because of the ethical and practical difficulties of random assignment for a treatment as lengthy and complex as early, intensive ABA. These designs can also yield important evidence of an intervention's efficacy. Such studies (e.g., Lovaas, 1987) compare children who received the intervention to similar children who received other services, with assignment determined by factors such as therapist availability (e.g., whether qualified therapists have openings to begin early, intensive ABA with a new child). However, RCTs are still needed to confirm the results of quasi-experiments.
Single-case designs compare a baseline phase in which an individual receives no treatment to one or more intervention phases in which the individual does receive treatment. Data are collected continuously on the outcome measure. Consistent improvement on the outcome measure during intervention relative to baseline indicates that the treatment was effective for that individual. Many ABA teaching methods have been confirmed by multiple single-case studies (Goldstein, 2002; McConnell, 2002). Other interventions have been disconfirmed by single-case studies. For example, in Facilitated Communication (FC), the practitioner holds a person’s hands, wrists, or arms to guide that person to type messages on a keyboard or a board with printed letters. Proponents of this intervention assert that this method produces sudden, dramatic gains in communication by individuals with ASD, but numerous single-case studies, involving hundreds of individuals with ASD, revealed that the practitioner rather than the individual with ASD controls the communication that occurs during FC (Mostert, 2001). Thus, the apparent gains in communication do not reflect actual improvement.
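The single-case logic, continuous measurement across a baseline phase and an intervention phase for one individual, can be illustrated with a short sketch. The session counts below are invented, and the "non-overlap" check is just one simple criterion sometimes applied when inspecting single-case data, not a full analysis.

```python
import statistics

# Hypothetical session-by-session data for one individual: frequency of a
# target skill per session. All numbers are invented for illustration.
baseline = [2, 3, 2, 1, 3]        # no treatment in place
intervention = [5, 6, 8, 9, 9, 10]  # teaching procedure in place

# A simple visual-inspection criterion: every intervention data point
# exceeds the highest baseline data point (no overlap between phases).
no_overlap = min(intervention) > max(baseline)

print(f"baseline mean:     {statistics.mean(baseline):.1f}")
print(f"intervention mean: {statistics.mean(intervention):.1f}")
print(f"phases non-overlapping: {no_overlap}")
```

Consistent separation between the phases, replicated across individuals or behaviors, is the kind of pattern that supports a conclusion that the treatment was effective for that individual.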
When possible, intervention studies should be conducted in a double-blind, placebo-controlled manner such that study participants and practitioners are unaware of whether the participants are receiving treatment or placebo. For example, pills that contain the active ingredient can be made to appear identical to placebo pills. Investigators can postpone telling participants and practitioners which pill they received until the completion of the study. Although this strategy is not viable for most behavioral or educational studies because the interventions cannot be disguised, it is feasible for most complementary and alternative medicine (CAM) treatments and was important in the studies on secretin.
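The blinding arrangement just described, where everyone in the study works with a neutral code and the key is withheld until data collection ends, can be sketched as follows. The participant IDs, the two-arm setup, and the treatment names are hypothetical placeholders.

```python
import random

random.seed(7)  # reproducible illustration

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

# A third party assigns each participant a coded pill ("A" or "B") and
# holds the key; clinicians and participants see only the codes.
assignment = {pid: random.choice(["A", "B"]) for pid in participants}
key = {"A": "active drug", "B": "placebo"}  # sealed until the study ends

# During the study, everyone works only with the blinded view:
blinded_view = dict(assignment)

# Unblinding happens only after all outcome data are collected:
unblinded = {pid: key[code] for pid, code in assignment.items()}
print(blinded_view)
```

Because neither participants nor practitioners can tell the arms apart while outcomes are being measured, expectancy effects cannot systematically favor the active treatment.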
Unfortunately, many interventions for ASD have support only from testimonials, surveys, or laboratory tests that are unvalidated for ASD research. (For example, some CAM practitioners conduct urinalyses and hypothesize, despite very limited evidence, that the results may reveal imbalances in neurotransmitters of the central nervous system.) Much of this work is presented in the popular media or on the Internet, but some of it does find its way into prestigious, peer-reviewed publications (e.g., Wakefield et al., 1998). Thus, it is necessary to scrutinize the research methodology to determine whether it meets the standards described above, and to require multiple peer-reviewed studies confirming the efficacy of an intervention rather than drawing conclusions from a single report.
While scientifically rigorous research with objective measures may eventually corroborate some anecdotal reports, it is also possible (and, historically, has more often been the case) that research will indicate that outcomes of the intervention are not significantly better than outcomes in control conditions. For example, FC and secretin each had many ardent proponents who gave stirring testimonials, yet both interventions were refuted by research. Thus, no matter how compelling they may sound, testimonials are not reliable evidence that an intervention for ASD is efficacious.
The most recent update of the ASAT research review was conducted in December 2009. If you believe relevant, peer-reviewed research with strong scientific designs has been overlooked in the research summaries, please contact firstname.lastname@example.org.