Journal of Experimental Education, volume 88, issue 2, pages 335-350

Changing Criterion Designs: Integrating Methodological and Data Analysis Recommendations

Publication type: Journal Article
Publication date: 2019-02-09
scimago Q1
SJR: 1.136
CiteScore: 6.7
Impact factor: 2.9
ISSN: 0022-0973, 1940-0683
Developmental and Educational Psychology
Education
Abstract
Changing criterion designs (CCD) are single-case experimental designs that entail a step-by-step approximation of the final level desired for a target behavior. Following a recent review on the des...
Odom S.L., Barton E.E., Reichow B., Swaminathan H., Pustejovsky J.E.
2018-08-01 citations by CoLab: 25 Abstract  
An increasing movement in single case research is to employ statistical analyses as one form of data analysis. Researchers have proposed different statistical approaches. The purpose of this paper is to examine the utility and discriminant validity of two novel types of between-case standardized effect size analyses with two existing systematic reviews. The between-case analyses found greater effect sizes for the studies in the object play review and smaller effect sizes for studies of sensory intervention, which were consistent with the overall conclusions reached in the original systematic reviews. These findings provide evidence of discriminant validity, although concerns remain around the methods' utility across different single case research designs. Future directions for research and development also are provided.
Ganz J.B., Ayres K.M.
2018-08-01 citations by CoLab: 58 Abstract  
Single-case experimental designs (SCEDs), or small-n experimental research, are frequently implemented to assess approaches to improving outcomes for people with disabilities, particularly those with low-incidence disabilities, such as some developmental disabilities. SCED has become increasingly accepted as a research design. As this literature base is needed to determine what interventions are evidence-based practices, the acceptance of SCED has resulted in increased critiques with regard to methodological quality. Recent trends include recommendations from a number of expert scholars and institutions. The purpose of this article is to summarize the recent history of methodological quality considerations, synthesize the recommendations found in the SCED literature, and provide recommendations to researchers designing SCEDs with regard to essential and aspirational standards for methodological quality. Conclusions include imploring SCED researchers to increase the quality of their experiments, with particular consideration regarding the applied nature of SCED research to be published in Research in Developmental Disabilities and beyond.
Dart E.H., Radley K.C.
2018-07-05 citations by CoLab: 33 Abstract  
Single-case data are frequently used in school psychology. In research, single-case designs allow experimenters to provide rigorous demonstrations of treatment effects on a smaller scale and with more precise measurement than traditional group experimental design. In practice, single-case data are used to evaluate the effects of school-based services to make decisions at the individual level within a multitiered system of support (MTSS). School psychology and related fields (e.g., special education) have worked to increase the rigor of single-case data by developing standards for single-case experimental design and developing robust single-case effect size statistics; however, in practice, single-case data are often collected with less experimental rigor and evaluated using visual analysis of a linear graph as opposed to quantitative effect sizes. This is concerning, as an emerging body of literature suggests that simple elements of the graphical display (e.g., ordinate axis scaling, ratio of X to Y axis length) can have a profound impact on effect size judgments made by visual analysts. Currently, there are no standards guiding the construction of linear graphs used to display single-case data. The purpose of this paper is to advance the perspective that our field must develop and adopt standards of linear graph construction or risk inaccurate decisions within an MTSS framework.
Barnard-Brak L., Richman D.M., Little T.D., Yang Z.
Behaviour Research and Therapy scimago Q1 wos Q1
2018-03-01 citations by CoLab: 20 Abstract  
Comparing visual inspection results of graphed data reveals inconsistencies in the interpretation of the same graph among single-case experimental design (SCED) researchers and practitioners. Although several investigators have disseminated structured criteria and visual inspection aids or strategies, inconsistencies in interpreting graphed data continue to exist even for individuals considered to be experts at interpreting SCED graphs. We propose a fail safe k metric that can be used in conjunction with visual inspection, and it can be used in vivo after each additional data point is collected within a phase to determine the optimal point in time to shift between phases (e.g., from baseline to treatment). Preliminary proof of concept data are presented to demonstrate the potential utility of the fail safe k metric with a sample of previously published SCED graphs examining the effects of noncontingent reinforcement on occurrences of problem behavior. Results showed that if the value of fail safe k is equal to or less than the number of sessions in the current phase, then the data path may not be stable and more sessions should be run before changing phases. We discuss the results in terms of using the fail safe k as an additional aid for visual inspection of SCED data.
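The abstract above gives the decision rule but not the computation of the fail safe k itself (which is defined in the original paper). Taking k as a given input, the quoted rule can be sketched as a minimal Python helper; the function name is an illustrative assumption, not from the source:

```python
def phase_change_supported(fail_safe_k, sessions_in_phase):
    """Decision rule quoted in the abstract: if fail safe k is equal to
    or less than the number of sessions in the current phase, the data
    path may not be stable and more sessions should be run before
    changing phases.  Returns True when a phase change is supported."""
    return fail_safe_k > sessions_in_phase

print(phase_change_supported(fail_safe_k=12, sessions_in_phase=8))  # True
print(phase_change_supported(fail_safe_k=5, sessions_in_phase=8))   # False
```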
Ledford J.R., Lane J.D., Severini K.E.
Brain Impairment scimago Q2 wos Q3
2017-10-02 citations by CoLab: 111 Abstract  
Single case designs (SCDs) allow researchers to objectively evaluate the impact of an intervention by repeatedly measuring a dependent variable across baseline and intervention conditions. Rooted in baseline logic, SCDs evaluate change over time, with each participant serving as his or her own control during the course of a study. Formative and summative evaluation of data is critical to determining causal relations. Visual analysis involves evaluation of level, trend, variability, consistency, overlap, and immediacy of effects within (baseline and intervention) and between conditions (baseline to intervention). The purpose of this paper is to highlight the process for visually analysing data collected in the context of a SCD and to provide structures and procedures for evaluating the six data characteristics of interest. A checklist with dichotomous responses (i.e., yes/no) is presented to facilitate implementation and reporting of systematic visual analysis.
Ledford J.R.
American Journal of Evaluation scimago Q1 wos Q2
2017-08-31 citations by CoLab: 23 Abstract  
Randomization of large numbers of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the treatment(s) of interest. SCDs are particularly relevant when a dependent variable of interest can be measured repeatedly over time across two conditions (e.g., baseline and intervention). Rather than using randomization of large numbers of participants, SCD researchers use careful and prescribed ordering of experimental conditions, which allows researchers to improve internal validity by ruling out alternative explanations for behavior change. This article describes SCD logic, control of threats to internal validity, the use of randomization and counterbalancing, and data analysis in the context of single case research.
Levin J.R., Ferron J.M., Gafurov B.S.
Journal of School Psychology scimago Q1 wos Q1
2017-08-01 citations by CoLab: 39 Abstract  
A number of randomization statistical procedures have been developed to analyze the results from single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences among them in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the powers of all of the procedures were severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to successfully anticipate the specific alternative effect types, it is possible for them to formulate adjusted versions of the original randomization-test procedures that can recapture substantial proportions of the lost powers.
Pustejovsky J.E., Ferron J.M.
2017-05-25 citations by CoLab: 46
Natesan P., Hedges L.V.
Psychological Methods scimago Q1 wos Q1
2017-04-13 citations by CoLab: 26 Abstract  
Although immediacy is one of the necessary criteria to show strong evidence of a causal relation in single case designs (SCDs), no inferential statistical tool is currently used to demonstrate it. We propose a Bayesian unknown change-point model to investigate and quantify immediacy in SCD analysis. Unlike visual analysis that considers only 3-5 observations in consecutive phases to investigate immediacy, this model considers all data points. Immediacy is indicated when the posterior distribution of the unknown change-point is narrow around the true value of the change-point. This model can accommodate delayed effects. Monte Carlo simulation for a 2-phase design shows that the posterior standard deviations of the change-points decrease with increase in standardized mean difference between phases and decrease in test length. This method is illustrated with real data.
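The core idea — a posterior over an unknown change-point that narrows when the phase shift is abrupt — can be illustrated with a deliberately simplified sketch. This is not the authors' model: it assumes normal observations with known unit variance, plug-in phase means, and a uniform prior over candidate change-points, and it enumerates the discrete posterior directly:

```python
import math

def changepoint_posterior(series, sigma=1.0):
    """Discrete posterior over the change-point of a two-phase series,
    assuming normal data with known sigma, plug-in phase means, and a
    uniform prior over candidate change-points 1..n-1.  A posterior
    concentrated on one point suggests an immediate effect."""
    n = len(series)
    logliks = []
    for cp in range(1, n):                      # candidate change-points
        ll = 0.0
        for phase in (series[:cp], series[cp:]):
            mu = sum(phase) / len(phase)        # plug-in phase mean
            ll += sum(-((x - mu) ** 2) / (2 * sigma ** 2) for x in phase)
        logliks.append(ll)
    m = max(logliks)                            # stabilize the exponentials
    weights = [math.exp(ll - m) for ll in logliks]
    z = sum(weights)
    return {cp: w / z for cp, w in zip(range(1, n), weights)}

post = changepoint_posterior([3, 3, 4, 3, 9, 10, 9, 10])
print(max(post, key=post.get))  # 4 (the true change-point)
```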
Tincani M., Travers J.
Remedial and Special Education scimago Q1 wos Q1
2017-04-04 citations by CoLab: 57 Abstract  
Demonstration of experimental control is considered a hallmark of high-quality single-case research design (SCRD). Studies that fail to demonstrate experimental control may not be published because researchers are unwilling to submit these papers for publication and journals are unlikely to publish negative results (i.e., the file drawer effect). SCRD studies comprise a large proportion of intervention research in special education. Consequently, the existing body of research, comprised mainly of studies that show experimental control, may artificially inflate efficacy of interventions. We discuss how experimental control evolved as the standard for high-quality SCRD; why, in the era of evidence-based practice, rigorous studies that fail to fully demonstrate experimental control are important to include in the body of published intervention research; the role of non-replication studies in discovering intervention boundaries; and considerations for researchers who wish to conduct and appraise studies that fail to yield full experimental control.
Brogan K.M., Falligant J.M., Rapp J.T.
Behavior Modification scimago Q1 wos Q3
2017-02-15 citations by CoLab: 24 Abstract  
Adolescents who have been adjudicated for illegal sexual behavior may receive treatment that requires attending group therapy sessions and classes. For some adolescents, nonsexual problem behavior (e.g., verbal outbursts, noncompliance) interferes with their ability to participate in group treatment. Researchers used a multiple-baseline across groups design with an embedded changing criterion design to evaluate the effects of an interdependent group contingency for decreasing disruptive behavior in adolescents across two therapy groups. Results indicated that the procedure was effective in reducing disruptive behavior emitted by adolescents in group therapy. Measures of social validity indicated that both the therapists and students viewed the overall procedures and outcomes as acceptable. Implications for interdependent group contingencies across diverse populations are discussed.
Todd T., Reid G., Butler-Kisber L.
2016-08-10 citations by CoLab: 32 Abstract  
Individuals with autism often lack motivation to engage in sustained physical activity. Three adolescents with severe autism participated in a 16-week program and each regularly completed 30 min of cycling at the end of program. This study investigated the effect of a self-regulation instructional strategy on sustained cycling, which included self-monitoring, goal setting, and self-reinforcement. Of particular interest was the development of self-efficacy during the physical activity as a mediator of goal setting. A multiple baseline changing criterion design established the effectiveness of the intervention. The results suggest that self-regulation interventions can promote sustained participation in physical activity for adolescents with severe autism.
Cervantes C.M., Porretta D.L.
2016-08-10 citations by CoLab: 34 Abstract  
The purpose of this study was to examine the impact of an after school physical activity intervention on adolescents with visual impairments within the context of Social Cognitive Theory. Four adolescents with visual impairments (1 female, 3 males) between 14 and 19 years of age from a residential school for the blind served as participants. We used a range-bound changing criterion single-subject design. Physical activity was measured using ActiGraph accelerometers. Questionnaires were used to obtain information on selected social cognitive theory constructs. Results show that the intervention exerted functional control over the target behaviors (e.g., leisure-time physical activity) during intervention phases. Similarly, changes in scores for selected social cognitive constructs, in particular for outcome expectancy value, suggest a positive relationship between those constructs and physical activity behavior. No maintenance effects were observed.
Shadish W.R., Zelinsky N.A., Vevea J.L., Kratochwill T.R.
2016-05-13 citations by CoLab: 99 Abstract  
The published literature often underrepresents studies that do not find evidence for a treatment effect; this is often called publication bias. Literature reviews that fail to include such studies may overestimate the size of an effect. Only a few studies have examined publication bias in single-case design (SCD) research, but those studies suggest that publication bias may occur. This study surveyed SCD researchers about publication preferences in response to simulated SCD results that show a range of small to large effects. Results suggest that SCD researchers are more likely to submit manuscripts that show large effects for publication and are more likely to recommend acceptance of manuscripts that show large effects when they act as a reviewer. A nontrivial minority of SCD researchers (4% to 15%) would drop 1 or 2 cases from the study if the effect size is small and then submit for publication. This article ends with a discussion of implications for publication practices in SCD research.
Manolov R., Tanious R.
Journal of Behavioral Education scimago Q1 wos Q3
2024-06-19 citations by CoLab: 2 Abstract  
Overlap is one of the data aspects that are expected to be assessed when visually inspecting single-case experimental designs (SCED) data. A frequently used quantification of overlap is the Nonoverlap of All Pairs (NAP). The current article reviews the main strengths and challenges when using this index, as compared to other nonoverlap indices such as Tau and the Percentage of data points exceeding the median. Four challenges are reviewed: the difficulty in representing NAP graphically, the presence of a ceiling effect, the disregard of trend, and the limitations in using p-values associated with NAP. Given the importance of complementing quantitative analysis and visual inspection of graphed data, straightforward quantifications and new graphical elements for the time-series plot are proposed as options for addressing the first three challenges. The suggestions for graphical representations (representing within-phase monotonic trend and across-phases overlaps) and additional numerical summaries (quantifying the degree of separation in case of complete nonoverlap or the proportion of data points in the overlap zone) are illustrated with two multiple-baseline data sets. To make it easier to obtain the plots and quantifications, the recommendations are implemented in a freely available user-friendly website. Educational researchers can use this article to inform their use and application of NAP to meaningfully interpret this quantification in the context of SCEDs.
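NAP compares every baseline observation with every intervention observation and reports the proportion of pairs showing improvement, with ties counted as half. A minimal Python sketch of the standard definition (the ceiling effect mentioned above is visible whenever the phases do not overlap at all):

```python
from itertools import product

def nap(baseline, treatment, increase_is_improvement=True):
    """Nonoverlap of All Pairs: proportion of (baseline, treatment)
    pairs in which the treatment observation improves on the baseline
    one, counting ties as half an overlap."""
    if not increase_is_improvement:
        baseline = [-x for x in baseline]
        treatment = [-x for x in treatment]
    pairs = list(product(baseline, treatment))
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(1.0 for a, b in pairs if b == a)
    return (wins + 0.5 * ties) / len(pairs)

# Complete nonoverlap hits the ceiling value of 1.0:
print(nap([2, 3, 2], [5, 6, 7]))  # 1.0
# Partial overlap yields an intermediate value:
print(nap([1, 2, 3], [2, 3, 4]))
```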
Manolov R., Tanious R.
Behavior Research Methods scimago Q1 wos Q1
2023-12-11 citations by CoLab: 0 Abstract  
Single-case experimental design (SCED) data can be analyzed following different approaches. One of the first historically proposed options is randomization tests, benefiting from the inclusion of randomization in the design: a desirable methodological feature. Randomization tests have become more feasible with the availability of computational resources, and such tests have been proposed for all major types of SCEDs: multiple-baseline, reversal/withdrawal, alternating treatments, and changing criterion designs. The focus of the current text is on the last of these, given that they have not been the subject of any previous simulation study. Specifically, we estimate type I error rates and statistical power for two different randomization procedures applicable to changing criterion designs: the phase change moment randomization and the blocked alternating criterion randomization. We include different series lengths, number of phases, levels of autocorrelation, and random variability. The results suggest that type I error rates are generally controlled and that sufficient power can be achieved with as few as 28–30 measurements for independent data, although more measurements are needed in case of positive autocorrelation. The presence of a reversal to a previous criterion level is beneficial. R code is provided for carrying out randomization tests following the two randomization procedures.
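The logic of a phase change moment randomization test can be sketched in a few lines. This is an illustrative two-phase simplification, not the authors' R code: the phase change moment is assumed to have been randomly selected among admissible moments, the test statistic is the absolute mean shift between phases, and the p-value is the proportion of admissible assignments producing a statistic at least as extreme as the observed one:

```python
def mean(xs):
    return sum(xs) / len(xs)

def stat(series, change_point):
    # Test statistic: absolute mean shift between the two phases.
    return abs(mean(series[change_point:]) - mean(series[:change_point]))

def randomization_test(series, actual_change, admissible):
    """p-value of a phase change moment randomization test: recompute
    the statistic under every admissible change moment and count how
    many are at least as extreme as the observed one."""
    observed = stat(series, actual_change)
    dist = [stat(series, cp) for cp in admissible]
    return sum(d >= observed for d in dist) / len(dist)

series = [3, 4, 3, 4, 3, 8, 9, 8, 9, 8]  # shift at index 5
p = randomization_test(series, actual_change=5, admissible=range(3, 8))
print(round(p, 3))  # 0.2
```

With five admissible moments, 0.2 is the smallest attainable p-value, which is why the simulation study above needs enough measurements (and hence enough admissible assignments) to achieve useful power.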
Manolov R., Onghena P.
2022-09-15 citations by CoLab: 5 Abstract  
Immediacy is one of six data aspects (alongside level, trend, variability, overlap, and consistency) that has to be accounted for when visually analyzing single-case data. Given that it is one of the aspects that has received considerably less attention than other data aspects, the current text offers a review of the proposed conceptual definitions of immediacy (i.e., what it refers to) and also of the suggested operational definitions (i.e., how exactly is it assessed and/or quantified). Provided that a variety of conceptual and operational definitions is identified, we propose following a sensitivity analysis using a randomization test for assessing immediate effects in single-case experimental designs, by identifying when changes were most clear. In such a sensitivity analysis, the immediate effects are tested for multiple possible intervention points and for different possible operational definitions. Robust immediate effects can be detected if the results for the different operational definitions converge.
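One family of operational definitions compares short windows of observations on either side of the phase change; the sensitivity-analysis idea is to vary the window width and check whether the estimated immediate effect converges. A minimal sketch under that assumption (the function name and window widths are illustrative):

```python
def immediate_effect(series, change_point, window):
    """Immediate effect under one operational definition: difference
    between the mean of the first `window` post-change observations
    and the mean of the last `window` pre-change observations."""
    before = series[max(0, change_point - window):change_point]
    after = series[change_point:change_point + window]
    return sum(after) / len(after) - sum(before) / len(before)

series = [2, 3, 2, 3, 2, 7, 8, 7, 8, 7]  # phase change at index 5
for window in (1, 2, 3):  # different operational definitions
    print(window, immediate_effect(series, change_point=5, window=window))
```

Convergence of the estimates across windows (as in this artificial series) is the kind of robustness the sensitivity analysis looks for; divergent estimates would signal that the conclusion depends on the chosen operational definition.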
Tanious R., Manolov R.
2022-02-18 citations by CoLab: 5 Abstract  
Single-case experimental designs (SCEDs) are a class of experimental designs suited for answering research questions at an individual level. The main designs available in SCED research are phase designs, multiple baseline designs, alternation designs, and changing criterion designs. Embedded designs, also referred to as combination or hybrid designs, consist of one of these basic design forms embedded in another design (e.g., a changing criterion design embedded in a multiple baseline design). Systematic reviews of SCEDs have repeatedly indicated that embedded designs are frequently used in applied SCED research. In spite of their popularity, specific recommendations on the conduct and analysis of embedded SCED designs are lacking to date. The purpose of the present article is therefore to provide guidance to applied researchers wishing to conduct embedded SCED designs in terms of design options, design requirements, randomization, and data analysis.
Manolov R., Moeyaert M., Fingerhut J.E.
2021-03-25 citations by CoLab: 31
Tanious R., Onghena P.
Behavior Research Methods scimago Q1 wos Q1
2020-10-26 citations by CoLab: 46 Abstract  
Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond. The growing popularity has been accompanied by the development of specific guidelines for the conduct and analysis of SCEDs. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a period of three years (2016–2018). Specifically, we were interested in which designs are most frequently used and how common randomization in the study design is, which data aspects applied single-case researchers analyze, and which analytical methods are used. The systematic review of 423 studies suggests that the multiple baseline design continues to be the most widely used design and that the difference in central tendency level is by far most popular in SCED effect evaluation. Visual analysis paired with descriptive statistics is the most frequently used method of data analysis. However, inferential statistical methods and the inclusion of randomization in the study design are not uncommon. We discuss these results in light of the findings of earlier systematic reviews and suggest future directions for the development of SCED methodology.
Tanious R., Onghena P.
Healthcare scimago Q2 wos Q3 Open Access
2019-11-13 citations by CoLab: 28 PDF Abstract  
Health problems are often idiosyncratic in nature and therefore require individualized diagnosis and treatment. In this paper, we show how single-case experimental designs (SCEDs) can meet the requirement to find and evaluate individually tailored treatments. We give a basic introduction to the methodology of SCEDs and provide an overview of the available design options. For each design, we show how an element of randomization can be incorporated to increase the internal and statistical conclusion validity and how the obtained data can be analyzed using visual tools, effect size measures, and randomization inference. We illustrate each design and data analysis technique using applied data sets from the healthcare literature.
Ferron J., Rohrer L.L., Levin J.R.
Behavior Modification scimago Q1 wos Q3
2019-05-12 citations by CoLab: 15 Abstract  
To strengthen the scientific credibility arguments for single-case intervention studies, randomization design-and-analysis methods have been developed for the multiple-baseline, ABAB, and alternating treatment designs, including options for preplanned designs, wherein the series and phase lengths are established prior to gathering data, as well as options for response-guided designs, wherein ongoing visual analyses guide decisions about when to intervene. Our purpose here is to develop randomization methods for another class of single-case design, the changing criterion design. We first illustrate randomization design-and-analysis methods for preplanned changing criterion designs and then develop and illustrate methods for response-guided changing criterion designs. We discuss the limitations associated with the randomization methods and the validity of the corresponding intervention-effect inferences.
