Journal of School Psychology, volume 63, pages 13-34
Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types
Joel R. Levin¹, John Ferron², Boris S. Gafurov³
Publication type: Journal Article
Publication date: 2017-08-01
Journal: Journal of School Psychology
Scimago quartile: Q1
SJR: 1.840
CiteScore: 6.7
Impact factor: 3.8
ISSN: 0022-4405, 1873-3506
PubMed ID: 28633936
Categories: Developmental and Educational Psychology; Education
Abstract
A number of randomization statistical procedures have been developed to analyze the results from single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences among them in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the powers of all of the procedures were severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to successfully anticipate the specific alternative effect types, it is possible for them to formulate adjusted versions of the original randomization-test procedures that can recapture substantial proportions of the lost powers.
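To make the abstract's core idea concrete, the following is a minimal sketch (not the authors' actual simulation code) of a randomization test for a single-case AB design with a randomized intervention start point. The data, the admissible start points, and the mean-shift test statistic are illustrative assumptions: the observed statistic at the actual start point is compared against the statistics at every admissible start point, and the p-value is the proportion of those statistics at least as large as the observed one.

```python
def mean_shift(data, start):
    """Test statistic: mean of the intervention phase minus mean of the
    baseline phase, for a hypothetical intervention start at index `start`."""
    baseline = data[:start]
    treatment = data[start:]
    return sum(treatment) / len(treatment) - sum(baseline) / len(baseline)


def randomization_test(data, actual_start, admissible_starts):
    """One-tailed randomization test over admissible intervention start points.

    The null distribution is formed by computing the test statistic at every
    admissible start point; the p-value is the proportion of those statistics
    that are at least as large as the statistic at the actual start point.
    """
    observed = mean_shift(data, actual_start)
    null = [mean_shift(data, s) for s in admissible_starts]
    p_value = sum(1 for v in null if v >= observed) / len(null)
    return observed, p_value


# Illustrative series with an immediate abrupt effect: 10 baseline
# observations at 0, then 10 intervention observations at 2.
series = [0] * 10 + [2] * 10
observed, p = randomization_test(series, actual_start=10,
                                 admissible_starts=range(5, 16))
```

With 11 admissible start points and a clean immediate abrupt effect, the observed statistic is the largest of the 11, so the smallest attainable p-value here is 1/11 ≈ 0.09; this illustrates why the number of admissible start points bounds the power of such procedures, and why delayed or gradual effects (which shrink the observed statistic relative to the null statistics) diminish it further.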