Just Because They Say It’s ‘Scientifically- based’ Doesn’t Mean It Will Work!
Changing Landscape
- P.L. 94-142 to IDEA '03
- Decrease in experimental studies: 1980 (61%) to 1995 (38%)
- Definition of "scientifically based" = random assignment / true experiment
- National Reading Panel report
- Institute of Education Sciences
- Configuration/role of OSEP
- Status of Part D in reauthorization
"…methodologically weak research, trivial studies, an infatuation with jargon, and a tendency toward fads." — National Research Council
Educational Research: The Hardest Science of All!
Standards for Field Testing Interventions (CRL)
- Practical and doable
- Easy for both teachers and students to learn
- Yield meaningful "real world" outcomes
- Broad in reach: impacts non-SWDs (students without disabilities)
- Impact the performance of SWDs (students with disabilities) so they can compete within the criterion environment
Guiding Principles (CRL)
- Deal with the complex realities of schools
- Seek participant input at all stages
- Use sound research methodologies/designs
- Collect multiple measures on interventions
- Field-test in multiple stages
- Insist on both statistical and social significance
- Translate field protocols into user manuals
- Bring interventions to scale
Designing High Quality Research in Special Education: Group Experimental Design — Gersten, Lloyd, & Baker (1999)
The BIG Issue: The Trade-off between Internal and External Validity
"…the challenge to educational researchers is to sensibly negotiate a balance between those components that satisfy science (internal) and those that reflect the complexities of real classroom teaching (external)."
Good Research Is Largely Dictated By…
How well the independent variables are conceptualized and operationalized, how the sample is defined, and how the dependent variables are selected to assess the impact of the intervention.
On Independent Variables…
- Precise definitions are needed
- Problems arise with PAR (flexibility)
- Syntheses/meta-analyses need precision
- The majority of the literature gives incomplete or very poor descriptions of the intervention
- Gap between conceptualization and implementation (minutes of instruction, support, teacher training, etc.)
Improving Independent Variables
- Intervention transcripts
- Replications (by others / component analyses)
- Fidelity measures throughout implementation (amount of training, lesson length, time, feedback)
- Good comparison groups (control for teacher effects, feedback, time) (see p. 13)
Improving Sample Selection & Description
- Sample size (difficult with SWDs): the stronger the treatment, the smaller the numbers needed
- Increase power by increasing homogeneity
- Precise sample description (ELL, SES, achievement and cognitive levels, etc.)
- Random selection (survey research); random assignment (intervention research)
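The link between homogeneity and power can be made concrete with a rough calculation. The sketch below is illustrative only (the effect size, SDs, and group sizes are hypothetical, not from the slides) and uses a normal-approximation power formula for a two-tailed, two-sample comparison: a more homogeneous sample shrinks the outcome's standard deviation, which inflates the standardized effect size and thus power at the same n.

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(raw_effect, sd, n_per_group):
    """Approximate power for a two-tailed two-sample test at alpha = .05.

    Smaller sd (a more homogeneous sample) yields a larger standardized
    effect size d, and therefore greater power at the same n.
    """
    d = raw_effect / sd              # standardized effect size (Cohen's d)
    z_crit = 1.96                    # two-tailed critical value, alpha = .05
    return normal_cdf(d * math.sqrt(n_per_group / 2) - z_crit)

# Hypothetical numbers: the same 5-point raw gain, 20 students per group
power_heterogeneous = approx_power(raw_effect=5, sd=15, n_per_group=20)
power_homogeneous = approx_power(raw_effect=5, sd=8, n_per_group=20)
```

With these made-up figures, halving the spread roughly triples the chance of detecting the same raw gain, without adding a single participant.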
Quasi-Experimental Designs
- Students are drawn from intact groups
- Determine group similarity with pretests; if differences exist, use procedures to adjust statistically
- Problems arise when pretest differences between the two groups exceed 1/2 SD
Improving Quasi-Experiments
- Adequate pretesting with measures that have good technical qualities
- Groups whose pretest difference exceeds .5 SD shouldn't be used
- ANCOVA shouldn't be used if the pretest difference exceeds .5 SD
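The .5 SD rule of thumb is straightforward to operationalize. A minimal sketch (the scores and the helper name `standardized_pretest_gap` are hypothetical, not from the slides) that expresses the pretest gap between two intact groups in pooled-SD units, i.e. as a Cohen's-d-style effect size:

```python
import statistics

def standardized_pretest_gap(group_a, group_b):
    """Return the pretest mean difference in pooled-SD units."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b)
                 / (n_a + n_b - 2)) ** 0.5
    return abs(mean_a - mean_b) / pooled_sd

# Hypothetical pretest scores for two intact classrooms
treatment = [52, 55, 58, 60, 61, 63, 65]
comparison = [50, 53, 56, 57, 59, 62, 64]

gap = standardized_pretest_gap(treatment, comparison)
# Per the slide's rule of thumb, gaps above .5 SD rule out
# statistical adjustment (including ANCOVA)
groups_comparable = gap <= 0.5
```

If `groups_comparable` is false, the slide's advice is to find better-matched groups rather than lean on ANCOVA to paper over the difference.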
Dependent Measures Those measures used to evaluate the impact of the intervention. The conclusions of a study depend on both the quality of the intervention and the measures used to evaluate the intervention.
Improving Dependent Measures
- Use multiple measures (global and specific skill)
- Select measures that are not biased toward the intervention (no teaching to the test)
- Ensure that not all measures are developed by the researcher
- Select measures with good technical adequacy