
1 Larry D. Gruppen, Ph.D. University of Michigan From Concepts to Data: Conceptualization, Operationalization, and Measurement in Educational Research

2 Objectives
–Identify key research design issues
–Wrestle with the complexities of educational measurement
–Explain the concepts of reliability and validity in educational measurement
–Apply criteria for measurement quality when conducting educational research

3 Agenda
A brief nod to design
From theory to measurement
Criteria for measurement quality
–Reliability
–Validity
Application: analyze an article

4 Guiding Principles for Scientific Research in Education
1. Question: pose a significant question that can be investigated empirically
2. Theory: link research to relevant theory
3. Methods: use methods that permit direct investigation of the question
4. Reasoning: provide a coherent, explicit chain of reasoning
5. Replication: replicate and generalize across studies
6. Disclosure: disclose research to encourage professional scrutiny and critique

5 Study design
Study design consists of:
–Your measurement method(s)
–The participants and how they are assigned
–The intervention
–The sequence and timing of measurements and interventions

6 Randomization Purpose: attempt to balance effects that are unknown or uncontrollable Randomization in selecting participants (sampling) Randomization in assigning participants to study conditions
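
The distinction can be made concrete in code. Below is a minimal Python sketch, with an invented population and invented group sizes, showing random sampling (who gets into the study) versus random assignment (who ends up in which condition):

    # Minimal sketch: random sampling vs. random assignment.
    # Population, sample size, and group sizes are all hypothetical.
    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    population = [f"student_{i}" for i in range(200)]

    # Sampling: randomly select 40 participants from the population.
    sample = random.sample(population, k=40)

    # Assignment: shuffle the sample, then split it into two conditions.
    random.shuffle(sample)
    intervention_group = sample[:20]
    control_group = sample[20:]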

7 Comparison Group
Pre-post design: compare intervention group to itself
Non-equivalent control group design: compare intervention group to an existing group
Randomized control group design: compare to equivalent controls

8 Overview of Study Designs
Symbols:
–Each line represents a group
–x = Intervention (e.g., treatment)
–O1, O2, O3 … = Observation (measurement) at Time 1, Time 2, Time 3, etc.
–R = Random assignment

9 Non-Experimental Designs

10 One-Group Posttest
x O1

11 Quasi-Experimental Designs

12 Posttest-Only Control Group
x O1
  O1

13 One-Group Pretest-Posttest
O1 x O2

14 Control Group Pretest-Posttest
O1 x O2
O1   O2

15 Experimental Designs

16 Posttest-Only Randomized Control Group
R x O1
R   O1

17 Randomized Control Group Pretest-Posttest
R O1 x O2
R O1   O2
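
To make the notation concrete, here is a hypothetical Python simulation of this design: both groups are observed at O1, only one receives the intervention x, and both are observed again at O2. The means, spreads, and the assumed +8-point effect are all invented.

    # Hypothetical simulation of R O1 x O2 / R O1 O2 (invented numbers).
    import random
    import statistics

    random.seed(1)
    pre_i = [random.gauss(50, 10) for _ in range(30)]  # O1, intervention group
    pre_c = [random.gauss(50, 10) for _ in range(30)]  # O1, control group
    post_i = [s + random.gauss(8, 5) for s in pre_i]   # O2 after x (+8 assumed)
    post_c = [s + random.gauss(0, 5) for s in pre_c]   # O2, no intervention

    gain_i = statistics.mean(post_i) - statistics.mean(pre_i)
    gain_c = statistics.mean(post_c) - statistics.mean(pre_c)
    print(f"intervention gain: {gain_i:.1f}, control gain: {gain_c:.1f}")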

18 From Theory to Measurement
Theory → Constructs → Operational Definition → Measurement

19 An Example: Self-assessment
Theory
–Self-assessment guides self-directed learning
–Individuals may differ in how self-assessment guides self-directed learning
Research question: To what extent do students spend more time learning in areas in which they perceive themselves to be weakest?
Constructs
–Self-assessment of knowledge and skill
–Self-directed learning

20 An Example: Self-assessment
Operational definition
–Self-assessment: relative ranking of strengths and weaknesses in knowledge and skills, not in reference to other people
–Self-directed learning: amount of time spent learning about 14 categories of patient presentation (e.g., cough, abdominal pain)

21 An Example: Self-assessment
Measurement
–Self-assessment: rate relative confidence in knowledge and skill on the 14 presenting complaints on a 10-point scale (1 = least confident, 10 = most confident)
–Self-directed learning: self-report of time spent learning about each of the 14 complaints on an 11-point scale (0 = no time at all, 10 = a great deal)
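
One way to analyze these two measures against the research question is to correlate them across the 14 complaints: a negative correlation would indicate more study time where confidence is lowest. The sketch below uses invented scores for a single hypothetical student (statistics.correlation requires Python 3.10+):

    # Invented data for one student across the 14 presenting complaints.
    import statistics

    confidence = [3, 7, 5, 9, 2, 6, 8, 4, 5, 7, 3, 6, 9, 4]  # 10-point scale
    study_time = [8, 3, 6, 1, 9, 4, 2, 7, 5, 3, 8, 5, 1, 6]  # 11-point scale

    r = statistics.correlation(confidence, study_time)  # Pearson r
    print(f"r = {r:.2f}")  # strongly negative for this invented student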

22 Measurement
Measurement: assignment of numbers to objects or events according to rules
Quality: reliability and validity
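
A trivial illustration of "numbers assigned according to rules" is a coding scheme that maps response labels to scores; the labels and values below are hypothetical:

    # A coding rule mapping Likert-type labels to numbers (hypothetical).
    SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
             "agree": 4, "strongly agree": 5}

    responses = ["agree", "neutral", "strongly agree", "disagree"]
    scores = [SCALE[r] for r in responses]
    print(scores)  # [4, 3, 5, 2]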

23 The Challenge of Educational Measurement
Almost all of the constructs we are interested in are buried inside the individual
Measurement depends on transforming these internal states, events, capabilities, etc. into something observable
Making them observable may alter the thing we are measuring

24 Examples of Measurement Methods
Tests (knowledge, performance): defined response, constructed response, simulations
Questionnaires (attitudes, beliefs, preferences): rating scales, checklists, open-ended responses
Observations (performance, skills): tasks (varying degrees of authenticity), problems, real-world behaviors, records (documents)

25 Reliability
Dependability (consistency or stability) of measurement
A necessary condition for validity

26 Types of Reliability
Stability (produces the same results with repeated measurements over time):
–Test-retest
–Correlation between scores at two times
Equivalence/Internal Consistency (produces the same results with parallel items on alternate forms):
–Alternate forms; split-half; Kuder-Richardson; Cronbach's alpha
–Correlation between scores on different forms; calculate coefficient alpha (α)
Consistency (produces the same results with different observers or raters):
–Inter-rater agreement
–Correlation between scores from different raters; kappa coefficient
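
As a rough illustration, the sketch below computes two of these estimates on invented data: coefficient alpha from the standard formula α = k/(k−1) · (1 − Σ item variances / total variance), and a test-retest correlation (statistics.correlation requires Python 3.10+):

    # Two reliability estimates on invented data.
    import statistics

    def cronbach_alpha(items):
        """items: one score list per item, all over the same respondents."""
        k = len(items)
        item_var_sum = sum(statistics.variance(item) for item in items)
        totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
        return (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))

    # Internal consistency: four items answered by six respondents.
    items = [
        [4, 5, 3, 5, 4, 2],
        [4, 4, 3, 5, 5, 2],
        [3, 5, 4, 4, 4, 3],
        [5, 4, 3, 5, 4, 2],
    ]
    print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.88 here

    # Stability: correlate total scores from two administrations.
    time1 = [12, 15, 9, 18, 14, 11]
    time2 = [13, 14, 10, 17, 15, 10]
    print(f"test-retest r = {statistics.correlation(time1, time2):.2f}")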

27 Validity
Refers to the accuracy of inferences based on data obtained from measurement
Technically, measures aren't valid; inferences are
No such thing as validity in the abstract: the key question is 'valid' for what inference
Want to reduce systematic, non-random error
Unreliability lowers observed correlations, weakening validity claims
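
The last point is often expressed through Spearman's correction for attenuation, r_true = r_observed / sqrt(rel_x · rel_y): the less reliable the two measures, the more the observed correlation understates the construct-level relationship. A small sketch with invented numbers:

    # Spearman's correction for attenuation (invented numbers).
    import math

    r_observed = 0.42           # correlation between two observed scores
    rel_x, rel_y = 0.70, 0.80   # reliability estimates of the two measures

    r_true = r_observed / math.sqrt(rel_x * rel_y)
    print(f"estimated true-score correlation: {r_true:.2f}")  # ~0.56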

28 Conventional View of Validity
Face validity: logical link between items and purpose; makes sense on the surface
Content validity: items cover the range of meaning included in the construct or domain; established by expert judgment
Criterion validity: relationship between performance on one measurement and performance on another (or actual behavior); concurrent and predictive; assessed with correlation coefficients
Construct validity: directly connects measurement with theory, allowing interpretation of empirical evidence in terms of theoretical relationships; based on the weight of evidence; convergent and discriminant evidence; Multitrait-Multimethod Analysis (MTMM)
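
A toy version of the convergent/discriminant logic behind MTMM: two measures of the same construct should correlate more strongly with each other than with a measure of a different construct. All scores below are invented (statistics.correlation requires Python 3.10+):

    # Convergent vs. discriminant correlations on invented scores.
    import statistics

    written_knowledge = [70, 82, 65, 90, 75, 60, 88, 72]  # construct A, method 1
    oral_knowledge    = [68, 85, 62, 88, 78, 58, 90, 70]  # construct A, method 2
    attitude_scale    = [31, 33, 28, 30, 35, 29, 27, 32]  # construct B

    convergent = statistics.correlation(written_knowledge, oral_knowledge)
    discriminant = statistics.correlation(written_knowledge, attitude_scale)
    print(f"convergent r = {convergent:.2f}")      # high (~0.98 here)
    print(f"discriminant r = {discriminant:.2f}")  # near zero here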

29 Unified View of Construct Validity (Messick S, Amer Psych, 1995)
Validity is not a property of an instrument but rather of the meaning of the scores. Must be considered holistically.
Six aspects of construct validity evidence:
–Content: content relevance and representativeness
–Substantive: theoretical rationale for observed consistencies in test responses
–Structural: fidelity of scoring structure to the structure of the construct domain
–Generalizability: generalization to the population and across populations
–External: convergent and discriminant evidence
–Consequential: intended and unintended consequences of score interpretation; social consequences of assessment (fairness, justice)

30 Finding Measurement Instruments
Scan the engineering education literature (obviously)
Email engineering ed researchers (use the network)
Examine the literature for instruments used in prior studies
General education/social science instrument databases
–Buros Institute of Mental Measurements (Mental Measurements Yearbook, Tests in Print) http://buros.unl.edu/buros/jsp/search.jsp
–ERIC databases http://www.eric.ed.gov/
–Educational Testing Service Test Collection http://www.ets.org/testcoll/index.html
Construct your own (last resort!)
–Get some expert consultation (test writing, survey design, questionnaire construction, etc.)

31 Example
In your groups, analyze the Steif & Dantzler statics concept inventory article. Look for:
–Theoretical framework
–Constructs used in the study
–How constructs were operationalized
–Measurement process
–Attention to reliability and validity

32 References
Campbell DT, Stanley JC. Experimental and quasi-experimental designs for research. Chicago: Rand McNally; 1969.
Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally; 1979.
Messick S. Validity of psychological assessment: validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist. 1995;50:741-749.
Messick S. Validity. In: Linn RL, ed. Educational measurement. 3rd ed. New York: American Council on Education & Macmillan; 1989:13-103.

