
1 Research Methods: Content Area → Researchable Questions → Research Design → Measurement Methods → Sampling → Data Collection → Statistical Analysis → Report Writing

2 Assessment of Observation (Measurement): Observed Score = True Score + Error

3 The error component may be either: Random Error = variation due to unknown or uncontrolled factors; Systematic Error = variation due to systematic but irrelevant elements of the design
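A minimal simulation sketch of the two error components (NumPy; the true score, error scale, and bias below are illustrative values, not from the slides). Random error averages out over repeated observations; systematic error shifts every observation the same way.

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 100.0  # the quantity we are trying to measure (hypothetical)
n_trials = 1000

# Random error: unsystematic variation, mean zero, averages out over trials
random_error = rng.normal(loc=0.0, scale=5.0, size=n_trials)

# Systematic error: a constant bias from an irrelevant design element
systematic_error = 3.0

observed = true_score + random_error + systematic_error

print(f"True score:          {true_score:.2f}")
print(f"Mean observation:    {observed.mean():.2f}")  # ~103: bias does not average out
print(f"Spread (std):        {observed.std():.2f}")   # ~5: reflects random error
```

Averaging more observations shrinks the spread from random error, but no amount of averaging removes the 3-point systematic bias.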

4 A central concern of scientific research is management of the error component. There are a number of criteria by which to evaluate success:

5 1. Reliability: Does the measure consistently reflect changes in what it purports to measure? Consistency or stability of data across time and circumstances; a balance between consistency and sensitivity of the measure
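One standard way to quantify consistency across time is test-retest reliability: administer the same measure twice and correlate the scores. A minimal sketch, assuming a stable trait plus fresh random error at each administration (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 200

# Hypothetical stable trait, re-measured on two occasions with new error each time
trait = rng.normal(50, 10, n_people)
time1 = trait + rng.normal(0, 4, n_people)
time2 = trait + rng.normal(0, 4, n_people)

# Test-retest reliability: correlation of scores across the two occasions
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")  # high r -> stable across time
```

With these assumed variances the expected reliability is trait variance over total variance, 100 / (100 + 16) ≈ 0.86; larger random error would drive r down.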

6 2. Validity: Does the measure actually represent what it purports to measure? Accuracy of the data (for what?). There are a number of different types:

7 A. Internal Validity (historical examples: Semmelweis, Pasteur, Lister) The effects of an experiment are due solely to the experimental conditions; the extent to which causal conclusions can be drawn

8 Dependent upon experimental control. Trade-off between high internal validity and generalizability of results

9 B. External Validity: Can the results of an experiment be applied to other individuals or situations? The extent to which results can be generalized to broader populations or settings

10 Dependent upon sampling of subjects and occasions. Trade-off between high generalizability and internal validity

11 C. Construct Validity: Whether or not an abstract, hypothetical concept exists as postulated. Examples of constructs: intelligence, personality, conscience

12 Based on: Convergence = different measures that purport to measure the same construct should be highly correlated (similar) with one another; Divergence = tests measuring one construct should not be highly correlated (similar) with tests purporting to measure other constructs
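A minimal sketch of what convergence and divergence look like in data (NumPy; the constructs, measures, and noise levels are hypothetical): two measures sharing a construct correlate highly, while measures of different constructs do not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

intelligence = rng.normal(0, 1, n)
extraversion = rng.normal(0, 1, n)  # a different construct

# Two hypothetical measures of the same construct (convergence expected)
iq_test_a = intelligence + rng.normal(0, 0.5, n)
iq_test_b = intelligence + rng.normal(0, 0.5, n)

# A hypothetical measure of a different construct (divergence expected)
extraversion_scale = extraversion + rng.normal(0, 0.5, n)

print("Convergent r:", round(np.corrcoef(iq_test_a, iq_test_b)[0, 1], 2))          # high
print("Divergent  r:", round(np.corrcoef(iq_test_a, extraversion_scale)[0, 1], 2))  # near 0
```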

13 D. Statistical Conclusion Validity: The extent to which a study has used appropriate design and statistical methods to enable it to detect the effects that are present; the accuracy of conclusions about covariation made on the basis of statistical evidence

14 Based on appropriate: Statistical Power Methodological Design Statistical Analyses
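Statistical power is the one item on this list with a standard computational check before data collection. A minimal sketch using statsmodels (the medium effect size d = 0.5, alpha = .05, and 80% power target are conventional illustrative choices, not from the slides):

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group are needed to detect a medium effect
# (d = 0.5) at alpha = .05 with 80% power, two-sided independent-samples t test?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"Required n per group: {n_per_group:.0f}")  # ~64
```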

15 A measure can be reliable but invalid; if a measure is valid, then it is necessarily reliable.
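A minimal sketch of the reliable-but-invalid case, using a hypothetical miscalibrated bathroom scale: its readings are extremely consistent (reliable) yet systematically wrong (invalid).

```python
import numpy as np

rng = np.random.default_rng(3)
true_weight = 70.0  # kg (hypothetical)

# Miscalibrated scale: tiny random error (very reliable), constant +5 kg bias (invalid)
readings = true_weight + 5.0 + rng.normal(0, 0.1, 20)

print(f"Spread of readings: {readings.std():.2f}")   # ~0.1 -> highly reliable
print(f"Mean reading:       {readings.mean():.2f}")  # ~75, not 70 -> invalid
```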

16 3. Utility: The usefulness of methods is gauged in terms of: A. Efficiency B. Generality

17 A. Efficiency: Efficient methods provide precise, reliable data with relatively low costs in time, materials, equipment, and personnel

18 B. Generality: Refers to the extent to which a method can be applied successfully to a wide range of phenomena (a.k.a. generalizability)

19 Threats to Validity: There are numerous ways validity can be threatened: Related to Design; Related to Experimenter

20 Related to Design 1. Threats to Internal Validity (Cook & Campbell, 1979) A. History = specific events occurring to individual subjects B. Testing = repeated exposure to the testing instrument C. Instrumentation = changes in the scoring procedure over time

21 D. Regression = reversion of scores toward the mean or toward less extreme scores (see the simulation sketch after this list) E. Mortality = differential attrition across groups F. Maturation = developmental processes G. Selection = differential composition of subjects among samples

22 H. Selection by Maturation interaction I. Ambiguity about causal direction J. Diffusion of Treatments = information spread between groups K. Compensatory Equalization of Treatments = lack of treatment integrity L. Compensatory Rivalry = "John Henry" effect on nonparticipants
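A minimal simulation sketch of the regression threat (item D above; all distributions hypothetical): select an extreme group on a pretest and its posttest mean drifts back toward the population mean even with no treatment at all.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

trait = rng.normal(100, 15, n)
pretest = trait + rng.normal(0, 10, n)
posttest = trait + rng.normal(0, 10, n)  # no intervention whatsoever

# Select an "extreme" group on the pretest, as a remedial program might
extreme = pretest > 130
print(f"Extreme group pretest mean:  {pretest[extreme].mean():.1f}")
print(f"Extreme group posttest mean: {posttest[extreme].mean():.1f}")
# Posttest mean falls back toward 100 with no treatment: a regression artifact
# that an uncontrolled design would misread as a program effect.
```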

23 2. Threats to External Validity (LeCompte & Goetz, 1982) A. Selection = results are sample-specific B. Setting = results are context-specific C. History = unique experiences of the sample limit generalizability D. Construct effects = constructs are sample-specific

24 Related to Experimenter 1. Noninteractional Artifacts A. Observer Bias = over- or underestimation of a phenomenon (schema) B. Interpreter Bias = error in interpretation of data C. Intentional Bias = fabrication or fraudulent interpretation of data

25 2. Interactional Artifacts A. Biosocial Effects = errors attributable to biosocial attributes of researcher B. Psychosocial Effects = errors attributable to psychosocial attributes of researcher C. Situational Effects = errors attributable to research setting and participants

26 D. Modeling Effects = errors attributable to the example set by the researcher E. Experimenter Expectancy Bias = the researcher's treatment of participants elicits confirmatory evidence for the hypothesis

27 Basic vs. Applied Research

                 Basic                        Applied
Purpose          Expand knowledge             Understand a specific problem
Context          Academic setting             Real-world setting
                 Single researcher            Multiple researchers
                 Less time/cost pressure      More time/cost pressure
Methods          Internal validity            External validity
                 Cause                        Effect
                 Single level of analysis     Multiple levels of analysis
                 Single method                Multiple methods
                 Experimental                 Quasi-experimental
                 Direct observations          Indirect observations

28 The only substantial difference between applied and basic research: Basic = Experimental Control; Applied = Statistical Control
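A minimal sketch of what statistical control means in the applied case (pandas/statsmodels; the confound, effect sizes, and self-selection mechanism are all hypothetical): when groups cannot be randomly assigned, adjusting for a measured confound in a regression recovers a treatment estimate that the naive group comparison inflates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500

# Applied setting: participants self-select into treatment, so the groups
# differ on a confound (here, a hypothetical "prior ability").
prior_ability = rng.normal(0, 1, n)
treatment = (prior_ability + rng.normal(0, 1, n) > 0).astype(int)
outcome = 2.0 * prior_ability + 1.0 * treatment + rng.normal(0, 1, n)

df = pd.DataFrame({"outcome": outcome, "treatment": treatment,
                   "prior_ability": prior_ability})

naive = smf.ols("outcome ~ treatment", data=df).fit()
adjusted = smf.ols("outcome ~ treatment + prior_ability", data=df).fit()

print(f"Naive treatment effect:    {naive.params['treatment']:.2f}")    # inflated
print(f"Adjusted treatment effect: {adjusted.params['treatment']:.2f}") # ~1.0
```

Random assignment in basic research removes the confounding before the data exist; the regression adjustment above is the statistical substitute when that is impossible, and it only works for confounds that were actually measured.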

