Research Methods: Content Area → Researchable Questions → Research Design → Measurement Methods → Sampling → Data Collection → Statistical Analysis → Report Writing

Andrea M. Landis, PhD, RN UW LEAH
Presentation transcript:


Assessment of Observation (Measurement)
Observed Score = True Score + Error

The error component may be either:
Random Error = variation due to unknown or uncontrolled factors
Systematic Error = variation due to systematic but irrelevant elements of the design
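A small simulation can make the distinction concrete. The sketch below (a hypothetical example; the true score, error magnitudes, and the 3-point instrument bias are all assumptions for illustration) shows that random error averages out over repeated observations, while systematic error shifts every observation the same way:

```python
import random
import statistics

random.seed(42)

# Hypothetical example: one subject's true score, measured many times.
TRUE_SCORE = 100.0

# Random error: zero-mean noise from unknown or uncontrolled factors.
random_error_scores = [TRUE_SCORE + random.gauss(0, 5) for _ in range(10_000)]

# Systematic error: a constant bias from an irrelevant design element,
# e.g. an instrument assumed here to read 3 points high.
systematic_error_scores = [s + 3.0 for s in random_error_scores]

# Random error averages out across observations; systematic error does not.
mean_random = statistics.mean(random_error_scores)          # close to 100
mean_systematic = statistics.mean(systematic_error_scores)  # close to 103
```

Collecting more observations shrinks the effect of random error on the mean, but no amount of extra data removes the systematic 3-point bias.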

A central concern of scientific research is management of the error component. There are a number of criteria by which to evaluate success:

1. Reliability
Does the measure consistently reflect changes in what it purports to measure?
Consistency or stability of data across time and circumstances
Balance between consistency and sensitivity of the measure

2. Validity
Does the measure actually represent what it purports to measure?
Accuracy of the data (accurate for what purpose?)
There are a number of different types:

A. Internal Validity (historical examples: Semmelweis, Pasteur, Lister)
The effects of an experiment are due solely to the experimental conditions
The extent to which causal conclusions can be drawn

Dependent upon experimental control
Trade-off between high internal validity and generalizability of results

B. External Validity
Can the results of an experiment be applied to other individuals or situations?
The extent to which results can be generalized to broader populations or settings

Dependent upon sampling of subjects and occasions
Trade-off between high generalizability and internal validity

C. Construct Validity
Whether or not an abstract, hypothetical concept exists as postulated
Examples of constructs: intelligence, personality, conscience

Based on:
Convergence = different measures that purport to measure the same construct should be highly correlated (similar) with one another
Divergence = tests measuring one construct should not be highly correlated (similar) with tests purporting to measure other constructs

D. Statistical Conclusion Validity
The extent to which a study has used appropriate design and statistical methods to detect the effects that are present
The accuracy of conclusions about covariation made on the basis of statistical evidence

Based on appropriate:
Statistical power
Methodological design
Statistical analyses
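Statistical power can be seen directly by simulation: power is the proportion of repeated studies in which a real effect is detected. The sketch below (a hypothetical setup; the effect size, sample sizes, and the simplified two-sample z-test with known sigma are assumptions for illustration) compares a small and a larger study of the same true effect:

```python
import random
import statistics

random.seed(3)

def detects_effect(n_per_group, effect, sigma=1.0):
    """Simulate one two-group study and test H0: no group difference,
    using a two-sample z-test with sigma assumed known (|z| > 1.96)."""
    control = [random.gauss(0.0, sigma) for _ in range(n_per_group)]
    treated = [random.gauss(effect, sigma) for _ in range(n_per_group)]
    se = (2 * sigma**2 / n_per_group) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return abs(z) > 1.96

# Power = proportion of repeated studies in which the real effect is detected.
trials = 2000
power_n10 = sum(detects_effect(10, 0.5) for _ in range(trials)) / trials
power_n64 = sum(detects_effect(64, 0.5) for _ in range(trials)) / trials
# The larger sample detects the same true effect far more often.
```

An underpowered study that fails to detect a real effect is a threat to statistical conclusion validity: the null result reflects the design, not the phenomenon.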

A measure can be reliable but invalid. If a measure is valid, it is necessarily reliable.

3. Utility
Usefulness of methods is gauged in terms of:
A. Efficiency
B. Generality

A. Efficiency
Efficient methods provide precise, reliable data with relatively low costs in time, materials, equipment, and personnel

B. Generality
The extent to which a method can be applied successfully to a wide range of phenomena (a.k.a. generalizability)

Threats to Validity
There are numerous ways validity can be threatened:
Threats related to the design
Threats related to the experimenter

Related to Design
1. Threats to Internal Validity (Cook & Campbell, 1979)
A. History = specific events occurring to individual subjects
B. Testing = repeated exposure to the testing instrument
C. Instrumentation = changes in the scoring procedure over time

D. Regression = reversion of scores toward the mean or toward less extreme scores
E. Mortality = differential attrition across groups
F. Maturation = developmental processes
G. Selection = differential composition of subjects among samples

H. Selection-by-maturation interaction
I. Ambiguity about causal direction
J. Diffusion of Treatments = information spread between groups
K. Compensatory Equalization of Treatments = lack of treatment integrity
L. Compensatory Rivalry = "John Henry" effect among nonparticipants
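Threat D, regression toward the mean, is easy to demonstrate by simulation. In the sketch below (a hypothetical remedial-program scenario; the score distribution and the cutoff of 85 are assumptions for illustration), a group selected for extreme low pretest scores improves on the posttest with no treatment whatsoever:

```python
import random
import statistics

random.seed(5)

# Scores = stable true ability + fresh random error on each occasion.
true_ability = [random.gauss(100, 10) for _ in range(5000)]
pretest = [t + random.gauss(0, 10) for t in true_ability]
posttest = [t + random.gauss(0, 10) for t in true_ability]

# Select extreme low scorers on the pretest (e.g. for a remedial program).
selected = [i for i, score in enumerate(pretest) if score < 85]

pre_mean = statistics.mean(pretest[i] for i in selected)
post_mean = statistics.mean(posttest[i] for i in selected)
# With no treatment at all, the selected group's mean rises toward 100,
# which can masquerade as a treatment effect.
```

The low scorers were partly low by chance (bad-luck error on the pretest), so on retest their errors are drawn afresh and their mean drifts back toward the population mean.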

2. Threats to External Validity (LeCompte & Goetz, 1982)
A. Selection = results are sample-specific
B. Setting = results are context-specific
C. History = unique experiences of the sample limit generalizability
D. Construct effects = constructs are sample-specific

Related to Experimenter
1. Noninteractional Artifacts
A. Observer Bias = over- or underestimation of the phenomenon (schema)
B. Interpreter Bias = error in interpretation of data
C. Intentional Bias = fabrication or fraudulent interpretation of data

2. Interactional Artifacts
A. Biosocial Effects = errors attributable to biosocial attributes of the researcher
B. Psychosocial Effects = errors attributable to psychosocial attributes of the researcher
C. Situational Effects = errors attributable to the research setting and participants

D. Modeling Effects = errors attributable to the example set by the researcher
E. Experimenter Expectancy Bias = the researcher's treatment of participants elicits confirmatory evidence for the hypothesis

Basic vs. Applied Research

Purpose:
  Basic = Expand knowledge
  Applied = Understand a specific problem

Context:
  Basic = Academic setting; single researcher; less time/cost pressure
  Applied = Real-world setting; multiple researchers; more time/cost pressure

Methods:
  Basic = Internal validity; cause; single level of analysis; single method; experimental; direct observations
  Applied = External validity; effect; multiple levels of analysis; multiple methods; quasi-experimental; indirect observations

The only substantial difference between basic and applied research:
Basic = experimental control
Applied = statistical control