The Basics of Experimentation Ch7 – Reliability and Validity.


Reliability  Reliability – consistency and dependability. Should yield similar results across experiments.  Interrater reliability – different observers score the same behavior.  Test-retest reliability – measure behavior twice with the same test.  Interitem reliability – different parts of a test measuring the same variable are consistent.

Reliability  Interitem reliability – internal consistency e.g., in a multiple item questionnaire that measures a single construct variable, the internal consistency is evaluated among the set of items using statistical tests. 1. Split-half reliability – split the test into two halves and compute the correlation (coefficient of reliability) between halves. 2. Cronbach’s α – correlation of each test item with every other item

Validity  Validity – did you measure what you intended to measure?  Face validity – procedure is self-evident; works with nonconstruct variables that can be directly manipulated and measured. (e.g., measuring “pupil size” with a ruler). Least stringent type of validity.  Content validity – does the content of the measure fairly reflect the content of the variable we are trying to measure.

Evaluating Operational Definitions  Content validity  Does the content of our measure reflect the content of the qualities of the variable we want to measure.  Content validity also means that a test does not measure other qualities

Evaluating Operational Definitions  Predictive validity – do measures of a dependent variable predict actual behavior or performance? E.g. a questionnaire may measure a person’s desire to affiliate but will they actually seek out others when given the opportunity.

Evaluating Operational Definitions  Concurrent validity – is like predictive validity in that it compares scores on a measure with an outside criterion, but it is comparative rather than predictive; you compare scores with another known standard for the variable being studied.

Elevated Plus Maze
Used as a measure of anxiety in rodents: anxious animals spend more time in the enclosed arms and less time on the open arms.

Zero Maze

Open Field Test
Thigmotaxis (time spent near the wall of an open field) is another measure of anxiety. If concurrent validity is high, scores on the plus maze should correlate highly with thigmotaxis scores in the open field.
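Checking concurrent validity here amounts to computing a correlation between the two anxiety measures across the same animals. The scores below are invented for illustration; any standard correlation routine works. A minimal sketch:

```python
import numpy as np

# Hypothetical scores for 10 rats: seconds spent in the closed arms of the
# plus maze, and thigmotaxis time (seconds near the wall) in the open field.
plus_maze    = np.array([220., 180., 260., 150., 300., 200., 240., 170., 280., 210.])
thigmotaxis  = np.array([410., 350., 460., 300., 520., 380., 440., 330., 500., 400.])

# Concurrent validity: correlate the new measure with the established criterion.
r = np.corrcoef(plus_maze, thigmotaxis)[0, 1]
print(r)  # close to 1 for these illustrative data
```

A high positive r supports concurrent validity: animals scored as anxious by one measure are also scored as anxious by the other.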

Evaluating Operational Definitions  Construct validity – deals with the transition from theory to research application. Start with a general idea of the qualities of the construct, then convert the idea into an empirical test.  Have I successfully created a measure of the construct of interest?  Example, rats have a natural fear of predation associated with increased anxiety in open areas - design test that measures the tendency to avoid open areas.

Evaluating Operational Definitions  Construct validity – tests of construct validity are statistical and theoretical. Does the data make sense in the context of the overall theoretical framework?  Intelligence – maze bright versus maze dull rats.

Internal validity
The degree to which a causal relationship can be established between the antecedent conditions and behavior.
Three concepts tied to the problem of internal validity:
- Extraneous variables
- Confounding
- Threats to internal validity

Extraneous variables
Factors that are not the main focus of the experiment. Variables other than the IV and DV may change over the course of the experiment:
- Differences among subjects
- Time of day of testing
- Order of testing
- Inconsistent treatment
- Experimenter's fatigue
- Equipment failures

Confounding
Occurs when the value of an extraneous variable changes systematically across the different conditions of the experiment. Changes we see in the DV can then be explained equally well by the IV or by the extraneous variable.

This is the Threatdown!

Threats to internal validity
- History (one group tested together)
- Maturation (boredom or fatigue)
- Testing (previous administration of a test)
- Instrumentation (a feature of the measuring instrument changes)
- Statistical regression (subjects assigned to conditions based on extreme scores)
- Selection (no random assignment)
- Subject mortality (subjects drop out)
- Selection interactions (if subjects were not randomly assigned to groups, a selection threat can interact with any of the other threats, affecting one group but not the others)