Manipulation and Measurement of Variables

Presentation transcript:

Manipulation and Measurement of Variables Psych 231: Research Methods in Psychology

Announcements For labs this week you’ll need to download (and bring to lab): Class experiment packet

Choosing your independent variable Choosing the right levels of your independent variable: review the literature; do a pilot experiment; consider the costs, your resources, and your limitations; be realistic; pick levels found in the "real world"; and pick a large enough range to show the effect, including levels in the middle of that range.

Identifying potential problems These are things that you want to try to avoid by careful selection of the levels of your IV (they may be issues for your DV as well): demand characteristics, experimenter bias, reactivity, and floor and ceiling effects.

Demand characteristics Characteristics of the study that may give away the purpose of the experiment May influence how the participants behave in the study Examples: Experiment title: The effects of horror movies on mood Obvious manipulation: Ten psychology students looking straight up Biased or leading questions: Don’t you think it’s bad to murder unborn children?

Experimenter Bias Experimenter bias (expectancy effects) The experimenter may influence the results (intentionally and unintentionally) E.g., Clever Hans One solution is to keep the experimenter “blind” as to what conditions are being tested Single blind - experimenter doesn’t know the condition Double blind - neither the participant nor the experimenter knows the condition

Reactivity Knowing that you are being measured. Just being in an experimental setting, people don't always respond the way that they "normally" would. Cooperative: "You seem like a nice person: I'll help you get the right results." Defensive: "I don't want to look stupid/evil. I'll do what a smart/good person is expected to do (rather than what I normally would do)." Noncooperative: "This experiment is annoying. Let me screw up the results."

Floor effects A value below which a response cannot be made. Imagine a task that is so difficult that none of your participants can do it. As a result, the effects of your IV (if there are indeed any) can't be seen.

Ceiling effects When the dependent variable reaches a level that cannot be exceeded. Imagine a task that is so easy that everybody scores 100%. So while there may be an effect of the IV, that effect can't be seen because everybody has "maxed out." To avoid floor and ceiling effects you want to pick levels of your IV that result in middle-level performance on your DV (see the sketch below).
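
A rough numerical sketch of the ceiling-effect problem (all numbers are made up for illustration; this is not data from the class experiment): two groups whose underlying ability really differs are measured once with a very easy task that tops out at 100% and once with a medium-difficulty task.

```python
# Hypothetical simulation of a ceiling effect (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)

def observed_scores(true_mean, n=50, sd=10):
    # Scores are capped at 100%: responses cannot exceed the ceiling.
    return np.clip(rng.normal(true_mean, sd, n), 0, 100)

# The treatment group's "true" performance is 10 points higher on both tasks.
tasks = {"easy task (ceiling)": (95, 105), "medium task": (55, 65)}

for label, (ctrl_mean, treat_mean) in tasks.items():
    ctrl, treat = observed_scores(ctrl_mean), observed_scores(treat_mean)
    print(f"{label}: control M = {ctrl.mean():.1f}, "
          f"treatment M = {treat.mean():.1f}, "
          f"observed difference = {treat.mean() - ctrl.mean():.1f}")

# On the easy task both groups pile up near 100%, so the real IV effect is
# compressed; on the medium task the same underlying difference shows up.
```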

Measuring your dependent variables Scales of measurement Errors in measurement

Measuring your dependent variables Scales of measurement Scales of measurement - the correspondence between the numbers and the properties that we're measuring. The scale that you use will (partially) determine what kinds of statistical analyses you can perform.

Scales of measurement Categorical variables: nominal scale.

Scales of measurement Nominal Scale: Consists of a set of categories that have different names. Measurements on a nominal scale label and categorize observations, but do not make any quantitative distinctions between observations. Example: Eye color: blue, green, brown, hazel

Scales of measurement Categorical variables: nominal and ordinal scales.

Scales of measurement Ordinal Scale: Consists of a set of categories that are organized in an ordered sequence. Measurements on an ordinal scale rank observations in terms of size or magnitude. Example: T-shirt size: XXL, XL, Large, Medium, Small.

Scales of measurement Categorical variables: nominal and ordinal scales. Quantitative variables: interval scale.

Scales of measurement Interval Scale: Consists of ordered categories where all of the categories are intervals of exactly the same size. With an interval scale, equal differences between numbers on the scale reflect equal differences in magnitude. Ratios of magnitudes are not meaningful. Example: the Fahrenheit temperature scale (40º is not twice as hot as 20º).
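
A small worked example of why ratios are not meaningful on an interval scale, using the temperatures from the slide; converting to Kelvin (a ratio scale with a true zero point) shows that the apparent 2:1 ratio disappears.

```python
# Interval vs. ratio: 40 F vs. 20 F, re-expressed in Kelvin (true zero point).
def f_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

print(40 / 20)                             # 2.0 -- but this ratio is meaningless
print(f_to_kelvin(40) / f_to_kelvin(20))   # about 1.04 -- 40 F is NOT twice as hot
```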

Scales of measurement Categorical variables: nominal and ordinal scales. Quantitative variables: interval and ratio scales.

Scales of measurement Ratio scale: An interval scale with the additional feature of an absolute zero point. Ratios of numbers DO reflect ratios of magnitude. It is easy to get ratio and interval scales confused. Example: measuring your height with playing cards.

Scales of measurement Measuring height with playing cards: on the ratio scale, the person is 8 cards high, and 0 cards high means 'no height'; on the interval scale, the person is 5 cards high, and 0 cards high means 'as tall as the table'.

Scales of measurement Categorical variables: nominal and ordinal scales. Quantitative variables: interval and ratio scales. "Best" scale? Given a choice, usually prefer the highest level of measurement possible.
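
As a rough sketch of how the scale of measurement constrains which summary statistics make sense (all data values below are invented for illustration):

```python
# Which summary statistics make sense at each level of measurement?
import statistics

eye_color = ["blue", "brown", "brown", "green", "hazel", "brown"]  # nominal
shirt_size = [1, 2, 2, 3, 5]              # ordinal codes: 1 = S ... 5 = XXL
temp_f = [40.0, 20.0, 68.0, 75.5]         # interval (no true zero)
height_cm = [150.0, 172.0, 181.5, 165.0]  # ratio (true zero)

print("nominal  -> mode:", statistics.mode(eye_color))       # counts/mode only
print("ordinal  -> median:", statistics.median(shirt_size))  # order is meaningful
print("interval -> mean:", statistics.mean(temp_f))          # differences meaningful
print("ratio    -> mean:", statistics.mean(height_cm),
      "max/min ratio:", round(max(height_cm) / min(height_cm), 2))  # ratios too
```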

Measuring your dependent variables Scales of measurement Errors in measurement

Reliability & Validity Example: Measuring intelligence? How do we measure the construct? How good is our measure? How does it compare to other measures of the construct? Is it a self-consistent measure?

Errors in measurement Reliability: If you measure the same thing twice (or have two measures of the same thing), do you get the same values? Validity: Does your measure really measure what it is supposed to measure? Does our measure really measure the construct? Is there bias in our measurement?

Reliability & Validity Reliability = consistency; validity = measuring what is intended. A measure can be reliable and valid, unreliable and invalid, or reliable but invalid.

Reliability Observed score = true score + measurement error. A reliable measure will have a small amount of error. There are multiple "kinds" of reliability.

Reliability Test-retest reliability: Test the same participants more than once. Measurements from the same person at two different times should be consistent across different administrations (see the sketch below).
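
A minimal simulation of the ideas on the previous two slides (all numbers invented): the same true scores are measured twice with either small or large random error, and the test-retest correlation drops as measurement error grows.

```python
# Observed score = true score + measurement error, measured at two times.
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(100, 15, 200)   # the trait we are trying to measure

def administer(true, error_sd):
    # One administration of the test: true score plus random error.
    return true + rng.normal(0, error_sd, true.shape)

for error_sd in (3, 20):
    time1 = administer(true_scores, error_sd)
    time2 = administer(true_scores, error_sd)
    r = np.corrcoef(time1, time2)[0, 1]   # test-retest correlation
    print(f"error SD = {error_sd:2d} -> test-retest r = {r:.2f}")

# Small error -> consistent scores across administrations (reliable measure);
# large error -> the same people get quite different scores (unreliable).
```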

Reliability Internal consistency reliability Multiple items testing the same construct Extent to which scores on the items of a measure correlate with each other Cronbach’s alpha (α) Split-half reliability Correlation of score on one half of the measure with the other half (randomly determined)
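
A rough sketch of both internal-consistency measures on made-up data for a 4-item scale (the simulated items all tap one underlying trait; none of this is from the lecture):

```python
# Cronbach's alpha and a split-half correlation on simulated 4-item data.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_items = 100, 4
trait = rng.normal(0, 1, (n_people, 1))
items = trait + rng.normal(0, 0.8, (n_people, n_items))  # items share the trait

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

# Split-half: correlate the score on one half of the items with the other half.
half1 = items[:, [0, 2]].sum(axis=1)
half2 = items[:, [1, 3]].sum(axis=1)
split_half_r = np.corrcoef(half1, half2)[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Split-half correlation: {split_half_r:.2f}")
```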

Reliability Inter-rater reliability At least 2 raters observe behavior Extent to which raters agree in their observations Are the raters consistent? Requires some training in judgment
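
A small illustration of agreement between two raters (the ratings below are invented; Cohen's kappa is one common agreement statistic, not necessarily the one used in lab):

```python
# Two hypothetical raters coding the same 10 behaviors (1 = aggressive, 0 = not).
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

n = len(rater_a)
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Cohen's kappa corrects raw agreement for agreement expected by chance.
p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"Percent agreement: {agreement:.2f}")   # 0.90
print(f"Cohen's kappa: {kappa:.2f}")           # 0.80
```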

Validity Does your measure really measure what it is supposed to measure? There are many “kinds” of validity

Many kinds of Validity: construct, internal, external, face, criterion-oriented, predictive, convergent, concurrent, and discriminant validity.


Construct Validity Usually requires multiple studies, a large body of evidence that supports the claim that the measure really tests the construct

Face Validity At the surface level, does it look as if the measure is testing the construct? “This guy seems smart to me, and he got a high score on my IQ measure.”

External Validity Are experiments “real life” behavioral situations, or does the process of control put too much limitation on the “way things really work?”

External Validity Variable representativeness: relevant variables for the behavior studied along which the sample may vary. Subject representativeness: characteristics of the sample and target population along these relevant variables. Setting representativeness: ecological validity; are the properties of the research setting similar to those outside the lab?

Internal Validity The precision of the results. Did the change in the DV result from changes in the IV, or does it come from something else?

Threats to internal validity History – an event happens during the experiment. Maturation – participants get older (and change in other ways). Selection – nonrandom selection may lead to biases. Mortality – participants drop out or can't continue. Testing – being in the study actually influences how the participants respond.

“Debugging your study” Pilot studies A trial run-through. Don’t plan to publish these results; just try out the methods. Manipulation checks An attempt to directly measure whether the IV really affects the DV. Look for correlations with other measures of the desired effects.

Next time Read chapter 8. Remember: For labs this week you’ll need to download the class experiment packet.