
MEASUREMENT: PART 1

Overview
 Background
 Scales of Measurement
 Reliability
 Validity (next time)

Background
 Measurement: the process of quantifying human characteristics (sometimes called psychometrics)
 A long-standing strength of psychology (IQ, personality), with recent emphasis in health research (e.g., PCORI)
 Necessary for evaluating measures and research
 Variables can be understood at two levels:
    Construct: a theoretical concept that is not directly observable (e.g., psychopathy, marital satisfaction, intelligence)
    Operational definition: a concrete method of assessing a construct
 Major methods: tests and surveys, behavioral observation, and physiologic measures

The Basics: Scales of Measurement

Categorical / Nominal / Discrete scales
 Multicategorical / Polytomous: geographic region, race, major, favorite music genre
 Binomial* / Dichotomous: gender?, yes/no status on anything

Continuous / Quantitative scales
 Ordinal (property 1): any rankings (college, popularity)
 Interval (properties 1 and 2): intelligence, openness, resilience, shame
 Ratio (properties 1, 2, and 3): age, time, # of health conditions

Scale properties
 (1) Rank order: the numbers have meaning
 (2) Equal intervals: differences in scores reflect comparable differences in the construct across the range of scores
 (3) True zero: a score of zero indicates the absence of the construct

* Usually, rank is considered arbitrary for dichotomous variables, though these variables are sometimes treated as continuous in statistical analyses. (A brief coding sketch of these distinctions follows.)
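As a supplementary illustration (not part of the original slides), the short Python sketch below shows one common way these scale types are encoded during analysis: unordered categories for nominal variables, ordered categories for ordinal variables, and plain numbers for interval/ratio variables. All variable names and values are hypothetical.

```python
import pandas as pd

# Hypothetical data illustrating the scale types above
df = pd.DataFrame({
    "region": ["South", "West", "Midwest"],   # multicategorical / nominal
    "smoker": ["yes", "no", "no"],            # dichotomous
    "class_rank": [3, 1, 2],                  # ordinal: order matters, spacing does not
    "iq": [104, 121, 98],                     # interval: equal spacing, no true zero
    "n_conditions": [0, 2, 1],                # ratio: zero means the absence of conditions
})

# Nominal variables: unordered categories
df["region"] = pd.Categorical(df["region"])
df["smoker"] = pd.Categorical(df["smoker"])

# Ordinal variable: ordered categories (rank order, but no equal intervals)
df["class_rank"] = pd.Categorical(df["class_rank"], categories=[1, 2, 3], ordered=True)

print(df.dtypes)  # iq and n_conditions stay numeric; the rest are categorical
```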

Scales of Measurement
 Implications for analyses (tough for beginners):
    Two continuous variables: correlation
    Dichotomous and continuous variable: t-test
    Polytomous and continuous variable: ANOVA
    The type of data always dictates the type of analysis; the mapping can be simple (as above) or more complex (many other statistics). See the sketch after this list.
 Implications for scale development/interpretation:
    Continuous vs. artificially dichotomized/polytomized variables
    Choice of response options for categorical variables
    Choice of response options for continuous variables
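To make the data-type-to-analysis mapping concrete, here is a minimal Python sketch using scipy.stats on simulated data; the variables are invented for illustration, and a real analysis would also check the assumptions of each test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores, for illustration only
anxiety = rng.normal(50, 10, 100)                   # continuous
depression = 0.5 * anxiety + rng.normal(0, 8, 100)  # continuous, related to anxiety
gender = rng.integers(0, 2, 100)                    # dichotomous (two groups)
major = rng.integers(0, 3, 100)                     # polytomous (three groups)

# Two continuous variables -> correlation
r, p_r = stats.pearsonr(anxiety, depression)

# Dichotomous + continuous -> independent-samples t-test
t, p_t = stats.ttest_ind(anxiety[gender == 0], anxiety[gender == 1])

# Polytomous + continuous -> one-way ANOVA
f, p_f = stats.f_oneway(*(anxiety[major == g] for g in np.unique(major)))

print(f"r = {r:.2f}, t = {t:.2f}, F = {f:.2f}")
```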

Psychometrics
 Measurement reliability = consistency across measurements
    The opposite of measurement error
    Several types: split-half, internal consistency, parallel forms, test-retest, inter-rater
 Measurement validity = how well a measure measures what it is supposed to measure (how well a measure operationalizes a construct)
    Several forms: face, content, construct (convergent, discriminant, incremental), criterion (concurrent, predictive)
 Other validities: internal, external, statistical conclusion

Split-Half Reliability
 How well scores on one half of the test correlate with scores on the other half
 First half correlated with second half, odd-numbered items with even-numbered items, etc.
 Problem: there are many ways to split a test in two, and different splits yield different correlations (see the sketch below)
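As a rough illustration of one possible split (odd vs. even items), here is a minimal Python sketch on simulated item scores; the Spearman-Brown formula is used to correct the half-length correlation back to the full test length. The function name and data are hypothetical.

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Odd-even split-half reliability with the Spearman-Brown correction.

    `items` is a (respondents x items) score matrix. This is only one of the
    many possible splits; other splits would give somewhat different values.
    """
    odd = items[:, 0::2].sum(axis=1)    # total score on odd-numbered items
    even = items[:, 1::2].sum(axis=1)   # total score on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]    # correlation between the two half-tests
    return 2 * r / (1 + r)              # Spearman-Brown: step up to full length

# Simulated data: 200 respondents, 10 items sharing a common true score
rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 10))
print(round(split_half_reliability(items), 2))
```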

Internal-Consistency Reliability
 Cronbach's alpha (α)
 The average correlation of all possible split-halves, corrected for the loss of scale length due to halving
 Increases as inter-item correlations increase (items correlate well with each other) and as the length of the measure increases (errors cancel out)
 Common descriptors (by the usual .10 bands):
    Unacceptable: < .50
    Poor: .50–.59
    Fair: .60–.69
    Acceptable: .70–.79
    Good: .80–.89
    Excellent: .90+
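Alpha can also be computed directly from a variance decomposition, α = k/(k − 1) × (1 − Σ item variances / variance of the total score). Below is a minimal Python sketch applied to simulated item data; the function name and data are illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of the k item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated data: 300 respondents, 8 items sharing a common true score
rng = np.random.default_rng(2)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=1.0, size=(300, 8))
print(round(cronbach_alpha(items), 2))  # longer, more intercorrelated scales push this up
```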

Other Reliabilities
 Parallel forms
    Correlation between multiple forms of the same measure (e.g., SAT, GRE, MCAT, neuropsychological tests)
 Test-retest
    Longitudinal correlation between administrations of the same measure
    Expected stability depends on the construct: mood vs. personality vs. cognitive skills
 Inter-rater
    Correlate scores from two or more different raters
    A measure of agreement or consensus
    Used for behavioral observations, informant data, health records, clinician ratings/diagnoses
 (A brief sketch of the last two follows.)
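To make test-retest and inter-rater reliability concrete, the sketch below computes a retest correlation and Cohen's kappa (one common chance-corrected agreement index) on simulated data; it assumes scikit-learn is available, and all scores and raters are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)

# Test-retest: the same (hypothetical) measure given twice, some time apart
time1 = rng.normal(50, 10, 80)
time2 = time1 + rng.normal(0, 5, 80)            # mostly stable scores plus noise
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Inter-rater: two hypothetical clinicians assigning one of three diagnoses
rater_a = rng.integers(0, 3, 80)
rater_b = np.where(rng.random(80) < 0.8, rater_a, rng.integers(0, 3, 80))
kappa = cohen_kappa_score(rater_a, rater_b)     # chance-corrected agreement

print(f"test-retest r = {test_retest_r:.2f}, kappa = {kappa:.2f}")
```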