Measuring Research Variables

Chapter 11: Measuring Research Variables

Chapter Outline
- Validity
- Reliability
- Methods of establishing reliability
- Intertester reliability (objectivity)
- Standard error of measurement
- Using standard scores to compare performance
(continued)

Chapter Outline (continued)
- Measuring movement
- Measuring written responses
- Measuring affective behavior
- Scales for measuring affective behavior
- Measuring knowledge
- Item response theory

Four Basic Types of Measurement Validity
The American Educational Research Association and American Psychological Association agree on the definition of four types of validity:
- Logical validity
- Content validity
- Criterion validity
  - Concurrent
  - Predictive
- Construct validity

Desired Qualities in a Criterion Measure
- Relevance (the extent to which the criterion exemplifies success)
- Freedom from bias (everyone must have the same chance to achieve a good score)
- Reliability of the criterion (you can't predict a criterion if you can't measure it consistently)
- Availability (how hard is it to obtain the criterion score?)

Measurement Reliability Overview
- Definition: Observed score = True score + Error score
- Sources of measurement error
- Expressing reliability through correlation
  - Interclass
  - Intraclass
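To make the definition concrete, here is a minimal sketch (an illustration with made-up numbers, not from the chapter) that simulates Observed score = True score + Error score on two trials and estimates interclass reliability as the Pearson correlation between trials:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 200                        # number of examinees
true = rng.normal(50, 10, n)   # latent true scores

# Observed score = true score + random measurement error
trial1 = true + rng.normal(0, 5, n)
trial2 = true + rng.normal(0, 5, n)

# Interclass (Pearson) correlation between the two trials
# estimates test-retest reliability.
r = np.corrcoef(trial1, trial2)[0, 1]

# In theory, reliability = true-score variance / observed variance
expected = 10**2 / (10**2 + 5**2)

print(f"estimated reliability r = {r:.3f} (theory ~ {expected:.3f})")
```

Shrinking the error standard deviation toward zero drives r toward 1, which is the sense in which reliability reflects freedom from measurement error.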

Estimating Reliability
- Interclass
  - Simple correlation
  - Weaknesses
- Intraclass: ANOVA with repeated measures
  - Treating trial-to-trial variation as measurement error
  - Discarding trials
  - Ignoring trial-to-trial variation

Trial-to-Trial Variation
- Can be treated as measurement error: R = (MS_S − MS_E) / MS_S
- Can be solved by discarding trials, then using the formula above
- Can be ignored: R = (MS_S − MS_res) / MS_S
Here MS_S is the subjects mean square, MS_E the error mean square with trial-to-trial variation pooled into error, and MS_res the residual mean square from a repeated-measures ANOVA (see the sketch below).
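A minimal sketch of the intraclass computation, assuming a small subjects-by-trials score matrix (the data and variable names are illustrative):

```python
import numpy as np

# Rows = subjects, columns = repeated trials (made-up scores)
scores = np.array([
    [10.0, 11.0, 10.5],
    [14.0, 15.0, 14.5],
    [ 8.0,  9.0,  8.5],
    [12.0, 12.5, 13.0],
    [16.0, 15.5, 16.5],
])
n, k = scores.shape
grand = scores.mean()

# Sums of squares for a subjects-by-trials repeated-measures ANOVA
ss_subjects = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_trials = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ss_resid = ss_total - ss_subjects - ss_trials

ms_subjects = ss_subjects / (n - 1)
ms_resid = ss_resid / ((n - 1) * (k - 1))
# Error term when trial-to-trial variation counts as measurement error
ms_error = (ss_trials + ss_resid) / (n * k - n)

r_error = (ms_subjects - ms_error) / ms_subjects    # trials treated as error
r_ignored = (ms_subjects - ms_resid) / ms_subjects  # trial variation ignored

print(f"R (trial variation as error): {r_error:.3f}")
print(f"R (trial variation ignored):  {r_ignored:.3f}")
```

Treating trial variation as error gives the more conservative (lower) R whenever scores drift systematically across trials.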

Methods of Establishing Reliability
- Determining stability: test, retest, then use the intraclass method
- Constructing alternate forms
- Obtaining internal consistency
  - Same-day test-retest
  - Split-half technique
  - Kuder-Richardson (KR-20 and KR-21)
  - Coefficient alpha (see the sketch below)
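A minimal coefficient alpha sketch, assuming an examinees-by-items score matrix (the responses are made up; for dichotomous 0/1 items, alpha reduces to KR-20):

```python
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha.

    items: 2-D array, rows = examinees, columns = test items.
    alpha = k/(k-1) * (1 - sum(item variances) / total-score variance)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up responses: 6 examinees, 4 items (0 = wrong, 1 = right)
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(f"alpha = {coefficient_alpha(responses):.3f}")
```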

Intertester Reliability (Objectivity)
- Agreement among testers or raters
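As one illustration of quantifying rater agreement (percent agreement plus Cohen's kappa; the slides do not name a specific index, so this is an assumed example rather than the chapter's method):

```python
import numpy as np

# Two raters assign each of 10 performances to a category (made-up data)
rater_a = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])
rater_b = np.array([1, 2, 3, 3, 1, 2, 3, 2, 1, 2])

# Simple percent agreement
p_a = (rater_a == rater_b).mean()

# Cohen's kappa: agreement corrected for chance agreement
categories = np.union1d(rater_a, rater_b)
p_e = sum(
    (rater_a == c).mean() * (rater_b == c).mean() for c in categories
)
kappa = (p_a - p_e) / (1 - p_e)

print(f"percent agreement = {p_a:.2f}, kappa = {kappa:.2f}")
```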

Other Measurement Issues
- Standard error of measurement: S_y·x = s√(1 − r) (see the sketch below)
- Standard scores
  - z scores: z = (X − M) / s
  - T scale: T = 50 + 10z
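A minimal sketch applying the three formulas above to made-up scores (the reliability value r is assumed):

```python
import numpy as np

scores = np.array([55.0, 62.0, 48.0, 70.0, 59.0, 66.0, 51.0, 64.0])
mean = scores.mean()
sd = scores.std(ddof=1)

z = (scores - mean) / sd   # z scores: z = (X - M) / s
t = 50 + 10 * z            # T scale:  T = 50 + 10z

r = 0.90                   # assumed reliability of the test
sem = sd * np.sqrt(1 - r)  # standard error of measurement

print(f"z scores: {np.round(z, 2)}")
print(f"T scores: {np.round(t, 1)}")
print(f"SEM = {sem:.2f} (about 68% of obtained scores fall within "
      f"+/- 1 SEM of the true score)")
```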

Measuring Various Types of Characteristics
- Movement
- Affective behavior
  - Attitudes
  - Personality
- Scales for affective behavior
  - Likert-type
  - Semantic differential

Rating Scales
- Types of rating scales
  - Numerical
  - Checklist
  - Forced choice
  - Rankings
- Rating errors
  - Leniency
  - Central tendency
  - Halo
  - Proximity errors
  - Observer bias
  - Observer expectation

Measuring Knowledge
- Analyzing test items (see the sketch below)
  - Item difficulty = (number answering correctly) / (total number of examinees)
  - Item discrimination = (n_H − n_L) / n, where n_H and n_L are the numbers answering correctly in the high- and low-scoring groups and n is the size of each group
- Types of knowledge test items
  - Multiple choice
  - True/false
  - Completion
  - Matching
  - Essay
- Item response theory (IRT)
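A minimal item-analysis sketch, assuming dichotomous (0/1) responses and a median split into high- and low-scoring groups (real item analysis often uses the upper and lower 27% instead; the data are made up):

```python
import numpy as np

# Rows = examinees, columns = items; 1 = correct, 0 = incorrect
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

# Item difficulty: proportion of examinees answering correctly
difficulty = responses.mean(axis=0)

# Item discrimination: split examinees into high and low scorers by
# total score, then D = (n_H - n_L) / n per item, where n is the size
# of each group (here the top and bottom half).
totals = responses.sum(axis=1)
order = np.argsort(totals)
n = len(responses) // 2
low, high = responses[order[:n]], responses[order[-n:]]
discrimination = (high.sum(axis=0) - low.sum(axis=0)) / n

for i, (p, d) in enumerate(zip(difficulty, discrimination), 1):
    print(f"item {i}: difficulty = {p:.2f}, discrimination = {d:.2f}")
```

Items with difficulty near 0 or 1 discriminate poorly almost by definition, which is why the two indices are usually inspected together.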