Copyright © 2008 Wolters Kluwer Health | Lippincott Williams & Wilkins Chapter 17 Assessing Measurement Quality in Quantitative Studies.

Chapter 17 Assessing Measurement Quality in Quantitative Studies

Measurement
The assignment of numbers to represent the amount of an attribute present in an object or person, using specific rules
Advantages:
–Removes guesswork
–Provides precise information
–Is less vague than words

Errors of Measurement
Obtained score = True score + Error
Obtained score: the actual data value for a participant (e.g., an anxiety scale score)
True score: the score that would be obtained with an infallible measure
Error: the error of measurement, caused by factors that distort measurement

Factors That Contribute to Errors of Measurement
Situational contaminants
Transitory personal factors
Response-set biases
Administration variations
Problems with instrument clarity
Item sampling
Instrument format

Key Criteria for Evaluating Quantitative Measures
Reliability
Validity

Reliability
The consistency and accuracy with which an instrument measures the target attribute
Reliability assessments involve computing a reliability coefficient
–Most reliability coefficients are based on correlation coefficients

Correlation Coefficients
Correlation coefficients indicate the direction and magnitude of the relationship between variables
Range:
–from –1.00 (perfect negative correlation)
–through 0.00 (no correlation)
–to +1.00 (perfect positive correlation)
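Because nearly all of the coefficients discussed in this chapter rest on correlation, it may help to see how a Pearson correlation coefficient is actually computed. This is a minimal stdlib-only sketch with made-up score lists, not an example from the text:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# The two extremes of the range described above:
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly positive, r ≈ +1.00
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # perfectly negative, r ≈ –1.00
```

The same function can be applied to scores from two administrations of an instrument (test–retest reliability) or to instrument and criterion scores (a validity coefficient).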

Three Aspects of Reliability Can Be Evaluated
Stability
Internal consistency
Equivalence

Stability
The extent to which scores are similar on two separate administrations of an instrument
Evaluated by test–retest reliability:
–Requires participants to complete the same instrument on two occasions
–A correlation coefficient between scores on the first and second administrations is computed
–Appropriate for relatively enduring attributes (e.g., self-esteem)

Internal Consistency
The extent to which all the instrument's items are measuring the same attribute
Evaluated by administering the instrument on a single occasion
Appropriate for most multi-item instruments
Evaluation methods:
–Split-half technique
–Coefficient alpha
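Coefficient alpha (Cronbach's alpha) can be computed directly from item scores. The sketch below uses four hypothetical 5-point Likert items answered by five respondents; the data are invented for illustration:

```python
def cronbach_alpha(items):
    """Coefficient alpha for a list of item-score lists (one list per item)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical responses: one inner list per item, one column per respondent
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 1],
    [4, 5, 3, 4, 3],
]
print(round(cronbach_alpha(items), 2))   # ≈ .93 for these scores
```

The higher the alpha, the more the items appear to be tapping the same attribute; values of .80 or more are usually considered desirable.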

Equivalence
The degree of similarity between alternative forms of an instrument, or between multiple raters/observers using an instrument
Most relevant for structured observations
Assessed by comparing the observations or ratings of two or more observers (interobserver/interrater reliability)
Numerous formulas and assessment methods exist
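One widely used index of interrater equivalence for categorical observations is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A stdlib-only sketch with hypothetical codings (the data and category labels are invented):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two observers code the same 10 behaviors as on-task (1) or off-task (0)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))   # ≈ .52
```

Here the observers agree on 8 of 10 codes (80%), but after removing chance agreement the kappa is substantially lower, which is why kappa is preferred to raw percent agreement.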

Reliability Coefficients
Represent the proportion of true variability to obtained variability: r = V_T / V_O
Should be at least .70; .80 is preferable
Can be improved by making the instrument longer (adding items)
Are lower in homogeneous than in heterogeneous samples
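The ratio r = V_T / V_O can be made concrete with invented numbers. If we pretend we could observe each participant's true score and error separately (which is never possible in practice), the reliability coefficient is simply the share of obtained-score variance attributable to true-score variance:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical true scores and measurement errors for six participants
true_scores = [50, 55, 60, 65, 70, 75]
errors      = [ 2, -3,  1, -1,  3, -2]
obtained    = [t + e for t, e in zip(true_scores, errors)]

r = variance(true_scores) / variance(obtained)   # r = V_T / V_O
print(round(r, 2))   # ≈ .98: almost all obtained variability is true variability
```

Because errors are small relative to true-score differences in this fabricated sample, the coefficient is near 1.00; larger errors would shrink it toward 0.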

Validity
The degree to which an instrument measures what it is supposed to measure
Four aspects of validity:
–Face validity
–Content validity
–Criterion-related validity
–Construct validity

Face Validity
Refers to whether the instrument looks as though it is measuring the appropriate construct
Based on judgment; there are no objective criteria for assessment

Content Validity
The degree to which an instrument has an appropriate sample of items for the construct being measured
Evaluated by expert judgment, often via the content validity index (CVI)
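A common way to compute a CVI (one of several variants) is to have experts rate each item's relevance on a 4-point scale and take the proportion rating it 3 or 4; item-level values can then be averaged into a scale-level CVI. A sketch with five hypothetical experts and four hypothetical items:

```python
# Ratings from five hypothetical experts on a 4-point relevance scale
# (1 = not relevant ... 4 = highly relevant), one row per item
ratings = [
    [4, 4, 3, 4, 4],   # item 1
    [3, 4, 4, 3, 4],   # item 2
    [4, 2, 3, 4, 4],   # item 3
    [2, 3, 2, 4, 3],   # item 4
]

# Item-level CVI: proportion of experts rating the item 3 or 4
i_cvi = [sum(r >= 3 for r in item) / len(item) for item in ratings]
# Scale-level CVI (averaging approach): mean of the item-level CVIs
s_cvi = sum(i_cvi) / len(i_cvi)

print(i_cvi)            # [1.0, 1.0, 0.8, 0.6]
print(round(s_cvi, 2))  # 0.85
```

Items with low item-level CVIs (like item 4 here) are candidates for revision or removal before the instrument is used.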

Criterion-Related Validity
The degree to which the instrument correlates with an external criterion
A validity coefficient is calculated by correlating scores on the instrument with scores on the criterion

Criterion-Related Validity (cont'd)
Two types of criterion-related validity:
–Predictive validity: the instrument's ability to distinguish people whose performance differs on a future criterion
–Concurrent validity: the instrument's ability to distinguish people who differ on a present criterion

Construct Validity
Concerned with the questions:
–What is this instrument really measuring?
–Does it adequately measure the construct of interest?

Methods of Assessing Construct Validity
Known-groups technique
Relationships based on theoretical predictions
Multitrait-multimethod matrix method (MTMM)
Factor analysis
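In the known-groups technique, the instrument is administered to groups expected to differ on the construct, and the group scores are compared, often with an independent-samples t-test. A sketch using invented anxiety scores for two groups we would expect to differ (patients awaiting surgery vs. healthy volunteers):

```python
import math

def independent_t(g1, g2):
    """Independent-samples t statistic with pooled variance."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    ss1 = sum((x - m1) ** 2 for x in g1)
    ss2 = sum((x - m2) ** 2 for x in g2)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)    # pooled variance estimate
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical anxiety-scale scores for two known groups
preop   = [32, 35, 30, 36, 33]   # patients awaiting surgery
healthy = [22, 25, 20, 24, 23]   # community volunteers
print(round(independent_t(preop, healthy), 2))   # ≈ 7.59
```

A large difference in the expected direction supports the inference that the instrument is capturing the intended construct; failure to find a difference would cast doubt on its construct validity.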

Multitrait-Multimethod Matrix Method
Builds on two types of evidence:
–Convergence
–Discriminability

Convergence
Evidence that different methods of measuring a construct yield similar results
Convergent validity comes from the correlation between two different methods of measuring the same trait

Discriminability
Evidence that the construct can be differentiated from other, similar constructs
Discriminant validity assesses the degree to which a single method of measuring two constructs yields different results