Reliability and Validity in Research

Reliability and Validity in Research
Spring 2013, University of Missouri-St. Louis

Believing what you read?
There is a need for reliable and valid data on student learning outcomes.
Reliability: the extent to which an assessment tool is consistent, or free from random error in measurement.
Validity: the extent to which an assessment tool measures what it is intended to measure; that is, the degree to which inferences about students based on their test scores are warranted.

Validity
Validity has been defined as referring to the appropriateness, correctness, meaningfulness, and usefulness of the specific inferences researchers make based on the data they collect. It is the most important idea to consider when preparing or selecting an instrument. Validation is the process of collecting and analyzing evidence to support such inferences.

Evidence of Validity
There are three types of evidence a researcher might collect:
Content-related evidence of validity: the content and format of the instrument.
Criterion-related evidence of validity: the relationship between scores obtained using the instrument and scores obtained using a criterion measure.
Construct-related evidence of validity: the psychological construct being measured by the instrument.

Content-related Evidence
A key element is the adequacy of the sampling of the content domain the instrument is supposed to represent. The other aspect of content validation is the format of the instrument. Attempts to obtain evidence that the items measure what they are supposed to measure typify the process of gathering content-related evidence.

Criterion-related Evidence
A criterion is a second test presumed to measure the same variable. There are two forms of criterion-related validity:
Predictive validity: a time interval elapses between administering the instrument and obtaining the criterion scores.
Concurrent validity: instrument data and criterion data are gathered and compared at the same time.
A correlation coefficient (r) indicates the degree of relationship between the scores individuals obtain on the two instruments.
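
As an illustration only (the slide does not prescribe a procedure), the sketch below correlates hypothetical scores from a new instrument with scores from a criterion measure gathered at the same time, as in a concurrent-validity check; the data and variable names are invented.

# Hypothetical scores for the same ten students on a new instrument
# and on an established criterion measure given at the same time.
new_instrument = [72, 85, 90, 66, 78, 95, 81, 70, 88, 76]
criterion      = [70, 82, 94, 60, 75, 97, 85, 68, 90, 73]

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# A coefficient near +1 supports concurrent validity; a value near 0
# suggests the instrument and the criterion measure different things.
print(round(pearson_r(new_instrument, criterion), 3))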

Construct-related Evidence
Considered the broadest of the three categories. There is no single piece of evidence that satisfies construct-related validity. Researchers attempt to collect a variety of types of evidence, including both content-related and criterion-related evidence. The more evidence researchers have from different sources, the more confident they become about the interpretation of the instrument.

How can validity be established?
Quantitative studies: through the measurements, scores, and instruments used, and through the research design.
Qualitative studies: through strategies researchers have devised to establish credibility, such as member checking, triangulation, thick description, peer review, and external audits.

Reliability
Reliability refers to the consistency of scores or answers provided by an instrument. Scores can be reliable but not valid: an instrument may give consistent results without measuring what it is supposed to measure. An instrument should be both reliable and valid, and both qualities depend on the context in which the instrument is used.

Reliability, continued
In statistics and measurement theory, a measurement or test is considered reliable if it produces consistent results over repeated administrations. Reliability refers to how consistently we are measuring whatever is being measured, regardless of whether it is the right quantity to measure.
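
One common way to quantify this kind of consistency is test-retest reliability: give the same instrument to the same people twice and correlate the two sets of scores. The sketch below uses invented scores and SciPy's Pearson correlation; the slides do not name a specific statistic.

from scipy.stats import pearsonr

# Hypothetical scores for the same eight respondents, measured twice
# with the same instrument two weeks apart.
time_1 = [14, 22, 19, 30, 25, 17, 28, 21]
time_2 = [15, 21, 20, 29, 24, 18, 27, 23]

# Test-retest reliability: the correlation between the two administrations.
# Values close to +1 indicate consistent (reliable) measurement.
r, _ = pearsonr(time_1, time_2)
print(f"test-retest reliability r = {r:.3f}")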

Reliability, continued
Unlike the everyday sense of the word, in this context “reliability” does not imply a value judgment; it only describes consistency:
Your car always starts, or never starts.
Your friend is always on time, or always late.

Reliability of Measurement

Errors of Measurement
Because errors of measurement are always present to some degree, variation in test scores is common. Sources of this variation include:
Differences in motivation
Energy
Anxiety
A different testing situation
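
The slides do not formalize this, but classical test theory offers a compact way to think about it: an observed score is a stable true score plus random error, and reliability is the share of observed-score variance that comes from true scores. The sketch below uses arbitrary numbers and assumes the error is uncorrelated with the true score.

# Classical test theory sketch (assumed framing, not stated on the slide):
# observed score X = true score T + random error E, where E reflects
# motivation, energy, anxiety, the testing situation, and so on.
true_score_variance = 100.0  # real differences between test takers
error_variance = 25.0        # variance added by measurement error

# Reliability is the proportion of observed-score variance due to true
# scores (this simple form assumes error is uncorrelated with true score).
observed_variance = true_score_variance + error_variance
reliability = true_score_variance / observed_variance
print(f"reliability = {reliability:.2f}")  # 0.80 with these assumed numbers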

Observational Studies
Some characteristics cannot be measured through a test.
Unobtrusiveness
Multiple sources of error
Reliability depends on the extent to which observers agree.
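
As one hypothetical way to quantify that agreement (the slide names no statistic), the sketch below computes simple percent agreement and Cohen's kappa for two observers coding the same behaviors; the ratings are invented.

from collections import Counter

# Hypothetical codes assigned to the same 12 observed behaviors
# by two independent observers (e.g., "on"-task vs "off"-task).
observer_a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "on", "off", "on"]
observer_b = ["on", "off", "off", "on", "off", "on", "on", "off", "on", "on", "on", "on"]

n = len(observer_a)

# Percent agreement: the share of observations coded identically.
p_observed = sum(a == b for a, b in zip(observer_a, observer_b)) / n

# Cohen's kappa corrects that figure for agreement expected by chance.
counts_a = Counter(observer_a)
counts_b = Counter(observer_b)
p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")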

How can reliability be established?
Quantitative studies: through the assumption of repeatability.
Qualitative studies: by reframing reliability as dependability and confirmability.

Reliability and Validity

Why do we bother?
Reliability and validity are terms used in conjunction with one another.
Quantitative research: reliability and validity are treated as separate terms.
Qualitative research: reliability and validity are often grouped under a single, all-encompassing term.
The two concepts have a semi-reciprocal relationship.