Presentation transcript:

Reliability
- Consistent
- Dependable
- Replicable
- Stable

Two Administrations
- Test/retest: the same test given to the same group on two occasions
- Parallel/alternate forms: different versions of the test given to the same group

Two-administration procedure:
- Administer the tests (in two sessions)
- Convert to z scores (if necessary)
- Correlate the two sets of scores (Pearson or Spearman; sketched below)
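A minimal sketch of this procedure in Python, assuming two score lists from the same group of examinees; the data and variable names are hypothetical, for illustration only:

```python
import numpy as np
from scipy import stats

scores_time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])  # hypothetical scores, session 1
scores_time2 = np.array([13, 14, 12, 19, 15, 15, 12, 18])  # hypothetical scores, session 2

# Optional: convert raw scores to z scores. The correlation itself is
# unchanged, but z scores put both administrations on a common scale.
z1 = stats.zscore(scores_time1)
z2 = stats.zscore(scores_time2)

# Pearson r for interval/ratio scores; Spearman rho for ordinal scores.
pearson_r, _ = stats.pearsonr(z1, z2)
spearman_rho, _ = stats.spearmanr(scores_time1, scores_time2)

print(f"two-administration reliability (Pearson r):    {pearson_r:.3f}")
print(f"two-administration reliability (Spearman rho): {spearman_rho:.3f}")
```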

Two-administration issues:
- What problems can arise from administering the test twice?
- How long should the interval between administrations be?
- What type of variable is being measured?

One Administration
- One test
- One group
- One administration

One-administration (split-half) procedure:
- Administer the test to one group
- Divide the items into two halves for scoring (first/second or odd/even split)
- Correlate the scores from the two halves
- Apply the Spearman-Brown formula to estimate the reliability of the full-length test (sketched below)
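A minimal sketch of the split-half calculation with the Spearman-Brown correction, assuming an examinee-by-item matrix of scored responses; the simulated data and variable names are illustrative only, not from the presentation:

```python
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(size=(30, 1))           # hypothetical latent trait
noise = rng.normal(size=(30, 20))
items = (ability + noise > 0).astype(int)    # hypothetical 0/1 item responses

# Split the items into odd- and even-numbered halves and score each half.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the two half-test scores (Pearson r).
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown: estimate the reliability of the full-length test from the halves:
#   r_full = 2 * r_half / (1 + r_half)
r_full = 2 * r_half / (1 + r_half)

print(f"half-test correlation:               {r_half:.3f}")
print(f"Spearman-Brown full-length estimate: {r_full:.3f}")
```

The correction step matters because a half-length test is less reliable than the full test, so the raw half-test correlation understates the reliability of the whole instrument.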

Uses of one administration:
- Estimating the internal consistency of the items
- What types of tests is this approach appropriate for?

Inter-item consistency
- Statistical estimation (sketched below)
  - Kuder-Richardson Formula 20 (KR-20): for dichotomous items
  - Cronbach's alpha (alpha coefficient): for any item format
- (Factor analysis)
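A minimal sketch of Cronbach's alpha, assuming a scored examinee-by-item matrix; with 0/1 (dichotomous) items the same calculation reduces to KR-20. The simulated data are illustrative only:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
ability = rng.normal(size=(50, 1))                                  # hypothetical latent trait
responses = (ability + rng.normal(size=(50, 10)) > 0).astype(int)   # hypothetical 0/1 responses

print(f"Cronbach's alpha (here equivalent to KR-20): {cronbach_alpha(responses):.3f}")
```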

Reliability coefficient
- Higher is better
- The value is relative, not absolute

Inter-rater reliability
- Consensus between raters
- Percentage of agreement
- Kappa statistic (for 2 or more raters; sketched below)
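A minimal sketch of both agreement indices for the two-rater case, using hypothetical ratings; Cohen's kappa covers two raters, while Fleiss' kappa extends the idea to more than two:

```python
from collections import Counter

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]   # hypothetical ratings
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
n = len(rater_a)

# Raw percentage of agreement ignores agreement expected by chance.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa corrects for chance agreement:
#   kappa = (p_observed - p_chance) / (1 - p_chance)
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percentage of agreement: {p_observed:.2f}")
print(f"Cohen's kappa:           {kappa:.3f}")
```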

Threats to reliability
- Test construction
- Administration (tester, test taker, environment)
- Scoring
- Interpretation

Project homework
- Which approaches were used to determine the reliability of your measure?
- Why were those approaches selected?
- Could other approaches have been used? Why or why not?