
Unanswered Questions in Typical Literature Review

1. Thoroughness
– How thorough was the literature search?
– Did it include a computer search and a hand search?
– In a computer search, what descriptors were used?
– Which journals were searched?
– Were theses and dissertations searched and included?

2. Inclusion/Exclusion
– On what basis were studies included in or excluded from the written review?
– Were theses and dissertations arbitrarily excluded?
– Did the author make inclusion/exclusion decisions based on the perceived internal validity of the research? Sample size? Research design? Use of appropriate statistics?

3. Conclusions
– Were conclusions based on the number of studies supporting or refuting a point (so-called vote counting)?
– Were studies weighted differently according to sample size? Meaningfulness of the results? Quality of the journal? Internal validity of the research?

Establishing Cause and Effect
– The selection of a good theoretical framework
– The application of an appropriate experimental design
– The use of the correct statistical model and analyses
– The proper selection and control of the independent variable
– The appropriate selection and measurement of the dependent variable
– The use of appropriate subjects
– The correct interpretation of the results

Reliability and Validity
– Reliability: the consistency or repeatability of a measure; is the measure reproducible?
– Validity: the truthfulness of a measure; validity depends on reliability and relevance.

Reliability
– The observed score is the sum of the true score and the error score (X = T + E).
– The more reliable a test, the less error is involved.
– Reliability is defined as the proportion of observed-score variance that is true-score variance.
– Reliability is determined using Pearson's correlation coefficient or ANOVA.
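A minimal sketch of the Pearson-correlation route mentioned above, applied to the test-retest case: correlating two administrations of the same test estimates how much of the observed variance is true-score variance. The scores below are invented for illustration, and NumPy is assumed.

```python
import numpy as np

trial1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])  # hypothetical day-1 scores
trial2 = np.array([13, 14, 12, 17, 15, 16, 12, 18])  # hypothetical day-2 scores

# Pearson's r between the two trials serves as the reliability coefficient r_xx
r_xx = np.corrcoef(trial1, trial2)[0, 1]
print(f"Test-retest reliability: r = {r_xx:.3f}")
```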

Types of Reliability
– Interclass reliability: Pearson's product-moment correlation (a correlation between only two variables).
– Test-retest reliability (stability): determines whether a single test is stable over time.
– Equivalence reliability: whether two similar tests measure the same item or trait.
– Split-halves reliability: estimates the reliability of a test from the scores on the odd and even test items (see the sketch below).
– Spearman-Brown prophecy: estimates test reliability after the addition or deletion of test items.
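A minimal sketch of the split-halves procedure with the Spearman-Brown correction, assuming NumPy; the matrix of right/wrong item scores (rows = examinees, columns = items) is made up and not from the slides.

```python
import numpy as np

# Hypothetical 0/1 scores for 5 examinees on 8 items
items = np.array([
    [1, 1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 1],
])

odd_half = items[:, 0::2].sum(axis=1)   # total on items 1, 3, 5, 7
even_half = items[:, 1::2].sum(axis=1)  # total on items 2, 4, 6, 8

# Correlation between the halves is the reliability of a half-length test
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown prophecy for a test twice as long: r_full = 2r / (1 + r)
r_full = (2 * r_half) / (1 + r_half)
print(f"half-test r = {r_half:.3f}, full-test r = {r_full:.3f}")
```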

Intraclass Reliability
– Reliability within an individual's scores across more than two measures.
– Cronbach's alpha estimates the reliability of such tests.
– Uses ANOVA to determine mean differences within and between an individual's scores.
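A short sketch of Cronbach's alpha, computed here from its variance form (equivalent to the ANOVA-based estimate the slide describes). The Likert-style responses are hypothetical, and NumPy is assumed.

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = items on the scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
], dtype=float)

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores

# alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```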

Indices of Reliability
– Index of reliability: the theoretical correlation between true and observed scores; IR = √r_xx.
– Standard error of measurement (SEM): the degree of fluctuation of an individual's observed score around the true score; SEM = s√(1 − r_xx).
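A worked example of both indices using made-up values for the reliability coefficient and the standard deviation of observed scores:

```python
import math

r_xx = 0.84  # hypothetical reliability coefficient
s = 5.0      # hypothetical standard deviation of observed scores

index_of_reliability = math.sqrt(r_xx)  # IR = sqrt(r_xx)
sem = s * math.sqrt(1 - r_xx)           # SEM = s * sqrt(1 - r_xx)

print(f"IR  = {index_of_reliability:.2f}")  # 0.92
print(f"SEM = {sem:.2f}")                   # 2.00
# Roughly 68% of the time, the true score lies within +/- 1 SEM of the observed score
```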

Factors Affecting Reliability
– Fatigue
– Practice
– Ability
– Time between tests
– Testing circumstances
– Test difficulty
– Type of measurement
– Environment

Validity
– Content validity: face validity, logical validity.
– Criterion validity: the measure is related to a specific criterion.
  – Predictive validity
  – Concurrent validity
– Construct validity: the test's validity as a measure of a psychological construct.

The Relationship between Reliability and Validity

Possible Test Items 1
– Be able to define and differentiate between reliability and validity; what types of error does each try to explain?
– Know the different classes and types of reliability; be able to differentiate between the different scores that are included in reliability.
– Be able to calculate the reliability of test examples and describe what type/class of reliability is being estimated.

Possible Test Items 2
– Be able to define, describe, and determine Cronbach's alpha, the index of reliability, and the standard error of measurement.
– Know what factors affect test reliability and how to compensate for them to make a test more reliable.
– Be able to describe, define, and differentiate between the types of validity.
– Be able to describe the different methods for developing a criterion for the evaluation of validity, and give examples of the different types of criterion validity.