Questionnaire Development


Questionnaire Development: Measuring Validity & Reliability
James A. Pershing, Ph.D., Indiana University

Definition of Validity
An instrument is valid when it measures what it is intended to measure, that is, when it is:
- Appropriate
- Meaningful
- Useful
Validity enables a performance analyst or evaluator to draw correct conclusions.

Types of Validity
- Face
- Content
- Criterion-related (Concurrent, Predictive)
- Construct

Face Validity
"It looks OK": the instrument appears to measure what it is supposed to measure. The client and a sample of respondents look at the items for appropriateness. This is the least scientific measure of validity ("Looks good to me").

Content-Related Validity
An organized review of the format and content of the instrument, carried out by subject matter experts. Reviewers check for:
- Comprehensiveness: an adequate number of questions per objective, with no voids in content
- Balance across the instrument's definition, sample, content, and format

Criterion-Related Validity
How one measure stacks up against another: independent sources that measure the same phenomenon.
- Concurrent = both measures taken at the same time
- Predictive = one measure now, the other in the future
We are seeking a high correlation between the two measures. Criterion-related validity is usually expressed as a correlation coefficient; 0.70 or higher is generally accepted as representing good validity.

Subject   Instrument A (Task Observation Inventory)   Instrument B (Checklist)
John      yes                                         no
Mary      no                                          no
Lee       yes                                         no
Pat       no                                          no
Jim       yes                                         yes
Scott     yes                                         yes
Jill      no                                          yes
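The slide shows the data but not the computation. As a minimal sketch (the helper name `pearson_r` and the 1/0 coding are ours, not from the slide), the yes/no responses can be coded 1/0 and correlated; with binary data, the Pearson coefficient equals the phi coefficient:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation; with yes/no coded as 1/0 this equals phi."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# "yes" = 1, "no" = 0, in the slide's order: John, Mary, Lee, Pat, Jim, Scott, Jill
instrument_a = [1, 0, 1, 0, 1, 1, 0]  # Task Observation Inventory
instrument_b = [0, 0, 0, 0, 1, 1, 1]  # Checklist

r = pearson_r(instrument_a, instrument_b)
print(round(r, 2))  # 0.17
```

For these seven subjects r is about 0.17, far below the 0.70 benchmark, so the two instruments show poor criterion-related agreement.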

Construct-Related Validity
A theory exists that explains how the concept being measured relates to other concepts. Look for the predicted positive or negative correlations, often over time and in multiple settings. Each confirmed prediction (Prediction 1, Prediction 2, ..., Prediction n) lends support to the theory behind the construct. Usually expressed as a correlation coefficient (0.70 or higher is generally accepted as representing good validity).

Definition of Reliability
The degree to which measures obtained with an instrument are consistent measures of what the instrument is intended to measure. Sources of error:
- Random error: unpredictable error, primarily affected by sampling techniques. Reduce it by selecting more representative samples and by selecting larger samples.
- Measurement error: error arising from the performance of the instrument itself.

Types of Reliability
- Test-Retest
- Equivalent Forms
- Internal Consistency: Split-Half approach, Kuder-Richardson approach, Cronbach Alpha approach

Test-Retest Reliability
Administer the same instrument twice to the exact same group after a time interval has elapsed. Calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores:
- r of +.51 to +.75 = moderate to good
- r over +.75 = very good to excellent

Equivalent Forms Reliability
Also called alternate or parallel forms. Two forms of the instrument are administered to the same group at the same time. Between the forms, vary:
- Stem: order and wording
- Response set: order and wording
Calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores:
- r of +.51 to +.75 = moderate to good
- r over +.75 = very good to excellent

Internal Consistency Reliability
- Split-Half: break the instrument (or its sub-parts) in half, as if it were two instruments, and correlate scores on the two halves.
- Kuder-Richardson (KR): treats the instrument as a whole; compares the variance of total scores with the sum of the item variances.
- Cronbach Alpha: like the KR approach, but for scaled or ranked data.
It is best to consult a statistics book or a statistical consultant, and to use computer software, when doing the calculations for these tests.
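To make the "compares the variance of total scores and the sum of item variances" idea concrete, here is a minimal sketch of Cronbach's alpha (the standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)). The function name and the four-item, five-respondent rating data are ours, for illustration only:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(pvariance(col) for col in items)   # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point ratings: four items, each scored by five respondents
items = [
    [4, 3, 5, 2, 4],  # item 1 across respondents
    [5, 3, 4, 2, 5],  # item 2
    [4, 2, 5, 1, 4],  # item 3
    [4, 3, 4, 2, 5],  # item 4
]

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.96
```

When the items hang together, each respondent's item scores rise and fall in unison, the totals spread out, and alpha approaches 1; this toy scale comes out around 0.96.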

Reliability and Validity
Picture a series of targets, where the bulls-eye in each target represents the information that is desired and each dot represents a separate score obtained with the instrument. A dot in the bulls-eye indicates that the information obtained (the score) is the information the analyst or evaluator desires. The possible patterns:
- So unreliable as to be invalid
- Fair reliability and fair validity
- Fair reliability but invalid
- Good reliability but invalid
- Good reliability and good validity

Comments and Questions