Research Methods in MIS


Research Methods in MIS Dr. Deepak Khazanchi

Measurement of Variables: Scaling, Reliability and Validity

Major Sources of Errors in Measurement
Since 100% control for precise and unambiguous measurement of variables is unattainable, error does occur. Much potential error is SYSTEMATIC (it results from a bias), while the remainder is RANDOM (it occurs erratically).
Sources of measurement differences:
- Respondents
- Situational factors
- Measurer or researcher
- Data collection instruments

Validity and Reliability
Validity: Accuracy of measurement. The degree to which an instrument measures what it is supposed to measure.
Validity Coefficient: An estimate of the validity of a measure, in the form of a correlation coefficient.
Reliability: Consistency of measurement. The degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects.
Reliability Coefficient: An estimate of the reliability of a measure, usually in the form of a correlation coefficient.
From Sproull (1995):
1. Validity is an extremely important, and perhaps the most important, aspect of a measure. If an instrument does not accurately measure what it is supposed to measure, there is no reason to use it, even if it does measure consistently.
2. Of validity and reliability, validity is far more important, for the reason given in #1.
3. It is questionable practice to use measures for which (1) there is no evidence of validity, or (2) there is evidence of validity but the validity estimates are low.
4. Validity is specific. An instrument or measure can be valid for a specific criterion but not for other criteria, or for a specific group of people but not for other groups.
5. There are several types of validity and reliability estimates. Because of the confusion caused by different names and meanings for the various types of estimates, standard labels and standard definitions have been determined for both validity and reliability estimates.
6. A single instrument may have several types of validity and reliability estimates. Each type of estimate has a different purpose.
7. Validity and reliability are always estimated, not proven.
8. The statistical symbol for a validity coefficient is r_xy, indicating the correlation of two different measures. The statistical symbol for a reliability coefficient is r_xx, indicating two measures of the same variable.
9. A typical validity coefficient is approximately .45 or higher. Higher is better; however, validity coefficients rarely exceed .60, and many are in the range of .30 to .40.
10. A typical reliability coefficient for a researcher-designed instrument is approximately .70 or higher. For an instrument designed by a testing service, one would expect .90 or higher.
11. If the researcher designs an instrument, the researcher should make the validity and reliability estimates. If an instrument is purchased, the company from which it is purchased should provide validity and reliability information, which the researcher examines prior to purchase.

Possible Conditions of Validity and Reliability
When examining an instrument for validity and reliability, remember that three conditions may exist. An instrument might show evidence of being:
1. Both valid and reliable, or
2. Reliable but not valid, or
3. Neither reliable nor valid.
NOTE: An instrument that is valid will also have some degree of reliability.

About Reliability and Validity Coefficients
Validity and reliability are estimated by using correlation coefficients. These statistics estimate the degree of validity or reliability. Thus, it is not a question of an instrument having or not having validity or reliability; it is a question of the degree to which an instrument is valid for a specific purpose, and the degree to which it evidences specific types of reliability. Reliability estimates are done after validity is assessed. We will discuss the notions of internal and external validity in the context of experimental designs.

Types of Validity
Content Validity: The representativeness of the content of the instrument relative to the objectives of using the instrument.
Usual Process: 1. Examine the objectives; 2. Compare the objectives to the content of the instrument.

Types of Validity (cont’d)
Criterion-Related Validity
Predictive: The degree to which a measure predicts a second, future measure. Usual Process: 1. Assess the validation sample on the predictor; 2. Assess the validation sample on the criterion at a later time; 3. Correlate the scores.
Concurrent: The degree to which a measure correlates with another measure of the same variable that has already been validated. Usual Process: 1. Assess the validation sample on the new measure; 2. Assess the validation sample on the already-validated measure of the same variable at about the same time; 3. Correlate the scores.
(In both cases, step 3 is an ordinary Pearson correlation; see the sketch below.)
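A minimal sketch of the correlation step in Python, using hypothetical predictor and criterion scores (the names and numbers are invented for illustration):

```python
import numpy as np

# Hypothetical validation sample: an aptitude score (predictor) and
# job performance measured six months later (criterion).
predictor = np.array([52, 61, 47, 70, 58, 65, 43, 55])
criterion = np.array([3.1, 3.8, 2.9, 4.2, 3.5, 4.0, 2.5, 3.3])

# The validity coefficient r_xy is the Pearson correlation between
# the two different measures.
r_xy = np.corrcoef(predictor, criterion)[0, 1]
print(f"Predictive validity coefficient r_xy = {r_xy:.2f}")
```

A coefficient near the .45 guideline from the earlier slide would be considered typical; values above .60 are rare.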

Types of Validity (cont’d)
Construct Validity: The degree to which a measure relates to expectations formed from theory about hypothetical constructs.
Usual Process: 1. Assess the validation sample on the major variable; 2. Assess the validation sample on several hypothetically related variables; 3. Analyze to see if the major variable differentiates subjects on the related variables.

Types of Reliability (Consistency) Estimates
V. IMP: A MEASURE CAN BE RELIABLE BUT TOTALLY LACK VALIDITY.
- Stability: Test-retest
- Equivalence: Parallel forms
- Internal Consistency: Split-half; KR-20 (Kuder-Richardson); Coefficient (Cronbach’s) alpha
- Interrater reliability

Types of Reliability (Consistency) Estimates: STABILITY
Test-Retest Reliability: Used to assess the stability of a measure over time. Usually indicated by a correlation coefficient.
Number of forms (of instrument): 1
Number of administrations: 2
Usual Process:
1. Administer the instrument to the reliability sample at Time 1.
2. Wait a period of time (e.g., 2-4 weeks).
3. Administer copies of the same instrument to the same sample at Time 2.
4. Correlate the scores from Time 2 and Time 1 (see the sketch below).
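A minimal sketch of the correlation step, assuming hypothetical score arrays; the same computation yields the equivalent-forms coefficient on the next slide, with Form A and Form B scores in place of Time 1 and Time 2:

```python
import numpy as np

# Hypothetical scores for the same reliability sample, measured
# two to four weeks apart with the same instrument.
time1 = np.array([24, 31, 18, 27, 22, 35, 29, 20])
time2 = np.array([26, 30, 17, 28, 24, 33, 27, 21])

# The stability coefficient r_xx is the Pearson correlation between
# the two administrations.
r_xx = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability r_xx = {r_xx:.2f}")  # compare to the ~.70 guideline
```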

Types of Reliability (Consistency) Estimates: EQUIVALENCE
Equivalent Forms Reliability (also known as Parallel Forms or Alternate Forms Reliability): Used to assess the equivalence of two forms of the same instrument. Usually indicated by a correlation coefficient.
Number of forms (of instrument): 2
Number of administrations: 2
Usual Process:
1. Administer Form A of the instrument to the reliability sample.
2. Break the sample for a short rest period (10-20 minutes).
3. Administer Form B of the instrument to the same reliability sample.
4. Correlate the scores from Form A and Form B.
Needed when two or more versions (forms) of the instrument will be used.

Types of Reliability (Consistency) Estimates: INTERNAL CONSISTENCY
Split-Half Reliability: Used to assess the internal consistency or equivalence of two halves of an instrument. Usually indicated by a correlation coefficient plus the Spearman-Brown prophecy formula.
Number of forms (of instrument): 1
Number of administrations: 1
Usual Process:
1. Obtain or generate an instrument in which the two halves were formulated to measure the same variable.
2. Administer the instrument to the reliability sample.
3. Correlate the summed scores from the first half (often the odd-numbered items) with the summed scores from the second half (often the even-numbered items).
4. Compute the Spearman-Brown prophecy formula to correct for splitting one instrument into halves (see the sketch below).
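A minimal sketch with hypothetical item scores; the half-test correlation r is stepped up with the Spearman-Brown prophecy formula, r_SB = 2r / (1 + r):

```python
import numpy as np

# Hypothetical item scores: rows = respondents, columns = 6 items.
items = np.array([
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 3, 3, 2],
    [4, 4, 5, 4, 4, 5],
])

# Sum the odd-numbered items and the even-numbered items separately.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the halves, then correct for halving the instrument's length.
r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_sb = 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula
print(f"Half-test r = {r_half:.2f}; corrected split-half reliability = {r_sb:.2f}")
```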

Types of Reliability (Consistency) Estimates: INTERNAL CONSISTENCY
KR-20 (Kuder-Richardson Reliability): Used to assess the internal consistency of items on an instrument when responses are dichotomous. Usually indicated by the coefficient generated using the KR-20 formula. (There are other forms of this formula; this one is used when there are two responses: correct or incorrect.)
Number of forms (of instrument): 1
Number of administrations: 1
Usual Process:
1. Generate or select an instrument.
2. Administer the instrument to the reliability sample.
3. Compute the variance (σ²) of the total scores.
4. Compute the proportion of correct and incorrect responses to each item.
5. Compute the KR-20 formula (see the sketch below).
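A minimal sketch with hypothetical dichotomous data. The formula is KR-20 = (k / (k - 1)) * (1 - Σ p_i q_i / σ²), where k is the number of items, p_i is the proportion answering item i correctly, q_i = 1 - p_i, and σ² is the variance of the total scores:

```python
import numpy as np

# Hypothetical dichotomous data: rows = respondents, columns = 8 items
# scored 1 (correct) or 0 (incorrect).
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
])

k = items.shape[1]                   # number of items
p = items.mean(axis=0)               # proportion correct per item
q = 1 - p                            # proportion incorrect per item
total_var = items.sum(axis=1).var()  # variance of the total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(f"KR-20 = {kr20:.2f}")
```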

Types of Reliability (Consistency) Estimates: INTERNAL CONSISTENCY
Coefficient Alpha (Cronbach’s Alpha): Used to assess the internal consistency of items on an instrument when responses are nondichotomous. Usually indicated by the coefficient generated using Cronbach’s formula (a more general version of the KR-20 formula).
Number of forms (of instrument): 1
Number of administrations: 1
Usual Process: Same as the previous slide (a sketch follows).
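A minimal sketch with hypothetical Likert-type responses. Cronbach’s formula generalizes KR-20 by replacing Σ p_i q_i with the sum of the per-item variances: α = (k / (k - 1)) * (1 - Σ σ²_i / σ²_total):

```python
import numpy as np

# Hypothetical 5-point Likert responses: rows = respondents, columns = 5 items.
items = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 2, 1, 2],
])

k = items.shape[1]
item_vars = items.var(axis=0)        # variance of each item
total_var = items.sum(axis=1).var()  # variance of the total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```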

Types of Reliability (Consistency) Estimates: INTERSUBJECTIVITY
Interrater Reliability: Used to assess the degree to which two or more judges (raters) rate the same variables in the same way. Usually needed when two or more judges (raters) will be used in a research study.
Usual Process:
1. Select or generate an instrument.
2. Randomly select a number of objects or events to be rated.
3. Train the raters.
4. Have each rater rate each object or event independently.
5. Correlate the scores of the two raters (see the sketch below).
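A minimal sketch of the correlation step for two raters, assuming hypothetical ratings; with more than two raters, pairwise correlations or an agreement statistic such as Cohen's kappa would be used instead:

```python
import numpy as np

# Hypothetical ratings of ten objects by two independent, trained raters.
rater_a = np.array([7, 5, 8, 6, 9, 4, 7, 8, 5, 6])
rater_b = np.array([6, 5, 8, 7, 9, 5, 6, 8, 4, 6])

# Interrater reliability as the Pearson correlation between the raters.
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"Interrater reliability r = {r:.2f}")
```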

Practicality of Measurement
Practicality has been defined in terms of the following three characteristics: economy, convenience, and interpretability.
Economy:
- Some trade-off between ideal needs and budget
- Instrument length (limiting factor: cost)
- Choice of data collection method
- Need for fast and economical scoring
The scientific requirements of a project call for the measurement process to be reliable and valid, while the operational requirements call for it to be practical.

Practicality of Measurement (cont’d)
Convenience:
- A measuring device passes the convenience test if it is easy to administer.
- Provide detailed and clear instructions, with examples if needed.
- Pay close attention to design and layout; avoid crowding of material and carryover of items from one page to another.
Interpretability:
- Relevant when persons other than the test designers interpret the results. In that case, test designers should include:
- A statement of the functions the test was designed to measure and the procedures by which it was developed
- Detailed instructions for administering and scoring
- Evidence of reliability, etc.