
Validity and Reliability

Validity Validity is the degree to which the translation from concept to operationalization accurately represents the underlying concept: do your variables measure the abstract concept you think they measure? This is more familiarly called construct validity. An empirical study with high construct validity ensures that the parameters studied are relevant to the research questions. Without a valid design, valid scientific conclusions cannot be drawn.

Types of construct validity Translation validity (Trochim's term): face validity, content validity. Criterion-related validity: predictive validity, concurrent validity, convergent validity, discriminant validity.

Translation Validity Is the operationalization a good reflection of the construct? This approach is definitional in nature: it assumes you have a good, detailed definition of the construct and that you can check the operationalization against it. Example: software success. Is your definition representative of the software-success construct? E.g. “Application software is software used to assist end users.”

Face Validity “On its face,” does it seem like a good translation of the construct? Note that if respondents know what information we are looking for, they can use that “context” to help interpret the questions and provide more useful, accurate answers.

Content Validity Check the operationalization against the relevant content domain for the construct. For example, a depression measure should cover the checklist of depression symptoms; a world-history test must include major histories from all continents and countries; an interface-usability measure should include all accepted usability dimensions: learnability, efficiency, memorability (low cognitive load), error recovery and the like.

Criterion-Related Validity Check the performance of the operationalization against some criterion. It compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).

Predictive Validity Assess the operationalization's ability to predict something it should theoretically be able to predict. A high correlation provides evidence for predictive validity. Examples: a job-applicant measure is supposed to predict the new applicant's performance at work; if applicants who score well also perform well at the job when measured after one year, our applicant measure is a good predictive measure. Likewise, measures of interface usability can predict later software utilization; a high correlation is an indication of predictive validity.

Concurrent Validity Assess the operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between. It is similar to predictive validity, but the two measures are taken at the same time. If subordinate ratings and supervisor ratings of job performance correlate positively, the measure has high concurrent validity. In short, it compares the results of two measures.

Convergent Validity Examine the degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to. This compares two or more attributes of the same construct. To show the convergent validity of a test of arithmetic skills, one might correlate its scores with scores on other tests (e.g. problem-solving ability) that purport to measure basic math ability. Similarly, the measure of learnability should correlate highly with efficiency, memorability, errors and satisfaction: all of these measures measure the same construct. There is also instrument convergence: if an interview and a questionnaire produce the same result, we say the instruments are convergent.

Discriminant Validity Examine the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to. A test of a concept should not be highly correlated with other tests designed to measure theoretically different concepts. The overlap between two scales can be measured with a formula.

Discriminant … The overlap between two scales is estimated with the correction for attenuation: r*xy = rxy / √(rxx × ryy), where rxy is the correlation between x and y, rxx is the reliability of x, and ryy is the reliability of y. A result less than .85 tells us that discriminant validity exists; above .85, the two constructs overlap greatly and are likely measuring the same thing.

Discriminant … Measuring the concepts of narcissism and self-esteem. Narcissism is a term with a wide range of meanings, usually used to describe some kind of problem in a person's or group's relationships with self and others. Self-esteem is a term in psychology for a person's overall evaluation or appraisal of his or her own worth; it encompasses beliefs (for example, "I am competent", "I am worthy") and emotions such as triumph, despair, pride and shame. The researchers must show that their new scale measures narcissism and not simply self-esteem.

Discriminant … First, we calculate the average inter-item correlations within and between the two scales: Narcissism — Narcissism: 0.47; Narcissism — Self-esteem: 0.30; Self-esteem — Self-esteem: 0.52. We then apply the correction-for-attenuation formula:

Discriminant … The corrected correlation is 0.30 / √(0.47 × 0.52) ≈ 0.61. Since 0.61 is less than 0.85, we can conclude that discriminant validity exists between the scale measuring narcissism and the scale measuring self-esteem. E.g., a new measure of depression should also have negative correlations with measures of “happiness” and “minimal” correlations with tests of “physical health”.
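The slide's arithmetic can be sketched in Python; the function name is illustrative, while the correlations and the .85 threshold come from the slides:

```python
import math

def corrected_correlation(r_xy, r_xx, r_yy):
    """Correction for attenuation: the correlation between two constructs
    after removing measurement error (r_xx, r_yy are scale reliabilities)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Average inter-item correlations from the narcissism / self-esteem example
r = corrected_correlation(r_xy=0.30, r_xx=0.47, r_yy=0.52)
print(round(r, 2))  # → 0.61, below .85, so discriminant validity holds
```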

Internal and External Validity

Internal Validity Inferences are said to possess internal validity if a causal relation between two variables is properly demonstrated. A causal inference may be based on a relation when three criteria are satisfied: 1. the "cause" precedes the "effect" in time (temporal precedence), 2. the "cause" and the "effect" are related (covariation), and 3. there are no plausible alternative explanations for the observed covariation

Example - Internal The researcher hypothesized that computer training will increase software usability. Training is the IV and usability the DV. A positive correlation between the two, together with temporal precedence and the elimination of alternative explanations, supports internal validity. The correlation can be computed with Spearman rank correlation or Pearson correlation, and is easily done with SPSS software.
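As a concrete illustration, the Pearson correlation between the two variables can also be computed by hand; the scores below are hypothetical, not from the lecture:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: hours of computer training (IV) vs. usability (DV)
training = [1, 2, 3, 4, 5, 6]
usability = [52, 55, 61, 60, 68, 71]
print(round(pearson_r(training, usability), 2))  # → 0.97
```

A correlation this strong is consistent with the hypothesis, but by the three criteria above it supports a causal claim only together with temporal precedence and the exclusion of alternative explanations.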

Internal … In many cases, however, the magnitude of the effects found in the dependent variable may depend not only on variations in the independent variable but also on the power of the instruments and statistical procedures used to measure and detect the effects, and on the choice of statistical methods. Other variables or circumstances uncontrolled for (or uncontrollable) may lead to additional or alternative explanations (a) for the effects found and/or (b) for the magnitude of those effects.

Internal … Highly controlled true experimental designs, i.e. random selection, random assignment to either the control or the experimental group, reliable instruments, reliable manipulation processes, and safeguards against confounding factors, may be the "gold standard" of scientific research. However, the very strategies employed to control these factors may also limit the generalizability, or external validity, of the findings.

External validity External validity refers to the applicability of study or experimental results to realms beyond those under immediate observation: the generalizability of the research findings to other, similar cases. Is the software solution for one case also applicable to similar cases in another organization or country? Does the solution have a wider application, audience or acceptance? Researchers prize studies with external validity, since the results can be widely applied to other scenarios.

External … External validity for a given study has several aspects: 1. whether the study generalizes to other subjects in the domain; 2. whether enough evidence and argument exists to support the claimed generalizability; 3. whether the study outcomes validate the predicted theories.

Reliability Means "repeatability" or "consistency". A measure is considered reliable if it would give us the same result over and over again (assuming that what we are measuring isn't changing!). Measuring the same distance at different times should give the same result if the instrument (e.g. meter) is reliable. There are four general classes of reliability estimates, each of which estimates reliability in a different way.

Types of Reliability Estimation
 Inter-rater or inter-observer reliability – used to assess the consistency of different raters
 Test-retest reliability – used to assess the consistency of a measure from one time to another
 Parallel-forms reliability – used to assess the consistency of the results of two tests constructed in the same way from the same content domain
 Internal consistency reliability – used to assess the consistency of results across items within a test
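Internal consistency reliability is most commonly summarized with Cronbach's alpha. The slide does not name a specific statistic, so the following is an illustrative sketch with hypothetical item scores:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).

    items: a list of k lists, each holding one item's scores for the
    same respondents in the same order."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical 3-item scale answered by 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # → 0.86
```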

Inter-Rater or Inter-Observer Reliability Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon. Establish reliability on pilot data or a subsample of the data, and retest often throughout the study. For categorical data a chi-square (χ²) statistic can be used; for continuous data a correlation coefficient (r) can be calculated.
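Besides the chi-square approach the slide mentions for categorical data, a widely used chance-corrected agreement statistic for two raters is Cohen's kappa; a minimal sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters on categorical labels,
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters classifying the same 8 observations
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # → 0.5 (agreement beyond chance)
```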

Test-Retest Reliability Used to assess the consistency of a measure from one time to another. This approach assumes that there is no substantial change in the construct being measured between the two occasions. The amount of time allowed between measures is critical. The shorter the time gap, the higher the correlation; the longer the time gap, the lower the correlation

Parallel-Forms Reliability Used to assess the consistency of the results of two tests constructed in the same way from the same content domain. Create a large set of questions that address the same construct and then randomly divide the questions into two sets and administer both instruments to the same sample of people. The correlation between the two parallel forms is the estimate of reliability. One major problem with this approach is that you have to be able to generate lots of items that reflect the same construct.

Split-Half Reliability Collect your data with the instrument that measures your construct. Split the items into two halves and correlate the scores on the two halves. A strong positive correlation indicates high reliability.
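The procedure can be sketched as follows with hypothetical questionnaire data. The Spearman-Brown correction at the end is a standard companion step (not on the slide) that adjusts the half-test correlation up to the full test length:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(responses):
    """Split the items into odd/even halves, correlate the half-test
    totals, then apply the Spearman-Brown correction."""
    odd = [sum(r[0::2]) for r in responses]   # items 1, 3, ...
    even = [sum(r[1::2]) for r in responses]  # items 2, 4, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown step-up

# Hypothetical 4-item instrument answered by 5 respondents
responses = [
    [4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5],
    [3, 3, 3, 4], [4, 4, 4, 4],
]
print(round(split_half_reliability(responses), 2))  # → 0.94
```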

Reliability and Validity

Research Ethics

Ethics – a definition “Research should avoid causing harm, distress, anxiety, pain or any other negative feeling to participants. Participants should be fully informed about all relevant aspects of the research, before they agree to take part” [1]

ARE YOU HOMOSEXUAL? (Research Methodology, 24 Nov 2008) THIS IS A HYPOTHETICAL QUESTION - DO NOT ANSWER THIS

Research questions – ethical or not? Research may ask a taboo or personal question. What if you were asked whether you are homosexual? How would you feel if you were asked this? Would you feel awkward? Would you lie? Would you answer truthfully? Why are we asking this question anyway? Could we phrase the question better?

Pause for thought Is it morally correct to carry out research by any means whatsoever providing that the end result increases the sum of human knowledge or provides some tangible benefit to mankind? Does the end justify the means? DISCUSS

Ethics before Research begins Inform all participants fully – what about children, people with mental disabilities, or those with poor language skills? Obtain consent. Define the ‘gatekeeper’. Craft your research methods carefully – no distortion of the data.

Ethics during Research Field notes – what are they? Do we need these? Can we use them in our research? Consent issues. Content issues. Moral issues – you have heard about a crime: do you report it? DISCUSS

Confidentiality of respondent data How do we keep track of respondents? Should we keep track of respondents? How do we de-personalise gathered data? If data are depersonalised, is it morally correct to reuse this data for a new research project? DISCUSS

Ethics after Research Disposal of data – paper or digital? Freedom of Information Act Reuse of data – is this ethical? Are there occasions where reuse of gathered data for another purpose is ok? Requesting permission from respondents Difficulties of contacting original respondents

Engineering and Ethics Confidentiality of data. Ownership of research results. Consider the results themselves: is a cure for a disease as the direct result of research ‘good’? Is the creation of a powerful bomb as the direct result of research ‘good’ (e.g. the atom bomb)? DISCUSS

Research Ethics Committees Monitor ethical issues in research programmes before, during and after the research. Make decisions and enforce them. Give researchers organisational support. Provide reassurance to researchers about moral issues related to a particular research project.

Plagiarism What is plagiarism? How do we avoid plagiarism? What are the dangers that plagiarism causes? State some examples of plagiarism. DISCUSS

Summary - Ethics Ethics are moral issues relating to the design of research and to the gathering and usage of data for research purposes. Think before, during and after the research. Consult gatekeepers and respondents. Never act alone – consult your supervisor if in doubt.