Measurement Issues
General steps:
–Determine concept
–Decide best way to measure
–Determine what indicators are available
–Select intermediate, alternate, or indirect measures

Measurement Issues
General steps:
–Consider limitations of measures selected
–Collect or secure info/data
–Summarize findings in writing

What is the relation between concepts, variables, instruments & measures?

Concepts
A program is based on a conceptual understanding of why people behave the way they do

Why do you think people behave the way they do? Think of food and nutrition issues

Variables
A theory has variables
Variables define concepts
The theory states how the variables interact or are related

Variables
The variables of the theory are what you measure
Variables are the verbal or written abstractions of the ideas that exist in the mind

Why should an intervention be based on a theory?

Why use theory?
–Know what you are to address in the intervention
–Makes evaluation easier
–Know what to measure to evaluate

Figure 6.1 A simple social learning theory model for reducing salt in the diet

Need measurements and instruments to assess changes in the variables of interest

Instruments
Something that produces a measure of an object
A series of questions to measure the variable or concept
Includes instructions

Measures
The numbers that come from the person answering questions on the instrument

Figure 6.2 Relation among models, variables, measures, and an instrument

Based on why you think people behave the way they do, list possible variables you might measure. What might be variables of the social learning theory?

What about variables that would verify if a change has or has not taken place?

Figure 6.1 A simple social learning theory model for reducing salt in the diet. See how the program links with the theory and what is measured.

Reliability
The extent to which an instrument will produce the same result (measure or score) if applied at two or more different times

Reliability
X = T + E
–X is the observed measure
–T is the true value
–E is random error
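
The X = T + E decomposition can be illustrated with a small simulation (a sketch with made-up values for the true score and error spread). Zero-mean random error scatters the observed scores around the true value without shifting their average:

```python
import random
import statistics

random.seed(42)

T = 100.0       # hypothetical true score, held constant for one person
SIGMA = 5.0     # hypothetical spread of the random error E

# Apply the "instrument" many times: X = T + E, with E drawn from N(0, SIGMA)
scores = [T + random.gauss(0.0, SIGMA) for _ in range(10_000)]

mean_x = statistics.mean(scores)
sd_x = statistics.stdev(scores)

# Zero-mean random error adds spread but does not bias the average
print(f"true score T = {T}, mean of X = {mean_x:.2f}, SD of X = {sd_x:.2f}")
```

Any single application may be far from T, but the mean across many applications lands close to it; the spread of the scores reflects the random error alone.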

Reliability Measurement error reduces the ability to have reliable and valid results.

Reliability
Random error is all chance factors that confound the measurement
–Always present
–Affects reliability but doesn't bias results

Figure 6.5 Distribution of scores of multiple applications of a test with random error (A is the true score; a is the measure)

Distribution
Two different distributions can have the same mean

Figure 6.6 Two distributions of scores around the true mean

Which distribution has less variability? Which distribution has less random error?

Sources of Random Error
–Day-to-day variability
–Confusing instructions
–Unclear instrument
–Sloppy data collector

Sources of Random Error
–Distracting environment
–Respondents
–Data-management error

What can you do to reduce random error and increase reliability?

Variability & the Subject
What you want to measure will vary from day to day and within the person

Variability & the Subject
Intraindividual variability –variability among the true scores within a person over time

Figure 6.7 True activity scores (A, B, C) for 3 days with three measures (a, b, c) per day

Variability & the Subject
Interindividual variability –variability between the persons in the sample

Figure 6.8 Interindividual (A, X) and intraindividual (A1, A2, A3) variability for two people (A, X) in level of physical activity

Assessing Reliability
You need to know the reliability of your instruments
–A reliability coefficient of 1 is the highest: no error
–A reliability coefficient of 0 is the lowest: all error

Factors of Reliability
Type of instrument
–observer
–self-report
Times the instrument is applied
–same time
–different time

Figure 6.9 Types of reliability

Assessing Reliability
Interobserver reliability
–have 2 different observers rate the same action at the same time
–reproducibility
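
Interobserver reliability for categorical ratings is often summarized as raw percent agreement or as Cohen's kappa, which corrects agreement for chance. A minimal sketch with hypothetical ratings from two observers coding the same ten actions:

```python
from collections import Counter

# Hypothetical ratings: two observers code the same 10 actions
# into two categories (e.g., "A" = active, "B" = sedentary)
obs1 = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "A"]
obs2 = ["A", "A", "B", "A", "A", "B", "A", "B", "B", "A"]

n = len(obs1)
observed = sum(a == b for a, b in zip(obs1, obs2)) / n   # raw agreement

# Chance agreement: product of the two observers' marginal proportions,
# summed over categories
c1, c2 = Counter(obs1), Counter(obs2)
expected = sum((c1[k] / n) * (c2[k] / n) for k in set(obs1) | set(obs2))

kappa = (observed - expected) / (1 - expected)   # Cohen's kappa
print(f"raw agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

Kappa is lower than raw agreement because some matches would occur by chance even if the observers rated at random.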

Assessing Reliability
Intraobserver reliability
–1 observer assesses the same person at two different times
–videotape the action & practice

Assessing Reliability
Repeat method
–self-report or survey
–repeat the same item/question at 2 points in the survey

Assessing Reliability
Internal consistency
–average inter-item correlation among items in an instrument that are cognitively related

Assessing Reliability
Internal consistency
–Cronbach's alpha
–0.70 & above is a good score
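
Cronbach's alpha can be computed directly from item scores as alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A sketch with hypothetical data (5 respondents, 4 cognitively related items):

```python
import statistics

# Hypothetical item scores: 5 respondents x 4 cognitively related items
items = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

k = len(items[0])                                  # number of items
totals = [sum(row) for row in items]               # total score per respondent
item_vars = [statistics.variance(col) for col in zip(*items)]

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")           # 0.70 and above is good
```

The made-up items here track each other closely, so alpha comes out well above the 0.70 threshold; unrelated items would drive it down.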

Assessing Reliability
Test-retest reliability
–the same survey/test given at 2 different times to the same person
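
Test-retest reliability is usually reported as the correlation between the two administrations. A sketch with hypothetical scores for six respondents, computing Pearson's r by hand:

```python
import math

# Hypothetical scores: the same survey given twice to the same 6 people
time1 = [10, 14, 8, 12, 15, 9]
time2 = [11, 13, 9, 12, 14, 8]

n = len(time1)
mean1 = sum(time1) / n
mean2 = sum(time2) / n

# Pearson correlation between the two administrations
cov = sum((x - mean1) * (y - mean2) for x, y in zip(time1, time2))
ss1 = math.sqrt(sum((x - mean1) ** 2 for x in time1))
ss2 = math.sqrt(sum((y - mean2) ** 2 for y in time2))

r = cov / (ss1 * ss2)   # test-retest reliability coefficient
print(f"test-retest r = {r:.2f}")
```

An r near 1 means respondents kept roughly the same rank order and scores across the two administrations; a low r signals an unstable instrument (or a genuinely changing trait).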

Validity
The degree to which an instrument measures what the evaluator wants it to measure

Bias
Systematic error that produces a systematic difference between an obtained score and the true score
Bias threatens validity
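
The effect of systematic error can be shown with a small simulation (hypothetical values): unlike random error, bias shifts the whole distribution away from the true score, and collecting more observations does not remove it:

```python
import random
import statistics

random.seed(1)

T = 100.0      # hypothetical true score
BIAS = 8.0     # hypothetical systematic error (instrument always reads high)
SIGMA = 5.0    # hypothetical random error spread

# Each application of the biased instrument: X = T + BIAS + E
scores = [T + BIAS + random.gauss(0.0, SIGMA) for _ in range(10_000)]

mean_x = statistics.mean(scores)
# The whole distribution is shifted: averaging more applications
# does not remove systematic error
print(f"true score = {T}, mean observed = {mean_x:.2f}")
```

This is why bias threatens validity: the instrument is consistently measuring something offset from what the evaluator wants to measure.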

Figure 6.10 Distribution of scores of multiple applications of a test with systematic error

What will bias do to your ability to make conclusions about your subjects?

Figure 6.11 Effect of bias on conclusions

Types of Validity
–Face
–Content
–Criterion

Face Validity
The extent to which an instrument appears to measure what it is supposed to measure
Example: How many vegetables did you eat yesterday?

Content Validity
The extent to which an instrument covers the relevant domains of the content
–Consult a group of experts

Criterion Validity
How accurately does a less costly measure of the variable compare with a valid, more expensive instrument?

What can lower validity?
Guinea pig effect –awareness of being tested
Role selection –awareness of being measured may make people feel they have to play a role

What can lower validity? Measurement as a change agent –act of measurement could change future behavior

What can lower validity? Response sets –respond in a predictable way that has nothing to do with the questions

What can lower validity? Interviewer effects –characteristics of the interviewer affect the receptivity and answers of the respondent

What can lower validity? Population restrictions –if some people cannot use the method of data collection, you cannot generalize to them

End of reliability and validity
Questions?
Look at the CNEP Survey