Research Method
Step 1 – Formulate research question
Step 2 – Operationalize concepts
  ◦ Valid and reliable indicators
Step 3 – Decide on sampling technique
  ◦ Draw sample
Step 4 – Select data collection technique
  ◦ Collect data
Step 5 – Analyze data
Step 6 – Write up the report

It is critically important to develop valid and reliable measurements/indicators. If your measurements/indicators are not valid and reliable, then you are wasting your time. When you input your data, never forget: if you put trash into the computer, you will get trash out, no matter how sophisticated your analysis.

What is a Valid and Reliable Measurement?
Validity
  ◦ Refers to the relationship between a concept and its indicator
  ◦ Is the indicator an accurate measurement of the concept?
Reliability
  ◦ Refers to consistency across time and place
  ◦ Do you get consistent results when the indicator is used at a different, but comparable, time and/or place?
  ◦ NOTE – If you don't get consistent results, it could be that:
    ▪ The measurement is ambiguous or faulty – questions are double-barreled or confusing, even in the same setting
      Example – "Do you agree with the following statement: Men and women are good communicators?" is a double-barreled statement
    ▪ The situation is different and the measurement doesn't hold across these situations – terms have different meanings in different subcultures
      Example – "Do you think it's BAD to get a tattoo?" is a statement that means something entirely different to teenagers than to their parents

Four Types of Validity to Consider
Face Validity
  ◦ Does the indicator "obviously" measure the concept? Is it a "sensible" indicator?
Content Validity
  ◦ Does the indicator cover the entire range of meaning of the concept?
    ▪ If the concept is multi-dimensional, is the indicator multi-dimensional?
Construct Validity
  ◦ Is the indicator related to other indicators as specified by the literature?
Criterion-Related or Predictive Validity
  ◦ If the concept is supposed to predict a future event, does the indicator predict that same future event accurately?

A More Detailed Look at Face Validity
Face Validity – The indicator is a sensible or obvious measurement of the concept
  ◦ If the concept is a type of behavior, then the indicator should measure behavior
    ▪ Common mistake – using the number of workers of color hired by a company as a measure of prejudice
    ▪ Hiring is a behavior and prejudice is an attitude, so this would measure discrimination, not prejudice
  ◦ If the concept is value-laden, then we must take social desirability into account when constructing a measure
    ▪ Common mistake – measuring crime by asking people if they have committed a crime
    ▪ No one wants to admit they've committed a crime

A More Detailed Look at Content Validity
The indicator must cover the entire range of meaning of the concept
Examples
  ◦ If you measure attitudes toward a workshop, you must ask multiple questions to cover the multiple aspects of the workshop (e.g., quality of handouts, quality of presentations, relevancy of information)
  ◦ If you measure social class (a multi-dimensional concept), you must measure income, occupation, and education
  ◦ If you measure prejudice, you must either think about and measure all of the different types of prejudice (e.g., racial, religious, social class prejudice) or limit yourself to one type and say so when you discuss your concept

A More Detailed Look at Construct Validity
The indicator must be related to other indicators and/or concepts as determined by past research reported in the literature
Theoretical Construct Validity – The indicator is related to other concepts/indicators as specified by a theory
  ◦ Example – As predicted by theory, your indicator of poverty is related to whether or not respondents live in a single-parent household (see the sketch below)
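Such a relationship can be checked with a simple test of association. A minimal sketch, assuming hypothetical survey data; the variable coding and counts below are invented for illustration, not taken from the slides:

```python
# Hypothetical construct-validity check: is a yes/no poverty indicator
# associated with living in a single-parent household?
from scipy.stats import chi2_contingency

# Rows: poor / not poor; columns: single-parent / two-parent (invented counts)
table = [[48, 32],
         [25, 95]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# A small p-value indicates the two indicators are related, as the
# theory predicts -- one piece of evidence for construct validity.
```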

A More Detailed Look at Construct Validity
The indicator must be related to other indicators and/or concepts as determined by past research reported in the literature
Discriminant Validity – The indicator is related to other indicators, measurements, or behaviors as predicted by the literature or past research
  ◦ Example – As predicted in the literature, your volunteers are happier when they have some "voice" in the decisions that are made; your measurements of happiness and decision-making power are related as they should be

A More Detailed Look at Construct Validity
Convergent Validity – The indicator is related to data collected using other data collection methods, as predicted (multi-methods)
  ◦ Example – Children who attend your workshops and "appear" happier when observed also score higher on a happiness measurement
Known-Groups Validity – The indicator is related to groups with known characteristics as expected (see the sketch below)
  ◦ Example – KKK members score higher on a prejudice index than members of the civil rights movement
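A known-groups check is often run as a comparison of group means. A minimal sketch, assuming two hypothetical groups with simulated index scores; none of the numbers come from real data:

```python
# Hypothetical known-groups check: do groups with known characteristics
# score differently on a prejudice index, in the expected direction?
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
high_group = rng.normal(7.5, 1.0, 40)  # group expected to score high (simulated)
low_group = rng.normal(3.0, 1.0, 40)   # group expected to score low (simulated)

t, p = ttest_ind(high_group, low_group)
print(f"t = {t:.2f}, p = {p:.4f}")
# If the group expected to score higher actually does, and the difference
# is statistically reliable, the index shows known-groups validity.
```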

A More Detailed Look at Construct Validity
Factor Validity – An item is related to other items in the same subscale more strongly than to items in a different subscale
Example – The CES-D scale measures four components of depression (negative affect, lack of positive affect, somatic symptoms, and interpersonal problems). Each of these components is measured by several items/statements that form a subscale. To have factor validity, a single item/statement must be more strongly related to the other items in its subscale than to items in another subscale. For instance, the negative affect subscale contains items measuring feeling blue, feeling sad, and feeling depressed. These items are more strongly related to each other than to items in the somatic symptoms subscale (e.g., overeating, difficulty concentrating, sleeping too much). You would use a factor analysis to determine this (a minimal sketch follows).
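As a rough stand-in for a full factor analysis, here is a minimal sketch of the within- vs. between-subscale comparison on simulated item responses. The factors, loadings, and sample size are all invented for illustration:

```python
# Sketch of a factor-validity check: items should correlate more
# strongly within their own subscale than across subscales.
import numpy as np

rng = np.random.default_rng(1)
n = 200
negative_affect = rng.normal(0, 1, n)  # simulated latent factor 1
somatic = rng.normal(0, 1, n)          # simulated latent factor 2

def item(latent):
    """One observed item: the latent factor plus measurement noise."""
    return latent + rng.normal(0, 0.7, n)

# Three items per subscale, stacked as columns
items = np.column_stack([item(negative_affect) for _ in range(3)] +
                        [item(somatic) for _ in range(3)])
r = np.corrcoef(items, rowvar=False)

within = np.mean([r[i, j] for i in range(3) for j in range(i + 1, 3)])
between = r[:3, 3:].mean()
print(f"mean within-subscale r = {within:.2f}, between-subscale r = {between:.2f}")
# Factor validity: the within-subscale correlations should clearly
# exceed the between-subscale correlations.
```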

A More Detailed Look at Criterion, Concurrent, or Predictive Validity
Criterion-Related Validity – Scores on one indicator can be used to predict scores on another
  ◦ Example – Scores on a marital happiness scale can predict scores on a personal happiness scale
Concurrent Validity – Scores on your indicator can be used to predict current behavior (see the sketch below)
  ◦ Example – SAT/ACT scores are related to current performance in school (GPA)
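A minimal sketch of a concurrent-validity check using a plain correlation; the scores below are invented for illustration:

```python
# Hypothetical concurrent-validity check: do admission test scores
# track current school performance (GPA)?
from scipy.stats import pearsonr

test_scores = [1050, 1200, 980, 1310, 1130, 1420, 890, 1260]  # invented
gpa = [2.9, 3.4, 2.7, 3.8, 3.1, 3.9, 2.5, 3.5]                # invented

r, p = pearsonr(test_scores, gpa)
print(f"r = {r:.2f}, p = {p:.4f}")
# A strong positive correlation with a current criterion (GPA)
# is evidence of concurrent validity.
```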

A More Detailed Look at Criterion, Concurrent, or Predictive Validity
Predictive Validity – The indicator can be used to predict future events
  ◦ Example – SAT/ACT scores are related to later performance in college (GPA)

Reliability
Reliability refers to consistency across time. An indicator can be reliable (provide consistent results) but NOT valid (accurate): it can provide consistently WRONG answers.
There are different ways to measure reliability, including:
  ◦ Test/retest
  ◦ Internal consistency
  ◦ Using alternative forms
  ◦ Inter-rater reliability
  ◦ Intra-rater reliability

Reliability – Consistency of Indicators
Test/retest
  ◦ Subjects provide the same answers to the same items at different times
  ◦ Individuals should score the same each time
Internal consistency
  ◦ Scale items are highly correlated/associated with each other
  ◦ Use Cronbach's alpha to determine this (a minimal sketch follows)
Alternative forms
  ◦ Use slightly different forms – see the example on the next slide
Inter-rater reliability
  ◦ Two or more researchers get the same results
Intra-rater reliability
  ◦ The same researcher gets similar results across time
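Cronbach's alpha can be computed directly from its definition. A minimal sketch on invented item responses (rows are respondents, columns are the items of one scale):

```python
# Cronbach's alpha for internal consistency:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

scores = np.array([  # invented 1-5 responses to a 4-item scale
    [4, 4, 3, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 4, 5],
    [1, 2, 1, 1],
])

k = scores.shape[1]                            # number of items
item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
total_var = scores.sum(axis=1).var(ddof=1)     # variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")       # higher values = more consistent scale
```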

Using different ways of asking the question should yield the same answers
  ◦ "I liked the workshop presentation" – SD(1) D(2) A(3) SA(4)
  ◦ "I like the workshop presentation" – response options presented in reverse order: SA, A, D, SD
  ◦ "I did not like the workshop presentation" – SD(1) D(2) A(3) SA(4) (reverse-worded item)
  ◦ "I liked the presentation" – SD(1) D(2) U(3) A(4) SA(5) (five-point scale with "Undecided")
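Before alternative forms can be compared, the reverse-worded form must be reverse-coded so a high score means the same thing on both forms. A minimal sketch with invented responses on the 1-4 scale above:

```python
# Comparing alternative forms: recode the reverse-worded item
# ("I did not like ...") before correlating it with the original form.
from scipy.stats import pearsonr

liked = [4, 3, 4, 2, 1, 3, 4, 2]         # "I liked the workshop presentation"
did_not_like = [1, 2, 1, 3, 4, 2, 2, 3]  # reverse-worded form (raw, invented)

recoded = [5 - x for x in did_not_like]  # reverse-code a 1-4 scale: 5 - score
r, _ = pearsonr(liked, recoded)
print(f"r = {r:.2f}")  # a high correlation suggests the forms agree
```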

Relationship Between Validity and Reliability
Definition of terms
  ◦ Validity – accuracy
  ◦ Reliability – consistency
Relationships
  ◦ If it is valid (accurate), then it is reliable (consistent)
  ◦ BUT if it is reliable, it may not be valid – it could be consistently WRONG

Example – Bathroom Scales
Valid
  ◦ The scale provides an accurate measurement of weight
  ◦ As long as you don't gain or lose weight, it will also provide a consistent weight
Reliable
  ◦ The scale provides a consistent measurement of weight
  ◦ BUT if you have not calibrated the scale accurately, it may be consistently wrong

Questions or comments? Please contact:
Carol Albrecht
Assessment Specialist
USU Extension