Research Methodology Lecture No : 11 (Goodness Of Measures)


1 Research Methodology Lecture No : 11 (Goodness Of Measures)

2 Recap Measurement is the process of assigning numbers or labels to objects, persons, states of nature, or events. Scales are sets of symbols or numbers assigned, by rule, to individuals, their behaviors, or attributes associated with them.

3 (image-only slide)

4 Using these scales, we complete the development of our instrument.
It remains to be seen whether these instruments accurately measure the concept.

5 Sources of Measurement Differences
Why do 'scores' vary? Among the reasons are legitimate differences as well as differences due to error (systematic or random):
1. There is a true difference in what is being measured.
2. There are differences in stable characteristics of individual respondents. For example, on satisfaction measures there are systematic differences in response based on the age of the respondent.

6 3. Differences due to short-term personal factors: mood swings, fatigue, time constraints, or other transitory factors. Example: in a telephone survey of the same person, such factors (tired versus refreshed) may cause differences in measurement.
4. Differences due to situational factors: calling when someone is distracted by something versus giving full attention.

7 5. Differences resulting from variations in administering the survey: voice inflection, non-verbal communication, etc.
6. Differences due to the sampling of items included in the questionnaire.

8 7. Differences due to a lack of clarity in the measurement instrument (measurement instrument error). Example: unclear or ambiguous questions.
8. Differences due to mechanical or instrument factors: blurred questionnaires, bad phone connections.

9 Goodness of Measures
Once we have operationalized the concept and assigned scales, we want to make sure that the instruments developed measure the concept accurately and appropriately:
Measure what is supposed to be measured
Measure as well as possible

10 Validity: checks how well an instrument measures the concept it was developed to measure.
Reliability: checks how consistently an instrument measures.

11 (image-only slide)

12 Ways to Check for Reliability
How do we check the reliability of measurement instruments, that is, the stability of measures and the internal consistency of measures? Two methods are discussed for checking stability.
(1) Stability
(a) Test-Retest: use the same instrument with the same participants, administering the test shortly after the first time and taking the measurement under conditions as close to the original as possible.

13 If there are few differences in scores between the two tests, then the instrument is stable: it has shown test-retest reliability. Problems with this approach:
Difficult to get cooperation a second time
Respondents may have learned from the first test, so their responses are altered
Other factors may be present that alter results (environment, etc.)

14 (b) Equivalent Form Reliability
This approach attempts to overcome some of the problems associated with the test-retest measurement of reliability. Two questionnaires designed to measure the same thing are administered to the same group on two separate occasions (the recommended interval is two weeks).

15 If the scores obtained from these two tests are highly correlated, then the instruments have equivalent form reliability. It is tough to create two distinct forms that are truly equivalent, so this method (like test-retest) is impractical and not used often in applied research.
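
Both test-retest and equivalent form reliability reduce to correlating two sets of scores from the same respondents. A minimal sketch in Python (the scores below are invented for illustration):

```python
# Reliability as the Pearson correlation between two administrations
# of the same instrument (test-retest) or two equivalent forms.
import numpy as np

# Scores for the same eight respondents on two occasions (made-up data).
first_round  = np.array([4.2, 3.8, 4.5, 2.9, 3.3, 4.0, 3.7, 4.4])
second_round = np.array([4.0, 3.9, 4.6, 3.1, 3.2, 4.1, 3.5, 4.3])

# np.corrcoef returns the 2x2 correlation matrix; element [0, 1] is r.
r = np.corrcoef(first_round, second_round)[0, 1]
print(f"reliability r = {r:.2f}")  # values near 1 indicate a stable measure
```

A coefficient near 1 indicates a stable (or equivalent) instrument; by a common rule of thumb, values below about 0.7 would prompt a revision.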

16 (2) Internal Consistency Reliability
This is a test of the consistency of respondents' answers to all the items in a measure. The items should 'hang together' as a set: if the items are independent measures of the same concept, they will be correlated with one another.
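
Internal consistency is most often summarized with Cronbach's alpha. The slide does not name a statistic, so treat this as one standard choice rather than the prescribed method. A minimal sketch with a made-up respondents-by-items score matrix:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering a four-item scale on the same concept (invented).
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

An alpha of roughly 0.7 or higher is conventionally read as acceptable internal consistency.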

17 Developing questions on the concept 'Enriched Job'

18 Validity
Definition: whether what was intended to be measured was actually measured.

19 Face Validity
The weakest form of validity. The researcher simply looks at the measurement instrument and concludes that it will measure what is intended. It is thus, by definition, subjective.

20 Content Validity
The degree to which the instrument items represent the universe of the concept under study. In plain English: did the measurement instrument cover all aspects of the topic at hand?

21 Criterion-Related Validity
The degree to which the measurement instrument can predict a variable known as the criterion variable.

22 Two subcategories of criterion-related validity
Predictive Validity: the ability of the test or measure to differentiate among individuals with reference to a future criterion. E.g., an instrument that is supposed to measure an individual's aptitude can be compared with that individual's future job performance: good actual performers should also have scored high on the aptitude test, and vice versa.
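
A sketch of how the predictive validity check above might look in practice: correlate aptitude scores taken at selection with performance ratings collected later (all numbers are invented):

```python
# Predictive validity: does the earlier test score predict the later criterion?
import numpy as np

aptitude    = np.array([72, 85, 60, 90, 78, 65, 88])          # test at hiring
performance = np.array([3.4, 4.1, 2.9, 4.5, 3.8, 3.0, 4.2])   # later ratings

r = np.corrcoef(aptitude, performance)[0, 1]
print(f"predictive validity r = {r:.2f}")  # high r: test predicts the criterion
```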

23 Concurrent Validity: established when the scale discriminates between individuals who are known to be different, that is, who should score differently on the test. E.g., individuals who are happy availing welfare and individuals who prefer to work should score differently on a scale/instrument that measures work ethic.

24 Construct Validity
Does the measurement conform to some underlying theoretical expectation? If so, then the measure has construct validity. I.e., if we are measuring consumer attitudes about product purchases, do the measures adhere to the constructs of consumer behavior theory? This is the territory of academic researchers.

25 Two approaches are used to measure construct validity
Convergent Validity: a high degree of correlation between two different measures intended to measure the same construct.
Discriminant Validity: a low degree of correlation between variables that are assumed to be different.

26 Validity can be checked through correlation analysis, factor analysis, the multitrait-multimethod (MTMM) correlation matrix, etc.
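
As an illustration of the correlation-analysis route, the sketch below checks convergent and discriminant validity with simple Pearson correlations; the two satisfaction scales and the deliberately unrelated variable are invented for the example:

```python
# Convergent validity: two measures of the SAME construct should correlate highly.
# Discriminant validity: measures of DIFFERENT constructs should correlate weakly.
import numpy as np

sat_scale_a = np.array([4.1, 3.2, 4.8, 2.5, 3.9, 4.4])  # job satisfaction, scale A
sat_scale_b = np.array([4.0, 3.4, 4.6, 2.8, 3.7, 4.5])  # job satisfaction, scale B
commute_km  = np.array([12.0, 30.0, 5.0, 22.0, 15.0, 8.0])  # unrelated construct

convergent   = np.corrcoef(sat_scale_a, sat_scale_b)[0, 1]  # expect high
discriminant = np.corrcoef(sat_scale_a, commute_km)[0, 1]   # expect low
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```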

27 Reflective vs. Formative Measurement Scales
In some multi-item measures, the items measuring different dimensions of a concept do not hang together. Such is the case with the Job Description Index, which measures job satisfaction along five different dimensions, with items such as: regular promotions, fairly good chance for promotion, income adequate, highly paid, good opportunity for accomplishment.

28 In this case we would expect the items 'income adequate' and 'highly paid' to be correlated, but the items for 'opportunity for advancement' and 'highly paid' might not correlate. In this measure not all the items relate to each other, because its dimensions address different aspects of job satisfaction. Such a measure/scale is termed a formative scale.

29 In some cases the measure's dimensions and items do correlate.
In this kind of measure/scale, the different dimensions share a common basis (a common interest). An example is the Attitude Towards the Offer scale: since its items are all focused on the price of an offer, all the items are related, and hence this scale is termed a reflective scale.
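
One practical way to see whether a multi-item scale behaves reflectively or formatively is to inspect its inter-item correlation matrix, as in this sketch (item names and data are invented):

```python
# Inspect how strongly the items of a scale correlate with one another.
import numpy as np

# Five respondents x three items: income_adequate, highly_paid, promotion_chance
items = np.array([
    [4, 4, 2],
    [2, 3, 5],
    [5, 5, 1],
    [3, 3, 4],
    [1, 2, 5],
])

# rowvar=False treats columns (items) as the variables being correlated.
print(np.round(np.corrcoef(items, rowvar=False), 2))
```

Uniformly high correlations point to a reflective scale; blocks of near-zero (or negative) correlation between dimensions point to a formative one.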

30 Recap

