
Measurement
- the process by which we test hypotheses and theories
- assesses traits and abilities by means other than testing
- obtains information by comparing participants' performance or status against an established scale
- a procedure that helps determine the extent or quality of a variable or trait and permits assigning a value to it

A measurement scale is a set of rules for quantifying or assigning numerical scores to a particular variable.
1. Nominal scale: does not measure but rather names; it classifies observations into categories.
2. Ordinal scale: a rank ordering of things; it shows position in a rank.

3. Interval scale: tells not only the order of things but also the distance between them.
4. Ratio scale: includes a true zero value, a point on the scale that represents the complete absence of the characteristic involved.
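As a quick illustration (the variable names and values below are hypothetical), the four levels can be contrasted by which operations on the data are meaningful:

```python
from collections import Counter

# Hypothetical data at each level of measurement.
nominal = ["nurse", "teacher", "nurse"]  # categories only: counting and the mode
ordinal = [3, 1, 2]                      # rank order: sorting and the median make sense
interval = [20.0, 25.0, 30.0]            # equal distances, no true zero (e.g. degrees Celsius)
ratio = [0.0, 5.0, 10.0]                 # true zero: "twice as much" is meaningful

print(Counter(nominal).most_common(1)[0][0])  # mode, the only summary a nominal scale supports
print(sorted(ordinal))                        # ranking is valid on an ordinal scale
print(interval[2] - interval[0])              # differences are valid on an interval scale
print(ratio[2] / ratio[1])                    # ratios are valid only on a ratio scale
```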

Scaling Design
Scaling refers to procedures for attempting to determine quantitative measures of subjective, abstract concepts: a procedure for assigning numbers or other symbols to a property of objects in order to impart some of the characteristics of numbers to the property in question. Scales are devices constructed or employed by researchers to quantify the responses of a subject on a particular variable.

1. Likert Scale
- a five-point scale in which the interval between each point is assumed to be equal (hence also called an equal-appearing interval scale)
- used to register the extent of agreement or disagreement with a particular statement of an attitude, belief, or judgment
- uses response scales that examine quality, frequency of occurrence, level of comfort, or degree of satisfaction
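A minimal scoring sketch (the item names and the reverse-keyed set are hypothetical): sum the five-point responses, flipping negatively worded items so that a higher total always means more agreement.

```python
# Scoring a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree).
# Item names and the reverse-keyed set below are hypothetical.

def score_likert(responses, reverse_keyed=(), points=5):
    """Sum item responses, flipping reverse-keyed items (x -> points + 1 - x)."""
    total = 0
    for item, value in responses.items():
        if not 1 <= value <= points:
            raise ValueError(f"response out of range: {item}={value}")
        total += (points + 1 - value) if item in reverse_keyed else value
    return total

answers = {"q1": 4, "q2": 5, "q3": 2}               # q3 is worded negatively
print(score_likert(answers, reverse_keyed={"q3"}))  # 4 + 5 + (6 - 2) = 13
```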

2. Semantic Differential Scale: presents pairs of adjectives related to a word or phrase on a continuum, usually consisting of seven blanks.

The process of measurement involves:
Conceptualization: a clear meaning must be assigned to the concept being measured.
Operationalization: a reasonable indicator of the meaning assigned to the concept must be selected.

Structuring: specific values or categories of the indicator must be specified.
Reliability and Validity Testing: whether the indicator is reliable and valid must be determined by testing specific hypotheses that relate the indicator to other variables.

RELIABILITY is the extent of accuracy, consistency, stability, or repeatability of a measurement. It is the degree to which individuals' deviation scores remain relatively consistent over repeated administrations of the same test or an alternate test form.

Procedures requiring two test administrations
1. ALTERNATE FORM METHOD: constructing two similar forms of a test and administering both forms to the same group of examinees (Coefficient of Equivalence).
2. TEST-RETEST METHOD: administering the test to a group of examinees and re-administering the same test to the same group after a period of time (Coefficient of Stability).
3. TEST-RETEST WITH ALTERNATE FORMS (Coefficient of Stability and Equivalence).
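The coefficient of stability from a test-retest design is simply the Pearson correlation between the two administrations. A sketch with hypothetical scores:

```python
import math

def pearson_r(x, y):
    """Pearson correlation; here, the coefficient of stability
    between two administrations of the same test."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five examinees tested twice, some weeks apart.
time1 = [10, 12, 15, 18, 20]
time2 = [11, 13, 14, 19, 21]
print(round(pearson_r(time1, time2), 3))  # → 0.977
```

A coefficient this close to 1 indicates that examinees kept nearly the same relative standing across the two administrations.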

Procedures requiring a single test administration
1. SPLIT-HALF METHOD: an internal-consistency method; create two half-tests that are as nearly parallel as possible.
2. COEFFICIENT ALPHA (α): the extent to which any random sample of items correlates with total scores determines the reliability of the sample of items.
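A sketch of the split-half method under the common odd-even split (the item scores are hypothetical): correlate the two half-test scores, then step the correlation up with the Spearman-Brown correction, since each half is only half the length of the full test.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Correlate odd-item and even-item half scores, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

# Hypothetical 0/1 item scores: four examinees (rows) by four items (columns).
scores = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
print(round(split_half_reliability(scores), 3))  # → 0.9
```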

a. KUDER-RICHARDSON COEFFICIENT: used only with dichotomously scored items.
b. CRONBACH'S ALPHA: used to estimate the internal consistency of items that are dichotomously scored or items that have a wide range of scoring weights.
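Coefficient alpha can be computed directly from an examinee-by-item score matrix; with dichotomous 0/1 items the same computation gives the Kuder-Richardson (KR-20) value. A sketch with hypothetical data:

```python
def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    With dichotomous 0/1 items this is identical to KR-20."""
    k = len(item_scores[0])

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 item scores: four examinees (rows) by four items (columns).
scores = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
print(round(cronbach_alpha(scores), 3))  # → 0.867
```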

Interpretation of Coefficient Alpha
1. It is a characteristic of a test possessed by virtue of the positive intercorrelations of the items composing it. This estimate implies nothing about the stability of the test scores over time or their equivalence to scores on one particular alternate form of the test.
2. It can be considered the lower bound to a theoretical reliability coefficient (the coefficient of precision).

3. It is the mean of all possible split-half coefficients.
4. It is not a measure of the test's unidimensionality.
COEFFICIENT ALPHA is generally applicable to any situation where the reliability of a composite is estimated, e.g., estimating the reliability of a total score based on the sum of several subtest scores.

INTERRATER RELIABILITY: only one set of items is used, but multiple observations are collected for each examinee by having two or more raters complete the instrument (consistency of observations over raters). Interrater estimates should not be considered substitutes for other reliability estimates in describing an observational instrument.
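Interrater agreement is often summarized with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A sketch with hypothetical category codes from two raters:

```python
def cohen_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters assigning categories."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    categories = set(rater1) | set(rater2)
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two observers rating ten behaviors.
r1 = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "off"]
r2 = ["on", "on", "off", "off", "off", "on", "on", "off", "on", "on"]
print(round(cohen_kappa(r1, r2), 3))  # → 0.583
```

Here the raters agree on 8 of 10 behaviors (80%), but because so much of that agreement could arise by chance, kappa is a more modest 0.583.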

Factors that affect reliability coefficients
1. Group Homogeneity: if a sample of examinees is highly homogeneous on the trait being measured, the reliability estimate will be lower than if the sample were more heterogeneous.
2. Test Length: longer tests are more reliable than shorter tests composed of similar items.
3. Time Limit: speededness may artificially inflate test reliability coefficients.
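The effect of test length (point 2) is usually quantified with the Spearman-Brown prophecy formula, which projects reliability when a test is lengthened (or shortened) with items similar to the existing ones. A sketch with hypothetical values:

```python
def spearman_brown(r_current, length_factor):
    """Projected reliability when a test is lengthened (or shortened)
    by length_factor with items similar to the existing ones."""
    return length_factor * r_current / (1 + (length_factor - 1) * r_current)

# Doubling a test with reliability .70 by adding similar items:
print(round(spearman_brown(0.70, 2), 3))   # → 0.824
# Halving it instead lowers the projected reliability:
print(round(spearman_brown(0.70, 0.5), 3))
```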

VALIDITY is the extent to which the instrument actually and accurately measures the concept to which it has been assigned (i.e., is scientifically useful). Relationship of reliability and validity: a measure may be reliable but not valid, but it cannot be valid without being reliable. Reliability is a necessary but not sufficient condition for validity.

CONTENT VALIDITY: the purpose is to assess whether the items adequately represent the construct of interest. There must be a representative collection of items; have a panel of independent experts judge whether the items adequately sample the domain of interest.

FACE VALIDITY – the extent to which items appear to measure a construct that is meaningful to laypersons or typical examinees.

Questionnaire Construction Criteria
1. It must be short enough that respondents will not reject it outright and that completing it will not be a serious drain on their time and work.
2. It must be of sufficient interest, and have enough face appeal, that the respondent will be inclined to respond to it and to complete it.

3. It should contain some in-depth questions in order to avoid superficial responses.
4. It must not be too suggestive or too unstimulating.
5. It should elicit responses that are definite but not mechanically forced.
6. Questions must be asked in such a way that the responses will not embarrass the individual.
7. Questions must be asked in such a manner as to allay suspicion on the respondent's part concerning hidden purposes of the questionnaire.

8. It must not be too narrow, restrictive, or limited in its scope or philosophy.
9. The responses to the questionnaire must be valid, and the entire body of data taken as a whole must answer the basic question for which the questionnaire was designed.

Types of Questions
1. Closed or fixed-alternative: limit the respondent to a choice among specific alternatives.

Closed questions produce greater uniformity among respondents along the specific dimensions in which the investigator is interested.
2. Open-ended: designed to permit a free response from the respondent rather than limit him or her to stated alternatives. More effective in revealing a respondent's own definition of the situation, although the responses are difficult to categorize and analyze.

Advantages of a Questionnaire
1. The respondents may have greater confidence in their anonymity and thus feel free to express views they fear might be disapproved of or might get them into trouble.
2. It is a less expensive procedure.
3. It requires much less skill to administer.
4. It can be administered to a large number of individuals simultaneously.

Interview
- Conducted in person: the interviewer asks questions directly of respondents, either face-to-face or by telephone.
- Time- and cost-intensive, which limits the number of respondents that can be included in most research projects.
- Uses an interview guide, where questions can be highly structured, semi-structured, or open-ended.
- Beneficial as a device for supplementing and extending questionnaires.