Week 14: More Data Collection Techniques (Chapter 5)

Research Methods – Week 14: More Data Collection Techniques (Chapter 5)

Review – Data Collection
Primary data:
- Observations
- Interviews (personal, telephone)
- Questionnaires
- Case studies
Secondary data

Measurement Techniques
Scales for data:
- Nominal
- Ordinal
- Interval
- Ratio

Measurement Techniques – Nominal scale
We use a number as a shorthand description for a category:
- Male = 1, Female = 2
- Sultan Qaboos University = 2, University of Nizwa = 3
The numbers cannot be used mathematically: it makes no sense to divide Sultan Qaboos University by the University of Nizwa.

Measurement Techniques – Nominal scale
Statistics possible on nominal data:
- We can count frequencies.
- We can calculate a mode (the category that occurs most often).
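As a minimal sketch (the data are hypothetical, not from the slides), the mode of nominal data can be found simply by counting the category codes; arithmetic on the codes themselves has no meaning.

from collections import Counter

# Hypothetical coded responses: 1 = Male, 2 = Female
responses = [1, 2, 2, 1, 2, 2, 1, 2]

counts = Counter(responses)                # frequency of each category code
mode_code, mode_count = counts.most_common(1)[0]

print(counts)       # Counter({2: 5, 1: 3})
print(mode_code)    # 2 -> "Female" is the most frequent category
# Dividing or averaging the codes (e.g., 2 / 1) would tell us nothing.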

Measurement Techniques – Ordinal scale
We arrange results in rank order. Key success factors:
1 – teamwork – 44 votes
2 – leader training – 43 votes
3 – team member training – 33 votes
4 – size of budget – 12 votes
We have an order and can compare ranks (1 > 2, 3 < 2), but we cannot assume the size of the differences: rank 1 need not have twice as many votes as rank 2, and the one-vote gap between ranks 1 and 2 is not the same as the 21-vote gap between ranks 3 and 4.

Measurement Techniques – Ordinal scale
Allowable statistical procedures on ordinal data: count, median.
Key success factors:
1 – teamwork – 44 votes
2 – leader training – 43 votes
3 – team member training – 33 votes
4 – size of budget – 12 votes
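A minimal sketch in Python, assuming hypothetical rankings on a 1–4 scale like the one above: the count and the median are allowable, but a mean is not, because the gaps between ranks are not known to be equal.

from collections import Counter
from statistics import median

# Hypothetical rankings from ten respondents (1 = most important factor)
rankings = [1, 2, 1, 3, 2, 4, 1, 2, 3, 1]

print(Counter(rankings))   # how many respondents gave each rank
print(median(rankings))    # 2.0 -- the middle rank once the data are sorted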

Measurement Techniques – Interval scale
There are equal distances between each number, but there is no absolute zero. A common example is temperature in Celsius or Fahrenheit: zero on those scales is an arbitrary point, not a true absence of the quantity. Most of our research data, however, does have an absolute zero:
- 0 hours studied
- 0 errors in data entry
- 0 cost of motivational incentives

Measurement Techniques – Interval scale
Allowable statistical procedures:
- Mean (e.g., 1.9 hours)
- Standard deviation
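A minimal sketch with hypothetical hours-studied values (not the course data): once the distances between scale points are equal, the mean and standard deviation are both meaningful.

from statistics import mean, stdev

hours_studied = [0, 1, 2, 2, 3, 1, 4, 2]    # hypothetical values

print(round(mean(hours_studied), 2))        # 1.88 hours
print(round(stdev(hours_studied), 2))       # sample standard deviation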

Measurement Techniques – Ratio scale
The scale has regular intervals and an absolute zero.
Example – number of hours each student used on-line learning:
0 hours – 8 students
1 hour – 3 students
2 hours – 6 students
3 hours – 2 students
4 hours – 5 students
5 hours – 0 students
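Because the scale has a true zero, means and ratios of the raw values are meaningful. A minimal sketch that computes the mean hours per student from the frequency table above:

# Hours of on-line learning -> number of students (from the slide)
freq = {0: 8, 1: 3, 2: 6, 3: 2, 4: 5, 5: 0}

total_students = sum(freq.values())
total_hours = sum(hours * n for hours, n in freq.items())

print(total_students)                          # 24 students
print(round(total_hours / total_students, 2))  # about 1.71 hours per student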

Tests of Measurement Quality – Validity Tests
Is the test “valid”? Do the results represent the factor we are exploring?
Example 1: One student gets a higher score on a test than another. Does the student with the higher score know the material better than the other student? What might influence the result?

Tests of Measurement Quality – Validity Tests
Is the test “valid”? Do the results represent the factor we are exploring?
Example 2: Project team 1 completes its project faster than project team 2. Is project team 1 better than project team 2? What might influence the result? What factors other than speed might we test?

Tests of Measurement Quality – Measures of Validity
Content validity – does the test adequately cover the subject under review?
Example – reasons for outsourcing:
1 – cost
2 – speed
3 – accuracy
The number-one result might be cost, but only because subjects were not able to give the real reason – a lack of local talent, or low regard for a particular department.

Tests of Measurement Quality – Measures of Validity
Content validity – does the test adequately cover the subject under review?
Example – reasons for not using on-line learning:
1 – no access to computers
2 – dislike for computers
3 – poor grasp of the English used in the software
What other reasons might there be for lack of usage? How would you know? What would you have to do to be sure your list was complete?

Tests of Measurement Quality – Measures of Validity
Predictive validity – does the test predict the results we later find? For example, does the student with the higher grades actually know more when it comes time to use the knowledge?
Example – A student who gets high grades in all accounting classes should easily pass the CPA exam. What if she doesn’t? Could the accounting class exams still be valid? How could you make the course exams more valid? Could there be other ways to find the predictive validity of course exams?

Tests of Measurement Quality – Measures of Validity
Predictive validity – does the test predict the results we later find?
Example 1: Your list of critical success factors shows that team training is the most important factor. How would you test the predictive validity of your results?
Example 2: Your survey shows the top reason to outsource is the need for speed. How would you test the predictive validity of that result?
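One common way to examine predictive validity is to correlate scores on the measure with the later outcome it is supposed to predict. A minimal sketch with entirely hypothetical numbers: a correlation near 1.0 supports predictive validity, one near 0 undermines it.

from statistics import correlation   # Pearson's r; requires Python 3.10+

exam_scores   = [55, 62, 70, 74, 80, 85, 90]   # hypothetical course exam scores
later_outcome = [50, 60, 65, 72, 78, 88, 92]   # hypothetical later performance

print(round(correlation(exam_scores, later_outcome), 2))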

Tests of Measurement Quality – Measures of Validity
Concurrent validity – does the test result agree with other findings?
Example 1: Students taking your tests get As in your class, but they get Cs in other classes. Are the tests concurrently valid? How would you check the validity of the results?

Tests of Measurement Quality – Measures of Validity
Concurrent validity – does the test result agree with other findings?
Before you did your study, you read all the prior research and knew the results others had found. You find different results. Now what?

Tests of Measurement Quality – Measures of Validity
Construct validity – the degree to which your results behave the way the primary theory underlying your research predicts they should.
Example – You predict that outsourced employees will be more productive because of Taylorism (they do the same job over and over and know it well), but your results show that their advantage is based on motivation – the need for money. Now what?

Tests of Measurement Quality – Measures of Validity
What would we have to do to determine if the results on last week’s midterm exam were valid?
- Content validity
- Predictive validity
- Concurrent validity
- Construct validity

Tests of Measurement Quality – Measures of Reliability
Reliability – the instrument produces consistent results. If we administer a questionnaire to the same people a week later, we would expect to get the same answers. What might cause the answers to change?

Tests of Measurement Quality – Measures of Reliability
Reliability – the instrument produces consistent results.
Stability – the same person gives the same answers each time. But this might not be the case: the person might be tired, busy, bored, or angry with a co-worker. Since people are never truly stable, how much variation can you find and still determine that a test is reliable?
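Stability is often checked with a test-retest correlation: administer the instrument twice to the same people and correlate the two sets of scores. A minimal sketch with hypothetical data:

from statistics import correlation   # requires Python 3.10+

week_1 = [12, 18, 15, 20, 9, 14]    # hypothetical questionnaire scores
week_2 = [13, 17, 15, 19, 10, 15]   # same respondents, one week later

print(round(correlation(week_1, week_2), 2))   # close to 1.0 = stable responses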

Tests of Measurement Quality – Measures of Reliability
Reliability – the instrument produces consistent results.
Equivalence – two test administrators should get the same results. My observations should be the same as yours; my interview results should be the same as yours. What might cause the results to be different?
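Equivalence between two observers is often summarized as simple percentage agreement (more refined indices, such as Cohen's kappa, also exist). A minimal sketch with hypothetical observation codes:

# Two observers code the same ten behaviours (hypothetical data)
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task",
           "on-task", "on-task", "off-task", "on-task", "on-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task",
           "on-task", "on-task", "off-task", "on-task", "off-task"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(agreements / len(rater_a))   # 0.8 -> the observers agree on 80% of codes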

Tests of Measurement Quality – Measures of Reliability
Equivalence – two test administrators should get the same results. My observations should be the same as yours; my interview results should be the same.
Problem – you find a questionnaire in a published study and administer it to 30 students, but you get different results than the people who created the survey did. Now what?

Tests of Measurement Quality – Measures of Practicality
Practicality – we are looking for economy, convenience, and interpretability. Economy and convenience are obvious; interpretability may be more important. When you get the results, will it be clear what they mean?
Example – your results show that team members value leadership above all else. Can you define “leadership”? If you went to a CEO and said “leadership is crucial to project success,” how would you answer if he said, “So what would make my managers better leaders?”

Tests of Measurement Quality – Measures of Practicality
Practicality – we are looking for economy, convenience, and interpretability.
Interpretability – when you get the results, will it be clear what they mean?
Example – You think students will be more likely to use on-line learning websites if the sites use images for clarity. How can you create your survey questions so the results are easy to interpret?

Tests of Measurement Quality – Summary
Measures of validity: content validity, predictive validity, concurrent validity, construct validity.
Measures of reliability: stability, equivalence.
Measures of practicality: economy, convenience, interpretability.