Measurement in Exercise and Sport Psychology Research EPHE 348.



Measurement We measure performance on a variable – an attribute that different people possess in different amounts. Measurement has no single formal definition, but every measurement has a level and a set of properties.

Levels of Measurement (from least to most rigorous) Nominal – numbers are used to classify (gender, eye color) Ordinal – numbers have an order (rank at the finish of a race) Interval – equal differences in numbers imply equal differences in the attribute (calendar time) Ratio – an interval scale with a true zero point reflecting the absence of the attribute (height)
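The four levels form a hierarchy of permissible operations. A minimal sketch in Python (the operation labels are illustrative, not from the slides):

```python
# Which operations each level of measurement supports,
# from least to most rigorous (labels are illustrative).
LEVELS = {
    "nominal":  {"classify"},
    "ordinal":  {"classify", "rank"},
    "interval": {"classify", "rank", "difference"},
    "ratio":    {"classify", "rank", "difference", "ratio"},
}

def supports(level, operation):
    """True if the given operation is meaningful at this level."""
    return operation in LEVELS[level]

# Race ranks are ordinal: gaps between ranks need not be equal.
print(supports("ordinal", "difference"))   # False
# Height is ratio: 2 m really is twice 1 m.
print(supports("ratio", "ratio"))          # True
```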

Basic Measurement Theory An observed score on a measure is made up of two components: –1) true score –2) error Error may have a systematic component and a random component –examples of systematic error: social desirability, gender bias, cultural bias, etc.

Thinking more about Error... In measurement theory we assume: -1) Error is random across items -2) Error is independent across items -3) Error has a normal distribution with a mean of 0 (it cancels itself out)

Key Point in Scale Construction Multiple items generally are needed in a measure to approximate the “true score” and reduce random error towards 0
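A small simulation can illustrate both the error assumptions and this key point (all numbers here are invented for illustration): when each item score is true score plus random, independent, zero-centred error, averaging more items cancels the errors and pulls the scale score toward the true score.

```python
# Illustrative simulation: observed item = true score + random error.
# More items -> errors cancel -> scale mean closer to the true score.
import random
import statistics

random.seed(1)
TRUE_SCORE = 5.0

def avg_abs_error(n_items, reps=500, error_sd=2.0):
    """Average distance between the scale mean and the true score."""
    total = 0.0
    for _ in range(reps):
        items = [TRUE_SCORE + random.gauss(0, error_sd)
                 for _ in range(n_items)]
        total += abs(statistics.mean(items) - TRUE_SCORE)
    return total / reps

print(f"avg error, 2 items:  {avg_abs_error(2):.2f}")
print(f"avg error, 50 items: {avg_abs_error(50):.2f}")
```

With 50 items the scale mean sits far closer to the true score than with 2, which is exactly why multi-item measures are preferred.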

Most Common Types of Reliability Assessment 1) Internal consistency of a scale –Purpose: to identify the variability the scale items share and interpret it as “true score variance” Examples: –split half –Cronbach’s α

Most Common Types of Reliability Assessment 2) Comparison across time and testers –Purpose: to identify consistency across time and across testers Examples: -test-retest reliability -inter-rater reliability
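Test-retest reliability is usually just a Pearson correlation between the same participants’ scores at two time points. An illustrative sketch (scores fabricated):

```python
# Test-retest reliability: correlate time-1 and time-2 scores
# for the same participants. High r suggests temporal stability.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [10, 14, 8, 12, 16, 9]
time2 = [11, 13, 8, 13, 15, 10]
print(f"test-retest r = {pearson_r(time1, time2):.2f}")
```

The same function applied to two raters’ scores of the same participants gives a simple inter-rater consistency check.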

Factors that Influence Reliability 1) Heterogeneity of the construct 2) Method of estimation 3) Number of items 4) Variability among the participants who answer the measure

Validity Are we even measuring what we want to measure? “If you do not know what your measurement means, you do not know anything” (Gulliksen, 1950). Validity is a matter of degree, and we must gather multiple lines of evidence.

The Holy Trinity of Validity -1) Content validity – based on professional judgements about the relevance of the item content to a particular domain of interest, and about how representatively the items cover that domain -2) Criterion validity – based on the degree of empirical relationship, usually correlations, between the scale and criteria

The Holy Trinity of Validity -3) Construct validity – evaluated by investigating what qualities a scale actually measures All evidence for and against a measure is ultimately construct validity evidence (Rogers, 1994)

Threats to Construct Validity 1) Construct-irrelevant variance –the measure includes something that should not have been included 2) Construct underrepresentation –the measure leaves out something that the theory surrounding the construct of interest says should be included

Constructing a Measure Phase 1: Item Construction –strive for representativeness of, and relevance to, the domain of interest –items should be clear, short and simple –items should not carry double meanings (avoid double-barrelled items joined by conjunctions) –avoid items that are endorsed by no one or by everyone

Constructing a Measure –items should be balanced between positive and negative wording –arrange items in random order General problems: –Social desirability – the tendency to always answer favorably –Acquiescence – the tendency to agree regardless of content
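One item-construction rule above is easy to automate: an item endorsed by everyone (or no one) has almost no variance and cannot discriminate between respondents. A hedged sketch with made-up pilot data:

```python
# Flag draft items whose response variance is near zero -- items
# endorsed by (almost) everyone or no one. Data are fabricated.
import statistics

def flat_items(responses, min_var=0.1):
    """Return indices of items whose response variance is below min_var."""
    items = list(zip(*responses))   # rows = respondents, cols = items
    return [i for i, col in enumerate(items)
            if statistics.pvariance(col) < min_var]

draft = [
    [5, 3, 5],
    [5, 1, 4],
    [5, 4, 2],
    [5, 2, 3],
]
print(flat_items(draft))   # [0] -- item 0 is endorsed by everyone
```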

Constructing a Measure Phase 2: Judges’ Analysis –Items are sent to expert judges in the domain, who evaluate them for relevance and representativeness –The more judges the better –Evaluations use objective ratings, with room for comments

Constructing a Measure Phase 3: Protocol Analysis –Items are examined by a sample of participants in a think-aloud procedure and focus group –Helps identify any differences in meaning between the experts and the target population

Constructing a Measure Phase 4: Structure Analysis –The measure is administered to a sample together with very similar and very different measures –Can then examine: 1) the structure of the measure 2) divergence from the very different measures 3) convergence with the very similar measures
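The convergence/divergence step boils down to two correlations: the new scale should correlate strongly with a similar measure and weakly with an unrelated one. An illustrative check (all scores fabricated):

```python
# Convergent vs divergent evidence: correlate the new scale with a
# very similar measure and a very different one. Scores are made up.
import math

def r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

new_scale = [12, 18, 9, 15, 21, 11]
similar   = [13, 17, 10, 14, 20, 12]   # convergent: expect high r
different = [2, 3, 5, 1, 4, 3]         # divergent: expect low |r|
print(f"convergent r = {r(new_scale, similar):.2f}")
print(f"divergent  r = {r(new_scale, different):.2f}")
```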

Exercise In a group of five, develop a five-item measure Consider –Representativeness & relevance –Phrase simplicity & variance –Type of scaling