Part II: Sigma Freud & Descriptive Statistics
Chapter 6. Just the Truth: An Introduction to Understanding Reliability and Validity

What You Will Learn in Chapter 6
- What reliability and validity are and why they are important
- The basic measurement scales
- Computing and interpreting reliability coefficients
- Computing and interpreting validity coefficients

Why Measurement?
You need to know that the data you collect actually represent what you want to know about. How do you know that the instrument you are using works consistently every time (reliability) and measures what it is supposed to measure (validity)?

Scales of Measurement
Measurement is the assignment of values to outcomes according to a set of rules. There are four types of measurement scales:
- Nominal
- Ordinal
- Interval
- Ratio

Nominal Level of Measurement
- Each outcome fits one and only one category
- Categories are mutually exclusive, such as male or female, Caucasian or African-American, etc.
- Categories cannot be ordered meaningfully
- The least precise level of measurement

Ordinal Level of Measurement
- Characteristics being measured can be ordered
- Rankings such as #1, #2, #3
- A higher rank is better, but you do not know by how much

Interval Level of Measurement
- The test or tool is based on an underlying continuum, so you can say how much higher one score is than another
- Intervals along the scale are equal to one another

Ratio Level of Measurement
- Characterized by the presence of an absolute zero on the scale
- Zero represents the complete absence of the trait being measured

Things to Remember
- Any outcome can be assigned to one of the four scales of measurement
- The scales of measurement have an order
- The "higher" up the scale of measurement, the more precise the data
- More precise scales contain all of the qualities of the scales below them

Classical Test Theory: Observed Score = True Score + Error Score
- Observed score: the actual score earned on a test
- True score: a theoretical reflection of the actual amount of a trait or characteristic an individual possesses
- Error score: the part of the observed score that is random
- Reliability = True score / (True score + Error score)
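The ratio on this slide can be made concrete with a tiny numeric sketch. Note that reliability is conventionally expressed in terms of score variances across test takers rather than raw scores, which is what the snippet below computes; all of the numbers are invented for illustration, since true scores are never directly observable in practice.

```python
# Minimal sketch of the classical test theory ratio, using invented data.
# True scores are unobservable in practice; reliability is normally
# estimated indirectly (test-retest, parallel forms, coefficient alpha).
import statistics

true_scores = [80, 85, 90, 75, 95]    # hypothetical true scores
error_scores = [2, -3, 1, 4, -2]      # hypothetical random error
observed = [t + e for t, e in zip(true_scores, error_scores)]

true_var = statistics.pvariance(true_scores)
error_var = statistics.pvariance(error_scores)
reliability = true_var / (true_var + error_var)

print(f"Observed scores: {observed}")
print(f"Reliability: {reliability:.2f}")   # approaches 1.0 as error shrinks
```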

Types of Reliability
- Test-Retest: a measure of stability
- Parallel Forms: a measure of equivalence
- Internal Consistency: a measure of consistency, e.g., Cronbach's Alpha (coefficient alpha)
- Inter-Rater: a measure of agreement
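As a concrete sketch of the first type, test-retest reliability is typically estimated as the correlation between scores from two administrations of the same test to the same people; the scores below are invented for illustration.

```python
# Hypothetical sketch: test-retest reliability as the Pearson correlation
# between two administrations of the same test (a measure of stability).
from statistics import correlation  # available in Python 3.10+

time_1 = [54, 67, 61, 72, 58, 66]   # invented scores, first administration
time_2 = [56, 65, 63, 70, 57, 68]   # invented scores, second administration

print(f"Test-retest reliability: {correlation(time_1, time_2):.2f}")
```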

Using the Computer SPSS and Cronbach’s Alpha
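Since the SPSS output itself is not reproduced here, a minimal language-neutral sketch of coefficient alpha may help; the item responses below are invented, and in SPSS the same statistic is typically produced via Analyze > Scale > Reliability Analysis.

```python
# Minimal sketch of Cronbach's alpha (coefficient alpha) on invented data:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
import statistics

# rows = respondents, columns = items (hypothetical Likert-type responses)
responses = [
    [4, 5, 4, 3, 4],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
]

k = len(responses[0])                                        # number of items
item_vars = [statistics.pvariance(item) for item in zip(*responses)]
total_var = statistics.pvariance([sum(row) for row in responses])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```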

How Big Is Big?
Reliability coefficients range from 0.0 to 1.0 and should be positive. General rules of thumb:
- Test-Retest: .60 to 1.0
- Inter-Rater: 85% agreement
- Internal Consistency: .70 to 1.0
High reliability does NOT guarantee a quality test!

Establishing Reliability
- Make sure instructions are standardized across all settings
- Increase the number of items or observations
- Delete items that are unclear
- Moderate the easiness or difficulty of the test
- Minimize the effect of external events

What Is the Truth? Validity
The extent to which inferences made from a test are:
- Appropriate
- Meaningful
- Useful
(American Psychological Association & the National Council on Measurement in Education)
In short: does the test measure what it is supposed to measure?

Types of Validity
Traditionally speaking, there are three types of validity evidence:
- Content validity
- Criterion validity
  - Predictive criterion validity
  - Concurrent criterion validity
- Construct validity

Content Validity
The property of a test such that the test items sample the universe of items for which the test is designed.
How to establish it:
- Have a content expert review the items: do they represent the full universe of possible items?
- How well does the set of items reflect what was actually taught?

Criterion Validity
Assesses whether a test reflects a set of abilities in a current (concurrent) or future (predictive) setting, as measured by some other test.
- Concurrent validity: How well does my test correlate with the outcome of a similar test given right now?
- Predictive validity: How well does my test predict performance on a similar measure in the future?
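In practice a criterion validity coefficient is usually just the correlation between scores on the new test and scores on the criterion measure; the paired scores below are invented for illustration.

```python
# Hypothetical sketch: a concurrent validity coefficient as the correlation
# between a new test and an established criterion collected at the same time.
from statistics import correlation  # available in Python 3.10+

new_test = [12, 18, 15, 22, 9, 17]      # invented scores on the new test
criterion = [48, 62, 55, 70, 41, 60]    # invented scores on the criterion

print(f"Concurrent validity coefficient: {correlation(new_test, criterion):.2f}")
```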

Construct Validity
The most interesting, and the most difficult, source of validity evidence to establish.
A construct is a group of interrelated variables, such as:
- Aggression
- Hope
- Intelligence
You want your measure of the construct to correlate with related behaviors and not correlate with behaviors that are unrelated.

All About Validity

Validity & Reliability: The "Kissing Cousins"
- A test can be reliable but not valid
- A test cannot be valid unless it is reliable, because "a test cannot do what it is supposed to do (validity) until it does what it is supposed to do consistently (reliability)."

Part III: Taking Chances for Fun and Profit
Chapter 7. Hypotheticals and You: Testing Your Questions

What You Will Learn in Chapter 7
- The difference between samples and populations
- The importance of the null hypothesis and the research hypothesis
- How to judge a good hypothesis

What Is a Hypothesis?
- An "educated guess"
- Its role is to reflect the general problem statement or question that is driving the research
- It translates the problem or research question into a form that can be tested

Samples and Populations
- Population: the large group to which you would like to generalize your findings
- Sample: the smaller, representative group drawn from the population that is used to do the research
- Sampling error: a measure of how well a sample represents the population

The Null Hypothesis
- A statement that two or more things are equal to (or unrelated to) one another, e.g., H0: μ1 = μ2
- The starting point, accepted as true in the absence of any other information
- A benchmark against which actual outcomes are compared

The Research Hypothesis
A statement that there is a relationship between two variables. Two types:
- Nondirectional: H1: X1 ≠ X2; reflects a difference, but the direction is not specified (two-tailed test)
- Directional: H1: X1 > X2; reflects a difference, and the direction is specified (one-tailed test)
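To make the one-tailed versus two-tailed distinction concrete, here is a minimal sketch of an independent-samples t-test run both ways; it assumes SciPy (1.6 or later for the alternative argument) and uses invented group scores.

```python
# Sketch: nondirectional (two-tailed) vs. directional (one-tailed) research
# hypotheses tested with an independent-samples t-test on invented data.
from scipy import stats

group_1 = [72, 75, 68, 80, 77, 74, 71]   # hypothetical scores, group 1
group_2 = [65, 70, 66, 73, 69, 64, 67]   # hypothetical scores, group 2

# H1: the group means differ (direction unspecified) -> two-tailed test
t_two, p_two = stats.ttest_ind(group_1, group_2, alternative="two-sided")

# H1: group 1's mean is greater than group 2's -> one-tailed test
t_one, p_one = stats.ttest_ind(group_1, group_2, alternative="greater")

print(f"Two-tailed: t = {t_two:.2f}, p = {p_two:.3f}")
print(f"One-tailed: t = {t_one:.2f}, p = {p_one:.3f}")
```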

Null & Research Hypotheses

Differences Between the Null and Research Hypotheses
- Null: states no relationship between variables. Research: states a relationship between variables.
- Null: refers to the population. Research: refers to the sample.
- Null: tested indirectly. Research: tested directly.
- Null: written using Greek symbols. Research: written using Roman symbols.
- Null: an implied hypothesis. Research: an explicit hypothesis.

What Makes a Good Hypothesis?
- Stated in declarative form rather than as a question
- Defines an expected relationship between variables
- Reflects the theory or literature on which it is based
- Brief and to the point
- Testable: includes variables that can be measured

Glossary Terms to Know
- Hypothesis
- Null hypothesis
- Research hypothesis
- Directional & nondirectional hypotheses
- One-tailed & two-tailed tests
- Population
- Sample
- Sampling error