
Selecting a Sample

Sampling
Select participants for study
Must represent a larger group
Picked from a population

It’s all about convenience
Total population too large
Would be too costly
If sampling is done well, results can be generalized to the population

Defining Populations
Target – entire group (e.g., all Maryland 5th-grade students)
Accessible – realistic number of participants

Simple random sampling
All individuals in population
–Have an equal and independent chance of being selected
–Best way to obtain a representative sample of your population
–A table of random numbers may be used
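As a minimal sketch (not from the text), simple random selection can be done with Python's standard library in place of a table of random numbers; the population of 500 student IDs is invented for illustration:

```python
import random

# Hypothetical population: 500 student ID numbers.
population = list(range(1, 501))

# Every individual has an equal and independent chance of being selected.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=30)  # sampling without replacement

print(len(sample))       # sample size
print(len(set(sample)))  # no repeats: each ID drawn at most once
```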

Stratified Sampling
Examples in text
Equal-sized stratified groups
Proportional stratified sampling
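A sketch of proportional stratified sampling, with invented strata sizes (300 fifth graders, 200 sixth graders): each stratum contributes to the sample in proportion to its share of the population.

```python
import random

# Hypothetical population grouped into strata by grade level.
strata = {
    "grade5": list(range(0, 300)),    # 300 students
    "grade6": list(range(300, 500)),  # 200 students
}
total = sum(len(members) for members in strata.values())
sample_size = 50

random.seed(1)
sample = []
for name, members in strata.items():
    # Proportional allocation: stratum share of population * sample size.
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(len(sample))  # 30 from grade5 + 20 from grade6
```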

Cluster sampling
Intact groups are selected
Example – instead of a sample of all 5th graders, one class would be used
Similar characteristics and abilities
Systematic Sampling
–Rarely used (every Kth name) see pg. 108
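The "every Kth name" idea of systematic sampling can be sketched as follows; the roster of 200 names (IDs stand in for names) and the sample size are invented:

```python
import random

# Hypothetical alphabetized roster of 200 names, represented by IDs.
roster = list(range(200))
sample_size = 20
k = len(roster) // sample_size  # pick every Kth name; here K = 10

random.seed(7)
start = random.randrange(k)  # random starting point within the first interval
sample = roster[start::k]    # then every Kth name after that

print(len(sample))
```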

Sample size?
No agreement here
30 is a usual guide for some quantitative studies
60 needed, however, for causal-comparative and experimental research

Qualitative sampling
Select a small number
Persons selected must help the researcher understand a certain phenomenon
Intensity, homogeneous, and snowball sampling are a few examples from the text

Measuring Instruments

Data
All types of research require the collection, analysis, and interpretation of data.
Data are pieces of evidence used to examine a research topic or test a hypothesis.

Constructs
Mental abstractions such as personality, creativity, and intelligence that cannot be measured or observed directly.
Constructs become variables when different levels or scores can be used to measure them (example: personality / introverts vs. extroverts).

Variables/Instruments
Variable
–A concept that can assume any one of a range of values, e.g., intelligence, height, aptitude.
Instrument
–Tool used to collect data (e.g., the Iowa Test of Basic Skills).

Characteristics of Measuring Instruments
Four main ways to collect data:
–Administer an existing instrument.
–Construct an instrument.
–Observe (record naturally occurring events).
–Collect existing data.

Measurement Scale
A group of several related statements that participants respond to, indicating their degree of agreement or disagreement.
There are four types of measurement scales and associated variables.
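As a sketch of how such a scale of related statements might be scored (the items, responses, and 1–5 coding are invented for illustration): each statement is coded from strong disagreement to strong agreement, negatively worded items are reverse-scored, and the item scores are summed.

```python
# Hypothetical responses to a 4-item agreement scale, coded
# 1 (strongly disagree) through 5 (strongly agree).
responses = {"item1": 4, "item2": 5, "item3": 2, "item4": 4}
reverse_scored = {"item3"}  # item3 is negatively worded

def score(resp):
    total = 0
    for item, value in resp.items():
        if item in reverse_scored:
            value = 6 - value  # reverse on a 1-5 scale: 2 becomes 4
        total += value
    return total

print(score(responses))  # 4 + 5 + 4 + 4 = 17
```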

Nominal Variables
Also called categorical variables.
Represent the lowest form of measurement.
Simply classify persons or objects.
–Female; male
–Public or private school
May be represented by a number (1 = female; 2 = male), but this doesn’t mean one category is higher or lower than another.

Ordinal variables
Classify persons or objects.
In addition, they rank them.
–Ex. – height: #1 = 6’ 5”; #2 = 6’ 0”

Interval variables
Have all the characteristics of nominal and ordinal variables.
Also have equal intervals.
–The difference between a score of 70 and 80 is the same as the difference between a 90 and 100.
–A score of zero does not mean a total lack of knowledge (no true zero point).

Ratio variables
Represent the highest level of measurement.
In addition to having all the characteristics of the three previous variable types, they have a true zero point.
Mainly used for physical measures.
–One person is 5 feet tall, and her friend is two-thirds as tall as she is.

Qualitative and Quantitative Variables
Nominal variables
–Show qualitative differences only
–e.g., religion, gender, political party
Ordinal, interval, and ratio variables
–Show quantitative differences
–e.g., test scores, heights, age, class size

Dependent and Independent
Dependent variable
–Depends on or is caused by the independent variable
Independent variable
–Manipulated by the researcher

Testing Instruments
It is more time-efficient, and requires less skill, to select an existing instrument than to create a new one that measures the same thing.

Standardized Tests
There are thousands of standardized tests that measure a broad range of variables.
A standardized test is one that is administered, scored, and interpreted in the same way no matter when or where it is administered.

Tests
Most tests are paper and pencil.
Common formats are:
–true/false
–multiple choice
–essay/short answer

Interpreting Test Results
Raw scores indicate the number of items a person got right.
They can be transformed into derived scores (percentile ranks, stanines, and standard scores).
Derived scores are most often used with standardized tests.
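Two of these derived scores can be sketched in a few lines of Python; the raw scores for a class of ten test takers are invented. A standard (z) score expresses a raw score in standard-deviation units from the mean, and a percentile rank is the percentage of scores at or below a given score.

```python
import statistics

# Hypothetical raw scores for a class of 10 test takers.
raw = [12, 15, 15, 18, 20, 21, 23, 25, 27, 30]

mean = statistics.mean(raw)   # 20.6
sd = statistics.pstdev(raw)   # population standard deviation

# Standard (z) score: distance from the mean in SD units.
z = [(x - mean) / sd for x in raw]

# Percentile rank: percentage of scores at or below a given score.
def percentile_rank(score, scores):
    return 100 * sum(s <= score for s in scores) / len(scores)

print(round(z[-1], 2))           # highest raw score -> positive z
print(percentile_rank(25, raw))  # 80.0: a raw 25 is at the 80th percentile
```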

Norm-Referenced Scores
Compare a student’s test performance to the performance of other test takers.

Criterion-Referenced Scores
Compare a student’s test performance to predetermined standards of performance.

Self-Referenced Scores
Compare a student’s performance over time.
Look for improvement or regression.
–A similar or equal assessment is repeated over time

Types of Tests
Achievement tests measure the current status of individuals on school-taught subjects.
Aptitude tests are used to predict how well a test taker is likely to perform in the future.

Types of Tests
Affective tests measure characteristics such as interests, values, attitudes, and personality.
Projective tests are not self-report instruments. They present an ambiguous situation that requires the test taker to “project” his or her true feelings onto it.

Validity of Measuring Instruments
Validity is the most important quality of a test: the degree to which a test measures what it is supposed to measure and, consequently, permits appropriate interpretations of test scores.

A test is not valid per se; it is valid for a particular interpretation and for a particular group. Each intended score interpretation requires its own validation.

Validation
Validation is a matter of degree; tests are not simply valid or invalid. Rather, they are highly valid, moderately valid, or generally invalid.

The three main forms of validity are content, criterion-related, and construct. They are viewed as interrelated, not independent, aspects of validity.

Content Validity
The degree to which a test measures an intended content area. It requires both item validity and sampling validity.
Item validity is concerned with whether the test items are relevant to the intended content area.

Content Validity
Sampling validity is concerned with how well the test sample represents the total content area.
Content validity is of prime importance for achievement tests.
Content validity is determined by expert judgment of item and sampling validity, not by statistical means.

Criterion-Related Validity
Concurrent Validity
–the degree to which scores on a test are related to scores on another test administered at the same time.
Predictive Validity
–the degree to which scores can determine how well someone will do in a future situation.

Construct Validity
The most important form of validity.
Seeks to determine whether the construct underlying a variable is actually being measured.

Construct Validity
Determined by a series of validation studies that can include content and criterion-related approaches.

Construct Validity
The validity of any test can be diminished by factors such as:
–unclear test directions
–inappropriate vocabulary
–subjective scoring
–failure to follow administration procedures

Reliability of Measuring Instruments
The degree to which a test consistently measures whatever it measures.
It is expressed numerically, usually as a coefficient ranging from 0.0 to 1.0; a high coefficient indicates high reliability.

Reliability
Scores obtained when a test is administered in November 2005 would be similar to results from the same test in December 2005.
The same test takers would be used.