Study of the day: Misattribution of arousal (Dutton & Aron, 1974)

Can situations influence how much we like someone?
Answer: Yes!
Dutton & Aron's bridge study: misattribution of arousal

Writing good survey questions

Learning objectives
You will learn:
– What makes a quality instrument
– How to write questions
– What validity means
– How to assess validity
– What reliability means
– How to assess reliability
– Scale development

Quality instrument*
– Questions are determined by objectives: resist the temptation to ask questions that are interesting but not relevant to your hypothesis
– Questions are concrete and clearly phrased
– Piloted with potential respondents
– Contains reverse-scored items
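The reverse-scored items mentioned above are usually recoded at analysis time so that a higher score always means "more" of the construct. A minimal sketch for a 5-point scale; the item wording and data are hypothetical:

```python
def reverse_score(response, scale_min=1, scale_max=5):
    """Recode a reverse-worded item: on a 1-5 scale, 5 -> 1, 4 -> 2, etc."""
    return scale_max + scale_min - response

# Hypothetical responses to a negatively worded sociability item
# ("I avoid meeting new people"), where strong agreement (5) should
# count as LOW sociability:
raw = [5, 4, 2, 1, 3]
recoded = [reverse_score(r) for r in raw]
print(recoded)  # [1, 2, 4, 5, 3]
```

After recoding, the item can be summed or averaged with the positively worded items on the same scale.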

Learning objectives
– What makes a quality instrument
– How to write questions
– What validity means
– How to assess validity
– What reliability means
– How to assess reliability
– Scale development

Good Questions*
1. Are not double-barreled (ask about only one thing per question)
2. Do not contain double negatives
3. Are not phrased in a leading manner
4. Are not biased in their language and do not make participants feel uncomfortable
5. Are concrete and specific

Two types of questions
Open-ended
Advantages:
– Gives you the respondent's view without interference
– Good when you are unsure what type of answers to expect
Difficulties:
– Time-consuming for the respondent
– Answers must be cataloged and interpreted
Mussweiler, T., & Damisch, L. (2008). Going back to Donald: How comparisons shape judgmental priming effects. Journal of Personality and Social Psychology, 95(6), 1295.

Two types of questions
Closed-ended
Advantages:
– Easier to interpret
– More conducive to statistical analyses
– Good for large samples
– Answers are usually more reliable and consistent
Difficulties:
– The researcher must be familiar enough with the phenomenon to write the questions
– May not accurately capture all of the respondents' opinions
– Less illustrative

Response Choices
1. Nominal: categorical, discrete, mutually exclusive categories with no inherent order
2. Ordinal: inherent order, but the distances between values are not numerically meaningful
– Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree
– Excellent, very good, good, fair, poor
3. Continuous: the numbers themselves are meaningful

Ordinal Measures*
– Include a "do not know" option if appropriate
– Include a neutral response if appropriate
– Balance all responses
– Use a 5- or 7-point numbered scale

Don’t forget demographics
Important for:
1. Describing the sample (especially gender and race)
2. Exploring your findings
Typically asked at the end. Can include gender, race/ethnicity, education, job, age, marital status, geographic place of residence, etc.

Learning objectives
– What makes a quality instrument
– How to write questions
– What validity means
– How to assess validity
– What reliability means
– How to assess reliability
– Scale development

Validity
– Assesses the construct of interest: is the instrument measuring what it is supposed to?
– Cannot be directly measured; it must be inferred from evidence

Face Validity
Appears to measure what it is supposed to. Let's say you want to assess how social someone is. Which of these would be best?
– How often do you socialize with friends?
– How many parties did you attend last month?
– Do you get angry easily?
– How much do you enjoy meeting new people?
– Do you enjoy playing sports?

Construct Validity
Does the measure relate to other constructs as theory would predict? Assess it by correlating the measure with other measures. E.g., how social someone is should relate to how often they have conversations with other people, and perhaps to how healthy they are.
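"Correlating it with other measures" concretely means computing Pearson's r between respondents' scores on the two measures. A minimal sketch with hypothetical pilot data (all variable names and values invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pilot data: scores on a new sociability scale and
# self-reported conversations per week, for the same six respondents.
sociability = [12, 18, 9, 22, 15, 20]
conversations = [5, 9, 3, 11, 6, 10]
r = pearson_r(sociability, conversations)  # strongly positive here
```

A strong positive r between the new scale and a theoretically related measure is evidence for construct validity; a near-zero r with an unrelated measure (discriminant evidence) is also informative.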

Criterion Validity
How well does the measure predict or estimate real-world outcomes? E.g., if you ask students how interested they are in majoring in computer science, the measure can be validated by checking how well it predicts who actually majors in computer science.

Learning objectives
– What makes a quality instrument
– How to write questions
– What validity means
– How to assess validity
– What reliability means
– How to assess reliability
– Scale development

Reliability
– Consistency in the results a test yields (compare: a reliable person)
– Items need not overlap perfectly, but they should "hang together"

Measuring Reliability*
– Cronbach's alpha: inter-item consistency
– Test–retest: administer the same measure twice to the same group
– Equivalent forms: administer two forms of the test to the same group
– Split-half: split the items into two halves (e.g., odd items vs. even items) and correlate the halves
– Inter-rater reliability (for coding qualitative data): how much the coders' answers overlap (percent agreement and kappa)
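Two of these statistics are easy to compute by hand. A minimal sketch of Cronbach's alpha (inter-item consistency) and of percent agreement plus Cohen's kappa (inter-rater reliability), using invented pilot data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for inter-item consistency.

    `items` is a list of item-score lists, one list per item, with the
    same respondents in the same order in each list."""
    k = len(items)

    def var(xs):  # population variance; the choice of n vs. n-1 cancels out
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))


def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders chose the same category."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)


def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    categories = set(coder_a) | set(coder_b)
    p_chance = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)


# Hypothetical 5-point responses to three sociability items (5 respondents):
items = [[4, 5, 3, 5, 2],
         [4, 4, 3, 5, 1],
         [5, 5, 2, 4, 2]]
alpha = cronbach_alpha(items)  # high: the items "hang together"

# Hypothetical codes assigned by two coders to the same open-ended answers:
coder_a = ["pos", "pos", "neg", "neu", "pos", "neg"]
coder_b = ["pos", "neg", "neg", "neu", "pos", "neg"]
kappa = cohen_kappa(coder_a, coder_b)
```

Kappa is lower than raw percent agreement because it subtracts the agreement two coders would reach by chance alone.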

How to improve reliability?
– Use clear and concise measures
– Include more items (less distorted by chance factors)
– Develop a scoring plan for qualitative coding

Learning objectives
– What makes a quality instrument
– How to write questions
– What validity means
– How to assess validity
– What reliability means
– How to assess reliability
– Scale development

Scale development*
1. Conceptualize the target construct
– What are you trying to measure? Literature review; open-ended questions
2. Write the items
– Start with a larger pool than you need; choose an appropriate response format
3. Pilot the scale
– Analyze reliability (e.g., Cronbach's alpha) and validity (correlations with other measures)
– Use factor analysis to see whether there are subscales
– Decide which items need to be eliminated
4. Pilot the new measure

Available scales*
The easiest approach (and the one looked upon favorably by reviewers) is to use other people's scales. You can find these in:
– Scale databases (e.g., found via Google)
– Journal articles
– Books of scales

Other Suggestions
For Qualtrics:
– Include a question that asks participants NOT to respond (helps rule out random clicking)
In general:
– Put DVs as close to the manipulation as possible

Congratulations! You have learned:
– Characteristics of good scales
– How to write good questions
– What validity means and how to assess it
– What reliability means and how to assess it
– The steps involved in developing a scale