Independent and Dependent Variables

Independent and Dependent Variables Operational Definitions Evaluating Operational Definitions Planning the Method Section

What is an independent variable? An independent variable (IV) is the variable (antecedent condition) an experimenter intentionally manipulates. Levels of an independent variable are the values of the IV created by the experimenter. An experiment requires at least two levels. Independent and Dependent Variables

Explain confounding. An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable. For example, we could confound our experiment if we ran experimental subjects in the morning and control subjects at night. Independent and Dependent Variables

What is a dependent variable? A dependent variable is the outcome measure the experimenter uses to assess the change in behavior produced by the independent variable. The dependent variable depends on the value of the independent variable. Independent and Dependent Variables

What is an operational definition? An operational definition specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements. Operational Definitions

What is an operational definition? An experimental operational definition specifies the exact procedure for creating values of the independent variable. A measured operational definition specifies the exact procedure for measuring the dependent variable. Operational Definitions
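Because operational definitions are essentially explicit procedures, a small sketch can make the distinction concrete. The following Python example is not from the original slides and uses an entirely hypothetical caffeine study: the experimental operational definition spells out how the IV levels are created, and the measured operational definition spells out exactly how the DV is recorded.

```python
# Hypothetical example only: operational definitions written as explicit procedures.

# Experimental operational definition: the exact procedure for creating the two
# levels of the independent variable (caffeine dose in milligrams).
CAFFEINE_LEVELS_MG = {"level_1": 0, "level_2": 200}

def measure_reaction_time_ms(trial_latencies_ms):
    """Measured operational definition: the DV is the mean of exactly 20
    key-press latencies, recorded in milliseconds."""
    if len(trial_latencies_ms) != 20:
        raise ValueError("the procedure specifies exactly 20 trials")
    return sum(trial_latencies_ms) / len(trial_latencies_ms)
```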

What are the properties of a nominal scale? A nominal scale assigns items to two or more distinct categories that can be named using a shared feature, but does not measure their magnitude. Example: you can sort canines into friendly and shy categories. Evaluating Operational Definitions

What are the properties of an ordinal scale? An ordinal scale measures the magnitude of the dependent variable using ranks, but does not assign precise values. Example: a runner's finishing place in a marathon tells us relative speed, but not precise speed. Evaluating Operational Definitions

What are the properties of an interval scale? An interval scale measures the magnitude of the dependent variable using equal intervals between values with no absolute zero point. Example: degrees Celsius or Fahrenheit and Sarnoff and Zimbardo’s (1961) 0-100 scale. Evaluating Operational Definitions

What are the properties of a ratio scale? A ratio scale measures the magnitude of the dependent variable using equal intervals between values and an absolute zero. This scale allows us to state that 2 meters are twice as long as 1 meter. Example: distance in meters or time in seconds. Evaluating Operational Definitions
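As a quick illustration of how the four scales differ in the information they carry, here is a minimal Python sketch (not from the original slides, with made-up values). Pandas' Categorical type distinguishes unordered (nominal) from ordered (ordinal) categories, while interval and ratio data are plain numbers that differ only in whether zero is meaningful.

```python
import pandas as pd

# Nominal: named categories with no magnitude (e.g., canine temperament)
temperament = pd.Categorical(["friendly", "shy", "friendly"], ordered=False)

# Ordinal: order without equal spacing (e.g., marathon finishing place)
finish_place = pd.Categorical([1, 2, 3], ordered=True)

# Interval: equal intervals but no absolute zero (e.g., degrees Celsius)
temperature_c = pd.Series([18.5, 21.0, 24.2])

# Ratio: equal intervals plus an absolute zero (e.g., time in seconds)
run_time_sec = pd.Series([12.3, 15.8, 14.1])

# "Twice as long" statements are meaningful only for the ratio variable
print(run_time_sec.iloc[1] / run_time_sec.iloc[0])
```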

What does reliability mean? Reliability refers to the consistency of experimental operational definitions and measured operational definitions. Example: a reliable bathroom scale should display the same weight if you measure yourself three times in the same minute. Evaluating Operational Definitions

Explain interrater reliability. Interrater reliability is the degree to which observers agree in their measurement of the behavior. Example: the degree to which three observers agree when scoring the same personal essays for optimism. Evaluating Operational Definitions
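One common way to quantify interrater reliability is to correlate each pair of observers' scores (an intraclass correlation or Cohen's kappa are alternatives). A minimal Python sketch with hypothetical optimism ratings, not from the original slides:

```python
import numpy as np

# Hypothetical optimism scores that three observers gave the same six essays
rater_a = np.array([4, 7, 5, 6, 3, 8])
rater_b = np.array([5, 7, 4, 6, 3, 7])
rater_c = np.array([4, 6, 5, 7, 2, 8])

for name, x, y in [("A-B", rater_a, rater_b),
                   ("A-C", rater_a, rater_c),
                   ("B-C", rater_b, rater_c)]:
    r = np.corrcoef(x, y)[0, 1]       # Pearson correlation between two raters
    print(f"{name}: r = {r:.2f}")     # values near 1.0 indicate strong agreement
```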

Explain test-retest reliability. Test-retest reliability is the degree to which a person's scores are consistent across two or more administrations of a measurement procedure. Example: highly correlated scores on the Wechsler Adult Intelligence Scale-Revised when it is administered twice, 2 weeks apart. Evaluating Operational Definitions
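In practice, test-retest reliability is usually reported as the correlation between the two administrations. A minimal sketch with made-up scores (assumed values, not real WAIS-R data):

```python
import numpy as np

time1 = np.array([102, 115, 98, 130, 121, 109])   # hypothetical scores, first administration
time2 = np.array([105, 113, 101, 128, 119, 111])  # same people, 2 weeks later

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")   # values near 1.0 indicate consistent scores
```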

Explain interitem reliability. Interitem reliability measures the degree to which different parts of an instrument (questionnaire or test) that are designed to measure the same variable achieve consistent results. Evaluating Operational Definitions
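Interitem reliability is commonly summarized with Cronbach's alpha, which compares the item variances with the variance of the total score. A minimal sketch with hypothetical questionnaire responses (rows are respondents, columns are items intended to measure the same variable):

```python
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # higher values = more internally consistent
```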

What does validity mean? Validity is the degree to which an operational definition accurately manipulates the independent variable or measures the dependent variable. Evaluating Operational Definitions

What is face validity? Face validity is the degree to which the validity of a manipulation or measurement technique is self-evident. This is the least stringent form of validity. For example, using a ruler to measure pupil size. Evaluating Operational Definitions

What is content validity? Content validity is how accurately a measurement procedure samples the content of the dependent variable. Example: an exam over chapters 1-4 that only contains questions about chapter 2 has poor content validity. Evaluating Operational Definitions

What is predictive validity? Predictive validity is how accurately a measurement procedure predicts future performance. Example: the ACT has predictive validity if these scores are significantly correlated with college GPA. Evaluating Operational Definitions
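Predictive validity is typically expressed as the correlation between the measure and the later criterion. A minimal sketch with invented ACT scores and GPAs (illustrative numbers only):

```python
import numpy as np

act_score = np.array([21, 28, 25, 18, 30, 24, 27])
college_gpa = np.array([2.8, 3.6, 3.1, 2.5, 3.8, 3.0, 3.4])

r = np.corrcoef(act_score, college_gpa)[0, 1]
print(f"predictive validity r = {r:.2f}")   # a substantial positive r supports predictive validity
```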

What is construct validity? Construct validity is how accurately an operational definition represents a construct. Example: a construct of abusive parents might include their perception of their neighbors as unfriendly. Evaluating Operational Definitions

Explain internal validity. Internal validity is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable. Internal validity establishes a cause-and-effect relationship between the independent and dependent variables. Evaluating Operational Definitions

Explain the problem of confounding. Confounding occurs when an extraneous variable systematically changes across the experimental conditions. Example: a study comparing the effects of meditation and prayer on blood pressure would be confounded if one group exercised more. Evaluating Operational Definitions
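The following small simulation (assumed effect sizes, not from the original slides) shows how the confound alone can create a group difference: blood pressure in both groups responds only to exercise, yet the groups end up looking different on the DV.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# The confound: weekly exercise hours differ systematically between conditions
meditation_exercise = rng.normal(4, 1, size=n)
prayer_exercise = rng.normal(1, 1, size=n)

# Assume (for illustration) the practices themselves have no differential effect,
# but each weekly hour of exercise lowers blood pressure by 2 mmHg
meditation_bp = rng.normal(130, 10, size=n) - 2 * meditation_exercise
prayer_bp = rng.normal(130, 10, size=n) - 2 * prayer_exercise

print(f"meditation group mean BP: {meditation_bp.mean():.1f}")
print(f"prayer group mean BP:     {prayer_bp.mean():.1f}")
# The observed difference reflects the exercise confound, not the independent variable
```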

Explain history threat. History threat occurs when an event outside the experiment threatens internal validity by changing the dependent variable. Example: subjects in group A were weighed before lunch while those in group B were weighed after lunch. Evaluating Operational Definitions

Explain maturation threat. Maturation threat is produced when physical or psychological changes in the subject threaten internal validity by changing the DV. Example: boredom may increase subject errors on a proofing task (DV). Evaluating Operational Definitions

Explain testing threat. Testing threat occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment. Example: experimental subjects used a blood pressure cuff daily, while control subjects only used one during a pretest measurement. Evaluating Operational Definitions

Explain instrumentation threat. Instrumentation threat occurs when changes in the measurement instrument or measurement procedure threaten internal validity. Example: if reaction time measurements became less accurate during the experimental conditions than during the control conditions. Evaluating Operational Definitions

Explain statistical regression threat. Statistical regression threat occurs when subjects are assigned to conditions on the basis of extreme scores, the measurement procedure is not completely reliable, and subjects are retested using the same procedure to measure change on the dependent variable. Because extreme scores tend to move back toward the mean on retesting, this shift can masquerade as a treatment effect. Evaluating Operational Definitions
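A brief simulation (assumed score distribution and reliability) makes the threat visible: subjects selected for extreme pretest scores drift back toward the mean on retest even with no treatment at all, and that drift can be mistaken for a treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(100, 10, size=10_000)
pretest = true_score + rng.normal(0, 5, size=10_000)    # imperfectly reliable measure
posttest = true_score + rng.normal(0, 5, size=10_000)   # same measure, new measurement error

extreme = pretest > 115                                  # select subjects with extreme pretest scores
print(f"pretest mean (extreme group):  {pretest[extreme].mean():.1f}")
print(f"posttest mean (extreme group): {posttest[extreme].mean():.1f}")  # lower, closer to the mean of 100
```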

Explain selection threat. Selection threat occurs when individual differences are not balanced across treatment conditions by the assignment procedure. Example: despite random assignment, subjects in the experimental group were more extroverted than those in the control group. Evaluating Operational Definitions
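Random assignment is the usual defense against selection threat, but it only balances groups on average; a small sketch (hypothetical extroversion scores) shows that a modest sample can still end up somewhat unbalanced by chance.

```python
import numpy as np

rng = np.random.default_rng(1)
extroversion = rng.normal(50, 10, size=40)        # hypothetical trait scores for 40 subjects

order = rng.permutation(40)                       # random assignment to conditions
experimental, control = order[:20], order[20:]

print(f"experimental group mean: {extroversion[experimental].mean():.1f}")
print(f"control group mean:      {extroversion[control].mean():.1f}")
# With small n the means can still differ noticeably despite random assignment
```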

Explain subject mortality threat. Subject mortality threat occurs when subjects drop out of experimental conditions at different rates. Example: even if subjects in each group started out with comparable anxiety scores, differential dropout could produce group differences on this variable. Evaluating Operational Definitions

Explain selection interactions. Selection interactions occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing). Evaluating Operational Definitions

What is the purpose of the Method section of an APA report? The Method section of an APA research report describes the Participants, Apparatus or Materials, and Procedure of the experiment. This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study. Planning the Method Section

When is an Apparatus section needed? An Apparatus section of an APA research report is appropriate when the equipment used in a study was unique or specialized, or when we need to explain the capabilities of more common equipment so that the reader can better evaluate or replicate the experiment. Planning the Method Section