Independent and Dependent Variables Operational Definitions Evaluating Operational Definitions Planning the Method Section.


1 Independent and Dependent Variables Operational Definitions Evaluating Operational Definitions Planning the Method Section

2 What is an independent variable? Independent and Dependent Variables An independent variable (IV) is the variable (antecedent condition) an experimenter intentionally manipulates. Levels of an independent variable are the values of the IV created by the experimenter. An experiment requires at least two levels.

3 Explain confounding. Independent and Dependent Variables An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable. For example, we could confound our experiment if we ran experimental subjects in the morning and control subjects at night.

4 What is a dependent variable? Independent and Dependent Variables A dependent variable is the outcome measure the experimenter uses to assess the change in behavior produced by the independent variable. The dependent variable depends on the value of the independent variable.

5 What is an operational definition? Operational Definitions An operational definition specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements.

6 What is an operational definition? Operational Definitions An experimental operational definition specifies the exact procedure for creating values of the independent variable. A measured operational definition specifies the exact procedure for measuring the dependent variable.

7 What are the properties of a nominal scale? Evaluating Operational Definitions A nominal scale assigns items to two or more distinct categories that can be named using a shared feature, but does not measure their magnitude. Example: you can sort canines into friendly and shy categories.

8 What are the properties of an ordinal scale? Evaluating Operational Definitions An ordinal scale measures the magnitude of the dependent variable using ranks, but does not assign precise values. Example: a runner's place in a marathon tells us relative speed, but not precise speed.

9 What are the properties of an interval scale? Evaluating Operational Definitions An interval scale measures the magnitude of the dependent variable using equal intervals between values, with no absolute zero point. Examples: degrees Celsius or Fahrenheit, or Sarnoff and Zimbardo's (1961) 0-100 scale.

10 What are the properties of a ratio scale? Evaluating Operational Definitions A ratio scale measures the magnitude of the dependent variable using equal intervals between values and an absolute zero. This scale allows us to state that 2 meters are twice as long as 1 meter. Example: distance in meters or time in seconds.

11 What does reliability mean? Evaluating Operational Definitions Reliability refers to the consistency of experimental operational definitions and measured operational definitions. Example: a reliable bathroom scale should display the same weight if you measure yourself three times in the same minute.

12 Explain interrater reliability. Evaluating Operational Definitions Interrater reliability is the degree to which observers agree in their measurement of the behavior. Example: the degree to which three observers agree when scoring the same personal essays for optimism.
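The essay-scoring example can be made concrete. Below is a minimal Python sketch (the observer codes are invented for illustration) that computes simple percent agreement, one common index of interrater reliability:

```python
def percent_agreement(ratings_a, ratings_b):
    """Proportion of items on which two observers assigned the same code."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical data: two observers each code ten essays as
# "O" (optimistic) or "P" (pessimistic).
observer_1 = ["O", "O", "P", "O", "P", "O", "O", "P", "O", "O"]
observer_2 = ["O", "O", "P", "P", "P", "O", "O", "P", "O", "O"]

print(percent_agreement(observer_1, observer_2))  # 9 of 10 codes match -> 0.9
```

In practice, chance-corrected indices such as Cohen's kappa are often preferred, because two raters will agree on some items by luck alone.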

13 Explain test-retest reliability. Evaluating Operational Definitions Test-retest reliability is the degree to which a person's scores are consistent across two or more administrations of a measurement procedure. Example: highly correlated scores on the Wechsler Adult Intelligence Scale-Revised when it is administered twice, 2 weeks apart.
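Test-retest reliability is usually quantified as the correlation between the two administrations. Here is a small Python sketch with invented scores (not actual WAIS-R data):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two sets of scores."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five subjects tested twice, 2 weeks apart.
week_0 = [100, 110, 95, 120, 105]
week_2 = [102, 108, 97, 118, 104]

print(round(pearson_r(week_0, week_2), 2))  # highly consistent scores -> 0.99
```

A correlation near 1.0, as here, indicates that subjects keep roughly the same rank and level across administrations.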

14 Explain interitem reliability. Evaluating Operational Definitions Interitem reliability measures the degree to which different parts of an instrument (questionnaire or test) that are designed to measure the same variable achieve consistent results.
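One standard index of interitem reliability is Cronbach's alpha. The Python sketch below uses made-up questionnaire data (three items, four respondents) purely for illustration:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha. `items` holds one list of scores per item,
    with respondents in the same order in every list."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variance = sum(statistics.pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_variance / statistics.pvariance(totals))

# Hypothetical 1-5 ratings on three items designed to measure the same variable.
items = [
    [3, 4, 3, 5],  # item 1, respondents A-D
    [3, 5, 4, 5],  # item 2
    [2, 4, 3, 5],  # item 3
]
print(round(cronbach_alpha(items), 2))  # consistent items -> 0.96
```

High alpha values (conventionally above about .70) indicate that the items rise and fall together, i.e., they measure the variable consistently.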

15 What does validity mean? Evaluating Operational Definitions Validity is the degree to which an operational definition accurately manipulates the independent variable or measures the dependent variable.

16 What is face validity? Evaluating Operational Definitions Face validity is the degree to which the validity of a manipulation or measurement technique is self-evident. This is the least stringent form of validity. For example, using a ruler to measure pupil size.

17 What is content validity? Evaluating Operational Definitions Content validity is the degree to which a measurement procedure samples the content of the dependent variable. Example: an exam over chapters 1-4 that only contains questions about chapter 2 has poor content validity.

18 What is predictive validity? Evaluating Operational Definitions Predictive validity is the degree to which a measurement procedure predicts future performance. Example: the ACT has predictive validity if its scores are significantly correlated with college GPA.

19 What is construct validity? Evaluating Operational Definitions Construct validity is the degree to which an operational definition accurately represents a theoretical construct. Example: a construct of abusive parents might include their perception of their neighbors as unfriendly.

20 Explain internal validity. Evaluating Operational Definitions Internal validity is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable. Internal validity establishes a cause-and-effect relationship between the independent and dependent variables.

21 Explain the problem of confounding. Evaluating Operational Definitions Confounding occurs when an extraneous variable systematically changes across the experimental conditions. Example: a study comparing the effects of meditation and prayer on blood pressure would be confounded if one group exercised more.

22 Explain history threat. Evaluating Operational Definitions History threat occurs when an event outside the experiment threatens internal validity by changing the dependent variable. Example: subjects in group A were weighed before lunch while those in group B were weighed after lunch.

23 Explain maturation threat. Evaluating Operational Definitions Maturation threat is produced when physical or psychological changes in the subject threaten internal validity by changing the DV. Example: boredom may increase subject errors on a proofing task (DV).

24 Explain testing threat. Evaluating Operational Definitions Testing threat occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment. Example: experimental subjects used a blood pressure cuff daily, while control subjects only used one during a pretest measurement.

25 Explain instrumentation threat. Evaluating Operational Definitions Instrumentation threat occurs when changes in the measurement instrument or measuring procedure threaten internal validity. Example: reaction-time measurements that became less accurate during the experimental condition than during the control condition.

26 Explain statistical regression threat. Evaluating Operational Definitions Statistical regression threat occurs when subjects are assigned to conditions on the basis of extreme scores on a measurement procedure that is not completely reliable and are then retested with the same procedure to measure change on the dependent variable; their extreme scores tend to drift back toward the mean regardless of treatment.
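Regression to the mean can be demonstrated by simulation. In the Python sketch below (all parameters invented), each observed score is a stable true score plus independent measurement error; subjects selected for extreme scores on the first test score closer to the population mean on the retest, even though nothing was done to them:

```python
import random

random.seed(42)

# True scores are stable; each observed score adds independent error,
# so the measurement procedure is not completely reliable.
true_scores = [random.gauss(100, 10) for _ in range(10000)]
test_1 = [t + random.gauss(0, 15) for t in true_scores]
test_2 = [t + random.gauss(0, 15) for t in true_scores]

# Assign the "extreme" group: top 10% of scores on the first test.
cutoff = sorted(test_1)[int(0.9 * len(test_1))]
extreme = [(s1, s2) for s1, s2 in zip(test_1, test_2) if s1 >= cutoff]

mean_1 = sum(s1 for s1, _ in extreme) / len(extreme)
mean_2 = sum(s2 for _, s2 in extreme) / len(extreme)
print(mean_1 > mean_2)  # True: retest scores fall back toward the mean
```

An experimenter who selected this group and saw its mean drop on retest could mistake pure regression for a treatment effect.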

27 Explain selection threat. Evaluating Operational Definitions Selection threat occurs when individual differences are not balanced across treatment conditions by the assignment procedure. Example: despite random assignment, subjects in the experimental group were more extroverted than those in the control group.

28 Explain subject mortality threat. Evaluating Operational Definitions Subject mortality threat occurs when subjects drop out of experimental conditions at different rates. Example: even if subjects in each group started out with comparable anxiety scores, differential dropout could produce differences on this variable.

29 Explain selection interactions. Evaluating Operational Definitions Selection interactions occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing).

30 What is the purpose of the Method section of an APA report? Planning the Method Section The Method section of an APA research report describes the Participants, Apparatus or Materials, and Procedure of the experiment. This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study.

31 When is an Apparatus section needed? Planning the Method Section An Apparatus section of an APA research report is appropriate when the equipment used in a study was unique or specialized, or when we need to explain the capabilities of more common equipment so that the reader can better evaluate or replicate the experiment.

