Experimental Design (presentation transcript)

1 Experimental Design

2 Experimental Method Goal: Establish cause-and-effect
AIM of study: to see whether one variable has an effect on another variable. Many factors influence our behavior. Experiments (1) manipulate the factors that interest us while (2) keeping other factors under control. The effects generated by the manipulated factors isolate cause-and-effect relationships. OBJECTIVE 3-8| Explain how experiments help researchers isolate cause and effect.

3 Quantitative Research
Experiments are an example of quantitative research, which generates numerical data. Quantitative methods are used because the results can be tested statistically for significance, ruling out the role of chance in the findings.
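The idea of "ruling out chance" can be sketched with a simple permutation test: if the treatment had no effect, shuffling the group labels should often produce a difference as large as the one observed. A minimal sketch in Python with purely hypothetical scores (all data values below are illustrative placeholders, not from any real study):

```python
import random
from statistics import mean

# Hypothetical DV scores for two groups (illustrative values only).
treatment = [14, 16, 15, 17, 13, 18, 16, 15]
control = [12, 13, 11, 14, 12, 13, 12, 11]

observed = mean(treatment) - mean(control)  # observed group difference

# Permutation test: if group labels were arbitrary (no real effect),
# how often would reshuffled labels yield a difference this large?
random.seed(0)
pooled = treatment + control
trials = 10_000
n_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(treatment)]) - mean(pooled[len(treatment):])
    if diff >= observed:
        n_extreme += 1

p_value = n_extreme / trials  # proportion of chance differences >= observed
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) means a difference this large would rarely arise by chance alone, which is what "statistically significant" expresses.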

4 Variables Independent Variable-manipulated Dependent Variable-measured
Operationalized: clearly defining what is being measured. Confounding variables: undesirable variables within the groups that may influence the relationship between the IV and DV (e.g., different teaching strategies in gifted vs. special-needs classes; random assignment is needed). Blue Box pages 26 and 27.

5 Experiments compare at least two groups or two conditions
Experimental Condition: the group that is exposed to the treatment, to one version of the IV. Control Condition: the group that contrasts with the experimental condition and serves as a comparison for evaluating the effect of the treatment.

6 Ways participants are allocated to groups
Matched pairs: pairs of subjects are matched to eliminate individual differences. Random allocation: assigning participants to groups by chance, minimizing pre-existing differences between the groups.
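Random allocation is straightforward to sketch in code: shuffle the participant list and split it in half. The participant IDs and function name below are hypothetical, for illustration only:

```python
import random

# Hypothetical participant IDs (illustrative only).
participants = [f"P{i:02d}" for i in range(1, 21)]

def random_allocation(people, seed=None):
    """Randomly assign participants to two groups.

    Allocating by chance minimizes pre-existing differences
    between the experimental and control groups."""
    rng = random.Random(seed)
    shuffled = people[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

experimental, control = random_allocation(participants, seed=42)
print("Experimental:", experimental)
print("Control:     ", control)
```

Fixing the seed makes the example reproducible; a real allocation would omit it so assignment is genuinely left to chance.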

7 A summary of steps during experimentation.

8 Experimental Design
Refers to the way groups or conditions are compared. Specific designs are used to heighten control in the experiment and reduce errors.
Independent Samples (between-subjects design): compares two different groups.
Repeated Measures (within-subjects design): compares each subject with himself or herself before and after the IV is introduced.
Matched Pairs: measures the differences in values of the DV in pairs of subjects who are matched to eliminate individual differences.

9 Specific Control Techniques
Single-blind: participants do not know what the study is about. Helps counteract the Hawthorne effect and demand characteristics. Double-blind: neither the participants nor the researcher knows the aim of the study or which group is the control or treatment group. Used to control researcher bias.

10 Effects of participant expectations and bias
Demand Characteristics: participants act differently because they know they are in an experiment; if they find out the aim of the study, they think certain behavior is "demanded" of them by the researcher. Hawthorne Effect: participants alter their behavior by trying to please the researcher or guess the "correct" answer. Participant Expectancy: participants change their behavior because they are being observed. Use the single-blind technique to control these biases.

11 Effects of researcher expectations and bias
Observer or Researcher Bias: the experimenter sees what he or she is looking for; his or her expectations consciously or unconsciously affect the findings of the study, and the researcher's behavior influences participant behavior. Use the double-blind technique to control these biases.

12 Validity Study tests what it says it will test
Internal Validity: refers to the control within the experiment. Design choices try to address threats but cannot eliminate all of them.
Random errors: characteristics subjects bring with them to the study (confounding variables).
Systematic errors: mistakes the researcher makes.
External Validity: the extent to which results can be generalized outside the study.
Population Validity: generalizing results to a larger population; it must be the target population, with characteristics similar to the sample.
Ecological Validity: whether the study represents what will happen in real life; generalize based on the experimental conditions (realistic, or too controlled?).
Cross-cultural Validity: whether results generalize across cultures.

13 Reliability Results can be replicated
Internal Reliability: used to assess the consistency of results across items within a test.
Split-half reliability: half of the test items are compared with the other half.
External Reliability: the extent to which a measure varies from one use to another.
Test-retest reliability: assesses the consistency of a measure from one time to another.
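Both reliability checks above boil down to correlating two sets of scores: the two halves of a test (split-half) or two administrations of the same test (test-retest). A minimal sketch with a hand-rolled Pearson correlation; all score values below are hypothetical, for illustration only:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Split-half: total the odd-numbered items and the even-numbered items
# for each participant, then correlate the two halves.
# (Hypothetical totals for 6 participants.)
odd_half_totals = [21, 18, 25, 14, 19, 23]
even_half_totals = [20, 17, 24, 15, 18, 22]
split_half = pearson_r(odd_half_totals, even_half_totals)

# Test-retest: correlate total scores from two administrations of the test.
scores_time1 = [41, 35, 49, 29, 37, 45]
scores_time2 = [40, 36, 48, 31, 36, 44]
test_retest = pearson_r(scores_time1, scores_time2)

print(f"split-half r = {split_half:.3f}, test-retest r = {test_retest:.3f}")
```

A coefficient near 1.0 indicates high reliability; values much lower suggest the items (or the two administrations) are not measuring consistently.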

14 Ethics Participants should be treated in an ethical manner
Informed consent is required. Slight deception is allowed if it does not cause stress and is explained at the end. Participants have the right to withdraw from a study at any time. All information is confidential. Participants must be protected from potential physical or mental harm. Debrief participants on the aims and purpose of the study so they do not leave with stress.
