
Experimental Design

Check in
– Exams? Will be returned on Weds
– Proposal assignment: posted on the wiki, due 11/9
Experimental design
– Internal validity and threats
– In-class exercise

Experimental Design
What is an experiment?
– When the researcher manipulates the independent variable to observe change in the dependent variable
Why do we do experiments?
– To establish and study cause-and-effect relationships

Famous Experiments
Alexander Fleming – discovery of penicillin
– Initially an accident
– But he had to experimentally test its influence to be sure
How would you test whether a "mold" killed bacteria?
How could you make sure something else didn't kill the bacteria?

Experimental Design – Example
– Experimental group: zombie virus
– Control group: bacteria
– Control group: zombie virus

Experimental Design – Example
– Experimental group: zombie virus + anti-zombie serum
– Control group: bacteria
– Control group: zombie virus

Experimental Design – Example
– Experimental group: zombie virus → dies
– Control group: bacteria
– Control group: zombie virus → LIVES!!!

How can we be sure of cause and effect?
The virus died
How do we know it was the serum?
– Not the environment (cold, hot, wind)
– Not some other contaminant
– Not time/age
– Not the shape of the dish
– Not a preexisting problem with our virus

How can we be sure?
We control the circumstances
We have a "control group" that is exposed to everything in exactly the same way… except our intervention
A true experiment is one in which the researcher has complete control over who gets the intervention and who does not

Internal Validity
When we have high control over an experiment and can confidently dismiss alternative influences, we say that it has "good internal validity"
Internal validity is the degree to which we can be sure the IV influenced the DV
– And that we 'know' nothing else caused our outcome

Internal Validity
Can be hard to achieve in psychology experiments
There are many specific "threats" to internal validity
There are ways to deal with some of these threats

Experimental Design – Example
The researcher wants to know if a program will prevent early-onset sexual activity
– Blue schools vs. green schools

Experimental Design – Example
The researcher wants to know if a program will prevent early-onset sexual activity
– Blue schools vs. green schools
– The green schools receive the sex-ed program

How will I know?
If green schools have lower rates of sexual activity, how will I know that it's my program?
– Can I be sure it's not something else?
What are some threats to knowing that my program is the cause of changes in sexual activity?

This was (and still is) a real study
24 middle schools in the greater Boston area
Randomly assigned by school to
– Treatment (special sex-ed program each year for three years)
– Control (whatever they would normally do)
Surveyed in 6th, 7th, 8th, and 9th grade
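Assigning whole schools at random like this is a form of cluster randomization: the unit of assignment is the school, not the student. A minimal Python sketch of the idea, in which the school names, the seed, and the even 12/12 split are illustrative assumptions rather than details from the study:

```python
import random

# Hypothetical school identifiers; the real study's names are not given.
schools = [f"school_{i:02d}" for i in range(1, 25)]  # 24 middle schools

rng = random.Random(42)  # fixed seed so the assignment is reproducible
rng.shuffle(schools)

# Every school in the treatment list gets the program for three years;
# control schools do whatever they would normally do.
treatment = schools[:12]
control = schools[12:]
```

Because assignment happens at the school level, every student in a given school ends up in the same condition, which is why outcomes are later compared school by school rather than student by student.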

Threats to Internal Validity – History
History
– External events or circumstances that influence the outcome
– What if kids in one or more blue schools had a "pact" to get pregnant?
More blue kids might get pregnant and make my green schools look "better"
– What if some green schools gave kids even more sex ed than they were getting from my program?
This actually happened

How to "control" history?
Can be quite difficult
Ask participants to avoid other treatments…
– Though this won't control all exposures
– Schools in our study violated this
Try to collect good, comprehensive data and be as aware as possible of the circumstances of your participants
– At least then you can statistically control for some influences in your data

Threats… Maturation
Maturation
– When growth may account for our effect
– This could certainly happen with a sex-ed program over a 3-year period

How to control maturation?
You can't control it
But you have a control group
Both the experimental and control groups should mature at the same rate
– If random assignment has "worked"
– And factors related to maturation have been balanced across conditions
So any difference would reflect your intervention
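A toy simulation makes the logic above concrete: both groups share the same maturation term, so subtracting the control rate from the treatment rate cancels maturation and leaves only the program effect. All the rates below are invented for illustration, not estimates from the study:

```python
import random

rng = random.Random(0)

MATURATION = 0.25       # assumed probability of early onset after 3 years of growth
PROGRAM_EFFECT = -0.08  # hypothetical reduction produced by the intervention

def early_onset(in_treatment):
    # Everyone gets the same maturation term; only treatment adds the effect.
    p = MATURATION + (PROGRAM_EFFECT if in_treatment else 0.0)
    return rng.random() < p

n = 20_000
tx_rate = sum(early_onset(True) for _ in range(n)) / n
ct_rate = sum(early_onset(False) for _ in range(n)) / n

# ct_rate - tx_rate estimates the program effect alone (about 0.08 here),
# because the shared maturation term appears in both rates and cancels.
difference = ct_rate - tx_rate
```

If random assignment has balanced maturation-related factors across conditions, this cancellation is exactly what happens in the real comparison of green and blue schools.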

Maturation?
Factors related to the maturation of sexual behavior in teens?
– Poverty
– Violent neighborhood
– Lack of parental monitoring
Single parenting
Parents who have to work "swing shift"
So you'd better hope that random assignment fixed this…

Threats – Selection
When treatment groups are unequal in a way that influences the outcome
– What if blue-school kids are poorer than green-school kids?
– Poverty increases early-onset sexual behavior
So poorer kids might be harder to "treat"
– Poor kids may have higher rates of sexual behavior even before my study starts
– These things could make my treatment look "better"

How to control for selection?
Randomization
– With many poor and wealthy schools assigned at random, the groups should be pretty equal
But sample size is crucial here
Stratification may help
Controlling for baseline
– Measure sexual onset before the intervention
– Adjust for differences between blue and green schools
– The remaining difference should be due to my program
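Stratification can be sketched in a few lines: group the schools by a selection-relevant factor, then randomize within each group so both arms stay balanced on it. The poverty labels and seed here are hypothetical; the study itself does not report this detail:

```python
import random
from collections import defaultdict

# Invented poverty labels for 24 schools; a real study would use
# measured school-level data.
schools = {f"school_{i:02d}": ("high_poverty" if i % 2 else "low_poverty")
           for i in range(1, 25)}

rng = random.Random(7)

# Bucket schools into strata by poverty level.
strata = defaultdict(list)
for name, level in schools.items():
    strata[level].append(name)

# Randomize within each stratum: half of each bucket gets the program.
treatment, control = [], []
for members in strata.values():
    rng.shuffle(members)
    half = len(members) // 2
    treatment += members[:half]
    control += members[half:]
```

With plain randomization of only 24 clusters, an unlucky draw could put most poor schools in one arm; stratifying rules that out by construction, which is why it matters most at small sample sizes.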

Threats – Participant expectations
When participants believe they are getting a treatment
– So they improve
– Also known as the placebo effect
E.g., kids know they are getting an education program
– This makes them behave differently
– Not the program itself

Controlling placebo?
When possible, have a "placebo control"
– The control group gets a "treatment"
– It gives them attention like the intervention group
– But it should not influence the outcome
– That way both treatment and control get "attention"
Keep groups "blind" to treatment type
– If possible

Threats – Researcher expectations
When the researcher treats treatment groups differently
– Has a different expectation of them
– Or actually gives them something extra
– Or actually evaluates them differently
This actually changes the outcome

Researcher expectations
Can be very subtle
– Positive reinforcement to the treatment group for being "special"
– Giving treatment participants special information or extra attention or help

Avoiding researcher expectations?
Separate the researcher from the treatment
The researcher should be blind to who is in which treatment
– If possible
Or the researcher should be external to the treatment process
– As in our study

Instrumentation/Measurement
If your way of measuring changes over time, it could alter how your outcome is measured
– E.g., if I ask the question "Did you have sex?" differently over time, responses may vary
– E.g., if I'm grading papers and my view of what counts as a "good answer" shifts as I read, this could change how I grade later versus earlier responses

Instrumentation?
Avoid this by piloting your measure first and then sticking with it
If you must change your measure, then collect new data
Keep track of changes in questions