Neobehaviorists. Neobehaviorism: Life after Watson. Optimism. But….

Presentation transcript:

Neobehaviorists

Neobehaviorism
Life after Watson
Optimism
But….

Neobehaviorists
Influenced by Watson? Clearly.
Hull
Tolman
Skinner

Clark Hull
Hull (1884-1952)
Ph.D. from the University of Wisconsin in 1918
Invited to Yale in 1929
President of the APA in 1935

Hull - Early Interests
University of Wisconsin
Books:
Aptitude Testing (1928)
Hypnosis and Suggestibility (1933)
32 papers

Hull's System - Yale
Stimuli and responses are assumed to be bridged by intervening variables such as:
Drive
Fatigue
Habit strength
Incentive

Hull's System
Example: sER = sHR × D × V × K (the leading S and trailing R subscripts indicate that E and H are defined for a particular stimulus-response pair)
E refers to excitatory (reaction) potential in a given situation
H refers to habit strength (a function of the number of previous reinforced trials in the situation)
D is drive strength (e.g., the number of hours of deprivation)
V refers to stimulus intensity
K refers to incentive motivation
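Since the formula is multiplicative, a quick worked example makes its behavior concrete. The following Python sketch uses purely hypothetical values (Hull defined each term operationally from experimental measures, so the numbers below are assumptions for illustration only):

```python
# Hull's reaction potential as a product of its factors: sER = sHR x D x V x K.
# All numeric values below are made up purely to illustrate the arithmetic.

def reaction_potential(habit_strength, drive, stimulus_intensity, incentive):
    """Return sER; because the factors multiply, any factor at zero eliminates the response."""
    return habit_strength * drive * stimulus_intensity * incentive

# A well-practiced response (H = 0.8) in a strongly deprived animal (D = 0.9),
# with a salient cue (V = 1.0) and a moderate incentive (K = 0.7):
print(reaction_potential(0.8, 0.9, 1.0, 0.7))  # 0.504

# The same habit with zero drive (a fully satiated animal) yields no behavior:
print(reaction_potential(0.8, 0.0, 1.0, 0.7))  # 0.0
```

The second call shows why the multiplicative form matters: a strong habit alone does not produce behavior unless drive and incentive are also present.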

Hull's Theory
Reinforcement played a key role.
Law of reinforcement: stimuli that reduce drive stimuli are reinforcing.
Secondary reinforcement: any stimulus consistently associated with primary reinforcers takes on reinforcing properties.

Hull - Legacy
Central figure in the development of quantitative approaches to behavior.
Principles of Behavior (1943)
A Behavior System (1952)

Edward C. Tolman
Tolman (1886-1959)
B.S. from MIT (1911)
Ph.D. in Psychology from Harvard (1915)
President of the APA (1937)

Tolman's System
Addressed a great range of topics that we encounter in our daily lives.
Focused on the role of cognition and purpose.
Wanted a psychology with true breadth of perspective that retained the desirable objective features of classical behaviorism.

Tolman's System
Believed that psychological processes intervene between stimuli and responses.
Intervening variables:
Cognitions
Expectancies
Purposes
Hypotheses
Appetite

Example: Expectancies
An expectancy develops when a reward follows each successful response.
It then becomes involved in directing and controlling behavior.

Tolman - Reinforcement
A reinforcer (e.g., food) has nothing to do with learning as such, but it does regulate the performance of learned responses.
Learning vs. reinforcement vs. performance
Cognitive maps

Tolman - Reinforcement
Latent learning: reinforcement influences motivation and hence performance, but learning itself is an independent process.

Tolman - Legacy
Behaviorism could be more…
Set up the cognitive movement…
Springboard for work in:
Motivation
Clinical Psychology
Neuropsychology

B.F. Skinner

Skinner Box

Skinner's Basic Law of Operant Conditioning
A response that is followed by a reinforcer is strengthened and is therefore more likely to occur again.
A reinforcer is a stimulus or event that increases the frequency of the response it follows.

Operant Conditioning
1) The reinforcer must follow the response.
2) The reinforcer must follow immediately.
3) The reinforcer must be contingent on the response.
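As a purely illustrative sketch (not Skinner's own formalism), the three conditions can be written as a simple check; the function name and the one-second delay window are assumptions made for this example.

```python
# Check whether a delivered reinforcer satisfies the three conditions listed above.

def reinforcer_likely_effective(response_time, reinforcer_time, contingent_on_response,
                                max_delay=1.0):
    follows = reinforcer_time > response_time                    # 1) follows the response
    immediate = (reinforcer_time - response_time) <= max_delay   # 2) follows immediately (assumed window)
    return follows and immediate and contingent_on_response      # 3) contingent on the response

print(reinforcer_likely_effective(0.0, 0.5, True))    # True: prompt and contingent
print(reinforcer_likely_effective(0.0, 10.0, True))   # False: too much delay
print(reinforcer_likely_effective(0.0, 0.5, False))   # False: delivered regardless of the response
```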

What Behaviors Can Be Reinforced?
Academic
Social
Psychomotor
Aggression
Criminal activity

Basic Concepts in Operant Conditioning
Shaping (successive approximations)
Shaping is a means of teaching a behavior when the free operant level for that behavior is very low (or when the desired terminal behavior is different in form from any responses that the organism exhibits).
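A toy simulation can illustrate how successive approximations work. In this Python sketch, the "response level", criterion step, and reinforcement rule are all assumptions invented for the illustration, not part of the lecture material.

```python
import random

# Toy model of shaping: reinforce any response that meets the current criterion,
# then raise the criterion slightly, until responses approximate the terminal behavior (1.0).

def shape(terminal=1.0, criterion=0.1, step=0.05, trials=500):
    for _ in range(trials):
        # Responses cluster around what has recently been reinforced.
        response = random.uniform(0.0, criterion + 0.2)
        if response >= criterion:                        # close enough to the current approximation
            criterion = min(terminal, criterion + step)  # reinforce, then demand a bit more next time
    return criterion

print(shape())  # climbs toward 1.0 as closer and closer approximations are reinforced
```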

The Nature of Reinforcers
Primary reinforcer: one that satisfies a built-in (perhaps biological) need or desire.
Examples: food, water, oxygen, warmth

The Nature of Reinforcers
Secondary (conditioned) reinforcer: a previously neutral stimulus that has become reinforcing to an organism through repeated association with another reinforcer.
Examples: praise, good grades, $$$, feelings of success

What Kinds of Consequences Do We Find Reinforcing?
Activity reinforcers: an opportunity to engage in a favorite activity.
Premack principle: a normally high-frequency response, when it follows a normally low-frequency response, will increase the frequency of the low-frequency response.
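A brief sketch of how the Premack principle might be applied: observe which activities the learner chooses most often at baseline, then make the most frequent one contingent on the low-frequency target behavior. The activity names and counts below are hypothetical.

```python
# Hypothetical weekly counts of freely chosen activities for one student.
baseline = {"math practice": 2, "drawing": 30, "reading": 12}

def premack_reinforcer(baseline_counts, target):
    """Pick the highest-frequency activity (other than the target) to follow the target behavior."""
    candidates = {activity: n for activity, n in baseline_counts.items() if activity != target}
    return max(candidates, key=candidates.get)

print(premack_reinforcer(baseline, "math practice"))  # 'drawing' -- the high-frequency activity
```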

What Kinds of Consequences Do We Find Reinforcing?
Material reinforcers: actual objects, like food or toys.
Social reinforcers: a gesture or sign from one person to another that communicates positive regard, like praise or a smile.

What Kinds of Consequences Do We Find Reinforcing?
Positive feedback: provides information as to which responses are desirable (and which are not).
Examples: material and social reinforcers

What Kinds of Consequences Do We Find Reinforcing?
Intrinsic reinforcers: an individual engages in a response not because of any external reinforcers but because of the internal good feelings (the intrinsic reinforcers) that the response brings.
Examples: feelings of success, feeling relieved, feeling proud

Schedules of Reinforcement
Ratio schedules: reinforcement occurs after a certain number of responses have been emitted (fixed or variable).
Interval schedules: reinforcement is contingent on the first response emitted after a certain time interval has elapsed (fixed or variable).

Ratio Schedules
Fixed ratio (FR): the reinforcer is presented after a certain constant number of responses have occurred.
Example: 1:3 or 1:10 (one reinforcer for every three or every ten responses)
Produces a high and consistent response rate.

Ratio Schedules
Variable ratio (VR): the reinforcer is presented after a particular, yet changing, number of responses have been emitted.
Example: on a 1:5 VR schedule you may first be reinforced after four responses, then after seven more, then after three, and so on, averaging about one reinforcer per five responses.

Interval Schedules
Fixed interval (FI): reinforcement is contingent on the first response emitted after a certain fixed time interval has elapsed.
Example: the organism may be reinforced for the first response emitted after five minutes have elapsed.

Interval Schedules
Variable interval (VI): reinforcement is contingent on the first response emitted after a certain time interval has elapsed, but the length of that interval keeps changing from one occasion to the next.
Example: the organism may be reinforced for the first response after five minutes, then the first response after eight minutes, then the first response after two minutes, and so on.
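To contrast the four schedules just described, here is a toy Python simulation of an organism that responds once per second for ten minutes. The ratio of 5 and the 30-second interval are illustrative assumptions, not values from the lecture.

```python
import random

def simulate(schedule, seconds=600):
    """Count reinforcers earned by an organism that responds once every second."""
    reinforcers = 0
    count, required = 0, 5        # ratio schedules: responses since the last reinforcer
    last, interval = 0, 30.0      # interval schedules: time of the last reinforcer
    for t in range(1, seconds + 1):                      # one response at each second t
        count += 1
        if schedule in ("FR5", "VR5") and count >= required:
            reinforcers, count = reinforcers + 1, 0
            if schedule == "VR5":
                required = random.randint(1, 9)          # varies each time, averaging 5
        elif schedule in ("FI30", "VI30") and t - last >= interval:
            reinforcers, last = reinforcers + 1, t
            if schedule == "VI30":
                interval = random.uniform(10, 50)        # varies each time, averaging about 30 s
    return reinforcers

for s in ("FR5", "VR5", "FI30", "VI30"):
    print(s, simulate(s))  # e.g., FR5/VR5 near 120, FI30/VI30 near 20
```

Because ratio schedules pay off in proportion to output, a fast responder earns far more reinforcers under FR/VR than under FI/VI, which is one reason ratio schedules produce the high, steady response rates noted above.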

Operant vs. Classical Conditioning
Operant conditioning:
Better explains voluntary activity.
Consequences are contingent on behavior.
Stimuli follow behavior: the rat runs the maze, then receives a reward.
Classical conditioning:
Better explains involuntary activity.
The CS is not contingent on behavior.
Stimuli precede behavior: the bell (CS) precedes salivation.

Observational Learning
Modeling (Albert Bandura)
People learn by observing the behavior of others.
Learning occurs without reinforcement.

Bandura's Study of Aggressive Behavior
Children watched a film of adults hitting and kicking a doll.
These children were more aggressive with the doll than children who did not see the film.

TV Violence and Aggressive Behavior
Correlational studies: children who watch a lot of violent TV behave more aggressively.
Best studies: TV watching is controlled and real-world behavior is observed.
Finding: TV violence seems to cause an increase in aggressive behavior (mainly in children who were already aggressive).

Modeling Prosocial Behavior
Bandura study: preschool children overcoming fear of dogs.
Bandura study: shy children learning to interact with others.