1
Neobehaviorists
2
Neobehaviorism
Life after Watson
Optimism
But….
3
Neobehaviorists
Influenced by Watson? Clearly.
Hull
Tolman
Skinner
4
Clark Hull
Hull (1884-1952)
Ph.D. from University of Wisconsin in 1918
Invited to Yale in 1929
President of APA in 1936
5
Hull – Early Interests
University of Wisconsin
Books:
Aptitude Testing (1928)
Hypnosis and Suggestibility (1933)
32 papers
6
Hull’s System - Yale
Stimuli and responses are assumed to be bridged by intervening variables such as:
Drive
Fatigue
Habit strength
Incentive
7
Hull’s System
Example: sER = sHR × D × V × K
sER refers to reaction potential in a given situation
sHR refers to habit strength (a function of the number of previous reinforced trials in the situation)
D is drive strength (e.g., the number of hours of deprivation)
V refers to stimulus intensity
K refers to incentive motivation
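To make the multiplicative form concrete, here is a minimal Python sketch. The functional forms (the growth curve for habit strength and the simple scaling of D, V, and K) are illustrative assumptions, not Hull’s actual postulates or constants:

```python
import math

# Minimal sketch of Hull's multiplicative formula sER = sHR x D x V x K.
# The growth function and parameter values below are illustrative assumptions.

def habit_strength(reinforced_trials: int, rate: float = 0.1) -> float:
    """sHR: grows toward 1.0 as the number of reinforced trials increases."""
    return 1.0 - math.exp(-rate * reinforced_trials)

def reaction_potential(reinforced_trials: int,
                       drive: float,               # D, e.g., scaled hours of deprivation
                       stimulus_intensity: float,  # V
                       incentive: float) -> float: # K
    """sER: the multiplicative combination of the intervening variables."""
    return habit_strength(reinforced_trials) * drive * stimulus_intensity * incentive

# More deprivation (D) or a larger reward (K) raises sER,
# but if any factor is zero, no response tendency is predicted.
print(reaction_potential(reinforced_trials=30, drive=0.8,
                         stimulus_intensity=0.9, incentive=0.7))
```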
8
Hull’s Theory
Reinforcement:
Played a key role
Law of reinforcement:
Stimuli that reduce drive (drive stimuli) are reinforcing.
Secondary reinforcement:
Any stimulus consistently associated with primary reinforcers takes on reinforcing properties.
9
Hull - Legacy
Central figure in the development of quantitative approaches to behavior.
Principles of Behavior (1943)
A Behavior System (1952)
10
Edward C. Tolman
Tolman (1886-1959)
B.S. from MIT (1911)
Ph.D. in Psychology from Harvard (1915)
President of APA (1937)
11
Tolman’s System
Addressed a great range of topics that we encounter in our daily lives
Focus on the role of cognition and purpose
Wanted a psychology with true breadth of perspective that retained the desirable objective features of classical behaviorism.
12
Tolman’s System
Believed that psychological processes intervene between stimuli and responses.
Intervening variables:
Cognitions
Expectancies
Purposes
Hypotheses
Appetite
13
Example: Expectancies
An expectancy develops when a reward follows each successful response.
It then becomes involved in directing and controlling behavior.
14
Tolman - Reinforcement
A reinforcer (e.g., food) has nothing to do with learning as such, but does regulate the performance of learned responses.
Learning vs. Reinforcement vs. Performance
Cognitive maps
15
Tolman - Reinforcement
Latent learning
Reinforcement influences motivation and hence performance, but learning itself is an independent process.
16
Tolman - Legacy
Behaviorism could be more…
Set up the cognitive movement…
Springboard for work in:
Motivation
Clinical Psychology
Neuropsychology
17
B.F. Skinner
18
Skinner Box
20
Skinner’s Basic Law of Operant Conditioning
A response that is followed by a reinforcer is strengthened and is therefore more likely to occur again.
A reinforcer is a stimulus or event that increases the frequency of the response it follows.
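As a rough illustration of the law, the sketch below assumes a simple probability-update rule; the rule and its step size are assumptions made for illustration, not Skinner’s own formulation (which described behavior in terms of response rates rather than internal probabilities):

```python
import random

# Minimal sketch of the basic law of operant conditioning: a response that is
# followed by a reinforcer becomes more likely to occur again.
# The update rule and step size are illustrative assumptions.

def simulate(trials: int = 50, step: float = 0.05) -> float:
    p_press = 0.1  # initial likelihood of the target response (e.g., a lever press)
    for _ in range(trials):
        if random.random() < p_press:      # the response is emitted...
            # ...and is immediately followed by a reinforcer,
            # so its future frequency increases.
            p_press = min(1.0, p_press + step)
    return p_press

print(f"Response probability after training: {simulate():.2f}")
```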
21
Operant Conditioning
1) The reinforcer must follow the response.
2) The reinforcer must follow immediately.
3) The reinforcer must be contingent on the response.
22
What Behaviors Can Be Reinforced?
Academic
Social
Psychomotor
Aggression
Criminal Activity
23
Basic Concepts in OC
Shaping (successive approximations)
Shaping is a means of teaching a behavior when the free operant level for that behavior is very low (or when the desired terminal behavior is different in form from any responses that the organism exhibits).
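A minimal sketch of shaping, assuming a hypothetical "approach the lever" task: any response meeting the current criterion is reinforced, and the criterion is then tightened toward the terminal behavior. The task and all numbers are illustrative assumptions:

```python
import random

# Minimal sketch of shaping by successive approximations: reinforce any response
# at least as close to the target as the current criterion, then tighten the
# criterion. The "distance from the lever" task and numbers are hypothetical.

def shape(trials: int = 500) -> float:
    target = 0.0        # terminal behavior: being right at the lever (distance 0 cm)
    typical = 60.0      # where free-operant behavior starts (far from the lever)
    criterion = 55.0    # how close an approximation must be to earn a reinforcer
    for _ in range(trials):
        response = random.gauss(typical, 8.0)      # behavior varies around its typical value
        if abs(response - target) <= criterion:    # good enough approximation?
            typical = response                     # reinforcement makes this variant typical
            criterion = max(1.0, criterion * 0.9)  # demand a closer approximation next time
    return typical

print(f"Typical distance after shaping: {shape():.1f} cm (started at 60.0 cm)")
```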
24
The Nature of Reinforcers
Primary Reinforcer:
One that satisfies a built-in (perhaps biological) need or desire.
Examples:
Food
Water
Oxygen
Warmth
25
The Nature of Reinforcers
Secondary (Conditioned) Reinforcers:
A previously neutral stimulus that has become reinforcing to an organism through repeated association with another reinforcer.
Examples:
Praise
Good grades
$$$
Feelings of success
26
What Kinds of Consequences Do We Find Reinforcing?
Activity reinforcers
An opportunity to engage in a favorite activity.
Premack Principle:
A normally high-frequency response, when it follows a normally low-frequency response, will increase the frequency of the low-frequency response.
27
What Kinds of Consequences Do We Find Reinforcing?
Material reinforcers
Actual objects like food or toys
Social reinforcers
A gesture or sign from one person to another that communicates positive regard, like praise or a smile.
28
What Kinds of Consequences Do We Find Reinforcing?
Positive feedback:
Provides information as to which responses are desirable (and which are not).
Examples: material and social reinforcers
29
What Kinds of Consequences Do We Find Reinforcing?
Intrinsic reinforcers
When an individual engages in a response not because of any external reinforcers but because of the internal good feelings (the intrinsic reinforcers) that such a response brings.
Examples: feelings of success, feeling relieved, feeling proud
30
Schedules of Reinforcement
Ratio Schedules:
A schedule in which reinforcement occurs after a certain number of responses have been emitted (fixed or variable).
Interval Schedules:
A schedule in which reinforcement is contingent on the first response emitted after a certain time interval has elapsed (fixed or variable).
31
Ratio Schedules
Fixed Ratio (FR):
The reinforcer is presented after a certain constant number of responses have occurred.
Example: FR 3 or FR 10 (one reinforcer per 3 or 10 responses)
Produces a high and consistent response rate
32
Ratio Schedules
Variable Ratio (VR):
Reinforcement is presented after a particular, yet changing, number of responses have been emitted.
Example: On a VR 5 schedule you may first be reinforced after four responses, then after seven more, then after three, etc. (an average of five responses per reinforcer).
33
Interval Schedules
Fixed Interval (FI):
Reinforcement is contingent on the first response emitted after a certain fixed time interval has elapsed.
Example: The organism may be reinforced for the first response emitted after five minutes have elapsed.
34
Interval Schedules
Variable Interval (VI):
Reinforcement is contingent on the first response emitted after a certain time interval has elapsed, but the length of that interval keeps changing from one occasion to the next.
Example: The organism may be reinforced for the first response after five minutes, then the first response after eight minutes, then the first response after two minutes, etc.
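The four schedules can be summarized as rules that decide whether a given response earns a reinforcer. The sketch below is a minimal illustration; the function names, parameters, and the one-response-per-second demo are assumptions, not taken from the slides:

```python
import random

# Minimal sketch of the four schedules: each "schedule" decides whether a
# response, emitted at a given time (in seconds), earns a reinforcer.

def ratio_schedule(n, variable=False):
    """FR n / VR n: reinforce after n responses (VR: a changing count averaging n)."""
    state = {"count": 0, "required": random.randint(1, 2 * n - 1) if variable else n}
    def respond(_time):
        state["count"] += 1
        if state["count"] >= state["required"]:
            state["count"] = 0
            state["required"] = random.randint(1, 2 * n - 1) if variable else n
            return True
        return False
    return respond

def interval_schedule(t, variable=False):
    """FI t / VI t: reinforce the first response after the interval elapses (VI: interval varies around t)."""
    state = {"ready_at": random.uniform(0, 2 * t) if variable else t}
    def respond(time):
        if time >= state["ready_at"]:
            state["ready_at"] = time + (random.uniform(0, 2 * t) if variable else t)
            return True
        return False
    return respond

# Demo: one response per second for 60 seconds under each schedule.
schedules = {
    "FR 5": ratio_schedule(5),
    "VR 5": ratio_schedule(5, variable=True),
    "FI 10": interval_schedule(10),
    "VI 10": interval_schedule(10, variable=True),
}
for name, respond in schedules.items():
    reinforcers = sum(respond(second) for second in range(1, 61))
    print(f"{name}: {reinforcers} reinforcers for 60 responses")
```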
36
Operant vs. Classical Conditioning
Operant Conditioning
Better explains voluntary activity.
Consequences are contingent on behavior.
Stimuli follow behavior: rat runs maze, receives reward
Classical Conditioning
Better explains involuntary activity.
CS not contingent on behavior.
Stimuli precede behavior: bell (CS) precedes salivation
37
Observational Learning
Modeling (Albert Bandura)
People learn by observing the behavior of others
Learning occurs without reinforcement
38
Bandura study on Aggressive Behavior
Children watched a film of adults hitting and kicking a doll
These children were more aggressive with the doll than children who didn’t see the film
39
TV Violence & Aggressive Behavior
Correlational studies: Children who watch a lot of violent TV behave more aggressively
Best studies: TV watching is controlled and real-world behavior is observed.
Finding: TV violence seems to cause an increase in aggressive behavior (mainly in children who were already aggressive)
40
Modeling Prosocial Behavior
Bandura study: Preschool children overcoming fear of dogs
Bandura study: Shy children learn to interact with others