
Chapter 6 Operant Conditioning Schedules

Schedule of Reinforcement
–Appetitive outcome --> reinforcement; as a “shorthand” we call the appetitive outcome the “reinforcer”
–Assume that we’ve got something appetitive and motivating for each individual subject
–Schedules produce fairly consistent patterns of behaviour
–Recorded with a cumulative recorder

Cumulative Record
–Produced by a cumulative recorder
–Flat line = no responding; steeper slope = faster responding

Cumulative Recorder
[Diagram: paper strip fed over a roller, with a pen that steps with each response]

Recording Responses

The Accumulation of the Cumulative Record VI-25

Fixed Ratio (FR)
–N responses required; e.g., FR 25
–CRF = FR 1
–“Rise-and-run” cumulative record
–Postreinforcement pause
–Ratio strain
[Record labels: responses, reinforcement, no responses, “pen” resetting]

Variable Ratio (VR)
–Varies around a mean number of responses; e.g., VR 25
–Short, if any, postreinforcement pause
–Never know which response will be reinforced

Fixed Interval (FI)
–Depends on time; e.g., FI 25
–Postreinforcement pause; scalloping
–Clock doesn’t start until reinforcer given

Variable Interval (VI)
–Varies around a mean time; e.g., VI 25
–Don’t know when the interval has elapsed
–Clock doesn’t start until reinforcer given
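The four basic schedules above can be sketched in code. This is a minimal illustration, not from the slides: each schedule is a class whose `respond` method returns True when a response earns the reinforcer. The class names, time units, and randomization ranges are assumptions made for the sketch.

```python
import random

class FixedRatio:
    """FR n: reinforce every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """VR n: the required count varies, averaging n responses."""
    def __init__(self, n):
        self.n = n
        self._new_requirement()

    def _new_requirement(self):
        self.required = random.randint(1, 2 * self.n - 1)  # mean = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self._new_requirement()
            return True
        return False

class FixedInterval:
    """FI t: the first response at least t seconds after the last
    reinforcer is reinforced; the clock restarts only when the
    reinforcer is collected."""
    def __init__(self, t):
        self.t, self.last = t, 0.0

    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

class VariableInterval:
    """VI t: like FI, but the required interval varies, averaging t."""
    def __init__(self, t):
        self.t, self.last = t, 0.0
        self._new_interval()

    def _new_interval(self):
        self.required = random.uniform(0.0, 2.0 * self.t)  # mean = t

    def respond(self, now):
        if now - self.last >= self.required:
            self.last = now
            self._new_interval()
            return True
        return False

# On FR 3, exactly every third response is reinforced:
fr = FixedRatio(3)
print([fr.respond() for _ in range(6)])  # [False, False, True, False, False, True]
```

Note how the ratio classes count responses while the interval classes consult only elapsed time; that difference is what produces the distinct response patterns described on these slides.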

Response Rates

Duration Schedules
–Continuous responding for some time period to receive reinforcement
–Fixed duration (FD): set time period
–Variable duration (VD): varies around a mean

Differential Rate Schedules
–Differential reinforcement of low rates (DRL): reinforcement only if X amount of time has passed since the last response; sometimes produces “superstitious behaviours”
–Differential reinforcement of high rates (DRH): reinforcement only if more than X responses occur in a set time
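The two differential-rate rules reduce to simple checks on timestamps. A hedged sketch, assuming times in seconds; the function and parameter names are illustrative, not a standard API.

```python
def drl_ok(now, last_response, x):
    """DRL: reinforce only if at least x seconds have passed
    since the last response."""
    return now - last_response >= x

def drh_ok(response_times, now, window, min_responses):
    """DRH: reinforce only if more than min_responses responses
    occurred within the last `window` seconds."""
    recent = [t for t in response_times if now - t <= window]
    return len(recent) > min_responses

# Responding too soon fails DRL 10 s; a burst of responses passes DRH:
print(drl_ok(now=12.0, last_response=5.0, x=10.0))  # False
print(drh_ok([6.0, 7.0, 8.0, 9.0], now=10.0, window=5.0, min_responses=2))  # True
```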

Noncontingent Schedules
–Reinforcement delivery not contingent on responding; depends only on the passage of time
–Fixed time (FT): after a set time elapses
–Variable time (VT): after a variable time elapses

Choice Behaviour

Choice
–Two-key procedure
–Concurrent schedules of reinforcement: each key associated with a separate schedule
–Measure the distribution of time and behaviour between the keys

Concurrent Ratio Schedules
–Two ratio schedules
–The schedule that gives the most rapid reinforcement is chosen exclusively

Concurrent Interval Schedules
–To maximize reinforcement, must shift between alternatives
–Allows for the study of choice behaviour

Interval Schedules
–FI-FI: steady-state responding; less useful/interesting
–VI-VI: not steady-state responding; respond to both alternatives; sensitive to rate of reinforcement; most commonly used to study choice

Alternation and the Changeover Response
–Maximize reinforcers from both alternatives
–Frequent shifting can itself become reinforcing: simple alternation; “concurrent superstition”

Changeover Delay (COD)
–Prevents rapid switching
–Time delay after a “changeover” before reinforcement is possible

Herrnstein’s (1961) Experiment
–Concurrent VI-VI schedules
–Overall rate of reinforcement held constant: 40 reinforcers/hour across the two alternatives

[Table: for each condition, the schedule on each key (e.g., VI-3 min vs. VI-3 min; VI vs. extinction), reinforcers/hour, responses/hour, and the resulting reinforcement and response rates]

Proportional rate of response: B1 / (B1 + B2), where B1 = responses on key 1 and B2 = responses on key 2
Proportional rate of reinforcement: R1 / (R1 + R2), where R1 = reinforcers on key 1 and R2 = reinforcers on key 2
Matching: R1 / (R1 + R2) = B1 / (B1 + B2); e.g., 0.5 with two equal VI-3 min schedules, and about 0.08 in the leanest condition

The Matching Law The proportion of responses directed toward one alternative should equal the proportion of reinforcers delivered by that alternative.
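The matching law's statement is a single ratio. A sketch of the arithmetic, with an illustrative function name:

```python
def matching_prediction(r1, r2):
    """Predicted proportion of responses on alternative 1:
    R1 / (R1 + R2) = B1 / (B1 + B2)."""
    return r1 / (r1 + r2)

# Two equal VI schedules (20 rft/hr each) predict indifference:
print(matching_prediction(20, 20))  # 0.5
# A 3:1 reinforcement ratio predicts 75% of responses to the richer key:
print(matching_prediction(30, 10))  # 0.75
```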

Bias
–Spend more time on one alternative than matching predicts
–Sources: side preferences, biological predispositions, reinforcer quality and amount

Varying Quality of Reinforcers Q 1 : quality of first reinforcer Q 2 : quality of second reinforcer

Varying Amount of Reinforcers A 1 : amount of first reinforcer A 2 : amount of second reinforcer

Combining Qualities and Amounts
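The quality and amount terms from the preceding slides can be folded into the matching ratio. A sketch assuming the multiplicative generalized form B1/B2 = bias x (R1/R2) x (Q1/Q2) x (A1/A2); the function signature and the bias parameter are assumptions for illustration.

```python
def response_ratio(r1, r2, q1=1.0, q2=1.0, a1=1.0, a2=1.0, bias=1.0):
    """Predicted B1/B2 combining rate (R), quality (Q), and
    amount (A) of each reinforcer, plus a bias term."""
    return bias * (r1 / r2) * (q1 / q2) * (a1 / a2)

# Equal rates and qualities, but reinforcer 1 is twice as large,
# so twice as much responding on alternative 1 is predicted:
print(response_ratio(10, 10, a1=2.0, a2=1.0))  # 2.0
```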

Extinction

–Disrupt the three-term contingency
–Response rate decreases

Stretching the Ratio/Interval
–Increasing the number of responses (or the time) required; e.g., FR 5 --> FR 50, VI 4 sec --> VI 30 sec
–Stretching too fast risks extinction
–Use shaping: gradual increments
–“Low” or “high” schedules

Extinction
–Continuous reinforcement (CRF) = FR 1; intermittent schedule = everything else
–CRF is easier to extinguish than any intermittent schedule: the partial reinforcement effect (PRE)
–Generally: high schedules are more resistant than low, and variable more resistant than fixed

Discrimination Hypothesis
–It is difficult to discriminate between extinction and an intermittent schedule
–High schedules are more like extinction than low schedules; e.g., CRF vs. FR 50

Frustration Hypothesis
–Non-reinforcement of a response is frustrating
–On CRF every response is reinforced, so there is no frustration
–Intermittent schedules always include some non-reinforced responses: responding while frustrated still leads to the reinforcer (positive reinforcement), so frustration becomes an S+ for reinforcement
–Frustration grows continually during extinction; stopping responding stops the frustration (negative reinforcement)

Sequential Hypothesis
–Each response is followed by reinforcement or nonreinforcement
–On intermittent schedules, nonreinforced responses become an S+ for the eventual delivery of the reinforcer
–High schedules increase resistance to extinction because many nonreinforced responses in a row lead to a reinforced one, so extinction resembles a high schedule

Response Unit Hypothesis
–Think in terms of behavioural “units”
–FR 1: 1 response = 1 unit --> reinforcement
–FR 2: 2 responses = 1 unit --> reinforcement; not “response-failure, response-reinforcer” but “response-response-reinforcer”
–On this view the PRE is an artifact of counting responses instead of units

Mowrer & Jones (1945)
–Tested the response unit hypothesis
–The extra responding in extinction seen on higher schedules disappears when responses are counted as behavioural units
[Figure: number of responses/units during extinction for FR 1 through FR 4; absolute responses rise with the ratio, behavioural units stay roughly constant]
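The recount in the spirit of Mowrer & Jones is just a division: raw extinction responses over the ratio requirement gives behavioural units. A sketch; the numbers below are made up for illustration, not data from the study.

```python
def behavioural_units(responses, ratio_requirement):
    """Convert raw response counts into behavioural units by
    dividing by the schedule's ratio requirement."""
    return responses / ratio_requirement

# If FR 4 yields about four times the raw extinction responses of FR 1,
# the counts expressed as behavioural units come out roughly equal:
print(behavioural_units(100, 1))  # 100.0
print(behavioural_units(400, 4))  # 100.0
```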

Economic Concepts and Operant Behaviour
–Similarities between economic and operant concepts
–Application of economic theories to behavioural conditions

The Economic Analogy
–Responses or time = money
–Total responses or time possible = income
–Schedule = price

Consumer Demand
–Demand curve: relates the price of something to how much is purchased
–Elasticity of demand
[Graph: amount purchased vs. price, showing an elastic curve and an inelastic curve]

Three Factors in Elasticity of Demand
1. Availability of substitutes
–Can’t substitute complementary reinforcers (e.g., food and water)
–Can substitute non-complementary reinforcers (e.g., Coke and Pepsi)
2. Price range
–e.g., FR 3 to FR 5 vs. FR 30 to FR 50

3. Income level
–The higher the total responses/time available, the less effect cost increases have
–Increased income --> purchase of luxury items
–Shurtleff et al. (1987): two VI schedules, food vs. saccharin water; on high schedules rats spend most time on the food lever; on low schedules rats increase time on the saccharin lever

Behavioural Economics and Drug Abuse
–Addictive drugs studied with nonhuman animal models
–Elasticity: animals work for a drug reinforcer on an FR schedule; demand is inelastic...up to a point
[Graph: response rate vs. price (FR schedule) from low to very high]

Elsmore, et al. (1980)
–Baboons; food and heroin
–Availability of substitutes
[Graph: frequency of choice / response rate for food vs. heroin at 2-minute vs. 12-minute intervals]