Schedules of Reinforcement

Presentation transcript:

Schedules of Reinforcement
There are several alternative ways to arrange the delivery of reinforcement:
A. Continuous reinforcement (CRF), in which every appropriate response is reinforced
B. Intermittent schedules of reinforcement, in which only some responses are reinforced. There are two sub-categories of intermittent reinforcement schedules:
   1. Ratio Schedules
      a. Fixed Ratio (FR)
      b. Variable Ratio (VR)
   2. Interval Schedules
      a. Fixed Interval (FI)
      b. Variable Interval (VI)

Cumulative Frequency Graph
Used to illustrate the effects of different schedules of reinforcement
Each response is added to the previous one (graphed cumulatively)
The steeper the slope of the data path, the more rapid the rate of responding
[Graph: responses plotted cumulatively over time; the slope of the line shows the response rate]
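
To make the idea concrete, here is a minimal Python sketch (not part of the original slides; the function name and data are illustrative assumptions) of how a cumulative record could be built from a list of response times. A faster responder produces a steeper curve.

```python
def cumulative_record(response_times):
    """Convert response times (in seconds) into (time, cumulative count) points."""
    return [(t, count) for count, t in enumerate(sorted(response_times), start=1)]

# A fast responder's cumulative curve rises more steeply than a slow responder's.
fast_responder = cumulative_record([1, 2, 3, 4, 5, 6])
slow_responder = cumulative_record([3, 8, 15, 24])
print(fast_responder)  # [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]
print(slow_responder)  # [(3, 1), (8, 2), (15, 3), (24, 4)]
```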

Continuous Reinforcement (CRF)
Use during acquisition stage of learning
Reinforce each occurrence of desired behavior
Low resistance to extinction (behavior doesn’t persist when reinforcement is withheld)
Risk of reinforcer satiation, esp. with primary reinforcement

Intermittent Reinforcement
Reinforce some, but not all, desired responses
Use to maintain established behavior
Resistant to extinction
Satiation less likely

Intermittent Schedules
Ratio: reinforcement based on number of responses
   Fixed (FR): FR 10 -- every 10th correct response is reinforced
   Variable (VR): VR 10 -- on average, every 10th correct response is reinforced
Interval: reinforcement based on passage of time
   Fixed (FI): FI 2 min. -- 1st correct response after passage of 2 min. is reinforced
   Variable (VI): VI 2 min. -- 1st correct response after an average of 2 min. has passed is reinforced

FIXED RATIO SCHEDULES
Under fixed ratio schedules, the reinforcement is contingent on a set (fixed) number of responses (FR-2, FR-4, FR-17).
FR schedules result in a high rate of responding, with pauses after reinforcement. Increasing the ratio results in longer pauses.
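
As an illustrative sketch only (the class and method names are assumptions, not part of the slides), the fixed ratio rule amounts to a counter that delivers reinforcement on every Nth response:

```python
class FixedRatio:
    """Reinforce every `ratio`-th response (e.g., FR-4)."""

    def __init__(self, ratio):
        self.ratio = ratio
        self.count = 0

    def record_response(self):
        """Return True if this response earns reinforcement."""
        self.count += 1
        if self.count >= self.ratio:
            self.count = 0  # the count starts over after each reinforcer
            return True
        return False

fr4 = FixedRatio(4)
print([fr4.record_response() for _ in range(8)])
# [False, False, False, True, False, False, False, True]
```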

FIXED RATIO (FR)
High Response Rate
Poor Maintenance Under Extinction
Post-Reinforcement Pauses

Considerations
Behavior must occur a fixed number of times for reinforcement
Larger ratio = ratio strain. If too large, the student will cease to engage in the behavior
Therefore, should not thin reinforcement abruptly from a dense schedule (e.g., CRF) to a very lean FR schedule. Instead, proceed in smaller steps (FR 2, FR 4, etc.)
Example: assembly line labor
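
Building on the hypothetical FixedRatio sketch above, schedule thinning can be pictured as raising the ratio in small planned steps rather than jumping straight from CRF to a lean schedule. The plan and helper below are assumptions made for illustration only:

```python
# Thinning plan: CRF (FR 1) -> FR 2 -> FR 4 -> FR 8, moving to the next step
# only after the behavior stays steady at the current ratio.
thinning_steps = [1, 2, 4, 8]

def next_ratio(current_ratio, behavior_is_steady):
    """Advance one step in the thinning plan only when responding is holding up."""
    if not behavior_is_steady:
        return current_ratio  # stay put to avoid ratio strain
    later = [r for r in thinning_steps if r > current_ratio]
    return later[0] if later else current_ratio

print(next_ratio(1, True))   # 2
print(next_ratio(2, False))  # 2 (behavior not steady yet, do not thin further)
print(next_ratio(8, True))   # 8 (already at the leanest planned step)
```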

Example: FR 20
STO - Given 20 basic first grade sight words on note cards, Jason will orally read the words within 1 minute with fewer than 5 errors.
S R+ : every 20 words read = 2 min on the computer

VARIABLE RATIO SCHEDULES
On a VR schedule (VR-3, VR-10, VR-200), reinforcement may come at any time; on average, however, reinforcement comes after a given number of responses.
VR reinforcement results in high, steady response rates.
Extinction usually results in a high number of responses in a short time: responses come as rapid bursts of behavior, followed by increasing pauses and then abrupt cessation of responding.
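
By contrast, a variable ratio rule draws a new response requirement after each reinforcer so that the requirement varies but averages out to N. The sketch below is a hypothetical illustration (drawing the requirement uniformly around the mean is just one simple way to vary it), not a definitive implementation:

```python
import random

class VariableRatio:
    """Reinforce after a varying number of responses that averages `mean_ratio` (e.g., VR-10)."""

    def __init__(self, mean_ratio, spread=3, seed=None):
        self.mean_ratio = mean_ratio
        self.spread = spread
        self.rng = random.Random(seed)
        self.count = 0
        self.requirement = self._draw_requirement()

    def _draw_requirement(self):
        # Draw the next requirement uniformly around the mean (never below 1).
        return max(1, self.rng.randint(self.mean_ratio - self.spread,
                                       self.mean_ratio + self.spread))

    def record_response(self):
        self.count += 1
        if self.count >= self.requirement:
            self.count = 0
            self.requirement = self._draw_requirement()
            return True
        return False

vr10 = VariableRatio(10, seed=1)
earned = sum(vr10.record_response() for _ in range(200))
print(earned, "reinforcers in 200 responses")  # roughly 200 / 10 = 20
```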

VARIABLE RATIO (VR)
High Rates of Responding
Good Resistance to Extinction
No Post-Reinforcement Pauses
Reinforcement based on the average number of times the behavior occurs

Considerations
Reinforcement can’t be predicted
The more responses one makes, the more likely that reinforcement will occur
Therefore, supports high rates of responding with no post-reinforcement pauses
Because reinforcement can occur at any time, extinction isn’t likely
Thinning from CRF to a lean VR schedule is easily accomplished
Example: slot machine

Example: VR 6
STO - Given a teacher prompt to raise his hand during math seat work periods, Eric will raise his hand on 12 consecutive opportunities across 5 math classes.
S R+ : on average, every 6th hand raise = puzzle with a peer
(e.g., reinforcement after 9 hand raises, then 8, then 5, then 2 -- an average of 6)

FIXED INTERVAL SCHEDULES
Reinforcement under a fixed interval schedule is contingent on a correct response that occurs after the passage of time (FI-20 sec., FI-2 min., FI-1 hr.).
On an FI schedule, the first correct response after a fixed length of time is reinforced.
This results in scalloping -- pauses after reinforcement with increased responding at the end of the interval.
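
Here is a hypothetical sketch of the fixed interval rule (simulated clock times are passed in explicitly so the example stays self-contained; all names are assumptions): the first response after the interval has elapsed is reinforced, and the interval then restarts.

```python
class FixedInterval:
    """Reinforce the first response at least `interval` seconds after the last reinforcer (e.g., FI-20 sec)."""

    def __init__(self, interval):
        self.interval = interval
        self.start = 0.0  # when the current interval began

    def record_response(self, now):
        """`now` is the simulated time of the response, in seconds."""
        if now - self.start >= self.interval:
            self.start = now  # a new interval begins after reinforcement
            return True
        return False

fi20 = FixedInterval(20)
for t in [5, 15, 22, 30, 43]:
    print(t, fi20.record_response(t))
# Reinforced at t=22 (first response after 20 s) and at t=43 (first response
# more than 20 s after the previous reinforcer at t=22)
```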

FIXED INTERVAL (FI)
Post-Reinforcement Pauses (scallops)
Low to Moderate Response Rate
Poor Resistance to Extinction
S R+ at the end of the interval if the behavior occurred, or as soon as possible after the end of the interval

Considerations
Reinforcement at end of interval if behavior occurred in that interval, or as soon as the behavior occurs after the interval has expired
Extinction occurs if schedule is thinned too quickly (post-reinforcement interval is too long)
Example: waiting for a bus

Example: FI 1 min.
STO - Given the instruction to remain in his seat, Jason will be in his seat each time the timer rings* on 20 of 30 opportunities for 5 consecutive sessions. (*timer set for 1 min intervals)
S R+ : every 1 min = 1 min on computer

VARIABLE INTERVAL SCHEDULES
Reinforcement is contingent on the first correct response that occurs after an interval averaging a certain length of time (VI-20 sec., VI-3 min., VI-2 hr.).
VI reinforcement results in sustained responding at a low rate.
Extinction after interval schedules of reinforcement results in low, sustained responding that gradually tapers off. Extinction on interval schedules takes longer than it does on other schedules.
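
Finally, a hypothetical sketch of the variable interval rule (again, the class and the exponential sampling are assumptions made for illustration): the first response after an interval that varies around an average is reinforced, after which a new interval is drawn.

```python
import random

class VariableInterval:
    """Reinforce the first response after an interval averaging `mean_interval` seconds (e.g., VI-2 min)."""

    def __init__(self, mean_interval, seed=None):
        self.mean_interval = mean_interval
        self.rng = random.Random(seed)
        self.start = 0.0
        self.current = self._draw_interval()

    def _draw_interval(self):
        # Exponentially distributed intervals with the requested mean.
        return self.rng.expovariate(1.0 / self.mean_interval)

    def record_response(self, now):
        if now - self.start >= self.current:
            self.start = now
            self.current = self._draw_interval()
            return True
        return False

vi120 = VariableInterval(120, seed=1)  # VI-2 min, time in seconds
reinforcers = [t for t in range(0, 3600, 30) if vi120.record_response(t)]
print(len(reinforcers), "reinforcers in one hour of steady responding")
# Roughly one reinforcer every couple of minutes, no matter how fast the responding.
```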

VARIABLE INTERVAL (VI)
Low to Moderate Response Rate
No Post-Reinforcement Pauses
Best Resistance to Extinction

Considerations
Relatively low but stable response rate
No post-reinforcement pauses, because intervals between reinforcement are not predictable
Most resistance to extinction of any schedule
Example: timer game to promote in-seat behavior

Example: VI 5 min.
STO - During 35 minute reading activities without prompts, Jim will be on-task for 30 consecutive minutes 4 classes in a row.
S R+ : on average every 5 minutes

Post-test
Which schedules:
Result in high rates of behavior?
Result in low rates of behavior?
Result in pauses in responding?
Result in steady responding?
Are best for acquisition of behavior?
Are best for maintenance of behavior?
Lead to most rapid extinction?

Answers
Ratio schedules (FR, VR) result in high rates of responding
Interval schedules (FI, VI) result in low rates of responding
Fixed schedules (FR, FI) result in pauses in responding
Variable schedules (VR, VI) result in steady responding
A CRF schedule is best to use during the acquisition of a behavior
An intermittent schedule is best to use to maintain an established behavior
Extinction is most rapid on a CRF schedule