Presentation on theme: "Chapter 6 – Schedules of Reinforcement and Choice Behavior" — Presentation transcript:
1 Chapter 6 – Schedules of Reinforcement and Choice Behavior
Outline
- Simple Schedules of Intermittent Reinforcement
  - Ratio Schedules
  - Interval Schedules
  - Comparison of Ratio and Interval Schedules
- Choice Behavior: Concurrent Schedules
  - Measures of Choice Behavior
  - The Matching Law
- Complex Choice
  - Concurrent-Chain Schedules
  - Studies of "Self Control"
2 Simple Schedules of Intermittent Reinforcement
- Ratio schedules: RF depends only on the number of responses performed
- Continuous reinforcement (CRF): each response is reinforced
  - bar press = food
  - key peck = food
- CRF is rare outside the lab
- Partial or intermittent RF
3 Partial or Intermittent Schedules of Reinforcement
- FR (fixed ratio): a fixed number of operants (responses)
  - CRF is FR 1
  - FR 10 = every 10th response is RF
- Originally recorded using a cumulative record
- Now computers; can be graphed similarly
4 Figure 6.1 – The construction of a cumulative record by a cumulative recorder for the continuous recording of behavior.
5 The cumulative record represents responding as a function of time
- The slope of the line represents the rate of responding
- Steeper = faster
6 Responding on FR schedules
- Faster responding = sooner RF, so responding tends to be pretty rapid
- Postreinforcement pause
  - The postreinforcement pause is directly related to the size of the FR
  - Small FR (FR 5) = shorter pauses
  - Large FR (FR 100) = longer pauses; animals wait a while before they start working
- Domjan points out this may have more to do with the upcoming work than the recent RF
  - Pre-ratio pause?
9 How would you respond if you received $1 on an FR 5 schedule? FR 500?
- Post-RF pauses?
- RF-history explanation of the post-RF pause: contiguity of the 1st response and RF
  - FR 5: the 1st response is close to RF (only 4 more)
  - FR 100: the 1st response is a long way from RF (99 more)
10 VR (variable ratio) schedules
- The number of responses is still critical, but it varies from trial to trial
- VR 10: reinforced on average for every 10th response
  - Sometimes only 1 or 2 responses are required
  - Other times 15 or 19 responses are required
12 VR = very little postreinforcement pause
- Why would this be?
- Slot machines: a very lean schedule of RF
  - But the next lever pull could result in a payoff
13 FI (fixed interval) schedule
- The 1st response after a given time period has elapsed is reinforced
- FI 10 s: the 1st response after 10 s is RF
  - RF waits for the animal to respond
  - Responses prior to 10 s are not RF
- Scalloped responding pattern: the FI scallop
16 Similarity of the FI scallop and the post-RF pause?
- The FI scallop has been used to assess animals' ability to time.
17 VI (variable interval) schedule
- Time is still the important variable, but the elapsed-time requirement varies around a set average
- VI 120 s: time to RF can vary from a few seconds to a few minutes
- $1 on a VI 10 min schedule for button presses?
  - Could be RF in seconds; could be 20 minutes
  - Postreinforcement pause?
18 VI produces stable responding at a constant rate
- peck..peck..peck..peck..peck: sampling whether enough time has passed
- The rate on a VI schedule is not as fast as on FR and VR schedules. Why?
  - Ratio schedules are based on responses: faster responding gets you to the response requirement quicker, regardless of what that requirement is
  - On a VI schedule the number of responses doesn't matter, so a steady, even pace makes sense
19 Interval Schedules and Limited Hold
- Limited-hold restriction: once RF is set up, you must respond within a certain amount of time
- Like lunch at school: too late and you miss it
20 Comparison of Ratio and Interval Schedules
- What if you hold RF constant?
  - Rat 1 = VR
  - Rat 2 = yoked control rat on VI; RF is set up when Rat 1 earns his RF
- If Rat 1 responds faster, RF will set up sooner for Rat 2
- If Rat 1 is slower, RF will be delayed
22 Why is responding faster on ratio schedules? The molecular view
- Based on moment-by-moment RF of inter-response times (IRTs)
  - R1……………R2 RF: reinforces a long IRT
  - R1..R2 RF: reinforces a short IRT
- Short IRTs are more likely to be RF on VR than on VI
23 Molar view: feedback functions
- The average RF rate during the session is the result of the average response rate
- How can the animal increase reinforcement in the long run (across the whole session)?
- Ratio: respond faster = more RF for that day
  - FR 30: responding 1 per second → RF at 30 s; responding 2 per second → RF at 15 s
24 Molar view continued
- Interval: no real benefit to responding faster
  - Responding 1 per second → RF at 30 or 31 s (average 30.5)
  - What if 2 per second? → RF at 30 or 30.5 s (average 30.25)
- Pay: salary? clients?
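The molar feedback functions on these two slides can be sketched as a toy calculation (a simplified illustration of the slide's averages, not a formal model from the text; function names are mine):

```python
def seconds_per_rf_ratio(resp_per_sec, ratio):
    """Ratio schedule: time to reinforcement = ratio requirement / response rate."""
    return ratio / resp_per_sec

def seconds_per_rf_interval(resp_per_sec, interval_s):
    """Interval schedule (simplified): RF sets up after interval_s, then waits
    on average half an inter-response time for the next response."""
    return interval_s + 0.5 / resp_per_sec

# FR 30: doubling response rate halves the time to RF
assert seconds_per_rf_ratio(1.0, 30) == 30.0   # 1/s -> RF every 30 s
assert seconds_per_rf_ratio(2.0, 30) == 15.0   # 2/s -> RF every 15 s

# VI 30 s: doubling response rate barely helps
assert seconds_per_rf_interval(1.0, 30) == 30.5
assert seconds_per_rf_interval(2.0, 30) == 30.25
```

This reproduces the slide's point: on the ratio schedule the payoff for responding faster is large, on the interval schedule it is a fraction of a second.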
25 Choice Behavior: Concurrent Schedules
- The responding we have discussed so far has involved schedules where there is only one thing to do
- In real life we tend to have choices among various activities
- Concurrent schedules examine how an animal allocates its responding between two schedules of reinforcement
- The animals are free to switch back and forth
26 Figure 6.4 – Diagram of a concurrent schedule for pigeons.
27 Measures of choice behavior
- Relative rate of responding for the left key: BL / (BL + BR)
  - BL = behavior (responses) on the left key; BR = behavior on the right key
- We are just dividing left-key responding by total responding.
28 This computation is very similar to the computation for the suppression ratio.
- If the animals are responding equally to each key, what should our ratio be?
  - 20 / (20 + 20) = .50
- If they respond more to the left key?
  - 40 / (40 + 20) = .67
- If they respond more to the right key?
  - 20 / (20 + 40) = .33
29 Relative rate of responding for the right key
- Will be the complement of left-key responding, but can also be calculated with the same formula: BR / (BR + BL)
- Concurrent schedules? If VI 60 VI 60:
  - The relative rate of responding for either key will be .5
  - Animals split responding equally between the two keys
30 What about the relative rate of reinforcement?
- Left key: simply divide the rate of reinforcement on the left key by total reinforcement: rL / (rL + rR)
- VI 60 VI 60? If animals are dividing responding equally?
  - .50 again
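The relative-rate measure from the last few slides takes one line of code; a minimal sketch (the function name and example numbers for unequal responding are mine):

```python
def relative_rate(this_key, other_key):
    """Relative rate for one alternative: B_this / (B_this + B_other).
    Works for response rates (B) and reinforcement rates (r) alike."""
    return this_key / (this_key + other_key)

assert relative_rate(20, 20) == 0.5               # equal responding on each key
assert round(relative_rate(40, 20), 2) == 0.67    # more responding on the left key
assert round(relative_rate(20, 40), 2) == 0.33    # more responding on the right key
# the two keys' relative rates always sum to 1
assert abs(relative_rate(40, 20) + relative_rate(20, 40) - 1) < 1e-9
```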
31 The Matching Law
- The relative rate of responding matches the relative rate of RF when the same VI schedule is used on each key: .50 and .50
- What if different schedules of RF are used on each key?
32 Left key = VI 6 min (10 RF per hour); right key = VI 2 min (30 RF per hour)
- Left-key relative rate of responding:
  - BL / (BL + BR) = rL / (rL + rR) = 10/40 = .25 left
- Right key? Simply the complement: .75
  - Can be calculated, though: BR / (BR + BL) = rR / (rR + rL) = 30/40 = .75 right
- Thus three times as much responding on the right key: .25 × 3 = .75
33 Matching Law continued: a simpler computation
- BL / BR = rL / rR = 10/30
- Again, three times as much responding on the right key
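Both forms of the matching law give the same 1:3 split for the VI 6 min / VI 2 min example; a quick sketch following the slide's numbers (my own code, not from the text):

```python
r_left, r_right = 10, 30   # reinforcers per hour: VI 6 min vs. VI 2 min

# Relative-rate form: B_L/(B_L + B_R) = r_L/(r_L + r_R)
rel_left = r_left / (r_left + r_right)
rel_right = r_right / (r_right + r_left)
assert rel_left == 0.25 and rel_right == 0.75

# Ratio form: B_L/B_R = r_L/r_R -- the same prediction, written as a ratio
assert abs(r_left / r_right - rel_left / rel_right) < 1e-9   # both say 1:3
```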
34 Herrnstein (1961) compared various VI schedules: the Matching Law
- Figure 6.5 in your book
36 Application of the matching law
- The matching law indicates that we match our behaviors to the available RF in the environment
- Law, Bulow, and Meller (1998)
  - Predicted that adolescent girls who live in RF-barren environments would be more likely to engage in sexual behaviors
  - Girls who have a greater array of RF opportunities should allocate their behaviors toward those other activities
  - Surveyed girls about the activities they found rewarding and their sexual activity
  - The matching law did a pretty good job of predicting sexual activity
- Many kids today have a lot of RF opportunities
  - May make it more difficult to motivate behaviors you want them to do, like homework
  - X-box, texting friends, TV
37 Complex Choice
- Many of the choices we make require us to live with those choices; we can't always just switch back and forth
  - Go to college? Get a full-time job?
- Sometimes the short-term and long-term consequences (RF) of those choices are very different
  - Go to college: poor now; make more later
  - Get a full-time job: money now; less earning in the long run
38 Concurrent-Chain Schedules
- Allow us to examine these complex choice behaviors in the lab
- Example: do animals prefer a VR or an FR?
  - Variety is the spice of life?
39 Figure 6.6 – Diagram of a concurrent-chain schedule.
40 Subjects prefer the VR 10 over the FR 10
- Choice of A: 10 minutes on VR 10
- Choice of B: 10 minutes on FR 10
- Subjects prefer the VR 10 over the FR 10. How do we know?
- Subjects will even prefer VR schedules that require somewhat more responding than the FR
  - Why do you think that happens?
41 Studies of Self Control
- Often a matter of delaying immediate gratification (RF) in order to obtain a greater reward (RF) later
  - Study or go to the party?
  - Work in the summer to pay for school, or enjoy the time off?
42 Self control in pigeons? Rachlin and Green (1972)
- Choice A = immediate small reward
- Choice B = large reward after a 4 s delay
- Direct choice procedure: pigeons choose the immediate, small reward
- Concurrent-chain procedure: pigeons could learn to choose the larger reward
  - Only if there was a long enough delay between the initial choice and the next link
43 Figure 6.7 – Diagram of the experiment by Rachlin and Green (1972) on self control.
44 Value-discounting function
- The idea that imposing a delay between a choice and the eventual outcomes helps organisms make "better" (higher-RF) choices works for people too
- Value-discounting function: V = M / (1 + KD)
  - V = value of the RF
  - M = magnitude of the RF
  - D = delay of the reward
  - K = a correction factor for how much the animal is influenced by the delay
- All this equation is saying is that the value of a reward is inversely affected by how long you have to wait to receive it
- If there is no delay, D = 0, and value is simply magnitude over 1
45 If I offer you $50 now or $100 now?
- $50 now: V = 50 / (1 + 1×0) = 50
- $100 now: V = 100 / (1 + 1×0) = 100
- $50 now or $100 next year (D = 12 months, K = 1)?
  - $100 next year: V = 100 / (1 + 1×12) ≈ 7.7
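The worked example can be checked directly from V = M/(1 + KD); a minimal sketch, assuming K = 1 and delay measured in months as on the slide:

```python
def value(M, D, K=1.0):
    """Hyperbolic value-discounting: V = M / (1 + K*D)."""
    return M / (1 + K * D)

assert value(50, 0) == 50.0            # no delay: value = magnitude
assert value(100, 0) == 100.0
assert round(value(100, 12), 1) == 7.7 # $100 a year out (D = 12 months)
assert value(50, 0) > value(100, 12)   # the immediate $50 wins
```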
46 Figure 6.8 – Hypothetical relations between reward value and waiting time to reward delivery for a small reward and a large reward presented some time later.
47 Madden, Petry, Badger, and Bickel (1997)
- As noted above, K is a factor that allows us to correct these delay functions for individual differences in delay discounting
- People with steep delay-discounting functions will have a more difficult time delaying immediate gratification to meet long-term goals
  - Young children
  - Drug abusers
- Madden, Petry, Badger, and Bickel (1997)
  - Two groups: heroin-dependent patients and controls
  - Offered hypothetical choices: smaller $ now vs. more $ later
  - Amounts varied: $1,000, $990, $960, $920, $850, $800, $750, $700, $650, $600, $550, $500, $450, $400, $350, $300, $250, $200, $150, $100, $80, $60, $40, $20, $10, $5, and $1
  - Delays varied: 1 week, 2 weeks, 2 months, 6 months, 1 year, 5 years, and 25 years
50 It has been described mathematically in the following way (Baum, 1974):
- RA / RB = b (rA / rB)^a
- RA and RB refer to the rates of responding on keys A and B (i.e., left and right)
- rA and rB refer to the rates of reinforcement on those keys
- When the exponent a equals 1.0, a simple matching relationship occurs: the ratio of responses perfectly matches the ratio of reinforcers obtained
- The variable b adjusts for differences in response effort between A and B when they are unequal, or for unequal reinforcers on A and B
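Baum's generalized matching law can be sketched numerically (the parameter values below are illustrative, not from the text):

```python
def response_ratio(rA, rB, a=1.0, b=1.0):
    """Generalized matching law (Baum, 1974): R_A/R_B = b * (r_A/r_B)**a."""
    return b * (rA / rB) ** a

# a = 1, b = 1: strict matching -- response ratio equals reinforcement ratio
assert abs(response_ratio(10, 30) - 10 / 30) < 1e-9

# a < 1 ("undermatching"): preference is less extreme than the reinforcement ratio
strict = response_ratio(10, 30)         # reinforcement ratio, 1:3
under = response_ratio(10, 30, a=0.8)   # pulled toward indifference (1.0)
assert strict < under < 1.0
```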