5 Early Operant Conditioning
E. L. Thorndike (1898): puzzle boxes and cats
Situation: stimuli inside the puzzle box
First trial in box: scratch at bars, push at ceiling, dig at floor, howl, etc., press lever
After many trials in box: press lever
7 B.F. Skinner and Operant Conditioning Classical conditioning involves an automatic response to a stimulus. Operant conditioning involves learning to perform a behavior (pressing a lever, for example) to obtain a reward or avoid a punishment.
8 Skinner’s Experiments Skinner’s experiments extend Thorndike’s thinking, especially his law of effect. This law states that rewarded behavior is likely to occur again. Yale University Library
9 Operant Conditioning Operant behavior: operates (acts) on the environment; produces consequences. Respondent behavior: occurs as an automatic response to a stimulus; learned through classical conditioning.
10 Operant Chamber Skinner Box chamber with a bar or key that an animal manipulates to obtain a food or water reinforcer contains devices to record responses
11 Operant Chamber Examples. Walter Dawn/ Photo Researchers, Inc. From The Essentials of Conditioning and Learning, 3rd Edition by Michael P. Domjan, 2005. Used with permission by Thomson Learning, Wadsworth Division
12 The “Skinner Box” Rats placed in “Skinner boxes” Shaped to get closer and closer to the bar in order to receive food Eventually required to press the bar to receive food Food is a reinforcer
13 Shaping Shaping is the operant conditioning procedure in which reinforcers guide behavior toward the desired target behavior through successive approximations. A rat shaped to sniff mines. A manatee shaped to discriminate objects of different shapes, colors, and sizes. Khamis Ramadhan/ Panapress/ Getty Images Fred Bavendam/ Peter Arnold, Inc.
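The successive-approximation idea can be sketched as a toy loop (all names and numbers here are illustrative, not from the slides): reinforce any response that meets the current criterion, then raise the criterion slightly, until behavior reaches the target.

```python
import random

def shape(target=10, sessions=200, seed=42):
    """Toy shaping loop. Reinforce any response that meets the current
    criterion, then raise the criterion a little, so reinforcement
    guides behavior toward the target by successive approximations.
    All numbers are illustrative, not from the slides."""
    rng = random.Random(seed)
    criterion = 1             # start by rewarding any rough approximation
    skill = 0.0               # the learner's current typical performance
    for _ in range(sessions):
        response = skill + rng.uniform(0, 3)   # behavior varies trial to trial
        if response >= criterion:              # approximation is close enough
            skill += 0.2                       # reinforcement strengthens it
            criterion = min(target, criterion + 0.2)  # demand a bit more
    return criterion

print(shape())  # the criterion has been walked up to the target behavior (10)
```

The key design point matches the slide: the full target behavior is never demanded at once; each reinforced approximation moves the criterion closer to it.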
16 Types of Reinforcers Reinforcement: Any event that strengthens the behavior it follows. A heat lamp positively reinforces a meerkat’s behavior in the cold. Reuters/ Corbis
17 Types of Reinforcement Positive reinforcer (+) –Adds something rewarding following a behavior, making that behavior more likely to occur again –Giving a dog a treat for fetching a ball is an example Negative reinforcer (-) –Removes something unpleasant that was already in the environment following a behavior, making that behavior more likely to occur again –Taking an aspirin to relieve a headache is an example
30 Figure 6.18 Positive reinforcement versus negative reinforcement
31 Figure 6.20 Comparison of negative reinforcement and punishment
32 IMPORTANT!! Negative reinforcement is NOT punishment. Negative reinforcement is the REMOVAL of an unpleasant stimulus when the target behavior is observed (a positive consequence of behavior; it increases the behavior). Punishment is the introduction of an aversive (unpleasant) stimulus, or the removal of a pleasant stimulus, as a consequence of behavior (a negative consequence of behavior; it decreases the behavior).
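The distinction above reduces to two questions: is a stimulus added or removed, and is it pleasant or unpleasant? A minimal sketch (the function name is made up for illustration; the "positive/negative punishment" labels are standard terminology, not from the slides):

```python
def classify(stimulus_added: bool, stimulus_pleasant: bool) -> str:
    """Map a consequence to one of the four categories.
    Reinforcement increases behavior; punishment decreases it."""
    if stimulus_added and stimulus_pleasant:
        return "positive reinforcement"   # add pleasant -> behavior increases
    if not stimulus_added and not stimulus_pleasant:
        return "negative reinforcement"   # remove unpleasant -> behavior increases
    if stimulus_added and not stimulus_pleasant:
        return "positive punishment"      # add unpleasant -> behavior decreases
    return "negative punishment"          # remove pleasant -> behavior decreases

print(classify(False, False))  # taking aspirin removes a headache
```

Running the example prints "negative reinforcement": the aspirin removes an unpleasant stimulus, so aspirin-taking becomes more likely, not less.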
33 Punishment An aversive event that decreases the behavior it follows.
34 Primary & Secondary Reinforcers 1. Primary Reinforcer: An innately reinforcing stimulus, like food or drink (satisfies a biological need). 2. Conditioned (Secondary) Reinforcer: A learned reinforcer that gets its reinforcing power through association with a primary reinforcer.
35 Immediate & Delayed Reinforcers 1. Immediate Reinforcer: A reinforcer that occurs instantly after a behavior. A rat gets a food pellet for a bar press. 2. Delayed Reinforcer: A reinforcer that comes some time after a behavior. A paycheck that comes at the end of the week.
36 Reinforcement Schedules 1.Continuous Reinforcement: Reinforces the desired response each time it occurs. 2.Partial (intermittent) Reinforcement: Reinforces a response only part of the time. Though this results in slower acquisition in the beginning, it shows greater resistance to extinction later on.
37 Schedules of Reinforcement Partial reinforcement lies between continuous reinforcement and extinction
38 Schedules of Reinforcement Fixed Ratio (FR) reinforces a response only after a specified number of responses. The faster you respond, the more rewards you get. Different ratios are possible; produces a very high rate of responding (like piecework pay).
39 Schedules of Reinforcement Variable Ratio (VR) reinforces a response after an unpredictable number of responses (like gambling or fishing); very hard to extinguish because of the unpredictability. Mnemonic: SLOT machines show SLOwesT extinction.
40 Schedules of Reinforcement Fixed Interval (FI) reinforces a response only after a specified (fixed) time has elapsed response occurs more frequently as the anticipated time for reward draws near
41 Schedules of Reinforcement Variable Interval (VI) reinforces a response at unpredictable time intervals; produces slow, steady responding (like studying for a pop quiz).
Intermittent Reinforcement Schedules Summary
Predictable: Fixed Ratio (FR), based on the number of necessary responses; Fixed Interval (FI), based on the time that must first pass.
Unpredictable ("on the average"): Variable Ratio (VR), based on the number of necessary responses; Variable Interval (VI), based on the time that must first pass.
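The four schedules can be contrasted with a toy simulation, assuming one response per second and illustrative parameter values (FR-5, VR-5, FI 10 s, VI with a mean of 10 s); nothing here comes from the slides beyond the schedule definitions.

```python
import random

def run(schedule, responses=100, seed=1):
    """Count reinforcers earned over `responses` responses, one per second.
    The parameter choices (FR-5, VR-5, FI 10 s, VI mean 10 s) are illustrative."""
    rng = random.Random(seed)
    since_rf = 0          # responses since the last reinforcer (ratio schedules)
    last_time = -10       # time of the last reinforcer (interval schedules)
    next_vi = rng.expovariate(1 / 10)   # next unpredictable VI availability
    earned = 0
    for t in range(responses):          # one response at each second t
        if schedule == "FR":            # fixed ratio: every 5th response
            since_rf += 1
            hit = since_rf == 5
        elif schedule == "VR":          # variable ratio: 1-in-5 on average
            hit = rng.random() < 0.2
        elif schedule == "FI":          # fixed interval: 10 s must elapse
            hit = t - last_time >= 10
        else:                           # "VI": unpredictable interval, mean 10 s
            hit = t >= next_vi
        if hit:
            earned += 1
            since_rf, last_time = 0, t
            next_vi = t + rng.expovariate(1 / 10)
    return earned

print(run("FR"))  # 20 reinforcers: exactly every 5th of 100 responses
print(run("FI"))  # 10 reinforcers: one each time 10 s has elapsed
```

Note the design contrast: on ratio schedules the payoff depends on how many responses are made, so responding faster earns more, while on interval schedules only the passage of time matters, so responding faster earns nothing extra.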
44 You do not have to write down the following examples.
45 FI, VI, FR, or VR? 1.When I bake cookies, I can only put one set in at a time, so after 10 minutes my first set of cookies is done. After another ten minutes, my second set of cookies is done. I get to eat a cookie after each set is done baking. 2.After every 10 math problems that I complete, I allow myself a 5 minute break. 3.I look over my notes every night because I never know how much time will go by before my next pop quiz. 4.When hunting season comes around, sometimes I’ll spend all day sitting in the woods waiting to get a shot at a big buck. It’s worth it though when I get a nice 10 point. 5.Today in Psychology class we were talking about Schedules of Reinforcement and everyone was eagerly raising their hands and participating. Miranda raised her hand a couple of times and was eventually called on. 1.FI 2.FR 3.VI 4.VI 5.VR
46 FI, VI, FR, or VR? 6. Madison spanks her son if she has to ask him three times to clean up his room. 7. Emily has a spelling test every Friday. She usually does well and gets a star sticker. 8. Steve’s a big gambling man. He plays the slot machines all day hoping for a big win. 9.Snakes get hungry at certain times of the day. They might watch any number of prey go by before they decide to strike. 10.Mr. Bertani receives a salary paycheck every 2 weeks. (Miss Suter doesn’t ). 11.Christina works at a tanning salon. For every 2 bottles of lotion she sells, she gets 1 dollar in commission. 12.Mike is trying to study for his upcoming Psychology quiz. He reads five pages, then takes a break. He resumes reading and takes another break after he has completed 5 more pages. 6. FR 7. FI 8. VR 9.VI 10.FI 11.FR 12.FR
47 FI, VI, FR, or VR? 13. Megan is fundraising to try to raise money so she can go on the annual band trip. She goes door to door in her neighborhood trying to sell popcorn tins. She eventually sells some. 14. Kylie works in the big city. Her boss is busy, so he only checks her work periodically. 15. Mark is a lawyer who owns his own practice. His customers make payments at irregular times. 16. Jessica is a dental assistant and gets a raise every year at the same time and never in between. 17. Andrew works at a GM factory and is in charge of attaching 3 parts. After he gets his parts attached, he gets some free time before the next car moves down the line. 18. Brittany is a telemarketer trying to sell life insurance. After so many calls, someone will eventually buy. 13. VR 14. VI 15. VI 16. FI 17. FR 18. VR
48 Updating Skinner’s Understanding Skinner’s emphasis on external control of behavior made him an influential, but controversial figure. Many psychologists criticized Skinner for underestimating the importance of cognitive and biological constraints.
49 Cognitive Approach This approach emphasizes abstract and subtle learning that could not be achieved through conditioning or social learning alone.
50 Cognition & Operant Conditioning Evidence of cognitive processes during operant learning comes from rats that explore a maze without an obvious reward. The rats seem to develop cognitive maps (E. C. Tolman), or mental representations, of the layout of the maze (their environment).
52 Intrinsic Motivation Intrinsic Motivation: The desire to perform a behavior for its own sake. Extrinsic Motivation: The desire to perform a behavior due to promised rewards or threats of punishments.
53 Biological Predisposition Biological constraints predispose organisms to learn associations that are naturally adaptive. Marian Breland Bailey Photo: Bob Bailey
54 Skinner’s Legacy Skinner argued that behaviors were shaped by external influences instead of inner thoughts and feelings. Critics argued that Skinner dehumanized people by neglecting their free will. Falk/ Photo Researchers, Inc.
55 Applications of Operant Conditioning In School Skinner introduced the concept of teaching machines that shape learning in small steps and provide reinforcement for correct responses. LWA-JDL/ Corbis
56 Applications of Operant Conditioning At Work Reinforcers affect productivity. Many companies now allow employees to share profits and participate in company ownership.
57 Applications of Operant Conditioning At Home In children, reinforcing good behavior increases the occurrence of these behaviors. Ignoring unwanted behaviors decreases their occurrence.