Operant & Classical Conditioning


1 Operant & Classical Conditioning
Classical conditioning forms associations between stimuli (CS and US). Operant conditioning, on the other hand, forms an association between behaviors and the resulting events. OBJECTIVE 10| Identify the two major characteristics that distinguish classical conditioning from operant conditioning.

2 Operant & Classical Conditioning
Classical conditioning involves respondent behavior that occurs as an automatic response to a certain stimulus. Operant conditioning involves operant behavior, a behavior that operates on the environment, producing rewarding or punishing stimuli.

3 B.F. Skinner and Operant Conditioning
B.F. Skinner had an enormous influence on psychology in general and on the field of psychology known as behaviorism in particular. His key theories were published in the early 1950s. As Pavlov’s experiments showed, classical conditioning involves associating a neutral external stimulus with a response that is generally automatic (such as salivating). Skinner’s research revealed the power of operant conditioning, which involves learning how to operate on one’s environment to elicit a particular stimulus (a reward) or to avoid a punishment. In operant conditioning, the subject controls his or her response. You will learn how this works in the next few slides.
Classical conditioning involves an automatic response to a stimulus.
Operant conditioning involves learning how to control one’s response to elicit a reward or avoid a punishment.

4 Skinner’s Experiments
Skinner’s experiments extend Thorndike’s thinking, especially his law of effect. This law states that rewarded behavior is likely to occur again. OBJECTIVE 11| State Thorndike’s law of effect, and explain its connection to Skinner’s research on operant conditioning.

5 Operant Chamber Using Thorndike's law of effect as a starting point, Skinner developed the operant chamber, or Skinner box, to study operant conditioning.

6 The “Skinner Box”: Skinner’s Hypothesis, Methodology, and Results
Rats placed in “Skinner boxes” were shaped to get closer and closer to the bar in order to receive food, and were eventually required to press the bar to receive food; the food is a reinforcer.
Skinner hypothesized that rats could be trained to perform specific behaviors in order to receive a food reward. He placed the rats into what is technically called an “operant chamber” but which became more commonly known as a “Skinner box.” The soundproof glass box contained a bar or a key that the rat could press to receive food. This bar or key was hooked up to an instrument that recorded how many times the rat pressed it. Skinner used a process called “shaping” to teach the rats to press the bar for food. For example, if a rat approached the bar, he might initially give it a pellet of food as a reward for getting close to the bar. Skinner would gradually require the rat to get closer to the bar before giving it food. Eventually, the rat learned that it had to press the bar in order to get any food. The food in this case is referred to as a “reinforcer,” since it reinforces the rat’s behavior of stepping closer to and eventually pressing the bar.
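
To make the shaping mechanics described above concrete, here is a minimal Python sketch (not from the original presentation). The function name shape_bar_press, the distance units, and every numeric value are illustrative assumptions, not Skinner's actual procedure or data; the point is only the logic of rewarding successively closer approximations.

```python
import random

def shape_bar_press(trials=200, seed=0):
    """Toy shaping loop: reinforce successively closer approximations
    to the target behavior (pressing the bar). All numbers are
    illustrative, not taken from Skinner's experiments."""
    rng = random.Random(seed)
    criterion = 10.0      # how close (arbitrary units) the rat must get to earn food
    approach_bias = 0.0   # learned tendency to move toward the bar
    rewards = 0

    for _ in range(trials):
        # The rat wanders; its learned bias pulls it toward the bar (distance 0).
        distance = max(0.0, rng.uniform(0, 20) - approach_bias)

        if distance <= criterion:                  # close enough: deliver the reinforcer
            rewards += 1
            approach_bias += 0.5                   # reinforced behavior becomes more likely
            criterion = max(0.5, criterion - 0.5)  # demand a closer approximation next time

    return rewards, criterion

if __name__ == "__main__":
    rewards, final_criterion = shape_bar_press()
    print(f"reinforcers delivered: {rewards}, final criterion: {final_criterion}")
```

Each reinforcer both strengthens the approach tendency and tightens the criterion, which is the essence of shaping: reward behavior that comes closer and closer to the target until only the target behavior earns food.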

7 Primary & Secondary Reinforcers
Primary Reinforcer: An innately reinforcing stimulus, like food or drink. Conditioned (Secondary) Reinforcer: A learned reinforcer that gets its reinforcing power through association with a primary reinforcer.

8 Immediate & Delayed Reinforcers
Immediate Reinforcer: A reinforcer that occurs instantly after a behavior, such as a rat getting a food pellet for a bar press. Delayed Reinforcer: A reinforcer that follows a behavior only after a delay, such as a paycheck that comes at the end of the week. We may be inclined to choose small immediate reinforcers (watching TV) over large delayed reinforcers (getting an A in a course), which require consistent study.

9 Shaping Shaping is the operant conditioning procedure in which reinforcers guide behavior towards the desired target behavior through successive approximations. OBJECTIVE 12| Describe the shaping procedure, and explain how it can increase our understanding of what animals and babies can discriminate. Photos: a rat shaped to sniff mines; a manatee shaped to discriminate objects of different shapes, colors, and sizes.

10 For next week… PsychPort; Test Corrections or… Spanking, Discipline of Children, Punishment, Memory

11 Types of Reinforcers A reinforcer is any event that strengthens the behavior it follows. A heat lamp positively reinforces a meerkat’s behavior in the cold. OBJECTIVE 13| Compare positive and negative reinforcement, and give one example each of a primary reinforcer, a conditioned reinforcer, an immediate reinforcer, and a delayed reinforcer.

12 Negative Reinforcement and Punishment
Negative reinforcement: removing an unpleasant stimulus. Punishment: (1) introducing an unpleasant stimulus, or (2) withholding a pleasant stimulus.
In operant conditioning, negative reinforcement and punishment are two different things that can be easy to confuse. The food rewards Skinner used are known as positive reinforcement. Negative reinforcement, on the other hand, occurs when an unpleasant stimulus is removed. For example, in one experiment Skinner would play a loud noise inside the box; to make the noise stop, the rat would have to press the bar. Punishment involves either the introduction of an unpleasant stimulus or the removal of a pleasant stimulus. When Skinner gave a rat a painful electric shock after it pushed the bar, the rat learned not to push the bar. Another example of punishment would be a parent withholding dessert from a child who misbehaves at the dinner table.
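
Because positive and negative reinforcement and punishment are easy to mix up, here is a small Python sketch of the slide's two distinctions: whether the stimulus is pleasant or unpleasant, and whether it is added or removed. The classify_consequence helper is my own illustration, not something from the presentation or a textbook.

```python
def classify_consequence(stimulus_is_pleasant: bool, stimulus_is_added: bool) -> str:
    """Classify an operant consequence by whether the stimulus is pleasant
    or unpleasant, and whether it is presented or removed. Hypothetical
    helper for illustration only."""
    if stimulus_is_added:
        # Adding something pleasant strengthens behavior; adding something
        # unpleasant (e.g., a shock) suppresses it.
        return "positive reinforcement" if stimulus_is_pleasant else "positive punishment"
    # Removing something unpleasant (e.g., stopping a loud noise) strengthens
    # behavior; withholding something pleasant (e.g., dessert) suppresses it.
    return "negative punishment" if stimulus_is_pleasant else "negative reinforcement"

# Examples from the slide:
print(classify_consequence(stimulus_is_pleasant=False, stimulus_is_added=False))  # noise stops -> negative reinforcement
print(classify_consequence(stimulus_is_pleasant=False, stimulus_is_added=True))   # electric shock -> positive punishment
print(classify_consequence(stimulus_is_pleasant=True,  stimulus_is_added=False))  # dessert withheld -> negative punishment
```

However it is delivered, reinforcement (positive or negative) always strengthens the behavior it follows, while punishment (positive or negative) always weakens it.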

13 Reinforcement Schedules
Continuous Reinforcement: Reinforces the desired response each time it occurs. Partial Reinforcement: Reinforces a response only part of the time. Though this results in slower acquisition in the beginning, it shows greater resistance to extinction later on. OBJECTIVE 14| Discuss the strengths and weaknesses of continuous and partial reinforcement schedules, and identify four schedules of partial reinforcements.

14 Ratio Schedules Fixed-ratio schedule: Reinforces a response only after a specified number of responses. e.g., piecework pay. Variable-ratio schedule: Reinforces a response after an unpredictable number of responses. This is hard to extinguish because of the unpredictability. (e.g., behaviors like gambling, fishing.)

15 Interval Schedules Fixed-interval schedule: Reinforces a response only after a specified time has elapsed. (e.g., preparing for an exam only when the exam draws close.) Variable-interval schedule: Reinforces a response at unpredictable time intervals, which produces slow, steady responses. (e.g., pop quiz.)

16 Rates and Types of Reinforcement: Additional Experiments
Fixed-ratio: food given after a fixed number of responses. Variable-ratio: the number of responses required to get food changes each time. Fixed-interval: food given after a certain amount of time elapses. Variable-interval: the amount of time required to get food changes each time.
Skinner and his associates tried the following types of reinforcement schedules with animals such as rats and pigeons:
Fixed-ratio: Behavior is reinforced after a set number of responses. For example, an animal might receive food every ten times it presses the bar. After receiving its food reward, the rat presses the bar rapidly until it receives another one.
Variable-ratio: Reinforcement is provided after a variable number of responses. Sometimes the animal receives food after two responses, sometimes after twenty; the number of times the rat has to press the bar varies. Animals press the bar frequently because, on average, more presses mean more food.
Fixed-interval: Reinforcement is based on a time schedule. As the time for another reward draws near, the animal presses the bar more often.
Variable-interval: Reinforcement becomes available after unpredictable amounts of time; the first response after the interval is reinforced, so extra presses in between do not earn more food. The animal tends to press the bar at a slow but steady rate since it has no idea how long it will have to wait for its reward.
Can you think of examples of these types of reinforcement in people’s lives? What about gambling? What about factory work in which people are paid by the number of items they produce? What about jobs that pay hourly wages?
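
As a rough illustration of how these four delivery rules differ, here is a minimal Python sketch (not from the slides) that counts reinforcers earned by a simulated presser responding once per second under each schedule. The simulate function, the ratio of 10, the 30-second interval, and the variable ranges are all illustrative assumptions rather than values from Skinner's experiments.

```python
import random

def simulate(schedule: str, seconds: int = 600, seed: int = 0) -> int:
    """Count reinforcers earned under one of the four partial-reinforcement
    schedules, for a presser that responds once per second."""
    rng = random.Random(seed)

    def new_requirement() -> int:
        if schedule == "fixed-ratio":
            return 10                   # food after every 10th response
        if schedule == "variable-ratio":
            return rng.randint(2, 20)   # unpredictable number of responses
        if schedule == "fixed-interval":
            return 30                   # 30 seconds must elapse first
        if schedule == "variable-interval":
            return rng.randint(5, 60)   # unpredictable delay
        raise ValueError(f"unknown schedule: {schedule}")

    requirement = new_requirement()
    responses_since = 0   # responses since the last reinforcer
    elapsed = 0           # seconds since the last reinforcer
    reinforcers = 0

    for _ in range(seconds):
        responses_since += 1            # one bar press per second
        elapsed += 1
        progress = responses_since if schedule.endswith("ratio") else elapsed
        if progress >= requirement:     # this response earns food
            reinforcers += 1
            responses_since = 0
            elapsed = 0
            requirement = new_requirement()
    return reinforcers

for schedule in ("fixed-ratio", "variable-ratio", "fixed-interval", "variable-interval"):
    print(schedule, simulate(schedule))
```

Because the simulated presser responds at a fixed rate, the counts come out broadly similar across schedules; in real animals it is the response pattern that differs, with ratio schedules producing high response rates and variable schedules producing steadier, extinction-resistant responding.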

17 Schedules of Reinforcement

18 Punishment An aversive event that decreases the behavior it follows. OBJECTIVE 15| Discuss the ways negative punishment, positive punishment, and negative reinforcement differ, and list some drawbacks of punishment as a behavior-control technique.

19 Punishment Although there may be some justification for occasional punishment (Larzelere & Baumrind, 2002), it usually leads to negative effects. It results in unwanted fears, conveys no information to the organism, justifies pain to others, causes unwanted behaviors to reappear in its absence, causes aggression towards the agent, and causes one unwanted behavior to appear in place of another.

20 Extending Skinner’s Understanding
Although Skinner acknowledged that inner thought processes and biological underpinnings exist, many psychologists criticized him for discounting their importance in learning.

21 Cognition & Operant Conditioning
Evidence of cognitive processes during operant learning comes from rats exploring a maze without an obvious reward: the rats seem to develop cognitive maps, or mental representations, of the maze’s layout (their environment). OBJECTIVE 16| Explain how latent learning and the effect of external rewards demonstrate that cognitive processing is an important part of learning.

22 Latent Learning Such cognitive maps reflect latent learning: learning that occurs but becomes apparent only when there is an incentive to demonstrate it (Tolman & Honzik, 1930).

23 Motivation Intrinsic Motivation: The desire to perform a behavior for its own sake. Extrinsic Motivation: The desire to perform a behavior because of promised rewards or threats of punishment.

24 Biological Predisposition
Biological constraints predispose organisms to learn associations that are naturally adaptive. Breland and Breland (1961) showed that animals drift towards their biologically predisposed instinctive behaviors. OBJECTIVE 17| Explain how biological predispositions place limits on what can be achieved through operant conditioning. Photo: Marian Breland Bailey.

25 Skinner’s Legacy Skinner argued that behaviors were shaped by external influences instead of inner thoughts and feelings. Critics argued that Skinner dehumanized people by neglecting their free will. OBJECTIVE 18| Describe the controversy over Skinner’s views of human behavior.

26 Skinner’s Importance Education: programmed instruction; Work; Parenting; Personal goals
Skinner believed that humans learn behavior through reinforcement, much as rats learn to press a bar when that behavior is reinforced with food. His contributions to the fields of psychology and education therefore focused on learned behaviors and reinforcement. For example, Skinner and his followers promoted the use of machines to teach students concepts in small incremental steps, giving them rewards for right answers; this method is called “programmed instruction.” Skinner emphasized the importance of receiving feedback for each step (e.g., for each math problem in a sequence) before going on to the next one.
Skinner’s behaviorist ideas have also been used in the workplace to give workers added incentives and to provide immediate reinforcement for good work. Operant conditioning also appears commonly as a parenting technique, as when parents reinforce good behaviors and try to extinguish negative behaviors.
It’s also possible to use Skinner’s techniques to accomplish personal goals. Imagine that you want to spend more time on your psychology homework but never seem to get around to it or feel that you can’t get organized. You could begin by observing how much studying you currently do for this class. You’d then set a goal to study a certain number of minutes or hours more each evening and to reinforce that behavior with a reward (such as candy or time playing a video game). Over time, your study skills would hopefully become more natural and you wouldn’t always need the reward.

27 Applications of Operant Conditioning
Skinner introduced the concept of teaching machines that shape learning in small steps and provide reinforcement for correct responses. OBJECTIVE 19| Describe some ways to apply operant conditioning principles at school, at work, and at home.

28 Applications of Operant Conditioning
Reinforcement principles can enhance athletic performance.

29 Applications of Operant Conditioning
Reinforcers affect productivity. Many companies now allow employees to share profits and participate in company ownership.

30 Applications of Operant Conditioning
In children, reinforcing good behavior increases its occurrence, and ignoring unwanted behavior decreases its occurrence.

