
1 Prepare your classical conditioning projects to turn in. Write down your weekly reading assignment: Read and take notes on pages 334 – 343.

2 AP Psychology Ms. Desgrosellier 3.2.2010

3 Objective: SWBAT identify the two major characteristics that distinguish classical conditioning from operant conditioning.

4 acquisition, extinction, spontaneous recovery, generalization, discrimination

5 Classical Conditioning: - forms associations between stimuli (a CS and the US it signals) - involves respondent behavior

6 Operant Conditioning: - forms associations between a behavior and its consequences - involves operant behavior

7 associative learning: learning that certain events (a response and its consequences in operant conditioning) occur together. operant conditioning: a type of learning in which behavior is strengthened if followed by a reinforcer or diminished if followed by a punisher.

8 respondent behavior: behavior that occurs as an automatic response to some stimulus. Skinner’s term for behavior learned through classical conditioning. operant behavior: behavior that operates on the environment, producing rewarding or punishing consequences.

9 We can tell the difference between classical and operant conditioning by asking: Is the organism learning associations between events that it doesn’t control (classical conditioning)? Or is it learning associations between its behavior and resulting events (operant conditioning)? See table 22.1 for more information.

10 Objective: SWBAT state Thorndike’s law of effect, and explain its connection to Skinner’s research on operant conditioning.

11 B.F. Skinner (1904 – 1990) is one of behaviorism’s most influential and controversial figures. His work elaborated on Edward L. Thorndike’s (1874 – 1949) observation of a simple fact of life, the law of effect: Thorndike’s principle that behaviors followed by favorable consequences become more likely, and that behaviors followed by unfavorable consequences become less likely.

12 Skinner designed his experiments using the Skinner box, or an operant chamber: a chamber containing a bar or key that an animal can manipulate to obtain a food or water reinforcer, with attached devices to record the animal’s rate of bar pressing or key pecking. His experiments have explored the exact conditions that foster efficient and enduring learning.

13

14 He would put an animal in the box, first rats and then pigeons. They would move around the box until they accidentally pressed the bar (or pecked the key) that released the food. Eventually, with consistent food rewards, they learned that pressing it caused food to appear, and they would then press it intentionally to receive their reward.
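As a rough illustration (not part of the original presentation), this trial-and-error process can be pictured as a tiny simulation in which an action followed by food becomes more likely on later trials. The action names, starting weights, and reward size below are invented for the example.

```python
# Rough sketch (not from the slides): trial-and-error learning in a Skinner box.
# Action names, starting weights, and the reward increment are invented.
import random

actions = ["sniff corner", "groom", "press bar"]   # things a rat might do in the box
weights = {a: 1.0 for a in actions}                # no preference at the start

def reinforce(action, amount=0.5):
    # Law of effect: a behavior followed by a favorable consequence becomes more likely.
    weights[action] += amount

for trial in range(300):
    action = random.choices(actions, weights=[weights[a] for a in actions])[0]
    if action == "press bar":      # only bar presses release food
        reinforce(action)          # the food strengthens the behavior it follows

print(weights)   # after many trials, most of the weight sits on "press bar"
```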

15 Objective: SWBAT describe the shaping procedure, and explain how it can increase our understanding of what nonverbal animals and babies can discriminate.

16 Shaping: an operant conditioning procedure in which reinforcers guide behavior toward closer and closer versions of the desired behavior. For example, if you wanted to shape a rat’s behavior to press a bar, you would first observe its natural behavior and then build on it. You might give the rat a food reward every time it moves toward the bar.

17 Shaping: an operant conditioning procedure in which reinforcers guide behavior toward closer and closer versions of the desired behavior. Then, when the rat was doing this regularly, you would reward it only when it got closer to the bar. Finally, you would require it to touch the bar to get the food.

18 Shaping: an operant conditioning procedure in which reinforcers guide behavior toward closer and closer versions of the desired behavior. This method of successive approximations rewards responses that are ever closer to the final desired behavior and ignores all other responses, as in the sketch below.
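As a rough sketch (not part of the presentation), successive approximations can be modeled as a sequence of stages in which only behaviors at or beyond the current criterion earn food and everything else is ignored. The behavior names, weights, reward sizes, and trial counts below are invented for illustration.

```python
# Rough sketch (not from the slides): shaping by successive approximations.
# Behavior names, weights, reward sizes, and trial counts are invented.
import random

# Behaviors ordered from "unrelated to the goal" to the final desired behavior.
behaviors = ["wander", "face the bar", "approach the bar", "touch the bar", "press the bar"]
weights = [5.0, 1.0, 1.0, 1.0, 1.0]        # at first the rat mostly wanders

def run_stage(criterion, trials=300):
    # Reinforce only behaviors at least as close to the goal as the current criterion.
    for _ in range(trials):
        i = random.choices(range(len(behaviors)), weights=weights)[0]
        if i >= criterion:                 # close enough for this stage?
            weights[i] += 0.5              # food strengthens that behavior...
            if i + 1 < len(behaviors):
                weights[i + 1] += 0.2      # ...and nudges the next, closer step

# Tighten the criterion one step at a time; responses below it are never rewarded.
for stage in range(1, len(behaviors)):
    run_stage(stage)

# By the last stage only "press the bar" is rewarded, so it ends with the largest weight.
print({b: round(w, 1) for b, w in zip(behaviors, weights)})
```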

19 By shaping nonverbal organisms to discriminate between stimuli, psychologists can also determine what they perceive. If we shape them to respond to one stimulus and not another, then clearly they can perceive the difference. discriminative stimulus: a stimulus that signals that a response will be reinforced.

20 Objective: SWBAT compare positive and negative reinforcement, and give one example each of a primary reinforcer, a conditioned reinforcer, an immediate reinforcer, and a delayed reinforcer.

21 Reinforcer: in operant conditioning, any event that strengthens the behavior it follows. This is not just rewards! If yelling at someone increases the behavior it follows (as in the military), then the yelling is still a reinforcer.

22 positive reinforcement: increasing behaviors by presenting positive stimuli, such as food. A positive reinforcer is any stimulus that, when presented after a response, strengthens the response.

23 negative reinforcement: increasing behaviors by stopping or reducing negative stimuli, such as shock. A negative reinforcer is any stimulus that, when removed after a response, strengthens the response (note: negative reinforcement is not punishment!).

24 Giving out candy when someone participates in class? Positive reinforcement. The seatbelt buzzer in your car stopping when you buckle up? Negative reinforcement.

25 What is the difference between positive reinforcement and negative reinforcement? Give one example of each.

26 Primary reinforcer: an innately reinforcing stimulus, such as one that satisfies a biological need. e.g. getting food when you’re hungry or stopping an electric shock.

27 Conditioned reinforcer: a stimulus that gains its reinforcing power through its association with a primary reinforcer; also known as a secondary reinforcer. e.g. a rat in a Skinner box learns that a light signals that food is coming, so the rat works to turn on the light. The light has become a conditioned reinforcer.

28 Immediate reinforcers are given directly after a desired action. Delayed reinforcers are given some time after the desired action. For rats in a Skinner box, delayed reinforcers will not help the rat learn to press the bar.

29 Humans can respond to delayed reinforcers, like a paycheck at the end of the work week or a good grade at the end of the semester. People need to learn to delay gratification to receive greater long-term rewards.

30 Objective: SWBAT discuss the strengths and weaknesses of continuous and partial (intermittent) reinforcement schedules, and identify four schedules of partial reinforcement.

31 continuous reinforcement: reinforcing the desired response every time it occurs. Learning occurs rapidly, but so does extinction: when the reinforcement stops, the desired behavior stops. Continuous reinforcement is also unlike most real-life situations.

32 partial (intermittent) reinforcement: reinforcing a response only part of the time. Results in slower acquisition of a response but much greater resistance to extinction than continuous reinforcement.

33 fixed-ratio schedule: in operant conditioning, a reinforcement schedule that reinforces a response only after a specified number of responses. With this schedule, an animal will pause only briefly after a reinforcer and will then return to a high rate of responding.

34 variable-ratio schedules: in operant conditioning, a reinforcement schedule that reinforces a response after an unpredictable number of responses. Produces high rates of responding because reinforcers increase as the number of responses increases.

35 fixed-interval schedule: in operant conditioning, a reinforcement schedule that reinforces a response only after a specified time has elapsed. Produces a choppy stop-start pattern rather than a steady response rate.

36 variable-interval schedule: in operant conditioning, a reinforcement schedule that reinforces a response at unpredictable time intervals. Produces slow, steady responding.
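As a rough sketch (not part of the presentation), the four partial-reinforcement schedules just defined can be written as simple rules that decide whether a given response earns a reinforcer. The specific numbers (a ratio of 30 responses, 60-second intervals) are invented for illustration.

```python
# Rough sketch (not from the slides): the four partial-reinforcement schedules
# expressed as rules deciding whether a given response is reinforced.
# The ratio of 30 and the 60-second intervals are invented numbers.
import random

def fixed_ratio(n_responses, ratio=30):
    """Reinforce after every `ratio`-th response (e.g., every 30th bar press)."""
    return n_responses % ratio == 0

def variable_ratio(p=1 / 30):
    """Reinforce after an unpredictable number of responses (here 1 in 30 on average)."""
    return random.random() < p

def fixed_interval(seconds_since_last_reinforcer, interval=60):
    """Reinforce the first response after a fixed time (here 60 s) has elapsed."""
    return seconds_since_last_reinforcer >= interval

def variable_interval(seconds_since_last_reinforcer, mean_interval=60):
    """Reinforce the first response after an unpredictable amount of time."""
    # Draw a fresh random waiting time each call, averaging `mean_interval` seconds.
    return seconds_since_last_reinforcer >= random.expovariate(1 / mean_interval)

# Example: on a fixed-ratio-30 schedule the 200th bar press is not reinforced,
# but the 210th is (210 is a multiple of 30).
print(fixed_ratio(200), fixed_ratio(210))   # False True
```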

37

38 Rewarding a rat every 30 times it presses the button? Fixed-ratio schedule. Checking e-mail repeatedly to get the reward of a new message? Variable-interval schedule.

39 People who play slot machines in hopes of winning the jackpot? Variable-ratio schedule. People checking for the mail as delivery time gets closer? Fixed-interval schedule.

40 Objective: SWBAT discuss the ways negative punishment, positive punishment, and negative reinforcement differ, and list some drawbacks of punishment as a behavior-control technique.

41 punishment: an event that decreases the behavior that follows it. A punisher decreases the frequency of a preceding behavior, usually by administering an undesirable consequence or withdrawing a desirable one.

42 Positive punishment: gives an undesirable consequence after a behavior to decrease the likelihood of repeating the behavior. Examples:

43 In YOUR OWN WORDS, define punishment.

44 Negative punishment: taking away a desirable consequence after a behavior to decrease the likelihood of repeating the behavior. Examples:

45 If punishment is avoidable, the punished behavior may reappear in safe settings. e.g. a child may learn discrimination: it’s okay to swear when your parents aren’t around.

46 Physical punishment may increase aggressiveness. Punishment may create a sense of fear, and if it’s unpredictable and inescapable, animals and people may develop feelings of helplessness and depression.

47 Even though punishment suppresses unwanted behavior, it often does not guide one toward more desirable behavior. Punishment tells you what not to do, while reinforcement tells you what to do. Punishment combined with reinforcement is better than punishment alone.

48 Punishment often teaches simply how to avoid it (says Skinner). Most psychologists favor reinforcement – notice someone doing something right and affirm them for it.

49 Objective: SWBAT explain how latent learning and the effect of external rewards demonstrate that cognitive processing is an important part of learning.

50 Skinner (and behaviorism) resisted the belief that cognitive processes – thoughts, perceptions, expectations – have a necessary place in psychology and conditioning.

51 Research has shown hints of cognitive processes at work in operant learning. e.g. animals on fixed-interval reinforcement schedules respond more and more frequently as the time approaches when a response will produce a reinforcer.

52 cognitive map: a mental representation of the layout of one’s environment. For example, after exploring a maze, rats act as if they have learned a cognitive map of it. This is seen in rats in a maze with no obvious rewards (i.e. no reinforcement).

53 latent learning: learning that occurs but is not apparent until there is an incentive to demonstrate it. There is more to learning than associating a response with a consequence.

54 Unnecessary rewards sometimes carry hidden costs. Sometimes rewards given for tasks people already find interesting can lower their natural interest in the activity.

55 Intrinsic motivation: a desire to perform a behavior for its own sake. Excessive rewards can undermine intrinsic motivation. Intrinsically motivated people work and play in search of enjoyment, interest, self-expression, or challenge.

56 Extrinsic motivation: a desire to perform a behavior due to promised rewards or threats of punishment.

57 Objective: SWBAT explain how biological predispositions place limits on what can be achieved with operant conditioning.

58 An animal’s predispositions constrain its capacity for operant conditioning. e.g. it’s easy to teach a hamster to associate food with standing and digging because these are part of its natural food-seeking behaviors. It’s much more difficult to get it to associate food with face washing because that behavior is not normally associated with food or hunger.

59 Bottom line: biological constraints predispose organisms to learn associations that are naturally adaptive. instinctive drift: “misbehaviors” that occur when animals revert to their biologically predisposed patterns.

60 Objective: SWBAT describe the controversy over Skinner’s views of human behavior.

61 Skinner believed that only external influences shape behavior and urged the use of operant principles. Critics said this view dehumanizes people, neglects their personal freedom, and seeks to control their actions. Skinner replied that we could use external consequences to better humans and society.

62 Objective: SWBAT identify the major similarities and differences between classical and operant conditioning.

63 Classical conditioning involves an organism associating different stimuli that it does not control and responds automatically to (respondent behaviors). Operant conditioning involves an organism associating its operant behaviors (those that act on its environment to produce rewarding or punishing stimuli) with their consequences.

64 Cognitive processes and biological predispositions influence both kinds of conditioning. Both involve acquisition, extinction, spontaneous recovery, generalization, and discrimination. See table 22.4 for more information.

