
Chapter 5 Learning Pages 176-212.


1 Chapter 5 Learning Pages 176-212.

2

3 What is learning? Learning is any relatively permanent change in behavior brought about by experience or practice. The "relatively permanent" part of the definition refers to the fact that when people learn anything, some part of their brain is physically changed to record what they have learned. This is actually a process of memory, for without the ability to remember what happens, people can't learn anything. Although there is no conclusive proof yet, research strongly suggests that once a person learns something, it is always present somewhere in memory.

4 Classical conditioning:
Tell me what you know about this? Does anything ring a bell?

5 It makes your mouth water: classical conditioning
 

6 Elements of classical conditioning:
Unconditioned stimulus (UCS) = an agent that leads to a response without training. Unconditioned response (UCR) = the automatic response to a UCS. Neutral stimulus (NS) = an agent that initially has no effect. Conditioned stimulus (CS) = a former NS that comes to elicit a given response after pairing with a UCS. Conditioned response (CR) = a learned response to a CS.

7

8

9 Example: A young child who reaches out to pet a barking dog is bitten by the dog and cries. Every time she hears a dog bark, she whimpers. UCS = dog bite. UCR = crying. NS = barking. CS = barking. CR = crying.

10 Extinction: Extinction refers to the reduction of a response that the organism currently or previously produced. In classical conditioning, this results from the unconditioned stimulus NOT occurring after the conditioned stimulus is presented over time.

11 Higher order conditioning:
Another concept in classical conditioning is called higher order conditioning. This occurs when a strong conditioned stimulus is paired with a neutral stimulus, and the previously neutral stimulus becomes a second conditioned stimulus. For example, let's assume that Pavlov has conditioned his dogs to salivate at the sound of the bell. What would happen if, just before Pavlov rang the bell, he snapped his fingers? The sequence would now be "snap-bell-salivation," or NS-CS-CR. If this happens enough times, the finger snap will eventually also produce a salivation response. The finger snap becomes associated with the bell through the same process by which the bell became associated with the food originally, and it is now another CS.

12

13 John Watson & Rosalie Rayner:
The "Little Albert" experiment was a famous psychology experiment conducted by behaviorist John B. Watson and graduate student Rosalie Rayner. Previously, Russian physiologist Ivan Pavlov had conducted experiments demonstrating the conditioning process in dogs. Watson was interested in taking Pavlov's research further to show that emotional reactions could be classically conditioned in people.

14 Conditioned emotional responses:
Emotional response that has become classically conditioned to occur to learned stimuli, such as a fear of dogs or the emotional reaction that occurs when seeing an attractive person.

15 Conditioned emotional responses:
The learning of phobias is a very good example of a certain type of classical conditioning, the conditioned emotional response (CER). Conditioned emotional responses are some of the easiest forms of classical conditioning to accomplish, and our lives are full of them. It's easy to think of fears people might have that are conditioned or learned: a child's fear of the dentist's chair, a puppy's fear of a rolled-up newspaper, or the fear of dogs that is often shown by a person who has been attacked by a dog in the past. But our other emotions can be conditioned too.

16 Conditioned emotional response cont.
The process of acquiring a conditioned emotional response works on the same principles as classical conditioning in general: an organism is exposed to a certain stimulus, that stimulus is paired with a biologically significant event, and the connection is made. Emotional responses can include anxiety, happiness, sadness, pain, and a variety of other emotions that can be triggered in an organism. All emotional responses are regulated by the autonomic nervous system. Of its two subdivisions (the parasympathetic nervous system and the sympathetic nervous system), it is the sympathetic nervous system that is responsible for most of the emotional responses shown by an average person.

17 Conditioned emotional response cont.
The range of responses includes panic attacks, test anxiety, stage fright, and similar emotions expressed when experiencing distress or uneasiness. The system is automatically activated in "fight or flight" situations, producing responses like increased heartbeat, sweating, feeling weak in the knees, and similar symptoms. These kinds of emotions/reactions are acquired involuntarily or unconsciously, and they tend to stick with a person for a long while. These conditioned responses can take up to several seconds to appear, unlike motor responses, which can be seen as early as half a second.

18 Conditioned emotional response cont.
John B. Watson and Rosalie Rayner conducted an experiment in 1920 called the Little Albert experiment. The experiment involved a 9-month-old baby, and its whole purpose was to induce fear in little Albert. The experiment is the classic example of a CER (conditioned emotional response), as Little Albert was subjected to a certain stimulus in order to create a response of fear. Watson and Rayner, unaware of the term at the time, simply thought they were applying general conditioning principles to human behavior.

19 A video about conditioned emotional responses!

20

21 John Garcia and taste aversion:
Are there any foods you just can’t eat anymore because of a bad experience?

22 Other conditioned responses in humans:
Believe it or not, your reaction to that food is a kind of classical conditioning. Many experiments have shown that lab rats will develop a conditioned taste aversion for any liquid or food they swallow up to 6 hours before becoming nauseated. John Garcia found that rats that were given a sweetened liquid and then injected with a drug, or exposed to radiation, that caused nausea would not touch the liquid again. In a similar manner, the chemotherapy drugs that cancer patients receive can create severe nausea, which causes those patients to develop a taste aversion for any food they ate before going in for the treatment.

23 Conditioned taste aversion:
Development of a nausea or aversive response to a particular taste because that taste was followed by a nausea reaction, occurring after only one association.

24 Biological preparedness:
Conditioned taste aversions, along with phobic reactions, are examples of something called biological preparedness. Most mammals find their food by smell and taste and will learn to avoid any food that smells or tastes like something they ate just before becoming ill. It's a survival mechanism: if they kept eating a bad food, they might die. Although most conditioning requires repeated pairing of the CS with the UCS, with the CS and UCS close together in time, when the response is nausea one pairing may be all that is necessary.

25 Biological preparedness:
Taste aversion conditioning is so effective that it has even been put to practical use. John Garcia and some colleagues used it as a tool to stop coyotes from killing ranchers' sheep and also to stop ranchers from wiping out the coyote population entirely. Garcia and his fellow researchers laced sheep meat with lithium chloride and left it for the coyotes to find. The coyotes ate the drugged meat, got extremely sick, and avoided eating sheep for quite some time afterwards.

26 What is operant conditioning:
Classical conditioning is the kind of learning that occurs with reflexive, involuntary behavior. The kind of learning that applies to voluntary behavior is called operant conditioning.

27 Operant conditioning: Big Bang Theory

28 B. F. Skinner, the behaviorist's behaviorist:
B. F. Skinner was the behaviorist who assumed leadership of the field after John Watson. He was even more determined than Watson that psychologists should study only measurable, observable behavior. In addition to his knowledge of Pavlovian classical conditioning, Skinner found in the work of Thorndike a way to explain all behavior as the product of learning. He even gave the learning of voluntary behavior a special name: operant conditioning. Voluntary behavior is what people and animals do to operate in the world.

29 The concept of reinforcement:
One of Skinner’s major contributions to behaviorism = reinforcement. The word reinforcement means to strengthen. Skinner defined reinforcement as anything that, when following a response, causes that response to be more likely to happen again. Typically this means that reinforcement is a consequence that is in some way pleasurable to the organism which relates back to Thorndike’s Law of Effect (which is coming up next.)

30 Primary and secondary reinforcers:
The events or items that can be used to reinforce behavior are not all alike. For example, let's say your friend asks for your help: she needs to repaint her living room for a big Christmas party. She says that if you help her, she will give you $60.00 or a sandwich. Unless you have suffered recent brain damage, you are most likely going to take the money, right? With $60.00 you can buy lots of lunches. Now pretend your friend offers the same deal to her 3-year-old sister, who likes to play with paint. Which reward will the child choose, the money or the sandwich? (She will most likely choose the sandwich.)

31 Primary and secondary reinforcers:
The money and the sandwich represent two basic kinds of reinforcers, items or events that, when following a response, will strengthen it. The reinforcing properties of money must be learned, but food gives immediate reward in the form of taste and satisfying hunger. A reinforcer such as a sandwich that satisfies a basic need like hunger is called a primary reinforcer. Examples of primary reinforcers = any kind of food (hunger drive), liquid (thirst drive), or touch (pleasure drive). Infants, toddlers, preschool-age kids, and animals can be easily reinforced by using primary reinforcers.

32 Primary and secondary reinforcers:
A secondary reinforcer such as money, however, gets its reinforcing properties from being associated with primary reinforcers in the past. A child who is given money to spend soon realizes that the ugly green paper can be traded for toys and treats (primary reinforcers), and so money becomes reinforcing in and of itself. If a person praises a puppy while petting him (touch, a primary reinforcer), the praise alone will eventually make the puppy squirm with delight. Secondary reinforcers do indeed get their reinforcing power from the process of classical conditioning.

33 Quick definition: Primary reinforcer = any reinforcer that is naturally reinforcing by meeting a basic biological need, such as hunger, thirst, or touch. Secondary reinforcer = any reinforcer that becomes reinforcing after being paired with a primary reinforcer, such as praise, tokens, or gold stars.

34 Positive and negative reinforcement:
Reinforcers can also differ in the way they are used. Most people understand that following a response with some kind of pleasurable consequence (reward) will lead to an increase in the likelihood of that response being repeated. But many people have trouble understanding that the opposite is also true: following a response with the removal of, or escape from, something unpleasant will also increase the likelihood of the response being repeated. Example: if you have a headache, you take a pain pill, which gets rid of the headache. The pain was the unpleasant thing that was removed by the pill.

35 Positive and negative reinforcement:
There are really only two kinds of things people ever experience as consequences in the world: things they like (food, money, sex, praise) and things they don't like (spankings, being yelled at, and pain). There are also only two possibilities for experiencing these two kinds of consequences: either people experience them (like getting paid for something or getting yelled at) or they don't experience them (like losing an allowance for misbehaving).

36 Quick definition: Positive reinforcement = the reinforcement of a response by the addition or experiencing of a pleasurable stimulus. Negative reinforcement = the reinforcement of a response by the removal, escape from, or avoidance of an unpleasant stimulus.

37 Punishment: Big Bang Theory

38 Two kinds of punishment:
People get confused because negative sounds like it ought to be something bad, like a kind of punishment. Punishment is actually the opposite of reinforcement. It is any event or stimulus that, when following a response, causes the response to be less likely to happen again. Punishment weakens responses, whereas reinforcement (no matter whether it is positive or negative) strengthens responses. There are two ways in which punishment can happen, just as there are two ways in which reinforcement can happen.

39 Two kinds of punishment:
1. Punishment by application = occurs when something unpleasant (such as spanking, scolding, or unpleasant stimulus) is added to the situation or applied. This is the kind of punishment that most people think of when they hear the word punishment. This is also the kind of punishment that many child development specialists strongly recommend parents avoid using with their children because it can easily escalate into abuse.

40 Two kinds of punishment:
2. Punishment by removal, on the other hand, is the kind of punishment most often confused with negative reinforcement. In this type of punishment, behavior is punished by the removal of something pleasurable or desired after the behavior occurs. Have you ever been grounded and forced to stay at home as a punishment for breaking a rule? Grounding removes your freedom to do what you want and is an example of punishment by removal.

41 Problems with punishment:
The job of punishment is much harder than that of reinforcement. In using reinforcement, all one has to do is strengthen a response that is already there. But punishment is used to weaken a response, and getting rid of a response that is already well established is not that easy. Many times punishment only serves to temporarily suppress or inhibit a behavior until enough time has passed, and then the behavior may resurface. Can you think of any examples of this actually happening in your life or your siblings' lives?

42 More concepts in operant conditioning:
Operant conditioning is more than just the reinforcement of simple responses. For example, have you ever tried to teach a pet to do a trick? If you have, you know that training animals involves more than simple reinforcement. Shaping = the reinforcement of simple steps in behavior that lead to a desired, more complex behavior. Shaping happens when small steps toward an ultimate goal are reinforced until eventually that goal is reached.

43

44 The schedules of reinforcement:
The timing of reinforcement can make a tremendous difference in the speed at which learning occurs and the strength of the learned response. Skinner found that reinforcing each and every response was not necessarily the best schedule of reinforcement for long-lasting learning.

45 The schedules of reinforcement:
Partial reinforcement effect = the tendency for a response that is reinforced after some, but not all, correct responses to be very resistant to extinction. Continuous reinforcement = the reinforcement of each and every correct response. Example: Alicia's mom gives her a quarter every night she remembers to put her dirty clothes in the hamper (continuous reinforcement). Bianca's mom gives her a dollar at the end of the week, but only if she has put her clothes in the hamper every night (partial reinforcement). Alicia learns more quickly than Bianca because responses that are reinforced each time they occur are more easily and quickly learned. After a time, both mothers stop giving the girls money. Which girl do you think will be more willing to keep putting up her clothes?

46 Answer: It will more likely be Bianca. Alicia has expected to get a reinforcer (a quarter) after every single response, so as soon as the money stops, her behavior will be extinguished. Bianca has only expected her money after a week, so she will continue to put her clothes up for up to seven days before she stops (due to lack of reinforcement).

47 The schedules of reinforcement:
Although it may be easier to teach a new behavior using continuous reinforcement, partially reinforced behavior is not only more difficult to suppress but also more like real life. Imagine being paid for every hamburger you make or every report you turn in. In the real world people tend to receive partial reinforcement rather than continuous reinforcement for their work.

48 The schedules of reinforcement:
The kind of reinforcement schedule most people are familiar with is called a fixed interval schedule of reinforcement = a schedule in which the interval of time that must pass before reinforcement becomes possible is always the same. Example: Your paycheck!

49 The schedules of reinforcement:
Variable interval schedule of reinforcement = schedule of reinforcement in which the interval of time that must pass before reinforcement becomes possible is different for each trial or event. Example: Pop quizzes are unpredictable. Students don’t know exactly what day they might be given so the best strategy is to study a little every night just in case there is a quiz the next day.

50 The schedules of reinforcement:
Fixed ratio schedule of reinforcement = a schedule in which the number of responses required for reinforcement is always the same. Example: Anyone who does piecework, in which a certain number of items have to be completed before payment is given, is reinforced on a fixed ratio schedule. Some sandwich shops give out punch cards that get punched one time for each sandwich purchased. When the card has 10 punches, the person gets a free sandwich.

51 The schedules of reinforcement:
Variable ratio schedule of reinforcement = schedule of reinforcement in which the number of responses required for reinforcement is different for each trial or event. Example: People who put money in slot machines are being reinforced on a variable ratio schedule of reinforcement. They put their coins in but they don’t know how many times they will have to do this before reinforcement (jackpot) comes.

52 Edward Thorndike

53 Edward Thorndike: Thorndike was one of the first researchers to explore and attempt to outline the laws of learning voluntary responses, although the field was not yet called operant conditioning. Thorndike placed a hungry cat inside a puzzle box from which the only escape was to press a lever located on the floor of the box. Thorndike observed that the cat would move around the box, pushing and rubbing up against the walls in an effort to escape. Eventually the cat would accidentally push the lever, opening the door. Upon escaping the cat was fed from a dish placed just outside the box.

54

55 The Law of Effect: The lever is the stimulus, the pushing of the lever is the response, and the consequence is both escape (good) and food (even better). The cat did not learn to push the lever and escape right away. After a number of trials (and many errors) in a box like this one, the cat took less and less time to push the lever that would open the door. It's important not to assume that the cat had "figured out" the connection between the lever and freedom; Thorndike kept moving the lever to a different position, and the cat had to learn the whole process over again.

56 The Law of Effect: The cat would simply continue to rub and push in the same general area that had led to food and freedom the last time, each time getting out and fed a little more quickly. Based on this research, Thorndike developed the Law of Effect: if an action is followed by a pleasurable consequence, it will tend to be repeated; if an action is followed by an unpleasant consequence, it will tend not to be repeated. This is the basic principle behind learning voluntary behavior.

57

58 Edward Tolman’s maze running rats: Latent learning

59 Edward Tolman: One of Tolman's best-known experiments in learning involved teaching three groups of rats the same maze, one at a time. In the first group, each rat was placed in the maze and reinforced with food for making its way out the other side. The rat was then placed back in the maze, reinforced, and so on until the rat could successfully solve the maze with no errors: the typical maze-learning experiment. The second group of rats was treated exactly like the first, except that they never received any reinforcement upon exiting the maze. They were simply put back in again and again, until the 10th day of the experiment rolled around, when they were given a reinforcement.

60 Edward Tolman: The third group served as the control group and, like the second group, was not given any type of reinforcement. A strict Skinnerian behaviorist would predict that only the first group of rats would learn the maze successfully, because learning depends on reinforcing consequences. At first this seemed to be true. The first group of rats did indeed solve the maze after a certain number of trials, whereas the second and third groups seemed to wander around the maze until accidentally finding their way out. On the 10th day, however, something happened that would be difficult to explain using only Skinner's basic principles.

61 Edward Tolman: The second group of rats, upon receiving the reinforcement for the first time, should have taken as long as the first group to solve the maze. Instead, they began to solve the maze almost immediately. Tolman concluded that the rats in the second group, while wandering around for the first 9 days of the experiment, had learned all the maze had to offer but had no reason to demonstrate their knowledge because they were not getting any type of reinforcement. The rats had developed a kind of cognitive map in their heads that allowed them to remember everything the maze had to offer; that map had remained hidden, or latent, until the rats had a reason to demonstrate their knowledge in order to get food. Tolman called this latent learning: the idea that learning can happen without reinforcement and then later affect behavior.

62 Wolfang kohler’s smart chimp: insight learning
Kohler was a Gestalt psychologist who became marooned on an island in the Canaries (islands off the coast of North Africa) when WWI broke out. He was stuck at the primate research lab that had first drawn him to the island, he turned his accidental long term stay into a time of research.

63 Wolfang kohler’s smart chimp: insight learning
He set up a problem for one of the chimpanzees. Sultan the chimp was faced with the problem of how to get to a banana that was placed just out of his reach outside his cage. Eventually the chimps actions lead Kohler to something called insight = the sudden preconception of relationships among various parts of a problem allowing the solution to the problem to come quickly.

64

65 Martin Seligman’s depressed dogs: learned helplessness
Seligman is a founding father of positive psychology = a new way of looking at the entire concept of mental health and therapy. Seligman experimented on dogs, shocking them to see what they would do even when they had the ability to change the bad situation they were in. This led him to something called learned helplessness = the tendency to fail to act to escape from a situation because of a history of repeated failures in the past.

66

67 Observational learning:
The learning of new behavior through the observation of a model (watching someone else who is doing the behavior.)

68

69 Four elements of observational learning: Bandura (Bobo doll guy)
Bandura concluded from these studies (the Bobo doll studies) and others that observational learning requires the presence of four elements. 1. Attention = to learn anything through observation, the learner must first pay attention to the model. For example, if your grandparents host a fancy dinner party and you want to know which utensils to use at certain points during the multi-course meal, you might want to pay attention to your grandma to see which one she picks up and when.

70 Four elements of observational learning: Bandura (Bobo doll guy)
2. Memory = the learner must also be able to retain the memory of what was done. If your friend shows you how to install and run a new computer program, you need to remember those steps in order to put the program on your own computer later on. 3. Imitation = the learner must be capable of reproducing, or imitating, the actions of the model. While I was learning how to be a teacher, I was put into a classroom at Edison Prep and watched the teacher I was paired with; then, when it was my turn in front of the kids, I acted like her until I got the hang of teaching.

71 Four elements of observational learning: Bandura (Bobo doll guy)
4. Motivation = finally, the learner must have the desire or motivation to perform the action. We are going to watch a Crash Course video about motivation!

72

