Learning.

Presentation transcript:

Learning

Associative and Cognitive Learning
Classical conditioning: learning to link two stimuli in a way that helps us anticipate an event to which we have a reaction (associative learning).
Operant conditioning: changing behavior choices in response to consequences (associative learning).
Cognitive learning: acquiring new behaviors and information through observation and information, rather than by direct experience.

Associative Learning: Operant Conditioning
The child associates his "response" (behavior) with its consequences.
The child learns to repeat behaviors (saying "please") that were followed by desirable results (a cookie).
The child learns to avoid behaviors (yelling "gimme!") that were followed by undesirable results (a scolding or loss of dessert).

Cognitive Learning
Cognitive learning refers to acquiring new behaviors and information mentally, rather than by direct experience.
Cognitive learning occurs by observing events and the behavior of others, and by using language to acquire information about events experienced by others.

Adaptation to the Environment
Learning: any process through which experience at one time can alter an individual's behavior at a future time.
What is experience? Environmental effects filtered through the individual's perceptions.
What is behavior at a future time? What counts as behavior?

Behaviorism
The attempt to understand observable activity in terms of observable stimuli and observable responses.
John B. Watson (1913); B. F. Skinner (1938).
This paradigm largely ignored anything that could not be seen: perception, schemas, concepts, mental processing were all set aside because they were considered scientifically unobservable.
John B. Watson was one of the earliest behaviorists: "Give me a dozen healthy infants and my own specified world to bring them up in and I'll guarantee to take any one at random and produce…"
B. F. Skinner, Watson's successor, was the inventor of the Skinner box and the proponent of operant conditioning.

Pavlov's Dogs
Digestive reflexes and salivation; "psychic secretions."
Ivan Pavlov was the Russian physiologist who may have been the first to study classical conditioning in animals, and his research on it remains the most famous. He received a Nobel Prize for his studies of the reflexes involved in digestion, and his initial discovery of what is now called classical conditioning emerged from those earlier studies of the digestive reflexes of dogs. Using permanently implanted tubes to collect salivary and stomach juices from dogs, he found that a dog salivates differently when different kinds of food are placed in its mouth. He then encountered a problem: dogs that had been given food on previous occasions in his experiments would begin to salivate before receiving food. Apparently, signals that regularly preceded food, such as the sight of the food or the sounds associated with its delivery, alerted the dogs to the upcoming stimulation and caused them to salivate. Pavlov called these "psychic secretions" and at first thought they were simply a source of experimental error, but he later decided to study them physiologically.

Neutral Stimulus—Bell
Does not normally elicit a response or reflex action by itself.
Examples: a bell ringing, a color, a furry object.

Unconditioned Stimulus—Food
Always elicits a reflex action: an unconditioned response.
Examples: food, a blast of air, a loud noise.

Unconditioned Response—Salivation
A naturally occurring response to an unconditioned stimulus.
Examples: salivation at the smell of food, an eye blink at a blast of air, the startle reaction in babies.

Conditioned Stimulus—Bell
The stimulus that was originally neutral becomes conditioned after it has been paired with the unconditioned stimulus.
It will eventually elicit the response by itself; when it does, that response is called the conditioned response.

Conditioned Response
The original unconditioned response becomes a conditioned response once it is elicited by the formerly neutral (now conditioned) stimulus.

[Figure: DiscPsy Figure 5.1, p. 167]

Classical Conditioning Phenomena
Extinction, spontaneous recovery, generalization, discrimination training.
Extinction: the decline and eventual disappearance of a response when it is no longer reinforced; an operantly conditioned response declines in rate and eventually disappears if it no longer results in a reinforcer (e.g., rats quit pressing levers if food pellets no longer appear). Extinction is not true "unlearning" of the response but rather a learned inhibition of responding.
Spontaneous recovery: the mere passage of time following extinction can partially renew the conditioned reflex.
Generalization: the phenomenon in which, after conditioning, stimuli that resemble the conditioned stimulus will elicit the conditioned response even though they themselves were never paired with the unconditioned stimulus.
Discrimination training: can abolish generalization between two stimuli; by not linking the unconditioned stimulus to the neutral stimulus that has been generalized to, the animal learns to discriminate.

Acquisition
Acquisition refers to the initial stage of learning/conditioning.
What gets "acquired"? The association between a neutral stimulus (NS) and an unconditioned stimulus (US).
How can we tell that acquisition has occurred? The response now gets triggered by the CS (drooling now gets triggered by the bell).
Timing: for the association to be acquired, the neutral stimulus (NS) needs to repeatedly appear before the unconditioned stimulus (US), about a half-second before in most cases. The bell must come right before the food.
Discussion prompts: In the acquisition graph, what does "strength" mean? It refers to the strength of the association between NS and US, as measured by the likelihood and intensity of the response (drooling) now being triggered by the former neutral, now conditioned, stimulus (the bell); more drooling means a stronger association. Why wouldn't it work if we rang the bell after presenting the food? Because conditioning lets us prepare for benefits or threats; a bell that comes after the food wouldn't predict anything. An exception to the half-second rule is taste aversion, covered in a later slide.
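
To make the acquisition-and-extinction curve concrete, here is a minimal sketch, not part of the original slides, that simulates how association strength rises during CS–US pairings and falls during extinction using a Rescorla-Wagner style update. The learning rate and asymptote values are illustrative assumptions, not figures from the lecture.

```python
# Minimal sketch (illustrative, not from the slides): Rescorla-Wagner style
# simulation of acquisition and extinction of a CS-US association.

def simulate(trials_with_us, trials_without_us, alpha=0.3, lambda_max=1.0):
    """Return association strength across acquisition and extinction trials.

    alpha: learning rate (assumed value); lambda_max: asymptotic strength.
    """
    strengths = []
    v = 0.0  # associative strength of the CS (the bell)
    # Acquisition: bell repeatedly followed by food (US present, target = lambda_max)
    for _ in range(trials_with_us):
        v += alpha * (lambda_max - v)   # prediction error drives learning
        strengths.append(v)
    # Extinction: bell presented alone (US absent, target = 0)
    for _ in range(trials_without_us):
        v += alpha * (0.0 - v)          # negative prediction error weakens the CR
        strengths.append(v)
    return strengths

if __name__ == "__main__":
    for trial, v in enumerate(simulate(10, 10), start=1):
        print(f"trial {trial:2d}: association strength = {v:.2f}")
```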

John B. Watson and Little Albert
Conditioned emotional responses, generalization, extinction.
Little Albert was an 11-month-old baby: loud sound → fear; loud sound paired with rat → fear; rat alone → fear.

[Figure: DiscPsy Figure 5.2a, p. 170]

Classical Conditioning and Drug Use
Regular use may produce a "placebo response" in which the user associates the sight, smell, and taste of the drug with its effect.

Cognitive Aspects of Classical Conditioning
Reliable and unreliable signals; organisms actively process information (Robert Rescorla).

Taste Aversions

Early Operant Conditioning
E. L. Thorndike (1898): puzzle boxes and cats.
[Diagram: on the first trial in the box, the situation (stimuli inside the puzzle box) evokes many responses: scratch at bars, push at ceiling, dig at floor, howl, etc.; after many trials in the box, the cat presses the lever.]
Thorndike put cats into puzzle boxes and made them find the solution to their quandary. Thorndike did not elicit a response as Pavlov had; he had to wait for the animal to emit the proper response, learn from it, and do it again. The trial-and-error process through which the animals learned the way to trip the latch led to what Thorndike called the law of effect: responses that produce a satisfying effect in a particular situation become more likely to occur again in that situation, and responses that produce a discomforting effect become less likely to occur again in that situation.
Instrumental responses: actions that function as tools to work some change in the environment (also called operant responses), e.g., flipping a switch to light a room, or a rat pushing a lever to receive food.
Operant conditioning: the learning process by which the consequence of an operant response affects the likelihood that the response will occur in the future.

Thorndike's Puzzle Box

B. F. Skinner's Operant Conditioning
Skinner did not like Thorndike's term "satisfying state of affairs"; he was interested in emitted behaviors.
Operant: a voluntary response that acts on the environment to produce consequences.
Skinner insisted on describing all behavior in purely observational terms and used the term reinforcer rather than satisfaction or reward. He studied operant behavior using his invention, the Skinner box.

Operant Conditioning
Reinforcement: the occurrence of a stimulus following a response that increases the likelihood of the response being repeated.

[Figure: DiscPsy Figure 5.4, p. 180]

Reinforcers
Primary reinforcer: a stimulus that is inherently reinforcing for a species (biological necessities).
Conditioned reinforcer: a stimulus that has acquired reinforcing value by being associated with a primary reinforcer.

Reinforcement
Reinforcement: feedback from the environment that makes a behavior more likely to be done again.
Positive (+) reinforcement: the reward is adding something desirable.
Negative (−) reinforcement: the reward is ending something unpleasant.
[Photo caption: this meerkat has just completed a task out in the cold; for the meerkat, the warm light is desirable.]
A preview of an upcoming concept: taking the warm light away from the meerkat could be used as a punishment (a negative punishment, because it takes something away).

Behavior shaped by accidental reinforcement

Punishment
The presentation of a stimulus following a behavior that acts to decrease the likelihood that the behavior will be repeated.

Don't think about the beach
Don't think about the waves, the sand, the towels and sunscreen, the sailboats and surfboards. Don't think about the beach.
Are you obeying the instruction? Would you obey this instruction more if you were punished for thinking about the beach?

Problem: punishment focuses on what NOT to do, which does not guide people to a desired behavior. Even if undesirable behaviors do stop, another problem behavior may emerge that serves the same purpose, especially if no replacement behaviors are taught and reinforced.
Lesson: in order to teach desired behavior, reinforce what's right more often than you punish what's wrong.
Discussion prompt: think of lessons your parents tried to teach you in the form of "Don't…," then restate each as what you SHOULD do. For example, "don't run across the street without looking" might become "look both ways before you cross the street."

Problems with Punishment
Does not teach or promote alternative, acceptable behavior.
May produce undesirable results such as hostility, passivity, or fear.
Its effects are likely to be temporary.
May model aggression.

[Figure: DiscPsy Figure 5.5, p. 181]

Operant Conditioning Terms
Shaping, extinction, spontaneous recovery, discriminative stimuli, schedules of reinforcement.
We are pulled as well as pushed by events in our environment: we do not just react to stimuli, we also behave in ways that produce or obtain certain environmental changes or stimuli.
Shaping: the process in which successively closer approximations to the desired response are reinforced until the response finally occurs.
Consequences: what happens after a response.
Positive reinforcement: the arrival of some stimulus following a response, which makes the response more likely to occur; the stimulus is called a positive reinforcer.
Negative reinforcement: the removal of some stimulus (a negative reinforcer) following a response, which likewise makes the response more likely to occur.
Punishment: the opposite of reinforcement; a consequence of a response that decreases the likelihood that the response will occur.

Applications of Operant Conditioning
School: long before tablet computers, B. F. Skinner proposed machines that would reinforce students for correct responses, allowing students to improve at different rates and work on different learning goals.
Sports: athletes improve most with a shaping approach in which they are reinforced for performance that comes closer and closer to the target skill (e.g., hitting pitches that are progressively faster).
Work: some companies make pay a function of performance or company profit rather than seniority; they target more specific behaviors to reinforce.
Another application: kids with severe, nonverbal autism are sometimes treated at an early age with Applied Behavior Analysis using Discrete Trial Training, in which specific behaviors such as sitting to eat, making eye contact, or using words to get needs met are targets for daily sessions of repeated shaping and reinforcement.

Discrimination
Examples: a bomb-finding rat; a manatee that selects shapes.
Discrimination: the ability to become more and more specific about which situations trigger a response. Shaping can increase discrimination if reinforcement only comes for certain discriminative stimuli. For example, dogs, rats, and even spiders can be trained to search for very specific smells, from drugs to explosives; pigeons, seals, and manatees have been trained to respond to specific shapes, colors, and categories.

Reinforcement Schedules
Continuous: every correct response is reinforced; a good way to get a low-frequency behavior to occur.
Partial: only some correct responses are reinforced; a good way to make a behavior resistant to extinction.

Partial Schedules—Ratio
Ratio schedules are based on the number of responses emitted.
Fixed ratio (FR): a reinforcer is delivered after a certain (fixed) number of correct responses.
Variable ratio (VR): a reinforcer is delivered after an average number of responses, but the number varies from trial to trial.

[Figure: DiscPsy Figure 5.7, p. 187]

Partial Schedules—Interval
Interval schedules are based on time.
Fixed interval (FI): a reinforcer is delivered for the first response after a fixed period of time has elapsed.
Variable interval (VI): a reinforcer is delivered for the first response after an average amount of time has elapsed; the interval differs between trials.
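
As an illustration of how the four partial schedules decide when a reinforcer is delivered, here is a minimal sketch, not part of the original slides; the class names and parameter values are hypothetical and chosen only to mirror the definitions above.

```python
# Minimal sketch (illustrative, not from the slides): the four partial
# reinforcement schedules deciding whether a given response is reinforced.
import random
import time

class FixedRatio:
    def __init__(self, n):            # reinforce every n-th response
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True               # deliver a reinforcer
        return False

class VariableRatio:
    def __init__(self, mean_n):       # reinforce after a varying number of responses, averaging mean_n
        self.mean_n = mean_n
        self.target = random.randint(1, 2 * mean_n - 1)
        self.count = 0
    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = random.randint(1, 2 * self.mean_n - 1)
            return True
        return False

class FixedInterval:
    def __init__(self, seconds):      # first response after the fixed interval is reinforced
        self.seconds = seconds
        self.last = time.monotonic()
    def respond(self):
        now = time.monotonic()
        if now - self.last >= self.seconds:
            self.last = now
            return True
        return False

class VariableInterval:
    def __init__(self, mean_seconds): # like FI, but the interval varies around a mean
        self.mean = mean_seconds
        self.last = time.monotonic()
        self.wait = random.uniform(0, 2 * mean_seconds)
    def respond(self):
        now = time.monotonic()
        if now - self.last >= self.wait:
            self.last = now
            self.wait = random.uniform(0, 2 * self.mean)
            return True
        return False

# Usage: on an FR-5 schedule, only every fifth lever press earns a pellet.
schedule = FixedRatio(5)
presses = [schedule.respond() for _ in range(12)]
print(presses)  # reinforcers arrive on presses 5 and 10; all other entries are False
```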

Why we might work for money
If we repeatedly introduce a neutral stimulus before a reinforcer, this stimulus acquires the power to be used as a reinforcer.
A primary reinforcer is a stimulus that meets a basic need or otherwise is intrinsically desirable, such as food, sex, fun, attention, or power.
A secondary/conditioned reinforcer is a stimulus, such as a rectangle of paper with numbers on it (money), that has become associated with a primary reinforcer (money buys food, builds power).
Discussion prompt: what learning process allows money to serve as a reinforcer when it is just a fancy piece of paper (cash or check) or numbers on a screen? Several answers can be "correct":
1. Higher-order conditioning.
2. Classical conditioning, if the pay is presented along with other benefits, or at the store, when holding cash immediately predicts getting something we already like.
3. Operant conditioning, when earning a lot of cash later gets reinforced by the things money can buy.
4. Cognitive learning, when we use our rational ability to plan and make predictions to understand that money represents the ability to get things we want.

How often should we reinforce?
Do we need to give a reward every single time? Or is that even best? B. F. Skinner experimented with the effects of giving reinforcements in different patterns or "schedules" to determine what worked best to establish and maintain a target behavior.
In continuous reinforcement (giving a reward after the target behavior every single time), the subject acquires the desired behavior quickly.
In partial/intermittent reinforcement (giving rewards only part of the time), the target behavior takes longer to be acquired/established but persists longer without reward.
A pigeon given a reward only sometimes for pecking a key later pecked 150,000 times without getting a reward. Thinking like this pigeon: if the reward sometimes comes even after not showing up for a while, maybe the 150,001st peck will be the one that gets the reward. Perhaps this is why neglected kids hold out hope of parental love and keep trying for it even after years without a response; maybe on some rare occasion there was an apparently loving moment.

Cognitive Aspects of Operant Conditioning
Cognitive map: a mental representation of the layout of a familiar environment.
Latent learning: learning that occurs in the absence of reinforcement but is not demonstrated until a reinforcer is available.
Learned helplessness: the phenomenon in which exposure to inescapable and uncontrollable aversive events produces passive behavior.

Learned Helplessness can be produced when negative events are perceived as uncontrollable

Biological Predispositions
Animal training issues.
Instinctive drift: naturally occurring behaviors that interfere with operant responses (example: raccoons trained with coins rubbed them instead of depositing them).

Classical Conditioning vs. Operant Conditioning
[Table: DiscPsy, p. 193]

Observational Learning
Observation, modeling, imitation.
Albert Bandura and the Bobo doll study.

[Figure: DiscPsy, p. 195]

Antisocial Effects of Observational Learning
What happens when we learn from models who demonstrate antisocial behavior, actions that are harmful to individuals and society?
Children who witness violence in their homes, but are not physically harmed themselves, may hate violence yet still become violent more often than the average child. Perhaps this is a result of "the Bobo doll effect": under stress, we do what has been modeled for us.

Media Models of Violence
Do we learn antisocial behavior, such as violence, from indirect observations of others in the media?
Research shows that viewing media violence leads to increased aggression (fights) and reduced prosocial behavior (such as helping an injured person). This violence-viewing effect might be explained by imitation, and also by desensitization toward pain in others.

Famous last words???
"Do what I say, not what I do…"
"This will teach you to hit your brother…"
"Why do you do that? You know you get in trouble for it…"