Psychological Science ©2015 W. W. Norton & Company, Inc.

1 Psychological Science ©2015 W. W. Norton & Company, Inc.
Gazzaniga • Heatherton • Halpern Psychological Science FIFTH EDITION Chapter 6 Learning ©2015 W. W. Norton & Company, Inc.

2 6.1 How Do We Learn? Learning Objectives
Define learning. Identify three types of learning processes. Describe the nonassociative learning processes: habituation and sensitization. Explain the significance of each.

3 Learning Results from Experience
Learning: a relatively enduring change in behavior, resulting from experience Associations develop through conditioning, a process in which environmental stimuli and behavioral responses become connected

4 Learning Results from Experience
Learning theory arose in the early twentieth century in response to Freudian and introspective approaches. John B. Watson argued that only observable behavior was a valid indicator of psychological activity, and that the infant mind was a tabula rasa, or blank slate. He stated that the environment and its effects were the sole determinants of learning. Behaviorism was the dominant paradigm into the 1960s, and it had a huge influence on every area of psychology.

5 There Are Three Types of Learning
Nonassociative learning: responding after repeated exposure to a single stimulus or event Associative learning: linking two stimuli, or events, that occur together Observational learning: acquiring or changing a behavior after exposure to another individual performing that behavior

6 FIGURE 6.4 Types of Learning 6

7 Habituation and Sensitization Are Simple Models of Learning
Habituation: a decrease in behavioral response after repeated exposure to a stimulus, especially one that is neither harmful nor rewarding Dishabituation: an increase in a response because of a change in something familiar

8 FIGURE 6.5 Types of Nonassociative Learning 8

9 FIGURE 6.6 Habituation Suppose you live or work in a noisy environment. You learn to ignore the constant noise because you do not need to respond to it. 9

10 Habituation and Sensitization Are Simple Models of Learning
Sensitization: an increase in behavioral response after exposure to a stimulus Stimuli that most often lead to sensitization are those that are threatening or painful. 10
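The two nonassociative processes can be contrasted with a toy simulation in which repeated exposure to a harmless stimulus decays the response (habituation) while a threatening stimulus amplifies it (sensitization). This is an illustrative sketch, not a model from the text; the decay and gain parameters are arbitrary.

```python
def habituate(response, decay=0.7):
    """Habituation: each exposure to a harmless stimulus weakens the response."""
    return response * decay

def sensitize(response, gain=1.5, ceiling=10.0):
    """Sensitization: a threatening or painful stimulus amplifies the response."""
    return min(response * gain, ceiling)

# Ten presentations of a harmless noise: the startle response fades away.
r = 1.0
for _ in range(10):
    r = habituate(r)
# r is now about 0.03 -- the noise is effectively ignored.
```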

11 FIGURE 6.8 Sensitization Suppose you are in the backseat with your brother. He keeps annoying you, until finally you threaten to strike him. Your behavioral response (threatening) in response to your annoying brother (aversive stimulus) is an example of sensitization. 11

12 6.2 How Do We Learn Predictive Associations?
Learning Objectives Define classical conditioning. Differentiate between US, UR, CS, and CR. Describe acquisition, extinction, spontaneous recovery, generalization, discrimination, second-order conditioning, and blocking. Describe the Rescorla-Wagner model of classical conditioning, including the role of prediction error and dopamine in the strength of associations. Describe the role of conditioning in the development and treatment of phobias and addictions.

13 6.2 How Do We Learn Predictive Associations?
We learn predictive associations through conditioning, the process that connects environmental stimuli to behavior. Psychologists study two types of associative learning. Classical conditioning Operant conditioning 13

14 FIGURE 6.9 Two Types of Associative Learning 14

15 Behavioral Responses Are Conditioned
Watson was influenced by Ivan Pavlov’s research on the salivary reflex, an automatic response that occurs when a food stimulus is presented to a hungry animal. Pavlov won a Nobel Prize in 1904 for his research on the digestive system. Pavlov noticed that the dogs salivated as soon as they saw the bowls that usually contained food, suggesting a learned response.

16 Behavioral Responses Are Conditioned
Twitmyer made a similar observation of the knee-jerk reflex in humans: after the reflex was repeatedly paired with a bell, subjects could be conditioned to produce the knee-jerk response to the bell alone.

17 FIGURE 6.10 Pavlov’s Apparatus and Classical Conditioning (a) Ivan Pavlov, pictured here with his colleagues and one of his canine subjects, conducted groundbreaking work on classical conditioning. (b) Pavlov’s apparatus collected and measured a dog’s saliva. 17

18

19 Pavlov’s Experiments Classical (Pavlovian) conditioning: a neutral object comes to elicit a response when it is associated with a stimulus that already produces that response. A typical Pavlovian experiment involves conditioning trials, in which a neutral stimulus and an unconditioned stimulus are paired to produce a reflex (e.g., salivation). Neutral stimulus: anything the animal can see or hear, as long as it is not associated with the reflex being tested (e.g., a ringing bell) Unconditioned stimulus (US): a stimulus that elicits a response, such as a reflex, without any prior learning (e.g., food)

20 Pavlov’s Experiments Critical trials: neutral stimulus alone is tested, and effect on the reflex is measured 20

21 Terminology of Pavlov’s Experiments
Unconditioned response (UR): a response that does not have to be learned, such as a reflex Unconditioned stimulus (US): a stimulus that elicits a response, such as a reflex, without any prior learning

22 Terminology of Pavlov’s Experiments
Conditioned stimulus (CS): a stimulus that elicits a response only after learning has taken place Conditioned response (CR): a response to a conditioned stimulus; a response that has been learned 22

23

24 Acquisition, Second-Order Conditioning, Extinction, and Spontaneous Recovery
Pavlov was influenced by Darwin and believed that conditioning is the basis of adaptive behaviors. Acquisition: the gradual formation of an association between the conditioned and unconditioned stimuli The critical element in the acquisition of a learned association is time, or contiguity.

25 Acquisition, Second-Order Conditioning, Extinction, and Spontaneous Recovery
The CR is stronger when there is a very brief delay between the CS and the US. Scary music begins to play right before a frightening scene in a movie—not during or after. 25

26 Acquisition, Second-Order Conditioning, Extinction, and Spontaneous Recovery
Animals must learn when associations are no longer adaptive. Extinction: a process in which the conditioned response is weakened when the conditioned stimulus is repeated without the unconditioned stimulus

27 Acquisition, Second-Order Conditioning, Extinction, and Spontaneous Recovery
Spontaneous recovery: a process in which a previously extinguished conditioned response reemerges after the presentation of the conditioned stimulus The recovery will fade unless the CS is again paired with the US. 27

28 Acquisition, Second-Order Conditioning, Extinction, and Spontaneous Recovery
Extinction inhibits the associative bond, but does not eliminate it. Second-order conditioning: a CS becomes associated with other stimuli associated with the US. This phenomenon helps account for the complexity of learned associations. 28

29 FIGURE 6.12 Acquisition, Extinction, and Spontaneous Recovery 29

30 Generalization and Discrimination
Stimulus generalization: learning that occurs when stimuli that are similar, but not identical, to the conditioned stimulus produce the conditioned response Stimulus discrimination: a differentiation between two similar stimuli when only one of them is consistently associated with the unconditioned stimulus

31 FIGURE 6.13 Stimulus Generalization 31

32 FIGURE 6.14 Stimulus Discrimination 32

33 Classical Conditioning Involves More Than Events Occurring at the Same Time
Pavlov’s original explanation for classical conditioning was that any two events presented in contiguity would produce a learned association. Pavlov and his followers believed that the association’s strength was determined by factors such as the intensity of the conditioned and unconditioned stimuli. 33

34 Classical Conditioning Involves More Than Events Occurring at the Same Time
However, in the mid-1960s, a number of challenges to Pavlov’s theory suggested that some conditioned stimuli were more likely than others to produce learning. Contiguity was not sufficient to create CS-US associations. 34

35 Evolutionary Significance
Psychologist John Garcia and colleagues showed that certain pairings of stimuli are more likely to become associated than others. Conditioned taste aversion: the association between eating a food and getting sick The response occurs even if the illness was caused by a virus or some other condition, and it is especially likely if the food was not part of the person’s usual diet. A food aversion can be formed in one trial.

36 Evolutionary Significance
Animals that associate a certain flavor with illness, and therefore avoid that flavor, are more likely to survive and pass along their genes. Learned adaptive responses may reflect the survival value that different auditory and visual stimuli have based on potential dangers associated with the stimuli. 36

37 Evolutionary Significance
Biological preparedness: Psychologist Martin Seligman argued that animals are genetically programmed to fear specific objects. People are predisposed to wariness of outgroup members.

38 FIGURE 6.16 Biological Preparedness Animals have evolved to be able to detect threats. Thus, (a) we will quickly see the snake in this group of images, and (b) we will have a harder time detecting the flowers in this group. In both cases, the snakes grab our attention (Hayakawa, Kawai, & Masataka, 2011). 38

39

40 Learning Involves Expectancies and Prediction
Classical conditioning is a way that animals come to predict the occurrence of events. This predictive quality prompted psychologists to try to understand the mental processes that underlie conditioning. Robert Rescorla argued that for learning to take place, the conditioned stimulus must accurately predict the unconditioned stimulus.

41 Learning Involves Expectancies and Prediction
Rescorla-Wagner model: a cognitive model of classical conditioning; it holds that the strength of the CS-US association is determined by the extent to which the unconditioned stimulus is unexpected. 41

42 Learning Involves Expectancies and Prediction
Other aspects of classical conditioning are also consistent with the Rescorla-Wagner model. Prediction error: the difference between the expected and actual outcomes A positive prediction error strengthens the association between the CS and the US. A negative prediction error weakens the CS-US relationship.
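The model’s core update rule can be sketched in a few lines of Python. This is a simplified illustration (the slides present the model conceptually, not as code): associative strength V changes on each trial in proportion to the prediction error, so acquisition and extinction both fall out of one equation. The learning rate and maximum strength are arbitrary values chosen for the demo.

```python
def rescorla_wagner(v, us_present, alpha=0.3, lam=1.0):
    """One conditioning trial under a simplified Rescorla-Wagner rule.
    The prediction error is the outcome (lam if the US occurs, else 0)
    minus the current expectation v."""
    outcome = lam if us_present else 0.0
    return v + alpha * (outcome - v)

v = 0.0
for _ in range(20):                    # acquisition: CS is paired with the US
    v = rescorla_wagner(v, us_present=True)
after_acquisition = v                  # climbs toward 1.0 as the error shrinks

for _ in range(20):                    # extinction: CS is presented alone
    v = rescorla_wagner(v, us_present=False)
after_extinction = v                   # negative errors drive v back toward 0.0
```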

43 Learning Involves Expectancies and Prediction
Blocking effect: once a conditioned stimulus is learned, it can prevent the acquisition of a new conditioned stimulus. Blocking is similar to second-order conditioning, but it involves a different process.
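Blocking falls out of the Rescorla-Wagner account when the prediction error for a compound stimulus is computed from the summed strengths of all cues present. The following self-contained sketch uses illustrative parameters, not values from the text:

```python
ALPHA, LAM = 0.3, 1.0   # arbitrary learning rate; maximum associative strength

def compound_trial(strengths, cues_present):
    """Each present cue is updated by the shared prediction error, which is
    computed from the summed strength of all present cues."""
    error = LAM - sum(strengths[c] for c in cues_present)
    for c in cues_present:
        strengths[c] += ALPHA * error

v = {"light": 0.0, "tone": 0.0}
for _ in range(30):                      # Phase 1: light alone predicts the US
    compound_trial(v, ["light"])
for _ in range(30):                      # Phase 2: light + tone compound
    compound_trial(v, ["light", "tone"])
# v["tone"] stays near 0: the light already predicts the US,
# so almost no error is left for the tone to absorb -- blocking.
```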

44 FIGURE 6.17 Rescorla-Wagner Model The Rescorla-Wagner model of learning emphasizes prediction error. (a) Here a dog associates the sound of an electric can opener with the arrival of food. (b) With the substitution of a manual can opener for the electric one, the dog is initially surprised. What happened to the reliable predictor of the dog’s food? (c) This prediction error causes the dog to check the environment for a new stimulus. When the dog comes to associate the manual can opener with the arrival of food, the new stimulus has become the better predictor of the expected event: time to eat! 44

45 Dopamine and Prediction Error
What biological mechanisms are in effect during such learning? Researchers examined how dopamine neurons respond during conditioning. Prediction error signals alert us to important events in the environment. Optogenetics has provided recent support for the prediction error model: by using optogenetics to activate dopamine neurons, researchers were able to overcome the blocking effect.

46 FIGURE 6.18 Prediction Error and Dopamine Activity Dopamine activity in the brain signals the receipt of a reward. (a) The blue line clearly shows a spike in dopamine activity. This activity resulted from a positive prediction error after the unexpected arrival of the US. (b) Once the US was associated with the CS, the spike in dopamine activity occurred after the arrival of the CS but not after the arrival of the expected US. (c) Dopamine activity continued after the arrival of the CS. However, once the US no longer appeared, negative prediction error resulted in decreased dopamine activity. 46

47 Phobias and Addictions Have Learned Components
Classical conditioning helps explain many behavioral phenomena. Among the examples are phobias and addictions.

48 Phobias and Their Treatment
Phobia: an acquired fear out of proportion to the real threat of an object or of a situation Fear conditioning: the process of classically conditioning animals to fear neutral objects The responses include specific physiological and behavioral reactions. Freezing: may be a hardwired response to fear that helps animals deal with predators

49 Phobias and Their Treatment
In 1919, J. B. Watson became one of the first researchers to demonstrate the role of classical conditioning in the development of phobias by devising the “Little Albert” experiment. At the time, the prominent theory of phobias was based on Freudian ideas about unconscious repressed sexual desires. Watson proposed that phobias could be explained by simple learning principles, such as classical conditioning.

50 Phobias and Their Treatment
The “Little Albert” Research Method Little Albert (11 months old) was presented with neutral objects (a white rat, rabbit, dog, and costume masks) that provoked a neutral response. During conditioning trials, when Albert reached for the white rat (CS), a loud clanging sound (US) scared him (UR).

51 Phobias and Their Treatment
Results: eventually, the pairing of the rat (CS) and the clanging sound (US) led to the rat’s producing fear (CR) on its own. The fear response generalized to other stimuli presented with the rat initially, such as the costume masks. Conclusion: classical conditioning can cause people to fear neutral objects. 51

52 Phobias and Their Treatment
Watson planned to conduct extinction trials to remove the learned phobias, but Albert’s mother removed the child from the study. Is this type of research ethical? Watson’s colleague, Mary Cover Jones, used classical conditioning techniques to develop effective behavioral therapies, treating phobias in 3-year-old Peter. Counterconditioning: exposing a patient to small doses of the feared stimulus while he or she engages in an enjoyable task

53

54 Drug Addiction Classical conditioning also plays an important role in drug addiction. Environmental cues associated with drug use can induce conditioned cravings. Unsatisfied cravings may result in withdrawal, an unpleasant state of tension and anxiety, coupled with changes in heart rate and blood pressure. The sight of drug cues leads to activation of the prefrontal cortex and various regions of the limbic system and produces an expectation that the drug high will follow.

55 Drug Addiction Psychologist Shepard Siegel believed exposing addicts to drug cues was an important part of treating addiction. Exposure helps extinguish responses to the cues and prevents them from triggering cravings.

56 Drug Addiction Siegel and his colleagues conducted research into the relationship between drug tolerance and setting. Tolerance is greatest in the setting where the drug is usually taken: the body has learned to expect the drug in that location and compensates by altering its neurochemistry or physiology to metabolize it. Conversely, if addicts take their usual large doses in novel settings, they are more likely to overdose because their bodies will not compensate sufficiently.

57 6.3 How Does Operant Conditioning Change Behavior?
Learning Objectives Define operant conditioning. Distinguish between positive reinforcement, negative reinforcement, positive punishment, and negative punishment. Distinguish between schedules of reinforcement. Identify biological and cognitive factors that influence operant conditioning.

58 6.3 How Does Operant Conditioning Change Behavior?
Operant Conditioning (Instrumental Conditioning): a learning process in which the consequences of an action determine the likelihood that it will be performed in the future B. F. Skinner chose the term operant to express the idea that animals operate on their environments to produce effects. 58

59 6.3 How Does Operant Conditioning Change Behavior?
Edward Thorndike performed some of the first carefully controlled experiments in comparative animal psychology, using a puzzle box. Law of Effect: any behavior that leads to a “satisfying state of affairs” is likely to occur again, and any behavior that leads to an “annoying state of affairs” is less likely to occur again.

60 FIGURE 6.21 Thorndike’s Puzzle Box (a) Thorndike used puzzle boxes, such as the one depicted here, (b) to assess learning in animals. 60

61

62 FIGURE 6.22 Law of Effect By studying cats’ attempts to escape from a puzzle box, Thorndike was able to formulate his general theory of learning. 62

63 Reinforcement Increases Behavior
Thirty years after Thorndike, Skinner developed a more formal learning theory based on the law of effect. He objected to the subjective aspects of Thorndike’s law of effect: states of “satisfaction” are not observable empirically.

64 Reinforcement Increases Behavior
Skinner believed that behavior occurs because it has been reinforced. Reinforcer: a stimulus that follows a response and increases the likelihood that the response will be repeated. 64

65 The Skinner Box An operant chamber that allowed repeated conditioning trials without requiring interaction from the experimenter Contained one lever connected to a food supply and another connected to a water supply

66 FIGURE 6.20 B. F. Skinner B. F. Skinner studies an animal’s operations on its laboratory environment. 66

67 FIGURE 6.23 Operant Chamber This diagram shows B. F. Skinner’s operant chamber. 67

68 Shaping Sometimes animals take a long time to perform the precise desired action. What can be done to make them act more quickly? Shaping: an operant-conditioning technique that consists of reinforcing behaviors that are increasingly similar to the desired behavior Successive approximations: any behavior that even slightly resembles the desired behavior

69 Reinforcers Can Be Conditioned
Primary reinforcers: satisfy biological needs such as food or water Secondary reinforcers: events or objects established through classical conditioning that serve as reinforcers but do not satisfy biological needs (e.g., money or compliments)

70 FIGURE 6.25 Superstitions According to superstition, bad luck will come your way if a black cat crosses your path or if you walk under a ladder. 70

71 What to Believe? Using Psychological Reasoning
Seeing Relationships That Do Not Exist: How Do Superstitions Start? The list of people’s superstitions is virtually endless, and culture influences specific superstitions. In North America and Europe, the number 13 is considered unlucky; in China, Japan, Korea, and Hawaii, the number 4. Many sports stars, including Michael Jordan and Wade Boggs, engage in superstitious behaviors.

72 What to Believe? Using Psychological Reasoning
The Scientific Study of Superstition B. F. Skinner started the scientific study of superstitious behavior in 1948, using pigeons as subjects. The pigeons developed a number of superstitious behaviors that they normally would not perform. Because these pigeons were performing particular actions when the reinforcers were given, their actions were accidentally reinforced. This type of learning is called autoshaping.

73 What to Believe? Using Psychological Reasoning
Associating Events that Occur Together in Time Both animals and humans have a tendency to associate events that occur together in time. This tendency is incredibly strong because the brain is compelled to figure things out. Pigeons develop behaviors that look like superstitions and people look for reasons to explain outcomes; the observed association serves that purpose. 73

74 What to Believe? Using Psychological Reasoning
Associating Events That Occur Together in Time Critical thinking requires us to understand psychological reasoning and be aware of the tendency to associate events with other events that occur at the same time. 74

75 Reinforcer Potency Premack theorized about how a reinforcer’s value could be determined. The key is the amount of time an organism, when free to do anything, engages in a specific behavior associated with the reinforcer. Premack principle: using a more valued activity can reinforce the performance of a less valued activity.

76 Positive and Negative Reinforcement
Reinforcement—positive or negative— increases the likelihood of a behavior. Positive reinforcement: the administration of a stimulus to increase the probability of a behavior’s being repeated Negative reinforcement: the removal of a stimulus to increase the probability of a behavior’s being repeated

77 FIGURE 6.26 Positive Reinforcement and Negative Reinforcement (a) In positive reinforcement, the response rate increases because responding causes the stimulus to be given. (b) In negative reinforcement, the response rate increases because responding causes the stimulus to be removed. 77

78 Operant Conditioning Is Influenced by Schedules of Reinforcement
How often should reinforcers be given? Continuous reinforcement: a type of learning in which behavior is reinforced each time it occurs Partial reinforcement: a type of learning in which behavior is reinforced intermittently Partial reinforcement’s effect on conditioning depends on the reinforcement schedule.

79 Operant Conditioning Is Influenced by Schedules of Reinforcement
Partial reinforcement can be administered according to either the number of behavioral responses or the passage of time. Ratio schedule: Reinforcement is based on the number of times the behavior occurs. Interval schedule: Reinforcement is provided after a specific unit of time. Ratio reinforcement generally leads to greater responding than does interval reinforcement.

80 Operant Conditioning Is Influenced by Schedules of Reinforcement
Partial reinforcement can also be given on a fixed schedule or a variable schedule. Fixed schedule: Reinforcement is provided after a specific number of occurrences or after a specific amount of time. Variable schedule: Reinforcement is provided at different rates or at different times.

81 Schedules of Reinforcement
Fixed Interval schedule (FI): occurs when reinforcement is provided after a certain amount of time has passed Variable Interval schedule (VI): occurs when reinforcement is provided after the passage of time, but the time is not regular 81

82 FIGURE 6.27 Fixed Interval Schedule Imagine a cat learning to perform “feed me” behaviors right before the two feeding times each day. The reinforcer (slash mark) is the food. 82

83 FIGURE 6.28 Variable Interval Schedule Imagine yourself checking for texts and emails frequently throughout the day. The reinforcer (slash) is a message from a friend.

84 Schedules of Reinforcement
Fixed Ratio schedule (FR): occurs when reinforcement is provided after a certain number of responses have been made Variable Ratio schedule (VR): occurs when reinforcement is provided after an unpredictable number of responses 84
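The difference between the two ratio schedules can be made concrete with a short sketch (the helper names are my own; the schedule definitions follow the slides): a fixed-ratio schedule reinforces exactly every nth response, while a variable-ratio schedule reinforces after an unpredictable number of responses that averages n.

```python
import random

def fixed_ratio(n):
    """FR-n: reinforce exactly every nth response."""
    count = 0
    while True:
        count += 1
        yield count % n == 0

def variable_ratio(n, rng):
    """VR-n: reinforce each response with probability 1/n, so the number of
    responses required is unpredictable but averages n."""
    while True:
        yield rng.random() < 1.0 / n

fr = fixed_ratio(5)
fr_rewards = [next(fr) for _ in range(20)]    # reinforced on responses 5, 10, 15, 20

vr = variable_ratio(5, random.Random(0))
vr_hits = sum(next(vr) for _ in range(1000))  # roughly 200 of 1,000 responses
```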

85 FIGURE 6.29 Fixed Ratio Schedule Imagine factory workers who are paid based on making a certain number of objects. The reinforcer (slash mark) is payment. 85

86 FIGURE 6.30 Variable Ratio Schedule Imagine putting a lot of money into a slot machine in the hope that eventually you will win. The reinforcer (slash mark) is a payoff. 86

87 Schedules of Reinforcement
Continuous reinforcement is highly effective for teaching a behavior. If the reinforcement is stopped, however, the behavior extinguishes quickly. Partial-reinforcement extinction effect: the greater persistence of behavior under partial reinforcement than under continuous reinforcement This effect helps explain why gambling is so addictive.

88 Positive and Negative Punishment
Punishment reduces the probability that a behavior will recur. Positive punishment: the administration of a stimulus to decrease the probability of a behavior’s recurring Negative punishment: the removal of a stimulus to decrease the probability of a behavior’s recurring

89 FIGURE 6.31 Negative and Positive Reinforcement, Negative and Positive Punishment Use this chart to help solidify your understanding of these very important terms. 89

90 Effectiveness of Parental Punishment
For punishment to be effective, it must be reasonable, unpleasant, and applied immediately so that the relationship between the unwanted behavior and the punishment is clear. Punishment often fails to offset the reinforcing aspects of the undesired behavior. 90

91 Effectiveness of Parental Punishment
Research indicates that physical punishment is often ineffective, compared with grounding and time-outs. Many psychologists believe that positive reinforcement is the most effective way of increasing desired behaviors while encouraging positive parent/child bonding. 91

92 FIGURE 6.32 Legality of Spanking These maps compare (a) the United States and (b) Europe in terms of the legality of spanking children. 92

93 Behavior Modification
Behavior modification: the use of operant-conditioning techniques to eliminate unwanted behaviors and replace them with desirable ones Token economies operate on the principle of secondary reinforcement. Tokens are earned for completing tasks and lost for bad behavior. Tokens can later be traded for objects or privileges.
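A token economy can be sketched as a small ledger class. This is a hypothetical illustration of the principle just described (the class and method names are my own, not from the text): tokens are earned for completing tasks, lost for unwanted behavior, and later traded for objects or privileges.

```python
class TokenEconomy:
    """Minimal token-economy ledger built on secondary reinforcement."""

    def __init__(self):
        self.tokens = 0

    def complete_task(self, reward=1):
        """Earning tokens positively reinforces the desired behavior."""
        self.tokens += reward

    def rule_violation(self, fine=1):
        """Losing tokens is negative punishment for unwanted behavior."""
        self.tokens = max(0, self.tokens - fine)

    def redeem(self, cost):
        """Trade tokens for a backup reinforcer (an object or privilege)."""
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

ledger = TokenEconomy()
for _ in range(5):
    ledger.complete_task()       # 5 tokens earned
ledger.rule_violation()          # 1 token lost
granted = ledger.redeem(cost=3)  # True; 1 token remains
```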

94 Biology and Cognition Influence Operant Conditioning
Behaviorists such as Skinner believed that all behavior could be explained by straightforward conditioning principles. However, a great deal about behavior remains unexplained. Biology constrains learning, and reinforcement does not always have to be present for learning to take place.

95 Biological Constraints
Animals have a hard time learning behaviors that run counter to their evolutionary adaptation. Breland and Breland used operant-conditioning techniques to train animals but ran into difficulty when the tasks were incompatible with innate adaptive behaviors.

96 Biological Constraints
Conditioning is most effective when the association between the response and the reinforcement is similar to the animal’s built-in predispositions. Bolles argued that animals have built-in defense reactions to threatening stimuli. 96

97 Acquisition/Performance Distinction
Tolman’s studies involved rats running through mazes. Cognitive map: a visual/spatial mental representation of an environment The presence of reinforcement does not adequately explain insight learning, but it helps determine whether the behavior will be subsequently repeated. 97

98 Acquisition/Performance Distinction
Tolman argued that learning can take place without reinforcement. Latent learning: takes place in the absence of reinforcement Insight learning: A solution suddenly emerges after a period either of inaction or of contemplation.

99

100 Dopamine Activity Underlies Reinforcement
People often use the term reward as a synonym for positive reinforcement. Skinner and other traditional behaviorists defined reinforcement strictly in terms of whether it increased behavior. The neurotransmitter dopamine is involved in addictive behavior and plays an important role in reinforcement. 100

101 Dopamine Activity Underlies Reinforcement
When hungry rats are given food, they experience an increased dopamine release in the nucleus accumbens, a structure that is part of the limbic system: The greater the hunger, the greater the dopamine release. More dopamine is released under deprived conditions than under nondeprived conditions. 101

102 Dopamine Activity Underlies Reinforcement
In operant conditioning, dopamine release sets the value of a reinforcer, and blocking dopamine decreases reinforcement. Dopamine blockers can also help people with Tourette’s syndrome regulate their involuntary body movements.

103 Dopamine Activity Underlies Reinforcement
Robinson and Berridge introduced an important distinction between the wanting and liking aspects of reward. A smoker may want a cigarette but not especially enjoy it. Dopamine appears to be especially important in wanting a reward. 103

104 6.4 How Does Watching Others Affect Learning?
Learning Objectives Define observational learning. Generate examples of observational learning, modeling, and vicarious learning. Discuss contemporary evidence regarding the role of mirror neurons in learning.

105 Learning Can Occur Through Observation and Imitation
Observational learning: the acquisition or modification of a behavior after exposure to another individual performing that behavior (aka Social Learning) Observational learning is a powerful adaptive tool for humans and other animals.

106 Bandura’s Observational Studies
Bandura’s studies suggest that exposing children to violence may encourage them to act aggressively.

107

108 Modeling (Demonstration and Imitation)
Modeling: the imitation of observed behavior Modeling is effective only if the observer is physically capable of imitating the behavior. Imitation is much less common in nonhuman animals than in humans. Adolescents who associate smoking with admirable figures are more likely to begin smoking. 108

109 FIGURE 6.37 Movie Smoking and Adolescent Smoking This double-Y-axis graph compares the declining rate of smoking in movies with the declining rate of adolescent smoking. 109

110 Vicarious Learning (Reinforcement and Conditioning)
Vicarious learning: learning the consequences of an action by watching others being rewarded or punished for performing the same action A key distinction in learning is between the acquisition of a behavior and its performance. Learning a behavior does not necessarily lead to performing that behavior. 110

111 FIGURE 6.38 Two Types of Observational Learning 111

112 Watching Violence in Media May Encourage Aggression
The extent to which media violence impacts aggressive behavior in children is debatable. Some studies demonstrate desensitization to violence after exposure to violent video games. However, it is difficult to draw the line between “playful” and “aggressive” behaviors in children. Most research in the area of TV and aggression shows a relationship between exposure to violence on TV and aggressive behavior.

113 FIGURE 6.39 Media Use by Young Americans This bar graph shows the results of a study sponsored by the Kaiser Family Foundation, which provides information about health issues. “Total media use” means total hours individuals spent using media, sometimes more than one category of media at once. 113

114 FIGURE 6.40 Media and Violent Behavior Studies have shown that playing violent video games desensitizes children to violence. 114

115 Fear Can Be Learned Through Observation
Mineka noticed that lab-reared monkeys were not afraid of snakes the way monkeys in the wild are. Her research demonstrated that animals’ fears can be learned through observation. Social forces also play a role in fear-learning in humans.

116

117 Mirror Neurons Are Activated by Watching Others
Mirror neurons: neurons in the brain that are activated when one observes another individual engage in an action and performs a similar action Mirror neurons may serve as the basis of imitation learning, though their firing does not always lead to imitative behavior. They are possibly the neural basis for empathy and may play a role in humans’ ability to communicate through language.

