Test Yourself Answers, Chapter 6
Test Yourself, p. 228
1. What is classical conditioning? How was it discovered?
Classical conditioning is sometimes called Pavlovian conditioning because it was discovered by Ivan Pavlov.
Pavlov studied salivation in dogs. He collected the saliva in tubes connected to the dogs’ salivary glands.
Pavlov noticed that his dogs were salivating simply on seeing their food bowls or hearing their feeder’s footsteps.
Pavlov’s basic method consisted of sounding a tone on a tuning fork just before food was brought into the dogs’ room.
After several pairings of the tone with the food, the dogs would salivate on hearing the tone alone.
The food is the unconditioned stimulus (US) because it elicits an automatic response that does not depend on prior learning.
The dogs’ salivation is termed the unconditioned response (UR), the reflexive or automatic response elicited by the US.
The tone is the conditioned stimulus (CS), an originally neutral stimulus that acquires significance through repeated pairings with a US.
Salivation in response to the tone alone is a conditioned response (CR), a response that depends on (is conditional on) pairings of the CS with a US.
The initial learning of the conditioned response is called acquisition.
Initially, researchers thought that to create a conditioned response, the US must follow the CS, a process called forward conditioning.
There are two types of forward conditioning:
1) Delay conditioning, in which the CS occurs both before and during the presentation of the US.
2) Trace conditioning, in which the presentation of the CS ends before the presentation of the US begins. This is most effective if there is a very short interval of time between the CS and US.
Pavlov also tried backward pairing, in which the US comes first, followed by the CS. He found no conditioning.
Even simultaneous conditioning (presenting the CS and US at the same time) does not lead to a conditioned response.
With only some exceptions, the US should follow the CS immediately for conditioning to occur.
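The course of acquisition can be sketched with the Rescorla-Wagner model, a standard formal model of classical conditioning. This is an illustrative aside, not part of the chapter; the learning rate and asymptote values are arbitrary.

```python
# Rescorla-Wagner sketch of acquisition: on each CS-US pairing, the
# associative strength of the CS grows in proportion to the prediction
# error (how much US occurred beyond what the CS already predicted).
def rescorla_wagner_acquisition(trials, alpha=0.3, lam=1.0):
    """Return the associative strength of the CS after each pairing."""
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * (lam - v)  # error-driven update
        history.append(v)
    return history

# Strength rises steeply at first, then levels off near the asymptote lam.
curve = rescorla_wagner_acquisition(10)
```

The negatively accelerated curve this produces matches the typical shape of acquisition: early pairings produce large gains, later pairings smaller ones.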
Vladimir Bechterev extended Pavlov’s work.
In his studies, the US was a shock and the UR was the dog’s withdrawal of its foot.
When a neutral stimulus, such as a bell (CS) was paired with the shock, the dog learned to withdraw its foot (CR), thus successfully learning to avoid pain.
This demonstrated the conditioned response to motor reflexes and
established the basis for avoidance learning
John Watson and Rosalie Rayner demonstrated how classical conditioning can produce a conditioned emotional response of fear and how fear can lead to a phobia.
Watson and Rayner classically conditioned fear in an 11-month-old infant they called “Little Albert.”
Initially, Little Albert was afraid of loud noises but not animals.
The researchers paired the presentation of a white rat (CS) with the loud noise (US).
After 5 pairings, Albert developed a phobia of rats.
This study could not be done today because of rigorous ethical principles governing psychological research. (See chapter 1.)
Watson and Rayner did nothing to help Little Albert overcome his phobia.
Organisms seem to have a biological preparedness, a built-in readiness for certain conditioned stimuli to elicit particular conditioned responses.
Less learning is necessary to produce such conditioning.
If you become nauseated the first time you eat a particular type of cheese, you may learn to avoid that type of cheese. The fact that it takes only one trial to learn this food aversion is an example of biological preparedness.
Fear-related responses are more easily conditioned, and less easily lost, if the CS is a picture of a snake, rat, or spider rather than some other object. This may be because it is evolutionarily adaptive to avoid dangerous objects.
Contrapreparedness is a built-in disinclination or inability for certain conditioned stimuli to elicit particular conditioned responses. For example, car doors and wooden blocks do not make successful conditioned stimuli.
2. How are conditioned responses eliminated?
If the CS is repeatedly presented without the US, the response will be extinguished; this process is called extinction.
What occurs during extinction is not the forgetting of old learning, but the overlayering of old learning by new learning.
This interferes with the previous classically conditioned response.
However, once classical conditioning has occurred, the connection between the CS and the US never completely vanishes. For example, relearning (the CS again elicits the CR) takes place much more quickly than the original training.
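Extinction can be sketched with the same kind of error-driven update used in formal models of conditioning such as Rescorla-Wagner (an illustrative aside; the parameter values are arbitrary):

```python
# During extinction the CS appears without the US (lam = 0), so the same
# error-driven rule that built the response up now drives it back down.
def rw_trial(v, alpha=0.3, lam=1.0):
    """One conditioning trial: lam=1 if the US follows the CS, 0 if not."""
    return v + alpha * (lam - v)

v = 0.0
for _ in range(10):              # acquisition: CS paired with US
    v = rw_trial(v, lam=1.0)
for _ in range(10):              # extinction: CS presented alone
    v = rw_trial(v, lam=0.0)
# v falls toward zero but never reaches it exactly, in line with the
# observation that the CS-US connection never completely vanishes.
```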
If the CS is not presented for a period of time and is then re-presented, the CR will return. This is called spontaneous recovery.
Stimulus generalization refers to the tendency for the CR to be elicited by neutral stimuli that are similar, but not identical, to the CS.
The more closely the new stimulus resembles the original CS, the stronger the response.
This may be helpful for survival because often a dangerous stimulus may not occur in exactly the same form the next time.
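The generalization gradient described above, in which response strength falls off with dissimilarity, can be sketched as a simple similarity function. The Gaussian shape and the width value are illustrative assumptions, not from the chapter:

```python
import math

# Conditioned-response strength falls off smoothly as a test stimulus
# becomes less similar to the original CS (a generalization gradient).
def cr_strength(similarity, v=1.0, width=0.3):
    """similarity: 1.0 = identical to the CS, 0.0 = completely different."""
    return v * math.exp(-((1.0 - similarity) ** 2) / (2 * width ** 2))

# A near-identical stimulus elicits almost the full CR; a dissimilar one
# elicits a much weaker response.
```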
Organisms are also able to discriminate between stimuli similar to the CS and to respond only to conditioned stimuli; this is called stimulus discrimination.
In higher order conditioning, the CS serves as a US when paired with a new neutral stimulus.
For example, once Little Albert was conditioned to fear the white rat, if a rock and the white rat were paired enough times, Little Albert would have come to fear the rock.
However, the response to the rock would not have been as strong as the original response to the rat.
Strict behaviorists do not believe that thoughts play a role in classical conditioning. However, research suggests otherwise.
The CS provides information by signaling the upcoming US.
Backward pairing does not produce conditioning.
Kamin conditioned rats by pairing a tone with a shock; the rats developed a conditioned fear response to the tone. But when he added a second CS by turning on a light with the tone, the rats did not develop a conditioned fear response to the light alone. This is apparently because the light did not add new information and was not worth the rats’ attention.
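Kamin’s blocking result falls out of the Rescorla-Wagner model when the prediction error is computed over all stimuli present at once (an illustrative sketch; the parameter values are arbitrary):

```python
# Compound-stimulus Rescorla-Wagner update: every stimulus present shares
# one prediction error based on their summed associative strength.
def train(trials, present, v, alpha=0.3, lam=1.0):
    for _ in range(trials):
        error = lam - sum(v[s] for s in present)
        for s in present:
            v[s] += alpha * error
    return v

v = {"tone": 0.0, "light": 0.0}
train(20, ["tone"], v)           # phase 1: tone alone predicts the shock
train(20, ["tone", "light"], v)  # phase 2: tone + light compound
# v["light"] stays near zero: the tone already predicts the US, so the
# light carries no new information and acquires almost no strength.
```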
Visual imagery can serve as a CS or US. For example, imagining food can lead to salivation.
Classical conditioning is brain-based.
There are 3 distinct components and brain processes involved in classical conditioning.
The visual and auditory cortex register the stimulus.
The amygdala is involved in producing the response. When fear is conditioned, it is the central nucleus of the amygdala that is involved.
Conditioning causes the neurons in the cortex and the amygdala to become linked.
Even after conditioning has been extinguished, linked activity remains, making it easy to relearn a conditioned response.
Extinction depends on the active suppression of the response, which is accomplished by the frontal lobe’s inhibition of the amygdala.
3. What are common examples of classical conditioning in daily life?
Classical conditioning showed that emotional responses can be conditioned and that emotional responses exert a powerful effect on people’s lives.
Classical conditioning plays a role in deaths caused by drug overdoses.
A user who takes a drug in a particular setting develops a conditioned response to that setting, such as a bathroom.
When the user walks into the bathroom (assuming that's the usual setting), his or her body begins to compensate for the influx of drug that is soon to come. This will dampen the effect of the drug.
When the user takes the drug in a new setting, this compensatory response does not occur, so the user’s body does not try to counteract the effect of the drug. The user therefore receives a higher effective dose than he or she can tolerate, which may lead to an overdose.
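The compensatory-response account amounts to simple arithmetic, sketched below with purely illustrative numbers (the function name and the tolerance value are invented for this example):

```python
# The conditioned compensatory response dampens the drug's effect only in
# the setting where the drug is usually taken.
def effective_dose(dose, compensation, familiar_setting):
    """Net drug effect after any context-triggered compensation."""
    return dose - compensation if familiar_setting else dose

TOLERANCE = 8  # hypothetical maximum effective dose the body can handle

usual = effective_dose(10, compensation=4, familiar_setting=True)
novel = effective_dose(10, compensation=4, familiar_setting=False)
# In the familiar setting the net effect (6) stays within tolerance;
# in a novel setting the full dose (10) hits unchecked: overdose risk.
```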
Classical conditioning also helps to explain why people addicted to cocaine get drug cravings merely from handling money. Part of the experience of using cocaine is buying it. Thus, handling money becomes a CS.
Similarly, among smokers certain environmental stimuli can elicit a desire for a cigarette.
Classical conditioning is the basis for a number of therapeutic techniques. For example, systematic desensitization is the structured and repeated presentation of a feared conditioned stimulus in circumstances designed to reduce anxiety.
This works by teaching people to be relaxed in the presence of the feared object or situation.
The use of classical conditioning to promote consumers’ positive attitudes about a product is called evaluative conditioning. The goal is to change people’s liking or evaluation of the conditioned stimulus.
Watson formalized the use of behavioral principles in advertising. The use of “sex appeal” to sell products stems from Watson’s ideas.
Razran showed that when people eat (US) while viewing political slogans (CS), they view the slogans more favorably.
Taste aversion may occur when animals or people find a particular food or taste extremely unpleasant and try to avoid it.
This type of classical conditioning usually involves learning after only one pairing of the CS and US.
A likely scenario is that the food that made you sick had some unhealthy and unwanted ingredient.
The ensuing nausea and vomiting are the UR.
The taste of whatever food contained the bacteria is the CS.
Now, whenever that food is tasted, it may elicit nausea and vomiting (the CR).
Taste aversion was accidentally discovered by Garcia and Koelling while they were looking at the effects of radiation on rats.
The rats were getting sick (UR) from the radiation (US). Because the taste of water from a plastic bottle (CS) was inadvertently paired with the radiation, they developed a taste aversion for this water (CR). They chose to drink more from the glass water bottle in their “home” cages (that didn't have the plastic taste).
Garcia’s results stirred controversy because they described exceptions to the “rules” of classical conditioning. For example, Garcia found that the rats avoided the water even if the US didn’t come until hours after the CS (rather than immediately after, as previously thought).
Taste aversion is the mechanism behind the use of Antabuse to treat alcoholism.
Antabuse is a medicine that causes violent nausea and vomiting when mixed with alcohol. If an alcoholic took Antabuse and then drank, he or she would feel nauseous and perhaps vomit and then develop a taste aversion to alcohol.
If used consistently, Antabuse will decrease how often alcoholics drink but does not increase the likelihood of total abstinence.
Antabuse has not been completely successful because some alcoholics stop taking their Antabuse so that they can drink. When the alcoholic is regularly supervised, its effectiveness is enhanced.
Classical conditioning can affect the immune system.
Ader and Cohen wanted to study how long taste aversions last in rats. They paired flavored water with injections of cyclophosphamide (a drug that suppresses the immune system and has a side effect of nausea). However, some of the rats died as a result of the experiment; the taste of the water was triggering not just nausea, but also a suppression of the immune system.
The taste of the water was acting as a CS and was in essence, a placebo (a medically inactive substance that still seems to have medicinal effects).
This study was important for two reasons:
It demonstrated that the placebo effect could be induced in animals, not just humans.
It showed that the organism doesn’t have to believe that the placebo has medicinal properties in order to produce a placebo response.
Test Yourself, p. 242
1. What is operant conditioning? How does it occur?
The Roots of Operant Conditioning: Its Discovery and How It Works
Thorndike’s Puzzle Box
Thorndike created a puzzle box, a cage with a latched door that a cat could open by pressing on a pedal inside the cage.
Although the cat took a while to press the pedal, once it did, the cat was quicker to press the pedal in subsequent sessions in the box.
Thorndike called this type of learning “trial-and-error learning.”
His finding led to his formulation of the Law of Effect: actions that subsequently lead to a “satisfying state of affairs” are more likely to be repeated.
The Skinner Box
Skinner developed an apparatus to minimize his handling of pigeons, now called a Skinner box.
The Skinner box could feed the animals and record the frequency of their responses, making it easy to quantify the responses.
If a rat is put in a Skinner box, it learns to associate pressing the lever or bar with the likelihood of a food pellet's appearing.
Pressing the lever is the response, or behavior.
Receiving the food pellet is the consequence.
2. What is the difference between reinforcement and punishment?
Principles of Operant Conditioning
Operant conditioning involves an association between a stimulus, the response to the stimulus (a behavior), and the consequence of that response.
It relies on reinforcement, the process by which consequences lead to an increase in the likelihood that the response will occur again.
The reinforcement should be contingent on a desired response; this is called response contingency.
In contrast to the responses that are elicited in classical conditioning, responses in operant conditioning are emitted.
A reinforcer is any object or event that comes after the desired response and strengthens the likelihood of its recurrence.
Reinforcement is in the eyes of the recipient. Different consequences affect people (and other animals) differently.
There are 2 types of reinforcement, both of which increase the likelihood that a behavior recurs.
In positive reinforcement, a desired reinforcer is presented after a response, increasing the likelihood of a recurrence of that response. Food is the usual positive reinforcer for animals; for humans, toys, money, praise, and attention can be positive reinforcers. Sometimes even “bad attention” (such as a scolding) can be a positive reinforcer if the only time a child receives any attention is when he or she misbehaves.
In negative reinforcement, an unpleasant event or circumstance is removed following the desired behavior. For example, if a rat is being shocked in its cage and the shock stops when it presses a bar, then bar pressing is negatively reinforced.
Negative reinforcement is sometimes called escape learning, because the organism has learned to perform a behavior that decreases or stops an aversive stimulus (thereby allowing the animal to escape from the aversive stimulus). For example, a person awoken by early-morning noise outside the window may stuff tissue in her ears to muffle the noise. If this works, then stuffing tissue in her ears is negatively reinforced.
After a few weeks of this, the person is likely to experience avoidance learning (e.g., inserting earplugs the night before so as to prevent hearing the noise at all).
In contrast, a punishment is an unpleasant event that occurs as a consequence of a behavior.
There are 2 types of punishment, both of which decrease the probability of the recurrence of a behavior.
1) Positive punishment occurs when a behavior leads to an unpleasant event or circumstance.
2) Negative punishment occurs when a pleasant event or circumstance is removed following a behavior.
Punishment is most effective if it has 3 characteristics:
It is swift, occurring immediately after the undesired behavior.
It is consistent. Punishment should be given each and every time a behavior occurs.
It is sufficiently aversive without being overly so.
The effect of a specific type of punishment depends on the individual.
There are several problems with punishment.
Punishment may decrease the frequency of a behavior, but it doesn’t eliminate the capacity to engage in that behavior.
Physical punishment may actually increase aggressive behavior in the person on the receiving end.
This creates an opportunity for learning by watching the behavior of others.
This may account for the finding that abusive parents tend to come from abusive families.
Through classical conditioning, the one being punished may come to fear the one delivering the punishment. A single instance may be enough for the person being punished to learn to live in fear of the punisher. Living in fear can make people and animals chronically stressed and lead to depression.
Punishment is more effective when used in combination with reinforcement. This is because punishment doesn’t convey information about what behavior should be exhibited in place of the undesired, punished behavior.
There are different levels of reinforcers.
Primary reinforcers are events or objects that are inherently reinforcing (e.g., food, water, or relief from pain).
Secondary reinforcers are learned reinforcers and do not inherently satisfy a physical need (e.g., attention, praise, money, or good grades).
A token economy is a technique that brings about therapeutic change in behavior through the use of secondary reinforcers. Participants in such programs earn tokens that can be traded for candy or for privileges such as going out for a walk.
The interval of time between a behavior and its consequence can affect operant conditioning.
When rats experience a delay between bar pressing and receiving a food pellet (called delayed reinforcement), they have some difficulty learning that bar pressing is followed by food.
However, humans often work hard for delayed reinforcement. Further, the ability to delay gratification in 4-year-olds predicts social competence and high achievement during adolescence.
3. How are complex behaviors learned? How are these new behaviors maintained?
In operant conditioning, generalization is the ability to generalize from the desired response to a similar one. For example, a child may generalize from the desired response of wiping her nose on a tissue to wiping it on her sleeve.
Discrimination is the ability to distinguish between the desired response (e.g., wiping her nose on a tissue) and a similar one (e.g., wiping her nose on her sleeve).
Discrimination can be encouraged by reinforcing the desired behavior but not the undesired behavior.
Discrimination depends on the ability to distinguish among the different situations in which a stimulus might occur.
When someone has learned a behavior through operant conditioning and the reinforcement stops, there is an increase in responding followed by the response fading. This is how extinction works in operant conditioning.
As with classical conditioning, spontaneous recovery occurs. If there is a break
following extinction, the old behavior will reappear.
Many complex behaviors must be shaped by gradually reinforcing an organism for behavior that gets closer and closer to the behavior you ultimately wish to produce.
This is how animal trainers induce animals to perform complex behaviors.
Shaping must be accomplished in phases, nudging the animal closer and closer to the desired response.
The final behavior is considered as a series of smaller behaviors (called successive approximations) that become increasingly similar to the desired behavior.
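Shaping by successive approximations can be sketched as a loop that reinforces any attempt meeting the current criterion and then tightens the criterion toward the target. This is a toy model; the function name, step sizes, and random seed are invented for illustration:

```python
import random

# Toy shaping loop: reinforce behavior that meets the current criterion,
# then move the criterion a step closer to the target behavior.
def shape(target=1.0, step=0.1, rng=None):
    rng = rng or random.Random(42)
    behavior, criterion = 0.0, 0.0
    while criterion < target:
        attempt = behavior + rng.uniform(0.0, 0.2)  # natural variation
        if attempt >= criterion:    # successive approximation: reinforce it
            behavior = attempt      # reinforced behavior becomes the baseline
            criterion = min(target, behavior + step)
    return behavior

# The trained behavior ends up at, or just short of, the target.
final_behavior = shape()
```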
One element that can change the frequency of an organism’s response is the schedule on which the reinforcement is delivered.
When an organism is reinforced for each desired response, it is receiving continuous reinforcement. This type of presentation of the reinforcer is the best method until the desired behavior is stable.
When reinforcement occurs only intermittently, the organism is receiving partial reinforcement. These schedules yield behaviors that are more resistant to extinction than those that arose following continuous reinforcement.
Some partial reinforcement schedules are called interval schedules and are based on time. Reinforcement is given for responses made after a certain interval of time; it doesn’t matter how many times within that interval the organism produces the response.
In fixed interval schedules, the organism receives reinforcement for a response produced after a fixed interval of time. If the number of responses is graphed over time, it creates a scallop pattern.
In a variable interval schedule, the interval is an average over time. This creates a slow but steady rate of responding.
Ratio schedules of reinforcement are based on the number of times the desired behavior is produced.
In fixed ratio schedules, reinforcement is given after a fixed number of responses. When the number of responses is graphed, it creates a step-like pattern. This schedule yields a high rate of responding compared to fixed interval responding.
In variable ratio schedules, reinforcement occurs after a variable number of responses. This is often called “the gambling reinforcement schedule” because most gambling relies on this type of reinforcement. Behavior reinforced this way is most resistant to extinction. Responding is frequent and consistent, with no long pauses, and this schedule produces the highest response rate.
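The two ratio schedules can be contrasted in a few lines of code (a minimal sketch; the function names are invented for this example):

```python
import random

def fixed_ratio(responses, n):
    """Fixed ratio: a reinforcer after every n-th response."""
    return responses // n

def variable_ratio(responses, n, rng):
    """Variable ratio: each response is reinforced with probability 1/n,
    so on average one reinforcer per n responses, but unpredictably."""
    return sum(1 for _ in range(responses) if rng.random() < 1 / n)

rng = random.Random(0)
fr = fixed_ratio(100, 5)           # exactly 20 reinforcers
vr = variable_ratio(100, 5, rng)   # roughly 20, but the count varies
```

The unpredictability of the variable schedule is what makes the behavior it produces so resistant to extinction: the organism can never tell from a run of unreinforced responses that reinforcement has stopped.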
Learning involves at least two major phases.
First, the organism learns to discriminate the proper situation in which to make a response.
The neurotransmitter acetylcholine is critical for this function of the hippocampus.
Scopolamine is an antagonist for this chemical. Animals given scopolamine can’t learn which stimuli should be grouped together to signal an appropriate behavior.
Second, the organism learns the stimulus-response association.
Dopamine produced by the nucleus accumbens, located behind the amygdala, appears particularly important for a reward to be effective.
When researchers block dopamine receptors, animals fail to respond to reinforcement.
Amphetamines and cocaine are also dopamine agonists.
The fact that different neurotransmitters are involved indicates that learning is more than a single activity.
Classical and operant conditioning are similar in that:
They both involve extinction, spontaneous recovery, generalization, and discrimination.
They both are subject to moderating factors that affect response acquisition.
They both are influenced by biological factors.
Noting the similarities between classical and operant conditioning, some researchers debate whether they are different.
The fact that different neural systems are used in each is evidence that they are truly different.
Test Yourself, p. 250
1. What is cognitive learning? How does it differ from classical and operant conditioning?
Cognitive learning is the acquisition of information that is often not immediately acted on.
Cognitive learning was demonstrated by Tolman and Honzik’s study of learning with rats.
One group of rats was put in a maze that led to a food box. Thus, they got a reward for completing the maze. The other group was put in the maze but didn’t get any reinforcement. They were just removed from the maze after a certain amount of time.
The first group of rats increased their speed and decreased their number of mistakes; the second group did not change.
However, when the rats in the second group were later given a food reward, their speed increased and their errors decreased dramatically.
This demonstrated that they had learned to run the maze quickly and correctly earlier but hadn’t had a reason to do so. Learning without behavioral signs is called latent learning.
Tolman reasoned that the unreinforced rats had developed a cognitive map of the maze, storing information about its spatial layout, but they didn’t use it until motivated to do so by reinforcement.
Latent learning depends on the hippocampus, which is used when new facts and events are stored. The study of learning focuses on the acquisition of information. In contrast, the study of memory focuses on retention of information. Most research on cognitive learning focuses on the way information is stored in memory.
2. What is insight learning?
Insight learning is based on the phenomenon known as the “aha experience,” a sudden flash of awareness.
The most famous insight learning experiments were done by Wolfgang Köhler, a German Gestalt psychologist.
Köhler put a chimpanzee named Sultan in a cage with a short stick. Outside the cage, Köhler placed fruit that was out of reach, as well as a long stick that was also out of reach but closer in.
Initially, Sultan was frustrated as he tried to reach the food.
Then he had an insight into how to get the fruit: he used the short stick to retrieve the long one and then used the long stick to capture the fruit.
3. How can watching others help people learn?
Observational Learning: To See Is to Know
Albert Bandura developed social learning theory, which emphasizes the fact that much learning occurs in a social context.
This type of learning, which results from watching others and does not depend on reinforcement, is called observational learning.
Bandura focused much of his work on modeling, a process in which someone learns new behaviors by observing others. These other people serve as models, presenting a behavior to be imitated.
In one of Bandura’s famous studies, he used an inflated vinyl “Bobo” doll that pops back up when punched.
One group watched an adult beat up a Bobo doll.
One group watched an adult ignore the Bobo doll.
Children were put in a frustrating situation (being in a room with toys but not being able to play with some of them).
Then, all children were put in another room with many toys, including a Bobo doll.
Children who had seen the adult being aggressive with Bobo were more aggressive themselves.
4. What makes some models better than others?
Learning from models has advantages over other sorts of learning. It enables a person to go straight to the end product and avoid the middle steps typically required in learning.
Observational learning can produce both desired and undesired learning.
Modeling and operant conditioning are involved in learning about culture and gender.
Several characteristics of models can make learning through observation more effective. People learn more if they pay attention to the model, which they are more likely to do for some models than for others.
Bandura’s findings have led to concern over the amount of violence shown on TV, and how violent TV characters are reinforced or punished. a.
Nonviolent programs promote nonviolent observational learning. Preschool children who watched these programs were more likely to exhibit positive, helpful behaviors than children who did not watch them.
Violent programs may lead to violent behavior, although these results are influenced by other factors.