So here's the second in the series of "things I could present for Journal Club". I figure I'll blog the top three, and then actually prepare whichever one I want to do the morning before. Procrastination is a mark of genius, and don't let anyone tell you different.
I'm considering this one because the series of experiments is beautifully elegant and really well laid out, and it proves a very interesting point that had been bugging the field for a while. I saw some of this data at a conference recently, and when the speaker showed the main effect, everyone in the audience went "oooOOOooo". It's that good.
Stuber et al. "Reward-predictive cues enhance excitatory synaptic strength onto midbrain dopamine neurons" Science, 321, 2008.
We all know that certain things in life are 'rewarding'. Food, sex, rock and roll, you get the idea. And of course, when we get something rewarding, reward signals fire in our brain, and we feel good. What's responsible for these feelings of reward? Right now, most people believe that the prime mediator is the neurotransmitter dopamine (DA), which, you may recall from some of my previous posts, is also the neurotransmitter that goes so badly awry in drug addiction.
But it's not quite as simple as "reward, feels good". After a while, you start to learn which things are rewarding, and which signals imply that those rewarding things are available. Think advertising. The first time you saw a commercial for, say, a Snickers, maybe you didn't think anything of it. But then you had a Snickers, and tasted its lovely awesomeness. The next time you saw that "Hungry? Grab a Snickers" ad, you found yourself thinking "mmm...Snickers...those are so good...I could really use one..." (Ok, maybe this was just me at the gym the other day.)
The basic idea is that, when you get a reward unexpectedly, you get a big spike of DA to make your brain go "sweet!" After a while, you begin to recognize the cues that predict the reward, and so merely seeing the candy wrapper will make your DA spike in anticipation. But it's only very recently that we've been able to watch this change taking place, and there were still lots of questions about what exactly happens in the brain as these changes occur.
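As an aside, this "spike moves backward in time" idea has a well-known computational counterpart: temporal-difference (TD) learning, in which the DA burst behaves like a reward prediction error (an account often attributed to Schultz, Dayan, and Montague). Here's a minimal, hypothetical Python sketch of that idea; the parameters, trial structure, and variable names are my own illustrative choices, not anything from the paper:

```python
# Toy TD-learning sketch of the dopamine "prediction error" account.
# NOT from the Stuber et al. paper; ALPHA, T, and the trial count are
# arbitrary illustrative choices.

ALPHA = 0.3   # learning rate (arbitrary)
T = 10        # time steps in a trial; cue at step 0, reward at step T - 1

V = [0.0] * T  # learned value of each in-trial time step

def run_trial():
    """Run one cue -> reward trial; return the prediction error at each step."""
    deltas = []
    # Cue onset: a jump from a zero-value baseline state into the first
    # in-trial state. The baseline keeps zero value because cue timing is
    # unpredictable between trials.
    deltas.append(V[0] - 0.0)
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0          # reward only at trial's end
        v_next = V[t + 1] if t + 1 < T else 0.0
        delta = r + v_next - V[t]               # TD prediction error
        V[t] += ALPHA * delta                   # update the value estimate
        deltas.append(delta)
    return deltas

first = run_trial()
for _ in range(200):
    last = run_trial()

# Before learning, the biggest error (the model's stand-in for the DA spike)
# sits at the reward; after training it has migrated back to the cue.
print("naive peak at step", first.index(max(first)))    # reward time
print("trained peak at step", last.index(max(last)))    # cue time (step 0)
```

Run it and the peak prediction error starts at the reward and ends up at the cue, mirroring the gradual shift the voltammetry recordings show.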
So the authors of this study took a bunch of rats and implanted fast-scan cyclic voltammetry probes into their heads. Voltammetry is a technique that lets you detect changes in DA levels in a brain area (in this case the nucleus accumbens, an area linked with reward) as groups of cells release it. So the rats had probes in their heads detecting their DA, and then they were given a stimulus light (a conditioned stimulus), a nosepoke device, and a sugar pellet. There is nothing a rat likes more than a sugar pellet, and so there was a nice big spike in DA as each rat got its reward. The rats figured out pretty quickly that when the light comes on, you stick your nose in the hole, and sugar is on the way. As they learned the conditioned stimulus, their DA spikes in response to reward SHIFTED, moving backward in time, so that they soon got a spike of DA when they saw the light, without a spike when they got the pellet. This means the animals had learned to associate a conditioned stimulus with reward. Not only that, the DA spike immediately after learning was higher than the spike in rats who just got rewards without learning.
This was REALLY cool to watch in action, but it isn't entirely new; we've known for a while that a conditioned stimulus can shift a DA spike (Schultz, 1998). What researchers didn't know was what was causing the shift, and what makes the conditioned-stimulus DA response so much bigger. Because the shift was gradual (it took 3-5 days) and developed as the learning progressed, the authors thought that part of the effect could be caused by changes in the synapses coming onto DA neurons. The idea is that DA neurons don't just act on their own; they are stimulated to fire by other areas, such as sensory processing ("oh hey! there's a light!"). Stimulation from excitatory synapses is what causes the neurons to fire. So if those signals are stronger and occurring earlier, they might be responsible for the increased DA signal seen with a conditioned stimulus.
To find out whether or not excitatory synapses were in fact changing, the authors conducted electrophysiology experiments in rats that were either trained or not trained. Electrophysiology is a technique where you actually put a tiny, tiny electrode onto a cell membrane. When that cell is then stimulated, you can actually WATCH it fire. It's really very cool to see. Of course all sorts of things determine when and how a cell fires, but what they were looking at here were specific glutamate receptors known as AMPA and NMDA receptors. These are the two major receptors that carry glutamate currents, which are excitatory and induce downstream cells to fire. What they found was that, in animals that had been trained to a conditioned stimulus, AMPA signaling was much stronger relative to NMDA signaling than in non-trained animals, which means that the excitatory synaptic strength onto DA neurons gets stronger as animals learn. Not only that, but cells from trained rats already exhibited signs of long-term potentiation, a phenomenon associated with learning and memory.
But of course, you have to make sure that glutamate signaling is really responsible, and not just a symptom of something else changing. So they ran voltammetry in more rats during training, this time with a glutamate antagonist infused into the brain. They found that the glutamate antagonist completely blocked not only the shift of the DA spike to the conditioned stimulus, but the learning itself.
I love this experiment: they showed the DA firing transition as the animals learned (which is no easy thing to do), then they showed evidence for a possible mechanism, and then they used that mechanism to determine whether or not the animals learned at all! It's a beautifully elegant paper.
But what does it all mean? Well, first of all, this provides a lot of insight into how cue-related learning (which is something we do all the time) works. Also, this is normal reward learning, and the results are very different from the learning results seen when animals are nosepoking for things like cocaine instead of sugar pellets. Knowing how normal reward learning works can help us figure out how drugs hijack the system, which in turn can help us develop strategies to understand and treat drug addiction in humans. Finally, learning about...learning...can teach us a lot about how we learn, and how we might put that to use, both in teaching people with learning disabilities and in everyday learning.
G. D. Stuber, M. Klanker, B. de Ridder, M. S. Bowers, R. N. Joosten, M. G. Feenstra, A. Bonci (2008). Reward-predictive cues enhance excitatory synaptic strength onto midbrain dopamine neurons. Science, 321(5896), 1690-1692. DOI: 10.1126/science.1160873