
What changes in neurons and their connections during learning?


I'm not sure if this question belongs more in physics or biology (or maybe even computer science)… but biology seemed to fit more.

What changes in the state of our brains when we learn things? I looked on the internet and learned about artificial neural networks, but every resource I've found describes networks whose weights are trained/evolved and then held static at runtime; you simply train the weights, and once you use the network in simulations, they never change.

Purely feedforward neural networks built this way can't react differently to a situation they've already experienced; they react exactly the same way each time.

I suppose that, theoretically, a recurrent neural network that was big enough could really learn, but in practice these have been used mainly as memory for an existing method already defined by the weights, not as storage for new ways of doing things.

So my question is, physically, chemically, biologically, what changes in the neurons and the connections between them when we learn things? I don't think we really understand how it comes together on a grand scale yet, but I'm pretty sure we've figured out that much. I want to learn how to model a simplified version of it mathematically/programmatically.


What changes in the process of learning:

  1. The connections (the way one neuron is connected to another): new synapses can form or dissolve in the process of learning. Glial cells such as astrocytes and microglia can facilitate this process.

  2. Strength of connections: existing connections can become weaker or stronger as the same circuit is repeatedly triggered. This happens through up- or down-regulation of receptors and ion channels at the synaptic junctions.

This process is not fully understood and is an active area of research. You can also find mathematical models of learning processes such as long-term potentiation (LTP); a minimal programmatic sketch of one such rule follows below.
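
Since the question asks for something that can be modeled programmatically, a common starting point is a Hebbian-style rule, in which a weight grows when the neurons on either side of it are active together. The sketch below is only an illustrative toy under assumed settings (the learning rate, weight bound, and rectified response are arbitrary choices, not a model drawn from the answer above):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 20, 5
W = rng.normal(0.0, 0.1, size=(n_post, n_pre))   # synaptic "strengths"

def present(x, W, lr=0.01, w_max=1.0):
    """One stimulus presentation: compute postsynaptic activity, then
    apply a Hebbian update (co-active pre/post pairs get stronger)."""
    y = np.maximum(W @ x, 0.0)          # crude rectified firing rates
    W = W + lr * np.outer(y, x)         # Hebb: dW proportional to post * pre
    W = np.clip(W, -w_max, w_max)       # stand-in for biological saturation
    return y, W

# Repeating the same input pattern strengthens the synapses that the
# pattern drives, so the response to that pattern grows over time.
pattern = rng.random(n_pre)
for _ in range(50):
    y, W = present(pattern, W)
print("response after repeated exposure:", np.round(y, 2))
```

Unlike the fixed-weight networks described in the question, the weights here keep changing while the network runs, which is the closest software analogue of the biological changes these answers describe.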


During learning, LTP (long-term potentiation) or LTD (long-term depression) can result in postsynaptic neurons altering their responses to neurotransmitter release by presynaptic neurons. This is achieved by changing the density of receptors and ion channels at the postsynaptic membrane, by changing the number of neurotransmitter vesicles and how close they sit to the release sites, and by altering the cell's intrinsic excitability (for example, its density of sodium channels). All of these factors can change the likelihood that the postsynaptic cell fires an action potential in response to stimulation by the presynaptic cell, as well as the amount of neurotransmitter the postsynaptic cell releases in turn when it does fire. Further, neurons can also establish or eliminate synapses with other neurons, modifying neural pathways as a result of learning.
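
One way modelers capture both LTP-like strengthening and LTD-like weakening in a single rule is spike-timing-dependent plasticity (STDP), where the sign of the weight change depends on whether the presynaptic spike precedes or follows the postsynaptic one. The following is only a minimal sketch; the amplitudes and time constant are illustrative assumptions, not values taken from this answer:

```python
import math

# Pair-based STDP: if the presynaptic spike arrives shortly before the
# postsynaptic spike, the synapse is potentiated (LTP-like); if it
# arrives after, the synapse is depressed (LTD-like).
A_PLUS, A_MINUS = 0.02, 0.021   # potentiation / depression amplitudes
TAU_MS = 20.0                   # width of the timing window

def stdp_dw(t_pre_ms, t_post_ms):
    """Weight change for a single pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:                                   # pre before post -> LTP
        return A_PLUS * math.exp(-dt / TAU_MS)
    return -A_MINUS * math.exp(dt / TAU_MS)      # post before pre -> LTD

w = 0.5
for t_pre, t_post in [(10, 15), (50, 52), (80, 70)]:   # spike times in ms
    w = min(1.0, max(0.0, w + stdp_dw(t_pre, t_post)))
    print(f"pre={t_pre:3d} ms, post={t_post:3d} ms -> w={w:.3f}")
```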


How neurons lose their connections

MIT neuroscientists discovered that the protein CPG2 connects the cytoskeleton (represented by the scaffold of the bridge) and the endocytic machinery (represented by the cars) during the reabsorption of glutamate receptors. Each “car” on the “bridge” carries a vesicle containing glutamate receptors.

Strengthening and weakening the connections between neurons, known as synapses, is vital to the brain’s development and everyday function. One way that neurons weaken their synapses is by swallowing up receptors on their surfaces that normally respond to glutamate, one of the brain’s excitatory chemicals.

In a new study, MIT neuroscientists have detailed how this receptor reabsorption takes place, allowing neurons to get rid of unwanted connections and to dampen their sensitivity in cases of overexcitation.

“Pulling in and putting out receptors is a dynamic process, and it’s highly regulated by a neuron’s environment,” says Elly Nedivi, a professor of brain and cognitive sciences and member of MIT’s Picower Institute for Learning and Memory. “Our understanding of how receptors are pulled in and how regulatory pathways impact that has been quite poor.”

Nedivi and colleagues found that a protein known as CPG2 is key to this regulation, which is notable because mutations in the human version of CPG2 have been previously linked to bipolar disorder. “This sets the stage for testing various human mutations and their impact at the cellular level,” says Nedivi, who is the senior author of a Jan. 14 Current Biology paper describing the findings.

The paper’s lead author is former Picower Institute postdoc Sven Loebrich. Other authors are technical assistant Marc Benoit, recent MIT graduate Jaclyn Konopka, former postdoc Joanne Gibson, and Jeffrey Cottrell, the director of translational research at the Stanley Center for Psychiatric Research at the Broad Institute.

Forming a bridge

Neurons communicate at synapses via neurotransmitters such as glutamate, which flow from the presynaptic to the postsynaptic neuron. This communication allows the brain to coordinate activity and store information such as new memories.

Previous studies have shown that postsynaptic cells can actively pull in some of their receptors in a phenomenon known as long-term depression (LTD). This important process allows cells to weaken and eventually eliminate poor connections, as well as to recalibrate their set point for further excitation. It can also protect them from overexcitation by making them less sensitive to an ongoing stimulus.

Pulling in receptors requires the cytoskeleton, which provides the physical power, and a specialized complex of proteins known as the endocytic machinery. This machinery performs endocytosis — the process of pulling in a section of the cell membrane in the form of a vesicle, along with anything attached to its surface. At the synapse, this process is used to internalize receptors.

Until now, it was unknown how the cytoskeleton and the endocytic machinery were linked. In the new study, Nedivi’s team found that the CPG2 protein forms a bridge between the cytoskeleton and the endocytic machinery.

“CPG2 acts like a tether for the endocytic machinery, which the cytoskeleton can use to pull in the vesicles,” Nedivi says. “The glutamate receptors that are in the membrane will get pinched off and internalized.”

They also found that CPG2 binds to the endocytic machinery through a protein called EndoB2. This CPG2-EndoB2 interaction occurs only during receptor internalization provoked by synaptic stimulation and is distinct from the constant recycling of glutamate receptors that also occurs in cells. Nedivi’s lab has previously shown that this process, which does not change the cells’ overall sensitivity to glutamate, is also governed by CPG2.

“This study is intriguing because it shows that by engaging different complexes, CPG2 can regulate different types of endocytosis,” says Linda Van Aelst, a professor at Cold Spring Harbor Laboratory who was not involved in the research.

When synapses are too active, it appears that an enzyme called protein kinase A (PKA) binds to CPG2 and causes it to launch activity-dependent receptor absorption. CPG2 may also be controlled by other factors that regulate PKA, including hormone levels, Nedivi says.

Link to bipolar disorder

In 2011, a large consortium including researchers from the Broad Institute discovered that a gene called SYNE1 is number two on the hit list of genes linked to susceptibility for bipolar disorder. They were excited to find that this gene encoded CPG2, a regulator of glutamate receptors, given prior evidence implicating these receptors in bipolar disorder.

In a study published in December, Nedivi and colleagues, including Loebrich and co-lead author Mette Rathje, identified and isolated the human messenger RNA that encodes CPG2. They showed that when rat CPG2 was knocked out, its function could be restored by the human version of the protein, suggesting both versions have the same cellular function.

Rathje, a Picower Institute postdoc in Nedivi’s lab, is now studying mutations in human CPG2 that have been linked to bipolar disorder. She is testing their effect on synaptic function in rats, in hopes of revealing how those mutations might disrupt synapses and influence the development of the disorder.

Nedivi suspects that CPG2 is one player in a constellation of genes that influence susceptibility to bipolar disorder.

“My prediction would be that in the general population there’s a range of CPG2 function, in terms of efficacy,” Nedivi says. “Within that range, it will depend what the rest of the genetic and environmental constellation is, to determine whether it gets to the point of causing a disease state.”

The research was funded by the Picower Institute Innovation Fund and the Gail Steel Fund for Bipolar Research.


The Neurobiology of Psychotherapy

A recent Institute of Medicine report acknowledges the efficacy of a broad range of psychosocial interventions. 1 It challenges us to “identify key elements that drive an intervention’s effect.” The report describes key clinical tasks such as the therapist’s ability to engage with a patient, understand the patient’s worldview, and help the patient manage his or her emotional responses. The psychiatric community should also look into the neurobiological changes that accompany and may be responsible for an intervention’s effect. Although early psychoanalysts made little effort to connect functions of the mind to definable portions of the brain, from the beginning there was a belief that such a relationship may exist. Freud confidently predicted that one day there would be a neurological understanding of the work he initiated.

The deficiencies in our description would probably vanish if we were already in a position to replace psychological terms by physiological or chemical ones. Biology is truly a land of unlimited possibilities. We may expect it to give us the most surprising information and we cannot guess what answers it will return in a few dozen years to the questions we have put to it. 2

Almost 100 years have passed since Freud wrote those words, and many of his questions remain unanswered. Steady progress, however, has been made in the development of a neurobiological understanding of what happens in the brain when the mind is engaged in psychotherapy. Advances in cognitive neuroscience and neuroimaging have facilitated a greater appreciation of the neuroanatomy and neurophysiology of the CNS. The technology to study the real-time functioning of the brain through measurement of blood flow or glucose uptake, for example, has been widely used for a quarter of a century. Numerous challenges endure, such as subtle individual variations of neural circuitry, uncertainty as to the proper area to study, and the possibility that differing forms of therapy affect the brain differently. Within the boundaries created by these limitations, however, there is an emerging understanding of the neurobiological correlates of some common psychotherapy elements.

Although different approaches utilize various terms and concepts, there are some components that are found in most forms of efficacious psychotherapy. There must be some emotional engagement (attachment) between the patient and therapist. The therapist will struggle to understand and express the patient’s experience (empathy). By learning about themselves and their environment, patients will decide to make changes. As therapy continues, they will develop new abilities to regulate their emotions. Many patients will be forced to face and overcome feared relationships or situations (fear extinction). There is a neurobiological literature developing on each of these common components of psychotherapy.

Forming nurturing attachments with others remains a challenge throughout life, especially for those with early trauma. The neurochemistry of attachment involves the neuropeptides oxytocin and arginine vasopressin. Both of these messengers are released from the hypothalamus by sexual stimulation and stress. In combination with estrogen, oxytocin helps induce maternal behavior, while the absence of oxytocin makes it more difficult for animals to adapt to social settings and leads to abnormal displays of aggression. Infusing or blocking oxytocin also causes dramatic changes in mating behaviors. Arginine vasopressin has myriad effects on normal mammals, including altering displays of aggression and the animal’s tendency to affiliate with or protect others.

In humans, oxytocin is associated with a number of factors that affect attachment including trust, empathy, eye contact, and generosity. Oxytocin infusions in healthy individuals tend to decrease anxiety and the stress associated with social situations while shifting attention from negative to positive information. The reduction in distress appears related to a reduction in activity in the amygdala. In a study of women with borderline personality disorder, oxytocin infusion decreased their amygdala activation when exposed to angry faces. 3 Although the effect is mediated by past experiences, intra-nasal infusion of oxytocin may increase an individual’s ability to infer the mental state and intention of others based on their facial expression. Oxytocin specifically aids in parent-child bonding. Administering oxytocin to parents increases the social engagement of the parent and child and leads to an increase in oxytocin in the child.

The mu-opioid receptor also appears to be involved in attachment. Activation of the mu-opioid receptor leads to a general sense of pleasure as well as analgesia. In animal models, removing the mother from the child leads to distress that is at least partially mediated through mu-opioid activity. Animals with an increased activation of the opioid system had more attachment behaviors and louder and more prolonged protests when separated. Their separation distress could be partially reversed by non-sedating opioid agonists. 4 Patients with borderline personality disorder have differences in baseline mu-opioid receptor concentrations and in the endogenous opioid system response to negative emotional challenges. These differences might be related to their difficulty with emotion regulation.

Attachment therefore correlates with neurochemical changes within the brain. This might be most evident in parent-child interactions but may play a significant role in psychotherapy as well. A study of mothers with postpartum depression undergoing psychodynamic psychotherapy found that daily infusions of oxytocin over 12 weeks were associated with a decrease in narcissistic traits but not in depressive personality traits or depressive symptoms. 5 Depressed men who received oxytocin infusions while in psychotherapy performed better on tests of inferring the mental state of others but were more likely to experience anxiety during the session. 6 These findings hint at a complex interaction between oxytocin and the therapist-patient relationship (Table 1).

Empathy entails the ability to consider the world from another’s point of view and in some way to share his or her emotional experiences. Neurobiological correlates of empathy were first described in the early 1990s with the discovery of mirror neurons. Researchers who were studying the neuronal activity involved in organizing and monitoring movements noted that some of the same premotor cortex neurons were activated while observing others make corresponding movements. 7,8 Neurons that were activated when a monkey grabbed food were activated when the researcher grabbed food but not when the researcher pushed or struck the food. These neurons were referred to as mirror neurons.

Mirror neurons in humans are not limited to simple movements. Watching dance leads to activation in the brains of other dancers, and the activation is greater when dancers watch movements they already know. Mirror neurons activate while watching facial expressions and seem to be partially impaired in individuals with autistic disorders. Human mirroring networks also exist for pain and emotional distress. Researchers created pain by sticking subjects with a needle and monitored which areas of the brain were activated. 9 Similar pathways were activated when the researchers brought the needle close to the subject but did not make contact, and when subjects watched researchers pricking their own fingers. Witnessing others display disgust activates many of the same areas that are activated when one smells an unpleasant odor.

Mirror neurons also fire when observing others being rejected or embarrassed. The areas most involved in mirroring physical pain, emotional distress, and social discomfort are the anterior cingulate cortex and insula. These areas help individuals automatically imagine themselves experiencing what they witness others experiencing. It should be noted that studies of mirror neurons in humans are preliminary and not without controversy.

The posterior portions of the superior temporal sulcus show similar mirroring activity. This region activates when witnessing social behavior and predicting future actions. When a figure is walking toward you, activation of the superior temporal sulcus is greater when the figure is looking at you, which indicates the prospect of an upcoming interaction. Activation also increases when the other person’s behavior is different than expected.

In a study by Wyk and colleagues, 10 an actor displayed pleasure or disgust to one of two identical objects and then randomly picked up one of the objects. When there was incongruent action (picking the object after showing disgust or not picking the object after a display of pleasure), there was increased activity in the observer’s posterior superior temporal sulcus region.

Experiencing empathy appears to require proper activation of portions of the insula and anterior cingulate cortex as we seek to understand the emotional experience of others. A properly functioning posterior superior temporal sulcus allows us to determine and predict the social actions of others, and will tend to activate when someone violates societal expectations.

Oxytocin and arginine vasopressin are also implicated in empathy. A study of polymorphisms in the genes of 367 young adults found that variations in the emotional aspect of empathy were associated with the oxytocin receptor gene, while the cognitive aspect of empathy was associated with the gene for the arginine vasopressin 1a receptor. 11 A highly complex interaction of neurotransmitters and brain activation allows the therapist to understand the patient’s experience (Table 2).

Early in the 20th century Cajal proposed that the brain stored information by modifying neuronal connections. Learning involved changing individual neurons and their connections with each other. In the mid-20th century Hebb proposed his rule stating that when one neuron’s repeated excitation is involved in the excitation of a neighboring neuron, the connection between the two of them grows more efficient. Put colloquially, “Neurons that fire together wire together.” This implies that synapses change over time.

The first demonstration of this in the hippocampus occurred in 1973: after neurons were exposed to strong, high-frequency stimulation, their connections to other neurons in the hippocampus became stronger. 12 The discovery of hippocampal neurogenesis established the process of neuronal plasticity and upended the long-held belief that CNS cells were neurophysiologically and neuroanatomically incapable of growth.

Using the California sea slug (Aplysia californica), Kandel 13 demonstrated that habituation, a decrease in response to a stimulus, could be attained with a single training session of 10 stimulations. These effects lasted minutes to hours and appeared to result from changes in the amount of neurotransmitter released with the stimulation. Training sessions on 4 consecutive days resulted in an effect that lasted weeks. This long-term learning was associated with changes in interneuronal connections.
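
A toy simulation in the spirit of that habituation result might track a short-term depression component that recovers within hours and a slower component that builds up over spaced daily sessions. Everything below (the step sizes, time constants, and the two-component form) is an invented illustration, not Kandel's actual model:

```python
import math

def habituation(sessions, stimuli_per_session=10,
                st_step=0.06, st_tau_h=2.0,
                lt_step=0.02, lt_tau_h=24 * 14):
    """Return the fraction of the original response remaining after the
    given number of once-daily training sessions (toy model)."""
    short, long_ = 0.0, 0.0                # accumulated depression (0..1)
    for _ in range(sessions):
        for _ in range(stimuli_per_session):
            short = min(1.0, short + st_step)          # fades within hours
            long_ = min(1.0, long_ + lt_step * short)  # builds with spacing
        # 24 hours pass before the next session: exponential recovery
        short *= math.exp(-24.0 / st_tau_h)
        long_ *= math.exp(-24.0 / lt_tau_h)
    return 1.0 - min(1.0, short + long_)

print("response after 1 session :", round(habituation(1), 2))
print("response after 4 sessions:", round(habituation(4), 2))
```

With these made-up parameters, a single session leaves the response nearly recovered the next day, while four spaced sessions leave a much larger and longer-lasting decrement, mirroring the qualitative pattern described above.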

Preliminary studies in humans have found measurable brain changes after learning activities such as juggling and playing video games. 14,15 Experimental data demonstrate that the neurons in the brain are capable of learning-induced change. Psychotherapy includes components of experiential and didactic learning that are expected to create change in the patient’s brain. Many psychotherapies focus on thoughts or patterns that are initially outside the awareness of the patient. Therapy creates new memories to modify older, dysfunctional ones and in some cases creates new psychic structures. This learning must involve changes in interneuronal connections (Table 3).

Emotion regulation

Patients in psychotherapy are taught to understand, accept, or manage their emotional responses in new ways. Researchers are looking into how emotion regulation modifies brain activity. One common strategy for altering emotions is reappraisal, in which the individual deliberately tries to alter the meaning or relevance of an event. Reappraisal strategies link cognitive control with emotional experience. Studies of deliberate attempts to decrease aversive emotions, sadness, and sexual arousal through cognitive reappraisal have found that reappraisal most commonly activates multiple areas within the prefrontal cortex and posterior parietal cortex. Activation of these areas during reappraisal leads to decreased activity in portions of the amygdala. These studies have demonstrated specific neuronal circuits, for example between the hippocampus, prefrontal cortex, and amygdala, that are strengthened by psychotherapeutic treatment. 16,17

Suppression, an intentional attempt to minimize the display of emotions, may also decrease the intensity of emotions. Subjects were asked to suppress their emotions while observing sad pictures. When suppressing their emotions, there was an increase in activity in the right orbitofrontal and dorsolateral prefrontal cortices. 18 Other studies have found activation of the dorsal anterior cingulate cortex, dorsomedial prefrontal cortex, and lateral prefrontal cortex with suppression. 19

Cognitive reappraisal and suppression seem to have distinctly different neurophysiological mechanisms. In a head-to-head study both strategies were successful at decreasing subjective emotional experience, but there were differences in brain activation. 20 Reappraisal led to increased activity in the prefrontal cortex and decreased activity in the right amygdala and left insula. Suppression increased activity in the right ventral-lateral prefrontal cortex but did not decrease activity in the amygdala or insula (Table 4).

Fear extinction

Learning to be afraid, or fear conditioning, involves pairing a previously innocuous stimulus (conditioned stimulus) with an aversive stimulus (unconditioned stimulus). The mind begins to associate the previously benign stimulus with the unpleasant one, and the individual experiences heightened anxiety whenever presented with the new conditioned stimulus. The process of fear conditioning involves interactions of the amygdala, insula, anterior cingulate cortex, and medial prefrontal cortex.

The process of unlearning fear is known as fear extinction. Fear extinction does not consist of erasing old memories; rather, it is the creation of new, benign associations. The underlying fear is still present, but successful fear extinction reduces the amplitude and likelihood of a fearful response. This occurs when the once-feared stimulus or situation no longer brings about any adverse consequences. Fear extinction is a principal therapeutic component of exposure therapies for specific phobias and for PTSD, but learning to confront feared memories, situations, and people can be found in a broad range of psychotherapies.

Fear extinction requires a functional ventral-medial prefrontal cortex, rostral anterior cingulate cortex, and hippocampus. Activation of these regions leads to decreased amygdala activity. Clinical studies have demonstrated that the addition of D-cycloserine, a partial N-methyl-D-aspartate (NMDA) glutamate agonist, may improve outcomes in exposure-based therapies for acrophobia and social phobias. 21 Conversely, NMDA antagonists, by decreasing NMDA activity, can inhibit the formation of long-term fear extinction.

While decreasing the activity in the amygdala leads to an acute reduction in perceived fear, the long-term persistence of fear extinction requires the activity of the ventral-medial prefrontal cortex and rostral anterior cingulate cortex. These appear to be vital in connecting the cognitive and emotional experiences and solidifying the learning.

The hippocampus aids in fear extinction by placing events into a context. This context helps determine how generally the brain applies the new learning. Because the previous fear is not erased, when the individual encounters the once feared stimuli there is activation of both the fearful and fear extinguished pathways. The context, provided by interactions between the hippocampus and ventromedial prefrontal cortex, determines which set of behaviors is predominately activated (Table 5). The work of the psychotherapist is to help solidify the most healthy and adaptive responses.

Prediction of response to psychotherapy

Studies have begun to examine the ability to predict a patient’s response to psychotherapy based on neurobiological factors. One of the many changes that occur with depression is elevated metabolism in the posterior insula. Studies have found that changes in connectivity and activation of the insula predicted a positive response to psychotherapy in patients with depression. 22-24 Another study found that increased metabolism in the subcallosal cingulate cortex and superior temporal sulcus was associated with nonresponse to escitalopram or cognitive-behavior therapy for depression. 25

Although a precise description of the neurophysiological changes that occur during psychotherapy is currently impossible, it is likely that future imaging and neurobiological investigation will elucidate this process. The neurobiological correlates to many of the common elements of psychotherapy such as attachment, empathy, memory, learning, emotional regulation, and fear extinction are emerging. While we still cannot answer all of Freud’s questions, or our own questions, the artificial dichotomy between the functioning of the mind and brain during psychotherapy seems less imposing.

Disclosures:

Dr Welton is Associate Professor of Psychiatry and Director of Residency Training and Dr Kay is Emeritus Professor of Psychiatry, Boonshoft School of Medicine, Wright State University, Dayton, OH. The authors report no conflicts of interest concerning the subject matter of this article.

References:

1. Institute of Medicine. Psychosocial Interventions for Mental and Substance Use Disorders: A Framework for Establishing Evidence-Based Standards. Washington, DC: The National Academies Press; 2015:S1-S16.

2. Freud S. Beyond the pleasure principle. In: Strachey J, ed. The Standard Edition of the Complete Psychological Works of Sigmund Freud. Vol 18. London: Hogarth Press; 1955:1-64.

3. Bertsch K, Gamer M, Schmidt B, et al. Oxytocin and reduction of social threat hypersensitivity in women with borderline personality disorder. Am J Psychiatry. 2013;170:1169-1177.

4. Barr CS, Schwandt ML, Lindell SG, et al. Variation at the mu-opioid receptor gene (OPRM1) influences attachment behavior in infant primates. Proc Natl Acad Sci USA. 2008;105:5277-5281.

5. Clarici A, Pellizzoni S, Guaschino S, et al. Intranasal administration of oxytocin in postnatal depression: implications for psychodynamic psychotherapy from a randomized double-blind pilot study. Front Psychiatry. 2015;6:1-10.

6. MacDonald K, MacDonald TM, Brune M, et al. Oxytocin and psychotherapy: a pilot study of its physiological, behavioral and subjective effects in males with depression. Psychoneuroendocrinol. 2013;38:2831-2843.

7. Rizzolatti G, Fadiga L, Gallese V, Fogassi L. Premotor cortex and the recognition of motor actions. Cog Brain Res. 1996;3:131-141.

8. Iacoboni M. Face to face: the neural basis of social mirroring and empathy. Psychiatr Ann. 2007;37:236-241.

9. Hutchison WD, Davis KD, Lozano AM, et al. Pain-related neurons in the human cingulate cortex. Nature Neurosci. 1999;2:403-405.

10. Wyk BC, Hudac CM, Carter EJ, et al. Action understanding in the superior temporal sulcus region. Psychol Sci. 2009;20:771-777.

11. Uzefovsky F, Shalev I, Israel S, et al. Oxytocin receptor and vasopressin receptor 1a genes are respectively associated with the emotional and cognitive empathy. Horm Behav. 2015;67:60-65.

12. Bliss TVP, Collingridge GL. A synaptic model of memory: long-term potentiation in the hippocampus. Nature. 1993;361:31-39.

13. Kandel ER. Psychotherapy and the single synapse: the impact of psychiatric thought on neurobiological research. J Neuropsychiatry Clin Neurosci. 2001;13:290-300.

14. Draganski B, Gaser C, Busch V, et al. Neuroplasticity: changes in grey matter induced by training. Nature. 2004;427:311-312.

15. Kuhn S, Gleich T, Lorenz RC, et al. Playing Super Mario induces structural brain plasticity: gray matter changes resulting from training with a commercial video game. Mol Psychiatry. 2014;19:272.

16. Ochsner KN, Ray RD, Cooper JC, et al. For better or for worse: neural systems supporting the cognitive down- and up-regulation of negative emotion. Neuroimage. 2004;23:483-499.

17. Buhle JT, Silvers JA, Wager TD, et al. Cognitive reappraisal of emotion: a meta-analysis of human neuroimaging studies. Cerebral Cortex. 2014;24:2981-2990.

18. Levesque J, Eugene F, Joanette Y, et al. Neural circuitry underlying voluntary suppression of sadness. Biol Psychiatry. 2003;53:502-510.

19. Phan KL, Fitzgerald DA, Nathan PJ, et al. Neural substrates for voluntary suppression of negative affect: a functional magnetic resonance imaging study. Biol Psychiatry. 2005;57:210-219.

20. Goldin PR, McRae K, Ramel W, Gross JJ. The neural bases of emotion regulation: reappraisal and suppression of negative emotion. Biol Psychiatry. 2008;63:577-586.

21. Britton JC, Gold AL, Feczko EJ, et al. D-cycloserine inhibits amygdala responses during repeated presentations of faces. CNS Spectr. 2007;12:600-605.

22. Roffman JL, Witte JM, Tanner AS, et al. Neural predictors of successful brief psychodynamic psychotherapy for persistent depression. Psychother Psychosom. 2014;83:364-370.

23. Crowther A, Smoski MJ, Minkel J, et al. Resting-state connectivity predictors of response to psychotherapy in major depressive disorder. Neuropsychopharmacol. 2015;40:1659-1673.

24. McGrath CL, Kelley ME, Holtzheimer PE 3rd, et al. Toward a neuroimaging treatment selection biomarker for major depressive disorder. JAMA Psychiatry. 2013;70:821-829.

25. McGrath CL, Kelley ME, Dunlop BW, et al. Pretreatment brain states identify likely nonresponse to standard treatments for depression. Biol Psychiatry. 2014;76:527-535.



HOW DOES THE BRAIN CHANGE THROUGH SYNAPTIC PLASTICITY? NEURONS THAT FIRE TOGETHER, WIRE TOGETHER

While the basic architecture of the human brain is set up early in childhood, learning and memory are possible because individual neurons retain the ability to change their signaling and synaptic connections throughout a person’s life. Brain changes have been observed in neurons and synapses both after extreme changes in sensory experiences, like blindness, and after more subtle ones, like navigating a maze for the first time (Wiesel and Hubel, 1963; Karlsson and Frank, 2008). For the most part, brain changes do not seem to arise from the birth of new neurons, called neurogenesis. While neurogenesis does occur in the adult human brain, it only does so in certain brain areas, and newly born neurons represent only ∼0.004% of the total population of neurons at any given time (Bhardwaj et al., 2006; Bergmann et al., 2012; Spalding et al., 2013).

Instead, learning appears to occur primarily because of changes in the strength and number of the connections between existing neurons, a process called synaptic plasticity. For the most part, the changes occur in such a way that frequently used connections between neurons are enhanced the most. If the activation of a presynaptic neuron causes a postsynaptic neuron to fire, the neurons will alter themselves molecularly and cellularly so that the presynaptic neuron becomes even better at triggering the firing of the postsynaptic neuron (Hebb, 1949; Takeuchi et al., 2014). For example, in the short term, more neurotransmitter receptors may be inserted into the membrane of the synapse of the postsynaptic neuron, making it more receptive to the presynaptic neuron’s signals, and in the long term, new synapses between the two neurons may grow (Holtmaat and Svoboda, 2009; Takeuchi et al., 2014). If the coactivation of two neurons happens repeatedly, these new synapses can last for long periods of time, providing a neural substrate for long-term memory. The principle that coactivation of two neurons leads to a stronger connection between those neurons was pithily summarized in the early 1990s by neuroscientist Carla Shatz as, “Neurons that fire together, wire together” (Shatz, 1992).
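
One way to turn this description into a toy model is to give each synapse a fast component (standing in for short-term receptor insertion, which decays if coactivation stops) and a slow component (standing in for new or stabilized synapses, which consolidates only while the fast component is elevated). The sketch below is purely illustrative; the parameters and the two-component form are assumptions, not taken from the cited studies:

```python
def paired_activity(coactive_steps, idle_steps,
                    lr=0.05, decay=0.98, consolidate=0.02):
    """Effective synaptic weight after a bout of pre/post coactivation
    followed by a period with no coactivation (toy two-timescale model)."""
    fast, slow = 0.0, 0.0
    for _ in range(coactive_steps):                    # firing together
        fast = min(1.0, decay * fast + lr)             # receptor-like, rapid
        slow = min(1.0, slow + consolidate * fast)     # structural, gradual
    for _ in range(idle_steps):                        # not firing together
        fast *= decay                                  # the fast part fades
    return fast + slow                                 # what persists

print("brief pairing, then a long pause   :", round(paired_activity(10, 500), 3))
print("repeated pairing, then a long pause:", round(paired_activity(200, 500), 3))
```

Run with brief versus repeated pairing, the persistent part of the weight ends up much larger after repeated coactivation, mirroring the "fire together, wire together" account above.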


How brain cells pick which connections to keep

Brain cells, or neurons, constantly tinker with their circuit connections, a crucial feature that allows the brain to store and process information. While neurons frequently test out new potential partners through transient contacts, only a fraction of fledgling junctions, called synapses, are selected to become permanent.

The major criterion for selecting excitatory synapses is how well they engage in response to experience-driven neural activity, but how such selection is implemented at the molecular level has been unclear. In a new study, MIT neuroscientists have identified the gene and protein, CPG15, that allows experience to tap a synapse as a keeper.

In a series of novel experiments described in Cell Reports, the team at MIT’s Picower Institute for Learning and Memory used multi-spectral, high-resolution two-photon microscopy to literally watch potential synapses come and go in the visual cortex of mice — both in the light, or normal visual experience, and in the darkness, where there is no visual input. By comparing observations made in normal mice and ones engineered to lack CPG15, they were able to show that the protein is required in order for visual experience to facilitate the transition of nascent excitatory synapses to permanence.

Mice engineered to lack CPG15 only exhibit one behavioral deficiency: They learn much more slowly than normal mice, says senior author Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in the Picower Institute and a professor of brain and cognitive sciences at MIT. They need more trials and repetitions to learn associations that other mice can learn quickly. The new study suggests that’s because without CPG15, they must rely on circuits where synapses simply happened to take hold, rather than on a circuit architecture that has been refined by experience for optimal efficiency.

“Learning and memory are really specific manifestations of our brain’s ability in general to constantly adapt and change in response to our environment,” Nedivi says. “It’s not that the circuits aren’t there in mice lacking CPG15, they just don’t have that feature — which is really important — of being optimized through use.”

Watching in light and darkness

The first experiment reported in the paper, led by former MIT postdoc Jaichandar Subramanian, who is now an assistant professor at the University of Kansas, is a contribution to neuroscience in and of itself, Nedivi says. The novel labeling and imaging technologies implemented in the study, she says, allowed tracking key events in synapse formation with unprecedented spatial and temporal resolution. The study resolved the emergence of “dendritic spines,” which are the structural protrusions on which excitatory synapses are formed, and the recruitment of the synaptic scaffold, PSD95, that signals that a synapse is there to stay.

The team tracked specially labeled neurons in the visual cortex of mice after normal visual experience, and after two weeks in darkness. To their surprise, they saw that spines would routinely arise and then typically disappear again at the same rate regardless of whether the mice were in light or darkness. This careful scrutiny of spines confirmed that experience doesn’t matter for spine formation, Nedivi said. That upends a common assumption in the field, which held that experience was necessary for spines to even emerge.

By keeping track of the presence of PSD95 they could confirm that the synapses that became stabilized during normal visual experience were the ones that had accumulated that protein. But the question remained: How does experience drive PSD95 to the synapse? The team hypothesized that CPG15, which is activity dependent and associated with synapse stabilization, does that job.

CPG15 represents experience

To investigate that, they repeated the same light-versus-dark experiences, but this time in mice engineered to lack CPG15. In the normal mice, there was much more PSD95 recruitment during the light phase than during the dark, but in the mice without CPG15, the experience of seeing in the light never made a difference. It was as if CPG15-less mice in the light were like normal mice in the dark.

Later they tried another experiment testing whether the low PSD95 recruitment seen when normal mice were in the dark could be rescued by exogenous expression of CPG15. Indeed, PSD95 recruitment shot up, as if the animals were exposed to visual experience. This showed that CPG15 not only carries the message of experience in the light, it can actually substitute for it in the dark, essentially “tricking” PSD95 into acting as if experience had called upon it.

“This is a very exciting result, because it shows that CPG15 is not just required for experience-dependent synapse selection, but it’s also sufficient,” says Nedivi, “That’s unique in relation to all other molecules that are involved in synaptic plasticity.”

A new model and method

In all, the paper’s data allowed Nedivi to propose a new model of experience-dependent synapse stabilization: Regardless of neural activity or experience, spines emerge with fledgling excitatory synapses and the receptors needed for further development. If activity and experience send CPG15 their way, that draws in PSD95 and the synapse stabilizes. If experience doesn’t involve the synapse, it gets no CPG15, very likely no PSD95, and the spine withers away.

The paper potentially has significance beyond the findings about experience-dependent synapse stabilization, Nedivi says. The method it describes of closely monitoring the growth or withering of spines and synapses amid a manipulation (like knocking out or modifying a gene) allows for a whole raft of studies examining how a gene, a drug, or other factors affect synapses.

“You can apply this to any disease model and use this very sensitive tool for seeing what might be wrong at the synapse,” she says.

In addition to Nedivi and Subramanian, the paper’s other authors are Katrin Michel and Marc Benoit.

The National Institutes of Health and the JPB Foundation provided support for the research.


Conclusion

This picture of a neuron using electric potentials to act as a kind of biological transistor gives us a foundation for thinking about the brain and nervous system: it shows what is happening at the level of a single cell. Yet, as mentioned at the start of this article, this picture is really just a single piece in the puzzle of how the brain works.


Additionally, the answer to the question of how a neuron fires serves mainly to open up an even bigger question: how is it that the process of cells interacting with each other in this way yields all the interesting things the brain does? Five complementary observations on the complexity of the brain are starting points for answering that question.

First, there are about 86 billion neurons in the human brain, and a neuron doesn’t just talk to one other neuron: it can potentially talk to thousands.

Second, among those billions of neurons, there is enormous functional diversity (see “Caveats”, above).

Third, neurons don’t just activate one another, they can also block or otherwise modulate the activation of their partners.

Fourth, brain regions and subregions are composed of millions of neurons arranged in particular patterns of circuitry, which allow them to accomplish particular neural tasks. These regions are then linked to other regions in specific ways and this allows for functional networks composed of specialized modules to arise.

And fifth, the brain is part of another complex biological system (the rest of the body), and is thus inextricably linked to it and the outside world.

These observations notwithstanding, the only single easy answer to the question of how the brain works is that nobody really knows (yet). Our ignorance here is not for want of trying, nor is it a sign that no progress has been made. Rather, it is due to the fact that we are not dealing with just another organ of the body. We are dealing with the organ that gives us everything from sight and sound to love and hate, from gut feelings to carefully constructed arguments, from the losses we mourn to the relationships we cherish.

This problem is so difficult, and its answer so elusive, because we are dealing with the most complex and consequential object in the known universe: the human brain.




How do neurons know?

Patricia Smith Churchland is UC President’s Professor of Philosophy and chair of the philosophy department at the University of California, San Diego, and adjunct professor at the Salk Institute. She is past president of the American Philosophical Association and the Society for Philosophy and Psychology. Her latest books are Brain-Wise: Studies in Neurophilosophy (2002) and On the Contrary: Critical Essays, 1987–1997 (with Paul Churchland, 1998).

My knowing anything depends on my neurons – the cells of my brain. 1 More precisely, what I know depends on the specific configuration of connections among my trillion neurons, on the neurochemical interactions between connected neurons, and on the response portfolio of different neuron types. All this is what makes me me.

The range of things I know is as diverse as the range of stuff at a yard sale. Some is knowledge how, some knowledge that, some a bit of both, and some not exactly either. Some is fleeting, some enduring. Some I can articulate, such as the instructions for changing a tire, some, such as how I construct a logical argument, I cannot.

Some learning is conscious, some not. To learn some things, such as how to ride a bicycle, I have to try over and over; by contrast, learning to avoid eating oysters if they made me vomit the last time just happens. Knowing how to change a tire depends on cultural artifacts, but knowing how to clap does not.

And neurons are at the bottom of it all. How did it come to pass that we know anything?

Early in the history of living things, evolution stumbled upon the advantages accruing to animals whose nervous systems could make predictions based upon past correlations. Unlike plants, which have to take what comes, animals are movers, and having a brain that can learn confers a competitive advantage in finding food, mates, and shelter and in avoiding dangers. Nervous systems earn their keep in the service of prediction, and, to that end, map the me-relevant parts of the world – its spatial relations, social relations, dangers, and so on. And, of course, brains map their worlds in varying degrees of complexity, and relative to the needs, equipment, and lifestyle of the organisms they inhabit. 2

Thus humans, dogs, and frogs will represent the same pond quite differently. The human, for example, may be interested in the pond’s water source, the potability of the water, or the potential for irrigation. The dog may be interested in a cool swim and a good drink, and the frog, in a good place to lay eggs, find flies, bask in the sun, or hide.

Boiled down to essentials, the main problems for the neuroscience of knowledge are these: How do structural arrangements in neural tissue embody knowledge (the problem of representations)? How, as a result of the animal’s experience, do neurons undergo changes in their structural features such that these changes constitute knowing something new (the problem of learning)? How is the genome organized so that the nervous system it builds is able to learn what it needs to learn?

The spectacular progress, during the last three or four decades, in genetics, psychology, neuroethology, neuroembryology, and neurobiology has given the problems of how brains represent and learn and get built an entirely new look. In the process, many revered paradigms have taken a pounding. From the ashes of the old verities is arising a very different framework for thinking about ourselves and how our brains make sense of the world.

Historically, philosophers have debated how much of what we know is based on instinct, and how much on experience. At one extreme, the rationalists argued that essentially all knowledge was innate. At the other, radical empiricists, impressed by infant modifiability and by the impact of culture, argued that all knowledge was acquired.

Knowledge displayed at birth is obviously likely to be innate. A normal neonate rat scrambles to the warmest place, latches its mouth onto a nipple, and begins to suck. A kitten thrown into the air rights itself and lands on its feet. A human neonate will imitate a facial expression, such as a stuck-out tongue. But other knowledge, such as how to weave or make fire, is obviously learned post-natally.

Such contrasts have seemed to imply that everything we know is either caused by genes or caused by experience, where these categories are construed as exclusive and exhaustive. But recent discoveries in molecular biology, neuroembryology, and neurobiology have demolished this sharp distinction between nature and nurture. One such discovery is that normal development, right from the earliest stages, relies on both genes and epigenetic conditions. For example, a female (XX) fetus developing in a uterine environment that is unusually high in androgens may be born with male-looking genitalia and may have a masculinized area in the hypothalamus, a sexually dimorphic brain region. In mice, the gender of adjacent siblings on the placental fetus line in the uterus will affect such things as the male/female ratio of a given mouse’s subsequent offspring, and even the longevity of those offspring.

On the other hand, paradigmatic instances of long-term learning, such as memorizing a route through a forest, rely on genes to produce changes in cells that embody that learning. If you experience a new kind of sensorimotor event during the day – say, for example, you learn to cast a fishing line – and your brain rehearses that event during your deep sleep cycle, then the gene zif-268 will be up-regulated. Improvement in casting the next day will depend on the resulting gene products and their role in neuronal function.

Indeed, five important and related discoveries have made it increasingly clear just how interrelated ‘nature’ and ‘nurture’ are, and, consequently, how inadequate the old distinction is. 3

First, what genes do is code for proteins. Strictly speaking, there is no gene for a sucking reflex, let alone for female coyness or Scottish thriftiness or cognizance of the concept of zero. A gene is simply a sequence of base pairs containing the information that allows RNA to string together a sequence of amino acids to constitute a protein. (This gene is said to be ‘expressed’ when it is transcribed into RNA products, some of which, in turn, are translated into proteins.)

Second, natural selection cannot directly select particular wiring to support a particular domain of knowledge. Blind luck aside, what determines whether the animal survives is its behavior; its equipment, neural and otherwise, underpins that behavior. Representational prowess in a nervous system can be selected for, albeit indirectly, only if the representational package informing the behavior was what gave the animal the competitive edge. Hence representational sophistication and its wiring infrastructure can be selected for only via the behavior they upgrade.

Third, there is a truly stunning degree of conservation in structures and developmental organization across all vertebrate animals, and a very high degree of conservation in basic cellular functions across phyla, from worms to spiders to humans. All nervous systems use essentially the same neurochemicals, and their neurons work in essentially the same way, the variations being vastly outweighed by the similarities. Humans have only about thirty thousand genes, and we differ from mice in only about three hundred of those; 4 meanwhile, we share about 99.7 percent of our genes with chimpanzees. Our brains and those of other primates have the same organization, the same gross structures in roughly the same proportions, the same neuron types, and, so far as we know, much the same developmental schedule and patterns of connectivity.

Fourth, given the high degree of conservation, whence the diversity of multicellular organisms? Molecular biologists have discovered that some genes regulate the expression of other genes, and are themselves regulated by yet other genes, in an intricate, interactive, and systematic organization. But genes (via RNA) make proteins, so the expression of one gene by another may be affected via sensitivity to protein products. Additionally, proteins, both within cells and in the extracellular space, may interact with each other to yield further contingencies that can figure in an unfolding regulatory cascade. Small differences in regulatory genes can have large and far-reaching effects, owing to the intricate hierarchy of regulatory linkages between them. The emergence of complex, interactive cause-effect profiles for gene expression begets very fancy regulatory cascades that can beget very fancy organisms – us, for example.

Fifth, various aspects of the development of an organism from fertilized egg to up-and-running critter depend on where and when cells are born. Neurons originate from the daughter cells of the last division of pre-neuron cells. Whether such a daughter cell becomes a glial (supporting) cell or a neuron, and which type of some hundred types of neurons the cell becomes, depends on its epigenetic circumstances. Moreover, the manner in which neurons from one area, such as the thalamus, connect to cells in the cortex depends very much on epigenetic circumstances, e.g., on the spontaneous activity, and later, the experience-driven activity, of the thalamic and cortical neurons. This is not to say that there are no causally significant differences between, for instance, the neonatal sucking reflex and knowing how to make a fire. Differences, obviously, there are. The essential point is that the differences do not sort themselves into the archaic ‘nature’ versus ‘nurture’ bins. Genes and extragenetic factors collaborate in a complex interdependency. 5

Recent discoveries in neuropsychology point in this same direction. Hitherto, it was assumed that brain centers – modules dedicated to a specific task – were wired up at birth. The idea was that we were able to see because dedicated ‘visual modules’ in the cortex were wired for vision; we could feel because dedicated modules in the cortex were wired for touch, and so on.

The truth turns out to be much more puzzling.

For example, the visual cortex of a blind subject is recruited during the reading of braille, a distinctly nonvisual, tactile skill – whether the subject’s blindness is acquired or congenital. It turns out, moreover, that stimulating the subject’s visual cortex with a magnet-induced current will temporarily impede his braille performance. Even more remarkably, activity in the visual cortex occurs even in normal seeing subjects who are blindfolded for a few days while learning to read braille. 6 So long as the blindfold remains firmly in place to prevent any light from falling on the retina, performance of braille reading steadily improves. The blindfold is essential, for normal visual stimuli that activate the visual cortex in the normal way impede acquisition of the tactile skill. For example, if after five days the blindfold is removed, even briefly while the subject watches a television program before going to sleep, his braille performance under blindfold the next day falls from its previous level. If the visual cortex can be recruited in the processing of nonvisual signals, what sense can we make of the notion of the dedicated vision module, and of the dedicated-modules hypothesis more generally?

What is clear is that the nature versus nurture dichotomy is more of a liability than an asset in framing the inquiry into the origin of plasticity in human brains. Its inadequacy is rather like the inadequacy of ‘good versus evil’ as a framework for understanding the complexity of political life in human societies. It is not that there is nothing to it. But it is like using a grub hoe to remove a splinter.

An appealing idea is that if you learn something, such as how to tie a trucker’s knot, then that information will be stored in one particular location in the brain, along with related knowledge – say, of reef knots and half-hitches. That is, after all, a good method for storing tools and paper files – in a particular drawer at a particular location. But this is not the brain’s way, as Karl Lashley first demonstrated in the 1920s.

Lashley reasoned that if a rat learned something, such as a route through a certain maze, and if that information was stored in a single, punctate location, then you should be able to extract it by lesioning the rat’s brain in the right place. Lashley trained twenty rats on his maze. Next he removed a different area of cortex from each animal, and allowed the rats time to recover. He then retested each one to see which lesion removed knowledge of the maze. Lashley discovered that a rat’s knowledge could not be localized to any single region; it appeared that all of the rats were somewhat impaired and yet somewhat competent – although more extensive tissue removal produced more serious memory deficit.

As improved experimental protocols later showed, Lashley’s non-localization conclusion was essentially correct. There is no such thing as a dedicated memory organ in the brain; information is not stored on the filing cabinet model at all, but distributed across neurons.

A general understanding of what it means for information to be distributed over neurons in a network has emerged from computer models. The basic idea is that artificial neurons in a network, by virtue of their connections to other artificial neurons and of the variable strengths of those connections, can produce a pattern that represents something – such as a male face or a female face, or the face of Churchill. The connection strengths vary as the artificial network goes through a training phase, during which it gets feedback about the adequacy of its representations given its input. But many details of how actual neural nets – as opposed to computer-simulated ones – store and distribute information have not yet been pinned down, and so computer models and neural experiments are coevolving.
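
To make the idea of distributed storage concrete for modeling purposes, here is a minimal sketch, assuming made-up data and a simple delta-rule update (not any particular published model): after training, the network's "knowledge" of the category lives in the whole pattern of connection strengths, and no single weight can be pointed to as the place where it is stored.

```python
# Minimal sketch: a tiny artificial network whose "knowledge" of a category
# ends up spread across all of its connection weights. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors (hypothetical inputs), labelled 1 or 0.
X = rng.normal(size=(40, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)          # category to be learned

w = np.zeros(8)                             # connection strengths, initially naive
lr = 0.1                                    # learning rate

for epoch in range(200):                    # "training phase" with feedback
    for x_i, y_i in zip(X, y):
        out = 1.0 / (1.0 + np.exp(-(x_i @ w)))   # unit's response
        w += lr * (y_i - out) * x_i              # adjust every weight a little

# No single weight encodes the category; the pattern over all weights does.
print("learned weights:", np.round(w, 2))
preds = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```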

Neuroscientists are trying to understand the structure of learning by using a variety of research strategies. One strategy consists of tracking down experience-dependent changes at the level of the neuron to find out what precisely changes, when, and why. Another strategy involves learning on a larger scale: what happens in behavior and in particular brain subsystems when there are lesions, or during development, or when the subject performs a memory task while in a scanner, or, in the case of experimental animals, when certain genes are knocked out? At this level of inquiry, psychology, neuroscience, and molecular biology closely interact.

Network-level research aims to straddle the gap between the systems and the neuronal levels. One challenge is to understand how distinct local changes in many different neurons yield a coherent global, system-level change and a task-suitable modification of behavior. How do diverse and far-flung changes in the brain underlie an improved golf swing or a better knowledge of quantum mechanics?

What kinds of experience-dependent modifications occur in the brain? From one day to the next, the neurons that collectively make me what I am undergo many structural changes: new branches can sprout, existing branches can extend, and new receptor sites for neurochemical signals can come into being. On the other hand, pruning could decrease branches, and therewith decrease the number of synaptic connections between neurons. Or the synapses on remaining branches could be shut down altogether. Or the whole cell might die, taking with it all the synapses it formerly supported. Or, finally, in certain special regions, a whole new neuron might be born and begin to establish synaptic connections in its region.

And that is not all. Repeated high rates of synaptic firing (spiking) will deplete the neurotransmitter vesicles available for release, thus constituting a kind of memory on the order of two to three seconds. The constituents of particular neurons, the number of vesicles released per spike, and the number of transmitter molecules contained in each vesicle, can change. And yet, somehow, my skills remain much the same, and my autobiographical memories remain intact, even though my brain is never exactly the same from day to day, or even from minute to minute.
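
The vesicle-depletion "memory" described above can be sketched as a simple resource model, loosely in the spirit of standard short-term synaptic-depression models; the time constant and release fraction below are assumptions chosen only to illustrate a trace lasting on the order of a couple of seconds.

```python
# Illustrative sketch of vesicle depletion as a seconds-long "memory":
# each spike releases a fraction of a slowly recovering resource, so a recent
# burst of firing leaves a trace in the depleted pool. Parameters are assumed.
import numpy as np

dt = 0.001          # simulation step, seconds
tau_rec = 2.0       # recovery time constant (assumed, ~2-3 s trace)
release_frac = 0.2  # fraction of available vesicles released per spike (assumed)

t = np.arange(0.0, 6.0, dt)
rng = np.random.default_rng(1)
spikes = (rng.random(t.size) < 50 * dt) & (t < 2.0)   # ~50 Hz for 2 s, then silence

resource = 1.0
trace = np.empty(t.size)
for i, spiking in enumerate(spikes):
    resource += dt * (1.0 - resource) / tau_rec       # vesicle pool recovers
    if spiking:
        resource -= release_frac * resource           # release depletes the pool
    trace[i] = resource

print("pool right after the burst:", round(trace[int(2.0 / dt)], 2))
print("pool ~3 s later:", round(trace[int(5.0 / dt)], 2))
```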

No ‘bandleader’ neurons exist to ensure that diverse changes within neurons and across neuronal populations are properly orchestrated and collectively reflect the lessons of experience. Nevertheless, several general assumptions guide research. For convenience, the broad range of neuronal modifiability can be condensed by referring simply to the modification of synapses. The decision to modify synapses can be made either globally (broadcast widely) or locally (targeting specific synapses). If made globally, then the signal for change will be permissive, in effect saying, “You may change yourself now” – but not dictating exactly where or by how much or in what direction. If local, the decision will likely conform to a rule such as this: If distinct but simultaneous input signals cause the receiving neuron to respond with a spike, then strengthen the connection between the input neurons and the output neurons. On its own, a signal from one presynaptic (sending) neuron is unlikely to cause the postsynaptic (receiving) neuron to spike. But if two distinct presynaptic neurons – perhaps one from the auditory system and one from the somatosensory system – connect to the same postsynaptic neuron at the same time, then the receiving neuron is more likely to spike. This joint input activity creates a larger postsynaptic effect, triggering a cascade of events inside the neuron that strengthens the synapse. This general arrangement allows for distinct but associated world events (e.g., blue flower and plenty of nectar) to be modeled by associated neuronal events.
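
As a rough illustration of that local rule, the sketch below implements a coincidence-gated Hebbian update; the thresholds, rates, and input statistics are invented for the example. A synapse is strengthened only on steps when its own input was active and the joint input actually made the postsynaptic cell spike.

```python
# Minimal sketch of the local rule in the text: strengthen a synapse only when
# its presynaptic input coincides with enough other input to make the
# postsynaptic cell spike. Thresholds and rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_inputs = 2                     # e.g., one auditory and one somatosensory fibre
w = np.full(n_inputs, 0.4)       # initial synaptic strengths
threshold = 0.6                  # postsynaptic spike threshold (assumed)
lr = 0.05

for step in range(1000):
    # Each presynaptic neuron is either active (1) or silent (0) this step.
    pre = (rng.random(n_inputs) < 0.3).astype(float)
    post_spike = (pre @ w) >= threshold      # one input alone is initially too weak
    if post_spike:
        # Hebbian step: only the synapses that were active get strengthened.
        w += lr * pre
        w = np.clip(w, 0.0, 1.0)

# After many coincidences, either input alone can drive the cell:
# the association between the two input streams is stored in the weights.
print("final strengths:", np.round(w, 2))
```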

The nervous system enables animals to make predictions. 7 Unlike plants, animals can use past correlations between classes of events (e.g., between red cherries and a satisfying taste) to judge the probability of future correlations. A central part of learning thus involves computing which specific properties predict the presence of which desirable effects. We correlate variable rewards with a feature to some degree of probability, so good predictions will reflect both the expected value of the reward and the probability of the reward’s occurring; this is the expected utility. Humans and bees alike, in the normal course of the business of life, compute expected utility, and some neuronal details are beginning to emerge to explain how our brains do this.

To the casual observer, bees seem to visit flowers for nectar on a willy-nilly basis. Closer observation, however, reveals that they forage methodically. Not only do bees tend to remember which individual flowers they have already visited, but in a field of mixed flowers with varying amounts of nectar they also learn to optimize their foraging strategy, so that they get the most nectar for the least effort.

Suppose you stock a small field with two sets of plastic flowers – yellow and blue – each with wells in the center into which precise amounts of sucrose have been deposited. 8 These flowers are randomly distributed around the enclosed field and then baited with measured volumes of ‘nectar’: all blue flowers have two milliliters; one-third of the yellow flowers have six milliliters, and two-thirds have none. This sucrose distribution ensures that the mean value of visiting a population of blue flowers is the same as that of visiting the yellow flowers, though the yellow flowers are more uncertain than the blues.

After an initial random sampling of the flowers, the bees quickly fall into a pattern of going to the blue flowers 85 percent of the time. You can change their foraging pattern by raising the mean value of the yellow flowers – for example, by baiting one-third of them with ten milliliters. The behavior of the bees displays a kind of trade-off between the reliability of the source type and the nectar volume of the source type, with the bees showing a mild preference for reliability. What is interesting is this: depending on the reward profile taken in a sample of visits, the bees revise their strategy. The bees appear to be calculating expected utility. How do bees – mere bees – do this?
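
The arithmetic behind that trade-off can be written out directly. In the sketch below the nectar volumes follow the setup described above, while the concave (risk-averse) utility function is an assumption introduced only to show how two reward schedules with equal means can still be ranked differently.

```python
# Illustrative calculation of the blue/yellow trade-off described above.
# Reward volumes follow the text; the concave utility (square root) is an
# assumption used to show why equal means can still favour the reliable colour.
import math

blue   = [(1.0, 2.0)]                    # every blue flower: 2 units of nectar
yellow = [(1/3, 6.0), (2/3, 0.0)]        # one-third of yellows: 6 units, rest empty

def expected_value(lottery):
    return sum(p * r for p, r in lottery)

def expected_utility(lottery, u=math.sqrt):
    return sum(p * u(r) for p, r in lottery)

print("mean nectar -> blue:", expected_value(blue),
      " yellow:", round(expected_value(yellow), 2))
print("expected utility (sqrt) -> blue:", round(expected_utility(blue), 2),
      " yellow:", round(expected_utility(yellow), 2))
# Equal means (2.0 vs 2.0), but the risk-averse utility ranks blue higher,
# consistent with the bees' mild preference for the reliable colour.
```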

In the bee brain there is a neuron, though itself neither sensory nor motor, that responds positively to reward. This neuron, called VUMmx1 (‘vum’ for short), projects very diffusely in the bee brain, reaching both sensory and motor regions, as it mediates reinforcement learning. Using an artificial neural network, Read Montague and Peter Dayan discovered that the activity of vum represents prediction error – that is, the difference between ‘the goodies expected’ and ‘the goodies received this time.’ 9 Vum’s output is the release of a neuromodulator that targets a variety of cells, including those responsible for action selection. If that neuromodulator also acts on the synapses connecting the sensory neurons to vum, then those synapses will be weakened or strengthened, depending on whether vum signals ‘worse than expected’ (less neuromodulator) or ‘better than expected’ (more neuromodulator). Assuming that the Montague-Dayan model is correct, then a surprisingly simple circuit, operating according to a fairly simple weight-modification algorithm, underlies the bee’s adaptability to foraging conditions.
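
A minimal sketch in the spirit of that weight-modification idea (not the Montague-Dayan implementation itself): a scalar error, reward received minus reward predicted, stands in for the vum signal and nudges the predicted value of each flower color. The learning rate, choice rule, and seeds are assumptions; the reward schedule follows the field experiment above.

```python
# Sketch of a prediction-error learning rule in the spirit of the bee model
# described above: a scalar "vum" signal = reward received - reward predicted,
# used to adjust the predicted value of each flower colour. Parameters assumed.
import numpy as np

rng = np.random.default_rng(3)
weights = {"blue": 0.0, "yellow": 0.0}   # predicted reward per colour
lr = 0.1

def sample_reward(colour):
    if colour == "blue":
        return 2.0                                   # reliable source
    return 6.0 if rng.random() < 1 / 3 else 0.0      # risky source

for visit in range(2000):
    # Softmax choice: colours predicted to be better are visited more often.
    prefs = np.array([weights["blue"], weights["yellow"]])
    probs = np.exp(2.0 * prefs) / np.exp(2.0 * prefs).sum()
    colour = rng.choice(["blue", "yellow"], p=probs)

    reward = sample_reward(colour)
    vum_error = reward - weights[colour]     # "better/worse than expected"
    weights[colour] += lr * vum_error        # strengthen or weaken the prediction

print({k: round(v, 2) for k, v in weights.items()})
```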

Dependency relations between phenomena can be very complex. In much of life, dependencies are conditional and probabilistic: If I put a fresh worm on the hook, and if it is early afternoon, then very probably I will catch a trout here. As we learn more about the complexities of the world, we ‘upgrade’ our representations of dependency relations; 10 we learn, for example, that trout are more likely to be caught when the water is cool, that shadowy pools are more promising fish havens than sunny pools, and that talking to the worm, entreating the trout, or wearing a ‘lucky’ hat makes no difference. Part of what we call intelligence in humans and other animals is the capacity to acquire an increasingly complex understanding of dependency relations. This allows us to distinguish fortuitous correlations that are not genuinely predictive in the long run (e.g., breaking a tooth on Friday the thirteenth) from causal correlations that are (e.g., breaking a tooth and chewing hard candy). This means that we can replace superstitious hypotheses with those that pass empirical muster.

Like the bee, humans and other animals have a reward system that mediates learning about how the world works. There are neurons in the mammalian brain that, like vum, respond to reward. 11 They shift their responsiveness to a stimulus that predicts reward, or indicates error if the reward is not forthcoming. These neurons project from a brainstem structure (the ventral tegmental area, or ‘vta’) to the frontal cortex, and release dopamine onto the postsynaptic neurons. The dopamine, only one of the neurochemicals involved in the reward system, modulates the excitability of the target neurons to the neurotransmitters, thus setting up the conditions for local learning of specific associations.

Reinforcing a behavior by increasing pleasure and decreasing anxiety and pain works very efficiently. Nevertheless, such a system can be hijacked by plant-derived molecules whose behavior mimics the brain’s own reward system neurochemicals. Changes in reward system pathways occur after administration of cocaine, nicotine, or opiates, all of which bind to receptor sites on neurons and are similar to the brain’s own peptides. The precise role in brain function of the large number of brain peptides is one of neuroscience’s continuing conundrums. 12

These discoveries open the door to understanding the neural organization underlying prediction. They begin to forge the explanatory bridge between experience-dependent changes in single neurons and experience-dependent guidance of behavior. And they have begun to expose the neurobiology of addiction. A complementary line of research, meanwhile, is untangling the mechanisms for predicting what is nasty. Although aversive learning depends upon a different set of structures and networks than does reinforcement learning, here too the critical modifications happen at the level of individual neurons, and these local modifications are coordinated across neuronal populations and integrated across time.

Within other areas of learning research, comparable explanatory threads are beginning to tie together the many levels of nervous system organization. This research has deepened our understanding of working memory (holding information at the ready during the absence of relevant stimuli), spatial learning, autobiographical memory, motor skills, and logical inference. Granting the extraordinary research accomplishments in the neuroscience of knowledge, it is nevertheless vital to realize that these are still very early days for neuroscience. Many surprises – and even a revolution or two – are undoubtedly in store.

Together, neuroscience, psychology, embryology, and molecular biology are teaching us about ourselves as knowers – about what it is to know, learn, remember, and forget. But not all philosophers embrace these developments as progress. 13 Some believe that what we call external reality is naught but an idea created in a nonphysical mind, a mind that can be understood only through introspection and reflection. To these philosophers, developments in cognitive neuroscience seem, at best, irrelevant.

The element of truth in these philosophers’ approach is their hunch that the mind is not just a passive canvas on which reality paints. Indeed, we know that brains are continually organizing, structuring, extracting, and creating. As a central part of their predictive functions, nervous systems are rigged to make a coherent story of whatever input they get. ‘Coherencing,’ as I call it, sometimes entails seeing a fragment as a whole, or a contour where none exists; sometimes it involves predicting the imminent perception of an object as yet unperceived. As a result of learning, brains come to recognize a stimulus as indicating the onset of meningitis in a child, or an eclipse of the Moon by the Earth’s shadow. Such knowledge depends upon stacks upon stacks of neural networks. There is no apprehending the nature of reality except via brains, and via the theories and artifacts that brains devise and interpret.

From this it does not follow, however, that reality is only a mind-created idea. It means, rather, that our brains have to keep plugging along, trying to devise hypotheses that more accurately map the causal structure of reality. We build the next generation of theories upon the scaffolding – or the ruins – of the last. How do we know whether our hypotheses are increasingly adequate? Only by their relative success in predicting and explaining.

But does all of this mean that there is a kind of fatal circularity in neuroscience – that the brain necessarily uses itself to study itself? Not if you think about it. The brain I study is seldom my own, but that of other animals or humans, and I can reliably generalize to my own case. Neuroepistemology involves many brains – correcting each other, testing each other, and building models that can be rated as better or worse in characterizing the neural world.

Is there anything left for the philosopher to do? For the neurophilosopher, at least, questions abound: about the integration of distinct memory systems, the nature of representation, the nature of reasoning and rationality, how information is used to make decisions, what nervous systems interpret as information, and so on. These are questions with deep roots reaching back to the ancient Greeks, with ramifying branches extending throughout the history and philosophy of Western thought. They are questions where experiment and theoretical insight must jointly conspire, where creativity in experimental design and creativity in theoretical speculation must egg each other on to unforeseen discoveries. 14

1 Portions of this paper are drawn from my book Brain-Wise: Studies in Neurophilosophy (Cambridge, Mass.: MIT Press, 2002).


Adult brain neurons can remodel connections

Overturning a century of prevailing thought, scientists are finding that neurons in the adult brain can remodel their connections. In work reported in the Nov. 24 online edition of the Proceedings of the National Academy of Sciences (PNAS), Elly Nedivi, associate professor of neurobiology at the Picower Institute for Learning and Memory, and colleagues found that a type of neuron implicated in autism spectrum disorders remodels itself in a strip of brain tissue only as thick as four sheets of tissue paper at the upper border of cortical layer 2.

"This work is particularly exciting because it sheds new light on the potential flexibility of cerebral cortex circuitry and architecture in higher-level brain regions that contribute to perception and cognition," said Nedivi, who is also affiliated with MIT's departments of brain and cognitive sciences and biology. "Our goal is to extract clues regarding the contribution of structural remodeling to long-term adult brain plasticity -- the brain's ability to change in response to input from the environment -- and what allows or limits this plasticity."

In a previous study, Nedivi and Peter T. So, professor of mechanical engineering and biological engineering at MIT, saw relatively large-scale changes in the length of dendrites -- branched projections of nerve cells that conduct electrical stimulation to the cell body. Even more surprising was their finding that this growth was limited to a specific type of cell. The majority of cortical neurons were stable, while the small fraction of locally connecting cells called interneurons underwent dynamic rearrangement.

In the current study, they show that the capacity of interneurons to remodel is not predetermined by genetic lineage, but imposed by the circuitry within the layers of the cortex itself. "Our findings suggest that the location of cells within the circuit and not pre-programming by genes determines their ability to remodel in the adult brain," Nedivi said. "If we can identify what aspect of this location allows growth in an otherwise stable brain, we can perhaps use it to coax growth in cells and regions that are normally unable to repair or adjust to a changing environment."

"Knowing that neurons are able to grow in the adult brain gives us a chance to enhance the process and explore under what conditions we can make it happen," Nedivi said. "In particular, we need to pay more attention to the unique interneuron population that retains special growth features into adulthood."

In addition to Nedivi and So, authors are Brain and Cognitive Sciences graduate student Wei-Chung Allen Lee; Biology graduate students Jennifer H. Leslie and Jerry L. Chen; MIT research affiliate Hayden Huang; and Yael Amitai of Ben-Gurion University in Israel.

