Are there humans whose brains send signals to the limbs faster than average?

I have done some research on the time the brain takes to send signals, but I couldn't find whether that time is the same for all humans or whether there are differences between people. I based my question on an article which mentioned that the brain of Messi (one of the best players in soccer history) sends signals to his limbs faster than that of the normal human. Is this possible?

Assuming that the time it takes for a signal to travel from the brain to the limbs follows a normal distribution (which is reasonable), then certainly there are people who fall at the upper and lower ends of the distribution. In fact, it's relatively improbable that someone would fall right at the mean.

Consider the normal distribution below. If the mean for all humans is at the center of the distribution, then almost all fall above or below that mean. Whether Messi falls in the lower tail is a separate question.
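The point about the tails can be made concrete with a quick simulation. This is a minimal sketch under purely illustrative assumptions: the mean of 100 ms and standard deviation of 10 ms are made-up numbers, not measured conduction times.

```python
import random
import statistics

random.seed(0)

# Illustrative numbers only: assume brain-to-limb signal times are
# normally distributed with mean 100 ms and standard deviation 10 ms.
MEAN_MS, SD_MS = 100.0, 10.0

times = [random.gauss(MEAN_MS, SD_MS) for _ in range(100_000)]

# Roughly half the population falls below the mean, and a small
# fraction falls far into the fast tail.
faster_than_mean = sum(t < MEAN_MS for t in times) / len(times)
very_fast = sum(t < MEAN_MS - 2 * SD_MS for t in times) / len(times)

print(f"sample mean: {statistics.mean(times):.1f} ms")
print(f"fraction faster than the mean: {faster_than_mean:.2%}")   # ~50%
print(f"fraction more than 2 SD faster: {very_fast:.2%}")         # ~2.3%
```

Whatever the true mean and spread are, the same logic applies: by definition of the distribution, some people will sit in the fast tail.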

How does your body move? Does the brain send it messages?

Muscles move on commands from the brain. Single nerve cells in the spinal cord, called motor neurons, are the only way the brain connects to muscles. When a motor neuron inside the spinal cord fires, an impulse goes out from it to the muscles on a long, very thin extension of that single cell called an axon. When the impulse travels down the axon to the muscle, a chemical is released at its ending. Muscles are made of long fibers connected to each other longways by a ratchet mechanism, the kind of mechanism that allows the two parts of an extension ladder to slide past each other and then lock in a certain position. When the chemical impulse from the motor neuron hits the muscle, it causes the muscle fibers to ratchet past each other, overlapping each other more, so that the muscle gets shorter and fatter. When the impulses from the nerves stop, the muscle fibers slide back to their original positions.

Each motor neuron connects to just one muscle, say the biceps on the front of your upper arm that lifts your forearm, or the triceps, the one on the back that extends your forearm. But when you move, you never think, “I’d like to contract my biceps two inches and relax my triceps two inches” — instead you think, “I’d like to put this cake in my mouth!” How does the brain translate the general idea of lifting something to your mouth into specific commands to muscles? It does it in stages. In the cerebral cortex, the commands in the neurons there represent coordinated movements – like pick up the cake, hit the ball, salute. The cortex then connects to a sort of console in the spinal cord that overlays the motor neurons. This console lays out arm position in space, up-down, left-right. Each desired arm position is then read out as a collection of specific commands to each motor neuron and muscle.


Barbara Finlay

  • W.R. Kenan Professor of Psychology, also Neurobiology & Behavior, Cornell University
  • Research area: brain evolution

Question from

  • Chelsea Norton, West Middle School (Mrs. Summerlee's class)

How Does The Brain Control Movement?

How does the brain control the precision of movement of our body parts? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Fabian van den Berg, Neuroscientist and Psychologist, on Quora:

How does the brain control the precision of movement of our body parts? This might get a bit more complicated than you expected, so hold on. Your brain is rather complicated, with many different parts, and even a simplified account gets confusing. This is going to be a long one, since you asked for the brain-to-movement mechanism (which is more complex than you’d think).

I’ll try to find a middle ground, making it both understandable and accurate. This is about the voluntary control the brain has over muscles; involuntary movements like reflexes are excluded.

Initiating a Movement

First we need to know how a movement is initiated. This isn’t as easy as sending a message from the brain to the muscle to make it “move”. Messages originate in the cortex, the outer layer of the brain. These need to go to the muscles, but they make a little stop first: if every message were sent straight to your muscles, you wouldn’t be able to function. This stop happens at the Basal Ganglia, a complicated system that selects which “instructions” will be executed and which are inhibited. The reason for a movement can be many things; the specific goal is not important right now.

The important areas in the basal ganglia are the ones below; I’ll hold off on too much detail and just give general descriptions. There are more structures that may or may not be part of the basal ganglia, but let’s stick to these.

Striatum: The striatum is a collective name for several structures. The dorsal (top part) is divided into the caudate nucleus and the putamen. The ventral (bottom part) is divided into the nucleus accumbens and olfactory tubercle.

Globus Pallidus: The Globus Pallidus is divided into two parts, the internal and the external globus pallidus. It has a strong role in voluntary movement.

Substantia Nigra: This translates to “black substance”, and it is named that way because it is literally darker than the rest. The reason for the dark look is high levels of neuromelanin found in dopaminergic neurons (neurons that produce dopamine). The substantia nigra also has two parts: the pars compacta and the pars reticularis. It plays a large role in movement, motivation, and learning responses to stimuli.

Thalamus: This is a true master of multitasking. The thalamus is an information hub, receiving and relaying information. It mainly relays information between the subcortical areas and the cortex and in particular relays the sensory information to the relevant association areas.

These have a complex anatomy, so for the sake of clarity I’ll rearrange them a bit to get a clearer image.

We start in the cortex. This is connected to the striatum via an excitatory (increasing activity) neurotransmitter called Glutamate (with some help from Aspartate). So signals from the cortex increase the activity of the striatum. The striatum then splits into two pathways via inhibitory projections (decreasing activity). There’s the direct pathway and the indirect pathway.

In the direct pathway, the increased activity in the striatum causes an inhibition of the Substantia Nigra Pars Reticularis (SNr) and Globus Pallidus Interna (GPi). Normally these two inhibit the Thalamus, but because they are themselves inhibited (by the striatum), the thalamus is released (disinhibited). So an increase in the Striatum results in an increase in the Thalamus via disinhibition. The thalamus is then free to send its signals back to the cortex, which sends the signal to the brainstem, and eventually to the muscles.

Why this way and not just two excitatory connections? That’s because of white noise. The brain can be pretty noisy, and for two excitatory signals to rise above that noise they need to be a lot higher. Two inhibitory connections don’t have this problem: it’s easier to take off the brake than to step on the gas.

We once again start in the striatum with higher activity, but this time we follow a different path. In the Indirect Pathway, the striatum inhibits the Globus Pallidus Externa (GPe). The GPe constantly inhibits the Sub Thalamic Nucleus (STN), this inhibition is released when the GPe itself gets inhibited, so here too we have a disinhibition. The STN is then free to send excitatory signals to the SNr-GPi combination. This time these two have their activity increased, so their inhibition of the thalamus remains. Instead of releasing the gas, the indirect pathway slams even harder on the brake.
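The disinhibition bookkeeping in the two pathways can be sketched with a toy sign model: label each projection +1 (excitatory) or -1 (inhibitory) and multiply the signs along the chain to get the net effect on the thalamus. This is purely an accounting sketch of the circuit described above, not a physiological model.

```python
# Toy sign model: each projection is +1 (excitatory, glutamate) or
# -1 (inhibitory, GABA). The net effect of a pathway on its target is
# the product of the signs along the chain.

def net_effect(chain):
    """Multiply connection signs along a pathway."""
    sign = 1
    for s in chain:
        sign *= s
    return sign

# Direct pathway:
#   cortex -(+)-> striatum -(-)-> SNr/GPi -(-)-> thalamus
direct = [+1, -1, -1]

# Indirect pathway:
#   cortex -(+)-> striatum -(-)-> GPe -(-)-> STN -(+)-> SNr/GPi -(-)-> thalamus
indirect = [+1, -1, -1, +1, -1]

print("direct pathway on thalamus:  ", net_effect(direct))    # +1: disinhibition, more movement
print("indirect pathway on thalamus:", net_effect(indirect))  # -1: inhibition, less movement
```

Two inhibitory links in a row multiply out to a net positive effect, which is exactly the "taking off the brake" described above; the indirect pathway's extra excitatory link flips the result back to net inhibition.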

Modulation Of The Pathways

These two pathways seem at odds; with both of these you are pretty much stuck, right? Yes, yes you are. Luckily we have another component, one that modulates the two. The Substantia Nigra pars compacta (SNc) sends dopamine to the striatum. Dopamine can attach to two receptors there: D1 and D2 receptors. D1 receptors stimulate the GABAergic neurons of the direct pathway, tipping the scales towards it. So more dopamine stimulating D1 receptors means more movement. The GABAergic neurons that control the indirect pathway respond to Acetylcholine and Glutamate instead. The D2 receptors decrease the activity of the GABAergic neurons of the indirect pathway, dampening its effect and preventing full inhibition of movement.

So movement is controlled through dopamine, maintaining a sensitive balance between excitation and inhibition: not too much and not too little. Messing this up is bad news; we see this in Parkinson’s and Huntington’s disease.

In Parkinson’s Disease there is not enough dopamine due to damage in the Substantia Nigra. This means the direct pathway cannot initiate movement and the indirect pathway is out of control and inhibiting movement all over the place.

In Huntington’s Disease there is damage in the striatum shifting activity towards the direct pathway and preventing the indirect pathway from functioning. This results in the opposite of Parkinson’s, the inability to prevent unintentional movements.

More Loops, More Problems

Now things get even more complicated, since the system above can be used in different ways using slightly different areas. There’s a Motor Loop for motor control (obviously), an Oculomotor Loop for eye movement, a Prefrontal Loop for planning/working memory/attention, and a Limbic Loop for emotional behavior/motivation. Different books use different names, and some group the motor and oculomotor loops together; this is just how I was taught. These loops can function simultaneously (parallel to each other).

Instead of going into detail about the specific differences and similarities of the functional loops, an example might be better. Let’s say you want to touch a glass globe (to see if it’s nice and smooth):

  • The limbic loop plays its part in the decision to move due to activation caused by your desire to see if it’s actually smooth glass (motivation).
  • The prefrontal loop forms a movement plan: the how, where, and when of your reach and perhaps grab.
  • The oculomotor and motor loops play their part in the execution and programming of the behavior to reach the target: so the movement of the eyes, arms, and hands to get a hold of that glass globe.

From the Brain Down the Spine

Ok, we’re nearly there. The instructions have gone through all the areas and have reached the cortex once again. Here we have two pathways: The Lateral or Pyramidal Pathways for voluntary movement and the Ventromedial or Extrapyramidal Pathways for unconscious movements like posture.

Lateral Pathways/Pyramidal Tracts

The more important one is the Corticospinal Tract, which innervates the muscles of the body. Neurons on one side control the muscles on the other side. We start in the neocortex: about two-thirds of the fibers come from the motor cortex and one-third from the somatosensory cortex.

  • Axons move through the capsula interna and continue through the cerebral peduncle (a large fiber bundle in the midbrain).
  • Then they move through the pons and come together to form a tract at the base of the medulla. The tract forms the medullary pyramid, on account of its pyramid-like shape.
  • At the transition of the medulla to the spine, the majority crosses over to the other side, so the left side controls the right and vice versa.
  • In the lateral column of the spine we now have a nice corticospinal tract that goes all the way to the ventral horns. Here the axons connect to the motor neurons and interneurons that control the muscles.

The second one is the Corticobulbar Tract which controls the muscles of the head and neck. Neurons control muscles on both sides. We again start in the motor cortex.

  • Axons descend through the capsula interna and down into the midbrain to the peduncles.
  • The Corticobulbar Tract exits at different levels of the brainstem to connect to lower motor neurons of the cranial nerves.
  • The Corticobulbar Tract does not fully cross to the other side of the body; rather, it splits in two, innervating the muscles of the head on both sides.

Ventromedial Pathways/Extrapyramidal Tracts

There are four ventromedial tracts that originate in the brainstem and end at spinal interneurons connected to the muscles. The extrapyramidal system is concerned with the modulation and regulation of movement. The tracts below are all affected by various other structures like the Nigrostriatal Pathway, the Basal Ganglia, and the cerebellum. The cerebellum in particular is important for smoothing out fine movements (alcohol affects the cerebellum, hence the difficulty of touching your nose after a few drinks). The cerebellum doesn’t initiate or inhibit movement; it’s more of a modulator, using sensory information to make slight adjustments to movements.

  • Rubrospinal Tract: still a bit of a mystery, but thought to be involved in fine motor control of hand movements. Red Nucleus → crosses to the other side of the midbrain → descends in the lateral tegmentum → through the lateral funiculus of the spinal cord (alongside the corticospinal tract).
  • Vestibulospinal Tract: Vestibular nuclei (input from the balance organs) → remains ipsilateral (does not cross) → down through the spinal cord to the lumbar region.
  • Tectospinal Tract: Superior Colliculus (receives input from the optic nerves) → crosses sides and enters the spinal cord → terminates at the cervical levels.
  • Pontine Reticulospinal Tract: caudal and oral pontine reticular nuclei (in the pons) → laminae VII & VIII of the spinal cord.
  • Medullary Reticulospinal Tract: medullary reticular formation (the gigantocellular nucleus in the medulla) → laminae VII & IX of the spinal cord.
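The routes above condense naturally into a small lookup table. The entries below are taken from the routes just listed; the labels use the conventional tract names (vestibulospinal, tectospinal, reticulospinal), which are the standard names for these origins. Where the text does not say whether a tract crosses, the field is left as `None` rather than guessed.

```python
# The brainstem routes above, collected into one lookup table.
VENTROMEDIAL_TRACTS = {
    "vestibulospinal": {
        "origin": "vestibular nuclei",
        "decussates": False,  # remains ipsilateral
        "terminates": "lumbar spinal cord",
    },
    "tectospinal": {
        "origin": "superior colliculus",
        "decussates": True,   # crosses sides
        "terminates": "cervical spinal cord",
    },
    "pontine reticulospinal": {
        "origin": "caudal and oral pontine reticular nuclei",
        "decussates": None,   # crossing not given in the text
        "terminates": "laminae VII & VIII",
    },
    "medullary reticulospinal": {
        "origin": "gigantocellular nucleus (medulla)",
        "decussates": None,   # crossing not given in the text
        "terminates": "laminae VII & IX",
    },
}

# Which of these tracts are explicitly stated to cross to the other side?
crossed = [name for name, t in VENTROMEDIAL_TRACTS.items() if t["decussates"]]
print(crossed)  # ['tectospinal']
```

A table like this makes the side-of-body question (which pathways control the same side versus the opposite side) easy to answer at a glance.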

From Neuron to Muscle

Regardless of the pathway taken, we now have a signal that has travelled from the brain through the spine and some nerves. This signal still needs to activate a muscle. Muscles are controlled through a two-neuron chain composed of an upper and a lower motor neuron. The tracts above are formed by the upper motor neurons, the neurons that send the signal from the brain.

Upper motor neurons then connect to lower motor neurons, which in turn connect to the muscle.

Your muscles are basically fibers within fibers within fibers. When we get to the smallest level we have Sarcomeres, which are sections divided by Z-lines. Between the Z-lines we have two filaments, actin and myosin. Actin is a long thin filament attached to the Z-line; Myosin is a thick filament attached at the middle, called the M-line. What is going to happen is that the myosin is going to pull on the actin, drawing the Z-lines in towards the M-line. If many of these small fibers do this at the same time, the larger structures will follow, causing the entire muscle to contract. This is called the Sliding Filament Model of contraction.
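The geometry of the sliding filament model can be sketched in a few lines. The lengths below are arbitrary illustrative units, not real filament dimensions; the point is only that the filaments themselves never shorten, yet the sarcomere does, because the actin slides over the myosin.

```python
# Minimal geometric sketch of the sliding filament model.
# Lengths are arbitrary units chosen for illustration only.

ACTIN_LEN = 1.0    # each actin filament, from its Z-line to its free end
MYOSIN_LEN = 1.6   # the myosin filament, centered on the M-line

def sarcomere_length(overlap):
    """Distance between the two Z-lines for a given actin-myosin overlap.

    overlap: how far each actin filament has slid in over the myosin end.
    """
    # Two actin filaments plus the myosin, minus the overlap on each side.
    return 2 * ACTIN_LEN + MYOSIN_LEN - 2 * overlap

# As the overlap grows, the sarcomere shortens: the contraction.
for overlap in (0.0, 0.2, 0.4):
    print(f"overlap {overlap:.1f} -> sarcomere length {sarcomere_length(overlap):.1f}")
```

Running this shows the Z-line distance shrinking step by step as the overlap increases, which is all "contraction" means at this scale.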

If we zoom in on a single actin and myosin pair: when your muscles are at rest, actin and myosin don’t touch, but they have a high affinity for each other (they really want to touch). They would touch if it weren’t for two proteins (tropomyosin and troponin) attached to the actin filament.

We zoom out a little bit now, as we still have a neuron waiting.

  • The lower motor neuron sends an action potential that releases Acetylcholine into the synapse, causing an influx of Sodium which alters the voltage and propagates the signal.
  • The action potential is now inside the muscle, no longer in the neuron. As the action potential makes its way along the muscle cells it hits the Sarcolemma.
  • The Sarcolemma has tubes going deep into the cell (T-Tubules). These tubes lead the action potential towards the Sarcomeres.
  • The Sarcoplasmic Reticulum, which encases the sarcomeres, constantly pumps calcium out of the cytoplasm into its interior (these pumps use ATP for energy). It is also lined with voltage-gated calcium channels, which are still closed.
  • When the T-Tubules deliver an action potential, the voltage-gated calcium channels open up, causing an influx of calcium into the cytoplasm.

The calcium now triggers the two proteins that surround the actin. Calcium binds to troponin, causing it to change shape (as proteins do when they bind). The troponin then pulls the tropomyosin aside, exposing the actin strands.

The myosin is now free to attach itself to the exposed actin sites. But it can’t just do this on its own, no, only myosin that took some ATP and broke it down into ADP and Phosphate is able. This “charged” myosin stretches into an extended position. Here it stays, holding onto ADP+Phosphate like a loaded gun.

  1. Now that the actin is exposed and the myosin is primed and ready, it releases its energy and shoots towards the actin. It changes shape again, pulling on the actin and sliding it inwards.
  2. With the bullet fired, all the energy it got from separating ATP into ADP and Phosphate has been used up, and it releases the split compounds back into the cell (the release occurs because myosin changed its shape and in this state no longer has a strong affinity for them). There they will be re-used and turned into ATP again by the mitochondria.
  3. In this state myosin does have a high affinity for ATP, leading to ATP binding to it again. This binding causes another shape change that releases myosin from the actin. This resets the myosin back to its primed and ready state. It can fire again and pull actin in a little bit more.

The myosin thus pulls on the actin, drawing the two Z-lines towards the middle, and the sarcomere contracts. In the meantime, the calcium pumps of the Sarcoplasmic Reticulum are busy pumping calcium out of the cytoplasm, so eventually calcium unbinds from troponin. This restores the protection and makes the actin inaccessible to myosin again. Now the fun is over: myosin can no longer attach to the actin, and the cycle starts anew when the next action potential hits.
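The cross-bridge cycle just described can be written as a tiny state machine. This is a bookkeeping sketch of the numbered steps above (primed, power stroke, spent, re-primed), not a kinetic model; the state names are my own labels.

```python
# The cross-bridge cycle as a small state machine. States and
# transitions follow the numbered steps in the text.

def step(state, calcium_bound, atp_available):
    """Advance one myosin head through one transition of the cycle."""
    if state == "primed" and calcium_bound:
        return "bound"   # actin exposed: the power stroke fires
    if state == "bound":
        return "spent"   # ADP + phosphate released after the stroke
    if state == "spent" and atp_available:
        return "primed"  # fresh ATP detaches and re-cocks the head
    return state         # otherwise nothing changes

# With calcium present and ATP available, the head just keeps cycling.
state = "primed"
history = [state]
for _ in range(6):
    state = step(state, calcium_bound=True, atp_available=True)
    history.append(state)
print(history)

# Without calcium, the primed head stays put: no contraction.
print(step("primed", calcium_bound=False, atp_available=True))  # 'primed'
```

The two failure modes fall straight out of the model: no calcium means the cycle never starts, and no ATP means a bound head cannot detach, which is why contraction needs a steady ATP supply.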

There you have it, the full pathway of movement from brain to muscle (in a very short and condensed version). A movement plan has been made; this can be for big movements like walking or fine movements like softly touching something. It goes through a lot of structures and some motor tracts, gets some assistance from the cerebellum and your senses, and then ends up in your torso, arm, hand, and fingers, where the muscles move to make it all happen.

Important and convenient sources are:

Mancall, E. L., & Brock, D. G. (2011). Gray’s Clinical Neuroanatomy: The Anatomic Basis for Clinical Neuroscience. Elsevier Health Sciences.

Middleton, F. A., & Strick, P. L. (2000). Basal ganglia and cerebellar loops: motor and cognitive circuits. Brain research reviews, 31(2), 236-250.

Lanciego, J. L., Luquin, N., & Obeso, J. A. (2012). Functional Neuroanatomy of the Basal Ganglia. Cold Spring Harbor Perspectives in Medicine, 2(12), a009621.

And my own summary of the courses concerning the brain and interaction with the environment. The summary is a mix of articles, books, lectures, talks, and group discussions. Sorry, no online source for that.


Current explanations from the field of neuroscience suggest that déjà vu occurs when the brain is slightly fatigued and working to 'fact check' a memory. We experience this as being odd because we become aware of the process.

Might we explore a different explanation for déjà vu if we were looking at it from the standpoint of time being non linear and perhaps opening up to the idea of a collective consciousness?


They say about 60% of people experience déjà vu during their life; right off the bat that struck me as lower than I expected, as I feel like almost everyone I know has had it at one time or another. Déjà vu (‘already seen’ in French) is the feeling that you are re-living something that has happened before. In the movie The Matrix, perhaps pop culture's best-known depiction of déjà vu, Neo sees a cat go by a doorway twice in a matter of seconds. Same cat, same moves, same everything.

In the film, this moment is presented as a ‘glitch in the matrix.’ In real life, however, déjà vu doesn’t often happen the way it does in The Matrix; instead it feels as though you can’t recall when the ‘other memory’ happened, only that what you are experiencing right now has already happened at some time.

Let’s dive into what some believe neuroscience is offering as an explanation.

What Happened:

According to experts like Dr Akira O’Connor, who is a senior psychology lecturer at the University of St Andrews, déjà vu is not only a feeling of familiarity, but also the metacognitive recognition that these feelings are misplaced. In simple terms:

“Déjà vu is basically a conflict between the sensation of familiarity and the awareness that the familiarity is incorrect. And it’s the awareness that you’re being tricked that makes déjà vu so unique compared to other memory events.”

Neuroscientists have determined that this memory illusion occurs when the frontal regions of the brain are attempting to correct an inaccurate memory.

“For the vast majority of people, experiencing déjà vu is probably a good thing. It’s a sign that the fact-checking brain regions are working well, preventing you from misremembering events. In a healthy person, such misremembering is going to happen every day. This is to be expected because your memory involves millions and billions of neurones. It’s very messy.”

While there isn’t a completely agreed-upon explanation for what happens in the brain when déjà vu occurs, most models suggest that it occurs when areas of the brain (such as the temporal lobe) feed the mind’s frontal regions signals that a past experience is repeating itself. The frontal, decision-making parts of the brain then check whether the memory is actually true or possible, perhaps asking something to the effect of “have I been here before?”

“If you have actually been in that place before, you may try harder to retrieve more memories. If not, a déjà vu realization can occur.”

It’s typically believed that we are more susceptible to déjà vu when the mind is a bit more fatigued and not as quick to discern the validity of our current moment.

Why It Matters:

What fascinated me about this in particular is two things. First, I’ve long felt it quite possible that memories may actually be non-local, i.e. that they exist outside the brain rather than in it, and that perhaps the brain tunes into those memories that are somewhere around us. Or maybe we could say that some memories exist in the brain, while others are part of some sort of collective field.

The second fascinating part for me is that I wonder if déjà vu has something to do with emerging science that tells us time is not linear. Perhaps when we take a classic scientific model that states all time is linear and all experience is linear, we limit our explanation of what déjà vu might be to something that fits that paradigm. What if the brain is tuning into something relating to quantum potentials that always exist, and that perhaps something different is happening with déjà vu? I’m not sure yet, however this is where déjà vu intrigues me the most.

Of course, exploring a question like this invites us to shift our worldview around the nature of reality, time, and experience. That might be uncomfortable for some, but it is something I feel post-material science is inviting us to do.

The Takeaway:

As with so much happening in our lives right now, we are culturally in a time when a long-avoided shift in our scientific paradigm has left many things in life without meaningful explanations. Is déjà vu one of those things that doesn’t have a good explanation in our current scientific paradigm? The jury might still be out on that, but for me the current explanation presented in this piece did not quite ‘do it,’ and my inquisitive mind and gut feeling push me to explore these questions through the emerging paradigm of non-material science.



Can We Copy the Brain?

The IEEE Spectrum this month has a story on synthetic brains. In this article I will review the story and comment on the status of the quest: replicating the human brain in synthetic systems. This article is about neuroscience, neuromorphic engineering, artificial neural networks, deep learning, and computing hardware both biological and synthetic, and how all of these come together in the grand human challenge of creating a synthetic brain at or above human level.

Why We Should Copy the Brain: we should do this because we want to create intelligent machines that can do our work for us. In order to do our work, machines will have to live in our environment, have senses similar to our own, and be able to accomplish the same kinds of tasks. It does not stop there: machines can do many tasks more and better than we can, just as we do better than other life forms. And we would like them to do things we cannot do, and do better the things we can do. It is called progress, and we need it to bypass biological evolution and speed it up. The article has a good summary of what this will be, and what machines will do for us. More comments below in section PS1. For jobs, see PS3.

In the Future, Machines Will Borrow Our Brain’s Best Tricks: the human brain is one of the most efficient computing machines we know of. In that sense, it is the best “brain” of the known universe (known to man). And what is a brain? It is a computer that allows us to live our life in our environment. What is life? Oh well, maybe for now let’s just say our life is devoted to procreating, ensuring the best for our offspring, promoting future generations and their success, and preserving the best environment for all this to happen (or are we now?).

And today we humans are trying to build artificial brains, inspired by our own. Slowly, in recent years (many more articles and reviews are online!), artificial neural networks and deep learning have eroded many gaps between computers and human abilities. It is only inevitable that they will become more and more like a person, because we are actually building them with that goal in mind! We want them to do things for us, such as driving our cars, providing customer service, being perfect digital assistants, reading our minds and predicting our needs. We also want them to pervade every instrument and sensor in the world, so they can better assist us in getting the right information at the right time, sometimes without us asking for it.

But building a synthetic brain does not mean we need to copy our own. In fact, that is not our goal at all; our goal is to make an even better one! Our brain is made of cells and biological tissue, while our synthetic brains are made of silicon and wires. The physics of these media is not the same, and thus we only take inspiration from the brain’s algorithms as we build better and larger synthetic hardware, one step at a time. We are already dreaming of neural networks that can create computing architectures by themselves. The same way that a neural network trains its weights, it could also be trained to eliminate the last human input in all this: learning to create the neural network model definition itself!

Even if we wanted to limit ourselves to creating a clone of our brain, it would still rapidly evolve beyond our capabilities, as one of the goals of building it is to continuously learn new knowledge and improve behavior. It is thus inevitable that we will end up with a “better” brain than ours, possibly so much better that we cannot even imagine it. Maybe like our brain compared to that of an insect — and more. There may be no limit to how intelligent and knowledgeable a creature can become.

The Brain as Computer: Bad at Math, Good at Everything Else: we humans have studied neural networks for a long time now. And we have been studying our brains for a long time too. But we still do not know how we can predict what is going to happen just by looking at a scene, something we do every moment of our life. We still do not know how we learn new concepts, how we add them to what we already know, how we use the past to predict the future, and how we recognize complex spatio-temporal data, such as recognizing actions in the real world. See this for a summary. We also do not know how to best interact with a real or simulated world, or how we learn to interact with the world.

We may not know all this. But we are making strides. We started by learning to recognize objects (and faces, and road scenes). We then learned to categorize and create sequences (including speech, text comprehension, language translation, image-to-text translation, image-to-image translation, and many more). We are still trying to learn how to learn without a lot of labeled data (unsupervised learning). And we started playing video games, first simple ones, then difficult ones, now very complex ones. It is only a matter of time before AI algorithms learn about the mechanics and physics of our world.

And we have gotten really good at it, better than humans in some of these tasks! And we are not planning to stop until we have robots that can do common tasks for us: cook, clean, wash dishes, fold laundry, talk to us (Alexa, Siri, Cortana, etc.), understand our feelings and emotions, and many more tasks commonly associated with human intellect and abilities. But how do we get there? We have been very good at having neural networks categorize things; now we need them to predict: to learn long sequences of events and categorize long sequences of events. And since there is an infinite number of possible events, we cannot train an AI with examples alone, because we do not have all the examples, so we need it to learn on its own. The best theories of how our brain learns to do this hold that it predicts the future constantly, so it knows to ignore all unimportant and previously-seen events while still noticing when some event is new. Unsupervised and self-supervised learning will be important components. More here.
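The idea of "ignore the expected, flag the new" can be sketched with a trivial count-based novelty detector. This is purely illustrative and much simpler than any real predictive model: it treats an event as surprising only the first time it is seen, whereas real systems predict from context.

```python
from collections import Counter

# Toy version of "predict constantly, ignore the expected, flag the new":
# a count-based detector treats an event as surprising until it has been
# seen at least once before.

class NoveltyDetector:
    def __init__(self):
        self.counts = Counter()

    def observe(self, event):
        """Return True if this event has never been seen before."""
        surprising = self.counts[event] == 0
        self.counts[event] += 1
        return surprising

detector = NoveltyDetector()
stream = ["door", "door", "cat", "door", "cat", "dog"]
flagged = [e for e in stream if detector.observe(e)]
print(flagged)  # only first occurrences: ['door', 'cat', 'dog']
```

Self-supervised learners pursue the same goal with far richer machinery: the prediction error itself becomes the training signal, so the model gets better at ignoring the familiar without anyone labeling the data.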

Note also that much of this deep learning progress did not come out of neuroscience or psychology, the same way that making good batteries did not come out of alchemy.

Computing hardware is also mentioned in the article, stating that conventional computers may not be as good as some neuromorphic computing approaches. We comment on this topic here. There will surely be more efficient hardware coming, hardware that may be able to run the latest and greatest deep learning algorithms, such as our work here. And it may have neuromorphic components, such as spiking networks and asynchronous communication of sparse data, but that day is not today. Today, neuromorphic hardware has yet to run anything similar to the great successes of deep learning algorithms, such as the Google/Baidu speech recognition on mobile phones, Google text translation in the wild on your phone, or the tagging of your images in the cloud. These are tangible results we have today, and they use deep learning, back-propagation on labeled datasets, and conventional digital hardware, soon to be specialized hardware.

What Intelligent Machines Need to Learn From the Neocortex: well, this is a duh moment. Jeff Hawkins wrote a very exciting book only a decade ago or so. It hugely inspired all my students, our e-Lab, and myself to work on synthetic brains and to take inspiration from what we know and can measure of the human brain.

But since then, artificial neural networks and deep learning have stolen his thunder. Of course we need to learn to rewire. Of course we are learning a sparse representation. All deep neural networks do this — see PS2. And of course we need to learn to act in an environment (embodiment); we already do this by learning to play video games and drive cars.

But of course it does not say how to tackle real-world tasks, because Numenta is still stuck in a peculiar business model in which it helps neither itself nor the community. It would be better to listen to the community, share its successes, and fund smart people and startups. Maybe Jeff thinks he alone can solve all this and is better than anyone else. We are all victims of this egocentric behavior…

I have to add that I agree with Jeff’s frustration at how categorization-centric deep learning algorithms fail to tackle more complex tasks. We have written about this here. But as you can read in this link, we are all working in this area, and there will very soon be much progress, as there has been in categorization tasks. Rest assured of that! Jeff says: “As I consider the future, I worry that we are not aiming high enough”. If Jeff and Numenta joined in, we would all be faster and better off, and could retarget our aims.

AI Designers Find Inspiration in Rat Brains: here we get to the culprit of all the problems in brain/cognition/intelligence: studying the brain. I spent more than 10 years trying to build better neuroscience instrumentation, with the goal of helping neuroscientists understand how humans perceive the visual world. See this, slides 16-on. This was at a time when people were still poking neurons with one or a few wires, and making only limited progress on the topics I was most interested in: understanding how neural networks are wired, encode information, and build higher-level representations of the real world. Why do I care about this? Because knowing how the brain does some of this would allow us to build a synthetic brain faster, as we would apply principles from biology first, rather than trying to figure things out by trial and error. And bear in mind that biology got there by trial and error anyway, over billions of years of evolution…

With time, I grew increasingly frustrated with the progress of brain studies and neuroscience, because:

  • we do not have the right instruments to study the brain at the scale of small or medium neural networks, and most scientific proposals in this area were small incremental improvements on the little we have, rather than systematically building a new set of neuroscience tools and instruments
  • many colleagues still believe that there is something special about spiking neural networks and real biological brains that somehow no model will be able to capture, or details of biological neurons that are magical and can never be reproduced in artificial ones
  • it is still impossible to study the representations of neurons in inner layers because of the lack of tools that can record from all possible inputs and outputs in a real neural network. Mapping connectivity will help, but not without knowing what each neuron is supposed to do, at least to some extent
  • to address this problem, we proposed complex opto-genetic recording instruments that could study the spatio-temporal evolution of brains in real-life conditions (behaving animals), but few researchers wanted to use them and get off the electrode-recording bandwagon

Working with artificial neural networks allows us to surpass many of these limitations, while keeping a variable degree of loyalty to biological principles. Artificial neural networks can be designed in computer simulation, can run really fast, and can be used today for practical tasks. This is basically what deep learning has been doing in the last 5–10 years. These systems are also fully observable: we know exactly how each neuron works, what response it gives, and what the inputs were, in all conditions. We also know in full detail how they were trained to perform in a specific manner.
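To make the "fully observable" point concrete, here is a minimal sketch (plain NumPy, with made-up random weights) of a tiny network in which every weight and every activation can be inspected directly, something no current brain-recording tool can do for a biological circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: every weight and every activation
# is directly inspectable, unlike a biological circuit probed
# with a handful of electrodes.
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 2))   # hidden -> output weights

x = np.array([1.0, 0.5, -0.2, 0.8])   # an example input
h = np.maximum(0.0, x @ W1)           # hidden activations (ReLU)
y = h @ W2                            # output activations

# Full observability: we can record every internal response
# for every input we care to present.
print("hidden:", h)
print("output:", y)
```

Nothing here is hidden state: the "recording" is just reading the arrays.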

But the question is: which biological principles are important to follow? While we have no answer to this question, we can definitely conclude that if an artificial neural network can solve a practical task, it will be important, regardless of whether or not it perfectly mimics a biological counterpart. Studying 1 mm³ of cortex and hoping that we can get an idea of how the brain works and learns is ill-founded. We may get much data and detail, but all of it can be discarded, since only the underlying working principle is of importance. For example: we do not need to know where every molecule or drop of water is in a stream; all we need to know is where it is going on average and what the average stream size is. And for testing these underlying models, we can use our ideas or simulations, or even better, design a system that can design a synthetic system for us. We do not need to reverse-engineer every aspect of a piece of tissue, as it has little relevance to its underlying algorithms and operating principles. In the same way, we did not need to know how our ears and vocal cords work in order to send voice all over the world with radios and cell phones, surpassing the capabilities of any biological entity on this planet. The same can be said for airplane wings.

The article says: “AI is not built on “neural networks” similar to those in the brain? They use an overly simplified neuron model. The differences are many, and they matter”. This claim has been uttered many times to suggest that biology has some special property that we do not know of, and that we cannot make any progress without knowing it. This is plain nonsense, as we have made plenty of progress without knowing all the details; in fact, this may prove the details are not important. There has not been a single piece of evidence showing that adding some “detail” to the simple artificial neuron of deep learning improves a system’s performance. There is no evidence because all neuromorphic systems operate on toy models and data, and are not able to scale to the successes of deep learning. So no comparison can be made to date. One day this may be the case, but I assure you your “detail” can simply be encoded as more neurons, more layers, or a simple rewiring.

Reading this article in Spectrum reminds me that the situation has not changed in the last 5–10 years: we still do not know how much of the brain works, and we still do not have the tools to investigate this. There is much information to back this claim, and there have been two large initiatives to study the brain, in both the USA and Europe, each with very limited success. I am not negative about this field; I am just stating my observations here. I hope some smart minds come along and invent new tools, as I have tried and may have failed, for now. But please let us stop poking the brain with a few wires and claiming that EEG will one day solve all our problems. I would be a very happy person if we could make some strides in neuroscience, but I think the current way of doing things (basic research, goals, and tools) has to be reset and redesigned.

And maybe that is why, in this IEEE Spectrum article, all the neuroscientists say it will take hundreds of years to reach human-level AI, while all the AI researchers say it will take 20–50 years. Because in neuroscience so far there has been little progress towards explaining neural networks, while in AI / deep learning, progress occurs on a daily basis. I call this the AI/neuroscience divide, and it will only grow.


Neurons & the Nervous System - Part 2

2 - Spinal nerves (31 pairs) & their branches

Divisions of the nervous system



    2 - Visceral - supplies & receives fibers to & from smooth muscle, cardiac muscle, and glands. The visceral motor fibers (those supplying smooth muscle, cardiac muscle, & glands) make up the Autonomic Nervous System. The ANS has two divisions:
    • Parasympathetic division - important for control of 'normal' body functions, e.g., normal operation of digestive system
    • Sympathetic division - also called the 'fight or flight' division, important in helping us cope with stress

    1 - Myelencephalon, which includes the medulla

    2 - Metencephalon, which includes the pons and cerebellum

    3 - Mesencephalon, which includes the midbrain (tectum and tegmentum)

    4 - Diencephalon, which includes the thalamus and hypothalamus

    5 - Telencephalon, which includes the cerebrum (cerebral cortex, basal ganglia, & medullary body)

    Human brain (coronal section). The divisions of the brain include the (1) cerebrum, (2) thalamus, (3) midbrain,
    (4) pons, and (5) medulla oblongata. (6) is the top of the spinal cord (Source: Wikipedia).

    Structures of the Brain:

      1 - continuous with spinal cord

    2 - contains ascending & descending tracts that communicate between the spinal cord & various parts of the brain

      • cardioinhibitory center, which regulates heart rate
      • respiratory center, which regulates the basic rhythm of breathing
      • vasomotor center, which regulates the diameter of blood vessels

      2 - Origin of four cranial nerves (V or trigeminal, VI or abducens, VII or facial, & VIII or vestibulocochlear)

      3 - contains pneumotaxic center (a respiratory center)

      The brain stem is the region between the diencephalon (thalamus and hypothalamus) and the spinal cord. It consists of three parts: midbrain, pons, and medulla oblongata. The midbrain is the most superior portion of the brain stem. The pons is the bulging middle portion of the brain stem. This region primarily consists of nerve fibers that form conduction tracts between the higher brain centers and spinal cord. The medulla oblongata, or simply medulla, extends inferiorly from the pons. It is continuous with the spinal cord at the foramen magnum. All the ascending (sensory) and descending (motor) nerve fibers connecting the brain and spinal cord pass through the medulla.

        1 - Corpora quadrigemina - visual reflexes & relay center for auditory information. Two pairs of rounded knobs on the upper surface of the midbrain mark the location of four nuclei, which are called collectively the "corpora quadrigemina." These masses contain the centers for certain visual reflexes, such as those responsible for moving the eyes to view something as the head is turned. They also contain the hearing reflex centers that operate when it is necessary to move the head so that sounds can be heard better.

      2 - Cerebral peduncles - ascending & descending fiber tracts

      3 - Origin of two cranial nerves (III or oculomotor & IV or trochlear)

      1 - posterior medullary velum, 2 - choroid plexus, 3 - cisterna cerebellomedullaris of subarachnoid cavity, 4 - central canal,
      5 - corpora quadrigemina, 6 - cerebral peduncle, 7 - anterior medullary velum, 8 - ependymal lining of ventricle, & 9 - cisterna pontis of subarachnoid cavity
      (Source: Wikipedia).

        1 - Control of Autonomic Nervous System

      2 - Reception of sensory impulses from viscera

      3 - Intermediary between nervous system & endocrine system

      4 - Control of body temperature

      7 - Part of limbic system (emotions such as rage and aggression)

      8 - Part of reticular formation

        1 - portions located in the spinal cord, medulla, pons, midbrain, & hypothalamus

      2 - needed for arousal from sleep & to maintain consciousness

        1 - largest portion of the human brain

      • Cortex:
        • outer 2 - 4 mm of the cerebrum
        • consists of gray matter (cell bodies & synapses; no myelin)
        • 'folded', with upfolded areas called gyri & depressions or grooves called sulci
        • consists of four primary lobes
        • functional areas include motor areas (initiate impulses that will cause contraction of skeletal muscles) (see A Map of the Motor Cortex), sensory areas (receive sensory impulses from throughout the body), and association areas (for analysis)

      'Forward' (a) and 'inverse' (b) model control systems for movement. According to 'instructions' from the premotor cortex (P), an area in the motor cortex (controller, or CT) sends impulses to the controlled object (CO, a body part). The visual cortex (VC) mediates feedback from the body part to the motor cortex. The dashed arrow indicates that the body part is copied as an 'internal model' in the cerebellum. In the forward-model control system, control of the body part (CO) by the motor cortex (CT) can be precisely performed by referring to the internal feedback. In the inverse-model control system, feedback control by the motor cortex (CT) is replaced by the inverse model itself (Ito 2008).

      The rate of change in cortical thickness in children and teens of varying intelligence. Positive values indicate increasing cortical thickness, negative values indicate cortical thinning. The point of intersection on the x axis (0) represents the age of maximum cortical thickness (5.6 yr for the average, 8.5 yr for the high, and 11.2 yr for the superior intelligence group).

Cortex matures faster in youth with superior IQs: children and teens with superior IQs are distinguished by how fast the thinking part of their brains thickens and thins as they grow up. Magnetic resonance imaging (MRI) scans showed that their brain's outer mantle, or cortex, thickens more rapidly during childhood, reaching its peak later than in their peers, perhaps reflecting a longer developmental window for high-level thinking circuitry. It also thins faster during the late teens, likely due to the withering of unused neural connections as the brain streamlines its operations. Although most previous MRI studies of brain development compared data from different children at different ages, Shaw et al. (2006) controlled for individual variation in brain structure by following the same 307 children and teens, ages 5-19, as they grew up. Most were scanned two or more times at two-year intervals. The resulting scans were divided into three equal groups and analyzed based on IQ test scores: superior (121-145), high (109-120), and average (83-108). The researchers found that the relationship between cortex thickness and IQ varied with age, particularly in the prefrontal cortex, seat of abstract reasoning, planning, and other "executive" functions. The smartest 7-year-olds tended to start out with a relatively thinner cortex that thickened rapidly, peaking by age 11 or 12 before thinning. In their peers with average IQ, an initially thicker cortex peaked by age 8, with gradual thinning thereafter. Those in the high range showed an intermediate trajectory. Although the cortex was thinning in all groups by the teen years, the superior group showed the highest rates of change. "Brainy children are not cleverer solely by virtue of having more or less gray matter at any one age," explained co-author J. Rapoport. "Rather, IQ is related to the dynamics of cortex maturation." The observed differences are consistent with findings from functional magnetic resonance imaging showing that levels of activation in prefrontal areas correlate with IQ, note the researchers. They suggest that the prolonged thickening of prefrontal cortex in children with superior IQs might reflect an "extended critical period for development of high-level cognitive circuits." Although it's not known for certain what underlies the thinning phase, evidence suggests it likely reflects "use-it-or-lose-it" pruning of brain cells, neurons, and their connections as the brain matures and becomes more efficient during the teen years. "People with very agile minds tend to have a very agile cortex," said co-author P. Shaw.
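As a toy illustration of what the figure reports (not the study's actual statistical model), each group's rate of cortical thickness change can be sketched as a line that crosses zero at the group's peak age. The peak ages are from the figure; the scale factor is an arbitrary illustrative value:

```python
# Peak ages (years) of maximum cortical thickness, from the figure;
# the 0.05 scale is a made-up illustrative value, not from the study.
peaks = {"average": 5.6, "high": 8.5, "superior": 11.2}

def thickness_change(age, peak, scale=0.05):
    """Illustrative rate of cortical thickness change (mm/yr):
    positive (thickening) before the group's peak age, zero at
    the peak, negative (thinning) after it."""
    return scale * (peak - age)

# At age 10, the 'average' group is already thinning while the
# 'superior' group is still thickening:
for group, peak in peaks.items():
    print(group, round(thickness_change(10.0, peak), 3))
```

The x-intercept of each line is exactly the "age of maximum cortical thickness" the caption describes.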

      • Medullary body:
        • the 'white matter' of the cerebrum; consists of myelinated axons
        • types of axons include:
          • commissural fibers - conduct impulses between cerebral hemispheres (and form the corpus callosum)
          • projection fibers - conduct impulses in & out of the cerebral hemispheres
          • association fibers - conduct impulses within hemispheres

      • Basal ganglia:
        • masses of gray matter in each cerebral hemisphere
        • important in control of voluntary muscle movements

      • Limbic system:
        1 - consists of a group of nuclei + fiber tracts

        2 - located in part in cerebral cortex, thalamus, & hypothalamus

        • aggression
        • fear
        • feeding
        • sex (regulation of sexual drive & sexual behavior)

The spinal cord extends from the skull (foramen magnum) to the first lumbar vertebra. Like the brain, the spinal cord consists of gray matter and white matter. The gray matter (cell bodies & synapses) of the cord is located centrally & is surrounded by white matter (myelinated axons). The white matter of the spinal cord consists of ascending and descending fiber tracts, with the ascending tracts transmitting sensory information (from receptors in the skin, skeletal muscles, tendons, joints, & various visceral receptors) and the descending tracts transmitting motor information (to skeletal muscles, smooth muscle, cardiac muscle, & glands). The spinal cord is also responsible for spinal reflexes.


Reflex - rapid (and unconscious) response to changes in the internal or external environment, needed to maintain homeostasis

  1 - receptor - responds to the stimulus
  2 - afferent pathway (sensory neuron) - transmits impulse into the spinal cord
  3 - Central Nervous System - the spinal cord processes information
  4 - efferent pathway (motor neuron) - transmits impulse out of spinal cord
  5 - effector - a muscle or gland that receives the impulse from the motor neuron & carries out the desired response

The four types of peripheral neurons:
  • Somatic afferent
  • Somatic efferent
  • Visceral afferent
  • Visceral efferent
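The five numbered steps of the reflex arc above can be strung together in a toy sketch. All function names and the threshold value here are illustrative, not from any real library:

```python
# A toy model of the five-step reflex arc:
# receptor -> afferent pathway -> CNS -> efferent pathway -> effector.

def receptor(stimulus: float) -> float:
    """Transduces a stimulus (e.g., heat) into a sensory signal."""
    return stimulus

def afferent(signal: float) -> float:
    """Sensory neuron carries the impulse into the spinal cord."""
    return signal

def spinal_cord(signal: float, threshold: float = 1.0) -> bool:
    """The CNS decides whether to fire the motor response."""
    return signal > threshold

def efferent(fire: bool) -> bool:
    """Motor neuron carries the command out of the spinal cord."""
    return fire

def effector(fire: bool) -> str:
    """Muscle contracts (withdraws the limb) if commanded."""
    return "withdraw" if fire else "rest"

def reflex_arc(stimulus: float) -> str:
    return effector(efferent(spinal_cord(afferent(receptor(stimulus)))))

print(reflex_arc(2.0))  # strong stimulus -> "withdraw"
print(reflex_arc(0.5))  # weak stimulus  -> "rest"
```

The point of the sketch is the fixed pipeline: the response is produced without any step corresponding to conscious processing in the brain.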

Somatic efferent neurons are motor neurons that conduct impulses from the spinal cord to skeletal muscles. These neurons are multipolar neurons, with cell bodies located in the gray matter of the spinal cord. Somatic efferent neurons leave the spinal cord through the ventral root of spinal nerves.

Visceral afferent neurons are sensory neurons that conduct impulses initiated in receptors in smooth muscle & cardiac muscle. These neurons are collectively referred to as enteroceptors or visceroceptors. Visceral afferent neurons are unipolar neurons that enter the spinal cord through the dorsal root, & their cell bodies are located in the dorsal root ganglia.

  • Visceral efferent 1 (also called the preganglionic neuron) is a multipolar neuron that begins in the gray matter of the spinal cord, which is where its cell body is located. This neuron leaves the cord through the ventral root of a spinal nerve, leaves the spinal nerve via a structure called the white ramus, then ends in an autonomic ganglion (either sympathetic or parasympathetic). In the ganglion, the visceral efferent 1 neuron synapses with a visceral efferent 2 neuron.
  • Visceral efferent 2 (also called the postganglionic neuron) is also a multipolar neuron; it begins in the sympathetic ganglion (which is where its cell body is located). Visceral efferent 2 neurons may exit the ganglion through the gray ramus, then proceed to some visceral structure (smooth muscle, cardiac muscle, or gland).

The 4 types of peripheral neurons: somatic afferent (top right), somatic efferent (bottom right), visceral afferent (top left), and visceral efferent (bottom left).

Autonomic Nervous System - control of involuntary muscle

  1 - entirely motor (consisting of the visceral efferent fibers)

  2 - two divisions:
    • sympathetic neurons leave the central nervous system through spinal nerves in the thoracic & lumbar regions of the spinal cord
    • parasympathetic neurons leave the central nervous system through cranial nerves plus spinal nerves in the sacral region of the spinal cord

  3 - impulses always travel along two neurons: preganglionic & postganglionic

  4 - Chemical transmitters - all autonomic neurons are either cholinergic or adrenergic

How Is The Structure Of The Brain Different?

“The neuroanatomy of autism is difficult to describe,” Dr. Culotta says. So it might be easier to talk about the architecture of the brain and how the autistic brain may differ.

So what’s different in the structure of this three-pound organ? Let’s start with a quick anatomy refresher: first of all, the brain is split into two halves, or hemispheres. It is from these two hemispheres that we get the idea of a left brain and a right brain. In reality, our thinking and cognitive processes bounce back and forth between the two halves. “There’s a little bit of difficulty in autism communicating between the left and right hemispheres in the brain. There’s not as many strong connections between the two hemispheres,” Dr. Anderson says.

In recent years, science has found that the hemispheres of ASD brains have slightly more symmetry than those of a typical brain. This small difference in asymmetry isn’t enough to diagnose ASD, according to a report in Nature Communications. And exactly how the symmetry may play into autism’s traits is still being researched.

Here’s what researchers do know. Left-right asymmetry is an important aspect of brain organization. Some functions of the brain tend to be dominated, or to use the technical term, lateralized, by one side of the brain. One example is speech and understanding. For most people (95 percent of right-handers and about 70 percent of left-handers) it’s processed in the left cerebral hemisphere. People with ASD tend to have reduced leftward language lateralization, which could be why they also have a higher rate of being left-handed compared to the general population.

The differences in the brain don’t stop there. Another quick Biology 101 review: within each half, there are lobes: frontal, parietal, occipital, and temporal. Inside these lobes are structures that are in charge of everything from movement to thinking. On top of the lobes lies the cerebral cortex, aka grey matter. This is where information processing happens. The folds in the brain add to the surface area of the cerebral cortex. The more surface area or grey matter there is, the more information can be processed.

Now, we’re going to get a little technical. Grey matter ripples into peaks and troughs called gyri and sulci, respectively. According to researchers from San Diego State University, these deep folds and wrinkles may develop differently in ASD. Specifically, in autistic brains there is significantly more folding in the left parietal and temporal lobes, as well as in the right frontal and temporal regions.

“These alterations are often correlated with modifications in neuronal network connectivity,” Dr. Culotta says. “In fact, it has been proposed that strongly connected cortical regions are pulled together during development, with gyri forming in between. In the autistic brain, reduced connectivity, known as hypoconnectivity, allows weakly connected regions to drift apart, with sulci forming between them.” Research has shown that the deeper these sulcal pits are, the more language production is affected.


Despite all this information about how an autistic brain might be set up, its neurobiology is still a mystery. “One thing that has become a more recent observation is that it may not be just about the structure of the brain; in other words, it may not be so much about the hardware as the software,” Dr. Anderson says.

“It may be the timing of brain activity that’s abnormal, that the signals from one region of the brain to another get blurred in time,” Dr. Anderson says. “And the result of that is the brain is more stable in autism and it’s not able to move between different thoughts or activities as quickly or as efficiently as someone without autism.”

The Neurologist Who Hacked His Brain—And Almost Lost His Mind


The brain surgery lasted 11 and a half hours, beginning on the afternoon of June 21, 2014, and stretching into the Caribbean predawn of the next day. In the afternoon, after the anesthesia had worn off, the neurosurgeon came in, removed his wire-frame glasses, and held them up for his bandaged patient to examine. “What are these called?” he asked.

Phil Kennedy stared at the glasses for a moment. Then his gaze drifted up to the ceiling and over to the television. “Uh … uh … ai … aiee,” he stammered after a while, “… aiee … aiee … aiee.”

“It’s OK, take your time,” said the surgeon, Joel Cervantes, doing his best to appear calm. Again Kennedy attempted to respond. It looked as if he was trying to force his brain to work, like someone with a sore throat who bears down to swallow.

Meanwhile, the surgeon’s mind kept circling back to the same uneasy thought: “I shouldn’t have done this.”


When Kennedy had arrived at the airport in Belize City a few days earlier, he had been lucid and precise, a 66-year-old with the stiff, authoritative good looks of a TV doctor. There had been nothing wrong with him, no medical need for Cervantes to open his skull. But Kennedy wanted brain surgery, and he was willing to pay $30,000 to have it done.

Kennedy was himself once a famous neurologist. In the late 1990s he made global headlines for implanting several wire electrodes in the brain of a paralyzed man and then teaching the locked-in patient to control a computer cursor with his mind. Kennedy called his patient the world’s “first cyborg,” and the press hailed his feat as the first time a person had ever communicated through a brain-computer interface. From then on, Kennedy dedicated his life to the dream of building more and better cyborgs and developing a way to fully digitize a person’s thoughts.

Now it was the summer of 2014, and Kennedy had decided that the only way to advance his project was to make it personal. For his next breakthrough, he would tap into a healthy human brain. His own.

Hence Kennedy’s trip to Belize for surgery. A local orange farmer and former nightclub owner, Paul Powton, had managed the logistics of Kennedy’s operation, and Cervantes—Belize’s first native-born neurosurgeon—wielded the scalpel. Powton and Cervantes were the founders of Quality of Life Surgery, a medical tourism clinic that treats chronic pain and spinal disorders and also specializes these days in tummy tucks, nose jobs, manboob reductions, and other medical enhancements.

At first the procedure that Kennedy hired Cervantes to perform—the implantation of a set of glass-and-gold-wire electrodes beneath the surface of his own brain—seemed to go quite well. There wasn’t much bleeding during the surgery. But his recovery was fraught with problems. Two days in, Kennedy was sitting on his bed when, all of a sudden, his jaw began to grind and chatter, and one of his hands began to shake. Powton worried that the seizure would break Kennedy’s teeth.

His language problems persisted as well. “He wasn’t making sense anymore,” Powton says. “He kept apologizing, ‘Sorry, sorry,’ because he couldn’t say anything else.” Kennedy could still utter syllables and a few scattered words, but he seemed to have lost the glue that bound them into phrases and sentences. When Kennedy grabbed a pen and tried to write a message, it came out as random letters scrawled on a page.

At first Powton had been impressed by what he called Kennedy’s Indiana Jones approach to science: tromping off to Belize, breaking the standard rules of research, gambling with his own mind. Yet now here he was, apparently locked in. “I thought we had damaged him for life,” Powton says. “I was like, what have we done?”

Of course, the Irish-born American doctor knew the risks far better than Powton and Cervantes did. After all, Kennedy had invented those glass-and-gold electrodes and overseen their implantation in almost a half dozen other people. So the question wasn’t what Powton and Cervantes had done to Kennedy—but what Phil Kennedy had done to himself.

For about as long as there have been computers, there have been people trying to figure out a way to control them with our minds. In 1963 a scientist at Oxford University reported that he had figured out how to use human brain waves to control a simple slide projector. Around the same time, a Spanish neuroscientist at Yale University, José Delgado, grabbed headlines with a grand demonstration at a bullring in Córdoba, Spain. Delgado had invented a device he called a stimoceiver—a radio-controlled brain implant that could pick up neural signals and deliver tiny shocks to the cortex. When Delgado stepped into the ring, he flashed a red cape to incite the bull to charge. As the animal drew close, Delgado pressed two buttons on his radio transmitter: the first triggered the bull’s caudate nucleus and slowed the animal to a halt; the second made it turn and trot off toward a wall.

Delgado dreamed of using his electrodes to tap directly into human thoughts: to read them, edit them, improve them. “The human race is at an evolutionary turning point. We’re very close to having the power to construct our own mental functions,” he told The New York Times in 1970, after trying out his implants on mentally ill human subjects. “The question is, what sort of humans would we like, ideally, to construct?”


                            Not surprisingly, Delgado’s work made a lot of people nervous. And in the years that followed, his program faded, beset by controversy, starved of research funding, and stymied by the complexities of the brain, which was not as susceptible to simple hot-wiring as Delgado had imagined.

                            In the meantime, scientists with more modest agendas—who wanted simply to decipher the brain’s signals, rather than to grab civilization by the neurons—continued putting wires in the heads of laboratory animals. By the 1980s neuroscientists had figured out that if you use an implant to record signals from groups of cells in, say, the motor cortex of a monkey, and then you average all their firings together, you can figure out where the monkey means to move its limb—a finding many regarded as the first major step toward developing brain-controlled prostheses for human patients.
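The averaging technique described above is essentially the classic "population vector" decoder: each motor-cortex neuron fires most strongly for one preferred movement direction, and summing those preferred directions, weighted by firing rate, recovers the intended movement. A minimal sketch (the tuning values here are invented for illustration, not taken from any real recording):

```python
import math

# Toy population-vector decoder: each neuron is a (preferred direction
# in degrees, firing rate in Hz) pair. Summing the preferred-direction
# unit vectors weighted by firing rate yields the decoded movement.
neurons = [
    (0, 5.0), (90, 20.0), (180, 5.0), (270, 2.0),
]

x = sum(rate * math.cos(math.radians(d)) for d, rate in neurons)
y = sum(rate * math.sin(math.radians(d)) for d, rate in neurons)
decoded = math.degrees(math.atan2(y, x)) % 360

print(round(decoded))  # 90: the strongly firing 90-degree neuron dominates
```

Real decoders average over many more cells and account for baseline firing, but the core idea — noisy individual neurons, reliable population average — is the same one that made brain-controlled prostheses seem feasible.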

                            But the traditional brain electrode implants used in much of this research had a major drawback: The signals they picked up were notoriously unstable. Because the brain is a jellylike medium, cells sometimes drift out of range while they’re being recorded or end up dying from the trauma of colliding with a pointy piece of metal. Eventually electrodes can get so caked with scar tissue that their signals fade completely.

                            Phil Kennedy’s breakthrough—the one that would define his career in neuroscience and ultimately set him on a path to an operating table in Belize—started out as a way to solve this basic bioengineering problem. His idea was to pull the brain inside the electrode so the electrode would stay safely anchored inside the brain. To do this, he affixed the tips of some Teflon-coated gold wires inside a hollow glass cone. In the same tiny space, he inserted another crucial component: a thin slice of sciatic nerve. This crumb of biomaterial would serve to fertilize the nearby neural tissue, enticing microscopic arms from local cells to unfurl into the cone. Instead of plunging a naked wire into the cortex, Kennedy would coax nerve cells to weave their tendriled growths around the implant, locking it in place like a trellis ensnarled in ivy. (For human subjects he would replace the sciatic nerve with a chemical cocktail known to stimulate neural growth.)

                            The glass cone design seemed to offer an incredible benefit. Now researchers could leave their wires in situ for long stretches of time. Instead of catching snippets of the brain’s activity during single sessions in the lab, they could tune in to lifelong soundtracks of the brain’s electrical chatter.

                            Kennedy called his invention the neurotrophic electrode. Soon after he came up with it, he quit his academic post at Georgia Tech and started up a biotech company called Neural Signals. In 1996, after years of animal testing, Neural Signals received approval from the FDA to implant Kennedy’s cone electrodes in human patients, as a possible lifeline for people who had no other way to move or speak. And in 1998, Kennedy and his medical collaborator, Emory University neurosurgeon Roy Bakay, took on the patient who would make them scientific celebrities.

                            Johnny Ray was a 52-year-old drywall contractor and Vietnam veteran who had suffered a stroke at the base of his brain. The injury had left him on a ventilator, stuck in bed, and paralyzed except for slight twitchings of his face and shoulder. He could answer simple questions by blinking twice for “yes” and once for “no.”

                            Since Ray’s brain had no way to pass its signals down into his muscles, Kennedy tried to wiretap Ray’s head to help him communicate. Kennedy and Bakay placed electrodes in Ray’s primary motor cortex, the patch of tissue that controls basic voluntary movements. (They found the perfect spot by first putting Ray into an MRI machine and asking him to imagine moving his hand. Then they put the implant on the spot that lit up most brightly in his fMRI scans.) Once the cones were in place, Kennedy hooked them up to a radio transmitter implanted on top of Ray’s skull, just beneath the scalp.

                            Three times a week, Kennedy worked with Ray, trying to decode the waves from his motor cortex and then turn them into actions. As time went by, Ray learned to modulate the signals from his implant just by thinking. When Kennedy hooked him up to a computer, he was able to use those modulations to control a cursor on the screen (albeit only along a line from left to right). Then he’d twitch his shoulder to trigger a mouseclick. With this setup, Ray could pick out letters from an onscreen keyboard and very slowly spell out words.

                            “This is right on the cutting edge, it’s Star Wars stuff,” Bakay told an audience of fellow neurosurgeons in October 1998. A few weeks later, Kennedy presented their results at the annual conference of the Society for Neuroscience. That was enough to send the Amazing Story of Johnny Ray—once locked in, now typing with his mind—into newspapers all around the country and the world. That December both Bakay and Kennedy were guests on Good Morning America. In January 1999, news of their experiment appeared in The Washington Post. “As Philip R. Kennedy, physician and inventor, prepares a paralyzed man to operate a computer with his thoughts,” the article began, “it briefly seems possible a historic scene is unfolding in this hospital room and that Kennedy might be a new Alexander Graham Bell.”

In the aftermath of his success with Johnny Ray, Kennedy seemed to be on the verge of something big. But when he and Bakay put brain implants in two more locked-in patients in 1999 and 2002, their cases didn't push the project forward. (One patient's incision didn't close and the implant had to be removed; the other patient's disease progressed so rapidly as to make Kennedy's neural recordings useless.) Ray himself died from a brain aneurysm in the fall of 2002.

                            Meanwhile, other labs were making progress with brain-controlled prostheses, but they were using different equipment—usually small tabs, measuring a couple of millimeters square, with dozens of naked wire leads protruding down into the brain. In the format wars of the tiny neural-implants field, Kennedy’s glass-and-cone electrodes were looking more and more like Betamax: a viable, promising technology that ultimately didn’t take hold.

                            It wasn’t just hardware that set Kennedy apart from the other scientists working on brain-computer interfaces. Most of his colleagues were focused on a single type of neurally controlled prosthesis, the kind the Pentagon liked to fund through Darpa: an implant that would help a patient (or a wounded veteran) use prosthetic limbs. By 2003 a lab at Arizona State University had put a set of implants inside a monkey that allowed the animal to bring a piece of orange to its mouth with a mind-controlled robotic arm. Some years later researchers at Brown University reported that two paralyzed patients had learned to use implants to control robot arms with such precision that one could take a swig of coffee from a bottle.

                            But Kennedy was less interested in robot arms than in human voices. Ray’s mental cursor showed that locked-in patients could share their thoughts through a computer, even if those thoughts did dribble out like tar pitch at three characters per minute. What if Kennedy could build a brain-computer interface that flowed as smoothly as a healthy person’s speech?

                            Human speech is immensely more complicated than movement of a limb—it requires the coordination of more than 100 different muscles.

                            In many ways, Kennedy had taken on the far greater challenge. Human speech is immensely more complicated than any movement of a limb. What seems to us a basic action—formulating words—requires the coordinated contraction and release of more than 100 different muscles, ranging from the diaphragm to those of the tongue and lips. To build a working speech prosthesis of the kind Kennedy imagined, a scientist would have to figure out a way to read all the elaborate orchestration of vocal language from the output of a handful of electrodes.

                            So Kennedy tried something new in 2004, when he put his implants in the brain of one last locked-in patient, a young man named Erik Ramsey, who had been in a car accident and suffered a brain stem stroke like Johnny Ray’s. This time Kennedy and Bakay did not place the cone electrodes in the part of the motor cortex that controls the arms and hands. They pushed the wires farther down a strip of brain tissue that drapes along the sides of the cerebrum like a headband. At the bottom of this region lies a patch of neurons that sends signals to the muscles of the lips and jaw and tongue and larynx. That’s where Ramsey got his implant, 6 millimeters deep.

                            Using this device, Kennedy taught Ramsey to produce simple vowel sounds through a synthesizer. But Kennedy had no way of knowing how Ramsey really felt or what exactly was going on in his head. Ramsey could respond to yes-no questions by moving his eyes up or down, but this method faltered because Ramsey had eye problems. Nor was there any way for Kennedy to corroborate his language trials. He’d asked Ramsey to imagine words while he recorded signals from Ramsey’s brain—but of course Kennedy had no way of knowing whether Ramsey really “said” the words in silence.

Ramsey's health declined, as did the electronics for the implant in his head. As the years went by, Kennedy's research program suffered too: His grants were not renewed; he had to let his engineers and lab techs go; his partner, Bakay, died. Now Kennedy worked alone or with temporary hired help. (He still spent business hours treating patients at his neurology clinic.) He felt sure he would make another breakthrough if he could just find another patient—ideally someone who could speak out loud, at least at first. By testing his implant on, say, someone in the early stages of a neurodegenerative disease like ALS, he'd have the chance to record from neurons while the person talked. That way, he could figure out the correspondence between each specific sound and neural cue. He'd have the time to train his speech prosthesis—to refine its algorithm for decoding brain activity.

                            But before Kennedy could find his ALS patient, the FDA revoked its approval for his implants. Under new rules, unless Kennedy could demonstrate that they were safe and sterile—a requirement that would itself require funding that he didn’t have—he says he was banned from using his electrodes on any more human subjects.

But Kennedy's ambition didn't dim; if anything, it overflowed. In the fall of 2012, he self-published a science fiction novel called 2051, which told the story of Alpha, an Irish-born neural electrode pioneer like Kennedy who lived, at the age of 107, as the champion and exemplar of his own technology: a brain wired up inside a 2-foot-tall life-support robot. The novel provided a kind of outline for Kennedy's dreams: His electrodes wouldn't simply be a tool for helping locked-in patients to communicate but would also be the engine of an enhanced and cybernetic future in which people live as minds in metal shells.

                            By the time he published his novel, Kennedy knew what his next move would be. The man who had become famous for implanting the very first brain-machine communication interface inside a human patient would once again do something that had never been done before. He had no other choices left. “What the hell,” he thought. “I’ll just do it on myself.”

                            A few days after the operation in Belize, Powton paid one of his daily visits to the guesthouse where Kennedy was convalescing, a bright white villa a block away from the Caribbean. Kennedy’s recovery had continued to go poorly: The more effort he put into talking, the more he seemed to get locked up. And no one from the US, it became clear, was coming to take the doctor off Powton and Cervantes’ hands. When Powton called Kennedy’s fiancée and told her about the complications, she didn’t express much sympathy. “I tried stopping him, but he wouldn’t listen,” she said.

                            On this particular visit, though, things started to look up. It was a hot day, and Powton brought Kennedy a lime juice. When the two men went out into the garden, Kennedy tilted back his head and let out an easy and contented sigh. “It feels good,” he blurted after taking a sip.

                            In 2014, Phil Kennedy hired a neurosurgeon in Belize to implant several electrodes in his brain and then insert a set of electronic components beneath his scalp. Back at home, Kennedy used this system to record his own brain signals in a months-long battery of experiments. His goal: Crack the neural code of human speech.

After that, Kennedy still had trouble finding words for things—he might look at a pencil and call it a pen—but his fluency improved. Once Cervantes felt his client had gotten halfway back to normal, he cleared him to go home. His early fears of having damaged Kennedy for life turned out to be unfounded; the language loss that left his patient briefly locked in was just a symptom of postoperative brain swelling. With that under control, he would be fine.

                            By the time Kennedy was back at his office seeing patients just a few days later, the clearest remaining indications of his Central American adventure were some lingering pronunciation problems and the sight of his shaved and bandaged head, which he sometimes hid beneath a multicolored Belizean hat. For the next several months, Kennedy stayed on anti-seizure medications as he waited for his neurons to grow inside the three cone electrodes in his skull.

                            Then, in October that same year, Kennedy flew back to Belize for a second surgery, this time to have a power coil and radio transceiver connected to the wires protruding from his brain. That surgery went fine, though both Powton and Cervantes were nonplussed at the components that Kennedy wanted tucked under his scalp. “I was a little surprised they were so big,” Powton says. The electronics had a clunky, retro look to them. Powton, who tinkers with drones in his spare time, was mystified that anyone would sew such an old-fangled gizmo inside his head: “I was like, ‘Haven’t you heard of microelectronics, dude?’ ”

                            Kennedy began the data-gathering phase of his grand self-experiment as soon as he returned home from Belize for the second time. The week before Thanksgiving, he went into his lab and balanced a magnetic power coil and receiver on his head. Then he started to record his brain activity as he said different phrases out loud and to himself—things like “I think she finds the zoo fun” and “The joy of a job makes a boy say wow”—while tapping a button to help sync his words with his neural traces, much like the way a filmmaker’s clapper board syncs picture and sound.

                            Over the next seven weeks, he spent most days seeing patients from 8 am until 3:30 pm and then used the evenings after work to run through his self-administered battery of tests. In his laboratory notes he is listed as Subject PK, as if to anonymize himself. His notes show that he went into the lab on Thanksgiving and on Christmas Eve.

                            The experiment didn’t last as long as he would have liked. The incision in his scalp never fully closed over the bulky mound of his electronics. After having had the full implant in his head for a total of just 88 days, Kennedy went back under the knife. But this time he didn’t bother going to Belize: A surgery to safeguard his health needed no approval from the FDA and would be covered by his regular insurance.

                            On January 13, 2015, a local surgeon opened up Kennedy’s scalp, snipped the wires coming from his brain, and removed the power coil and transceiver. He didn’t try to dig around in Kennedy’s cortex for the tips of the three glass cone electrodes that were embedded there. It was safer to leave those where they lay, enmeshed in Kennedy’s brain tissue, for the rest of his life.

                            Yes, it’s possible to communicate directly via your brain waves. But it’s excruciatingly slow. Other substitutes for speech get the job done faster.

                            Kennedy’s lab sits in a leafy office park on the outskirts of Atlanta, in a yellow clapboard house. A shingle hanging out front identifies Suite B as the home of the Neural Signals Lab. When I meet Kennedy there one day in May 2015, he’s dressed in a tweed jacket and a blue-flecked tie, and his hair is neatly parted and brushed back from his forehead in a way that reveals a small depression in his left temple. “That’s when he was putting the electronics in,” Kennedy says with a slight Irish accent. “The retractor pulled on a branch of the nerve that went to my temporalis muscle. I can’t lift this eyebrow.” Indeed, I notice that the operation has left his handsome face with an asymmetric droop.

                            Kennedy agrees to show me the video of his first surgery in Belize, which has been saved to an old-fashioned CD-ROM. As I mentally prepare myself to see the exposed brain of the man standing next to me, Kennedy places the disc into the drive of a desktop computer running Windows 95. It responds with an awful grinding noise, like someone slowly sharpening a knife.

                            The disc takes a long time to load—so long that we have time to launch into a conversation about his highly unconventional research plan. “Scientists have to be individuals,” he says. “You can’t do science by committee.” As he goes on to talk about how the US too was built by individuals and not committees, the disc drive’s grunting takes on the timbre of a wagon rolling down a rocky trail: ga-chugga-chug, ga-chugga-chug. “Come on, machine!” he says, interrupting his train of thought as he clicks impatiently at some icons on the screen. “Oh for heaven’s sake, I just have inserted the disc!”

                            "We'll extract our brains and connect them to computers that will do everything for us," Kennedy says. "And the brains will live on."

                            “I think people overrate brain surgery as being so terribly dangerous,” he goes on. “Brain surgery is not that difficult.” Ga-chugga-chug, ga-chugga-chug, ga-chugga-chug. “If you’ve got something to do scientifically, you just have to go and do it and not listen to naysayers.”

                            At last a video player window opens on the PC, revealing an image of Kennedy’s skull, his scalp pulled away from it with clamps. The grunting of the disc drive is replaced by the eerie, squeaky sound of metal bit on bone. “Oh, so they’re still drilling my poor head,” he says as we watch his craniotomy begin to play out onscreen.

                            “Just helping ALS patients and locked-in patients is one thing, but that’s not where we stop,” Kennedy says, moving on to the big picture. “The first goal is to get the speech restored. The second goal is to restore movement, and a lot of people are working on that—that’ll happen, they just need better electrodes. And the third goal would then be to start enhancing normal humans.”

                            He clicks the video ahead, to another clip in which we see his brain exposed—a glistening patch of tissue with blood vessels crawling all along the top. Cervantes pokes an electrode down into Kennedy’s neural jelly and starts tugging at the wire. Every so often a blue-gloved hand pauses to dab the cortex with a Gelfoam to stanch a plume of blood.

                            “Your brain will be infinitely more powerful than the brains we have now,” Kennedy continues, as his brain pulsates onscreen. “We’re going to extract our brains and connect them to small computers that will do everything for us, and the brains will live on.”

                            “You’re excited for that to happen?” I ask.

                            “Pshaw, yeah, oh my God,” he says. “This is how we’re evolving.”

                            Sitting there in Kennedy’s office, staring at his old computer monitor, I’m not so sure I agree. It seems like technology always finds new and better ways to disappoint us, even as it grows more advanced every year. My smartphone can build words and sentences from my sloppy finger-swipes. But I still curse at its mistakes. (Damn you, autocorrect!) I know that, around the corner, technology far better than Kennedy’s juddering computer, his clunky electronics, and my Google Nexus 5 phone is on its way. But will people really want to entrust their brains to it?

                            On the screen, Cervantes jabs another wire through Kennedy’s cortex. “The surgeon is very good, actually, a very nice pair of hands,” Kennedy said when we first started watching the video. But now he deviates from our discussion about evolution to bark orders at the screen, like a sports fan in front of a TV. “No, don’t do that, don’t lift it up,” Kennedy says to the pair of hands operating on his brain. “It shouldn’t go in at that angle,” he explains to me before turning back to the computer. “Push it in more than that!” he says. “OK, that’s plenty, that’s plenty. Don’t push anymore!”

                            These days, invasive brain implants have been going out of style. The major funders of neural prosthesis research favor an approach that involves laying a flat grid of electrodes, 8 by 8 or 16 by 16 of them, across the naked surface of the brain. This method, called electrocorticography, or ECoG, provides a more blurred-out, impressionistic measure of activity than Kennedy’s: Instead of tuning to the voices of single neurons, it listens to a bigger chorus—or, I suppose, committee—of them, as many as hundreds of thousands of neurons at a time.

                            Proponents of ECoG argue that these choral traces can convey enough information for a computer to decode the brain’s intent—even what words or syllables a person means to say. Some smearing of the data might even be a boon: You don’t want to fixate on a single wonky violinist when it takes a symphony of neurons to move your vocal cords and lips and tongue. The ECoG grid can also safely stay in place under the skull for a long time, perhaps even longer than Kennedy’s cone electrodes. “We don’t really know what the limits are, but it’s definitely years or decades,” says Edward Chang, a surgeon and neurophysiologist at UC San Francisco, who has become one of the leading figures in the field and who is working on a speech prosthesis of his own.

                            Last summer, as Kennedy was gathering his data to present it at the 2015 meeting of the Society for Neuroscience, another lab published a new procedure for using computers and cranial implants to decode human speech. Called Brain-to-Text, it was developed at the Wadsworth Center in New York in collaboration with researchers in Germany and the Albany Medical Center, and it was tested on seven epileptic patients with implanted ECoG grids. Each subject was asked to read aloud—sections of the Gettysburg Address, the story of Humpty Dumpty, John F. Kennedy’s inaugural, and an anonymous piece of fan fiction related to the TV show Charmed—while their neural data was recorded. Then the researchers used the ECoG traces to train software for converting neural data into speech sounds and fed its output into a predictive language model—a piece of software that works a bit like the speech-to-text engine on your phone—that could guess which words were coming based on what had come before.
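The predictive language model at the end of that pipeline works on a simple principle: given the words so far, guess the most likely next word. A toy bigram model illustrates the idea (this is not the Brain-to-Text code, just a minimal sketch; the training corpus here is invented):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word):
    """Return the most frequent word seen after prev_word, or None."""
    following = counts.get(prev_word)
    return following.most_common(1)[0][0] if following else None

corpus = ("humpty dumpty sat on a wall "
          "humpty dumpty had a great fall")
model = train_bigram(corpus)
print(predict_next(model, "humpty"))  # dumpty
```

The real system uses far richer statistics, but the role is the same: when the neural decoder's output is noisy or ambiguous, the language model's prior over likely word sequences pulls the transcript back toward plausible text.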

                            Kennedy is tired of the Zeno's paradox of human progress. He has no patience for getting halfway to the future. That's why he adamantly pushes forward.

Incredibly, the system kind of worked. The computer spat out snippets of text that bore more than a passing resemblance to Humpty Dumpty, Charmed fan fiction, and the rest. "We got a relationship," says Gerwin Schalk, an ECoG expert and coauthor of the study. "We showed that it reconstructed spoken text much better than chance." Earlier speech prosthesis work had shown that individual vowel sounds and consonants could be decoded from the brain; now Schalk's group had shown that it's possible—though difficult and error-prone—to go from brain activity to fully spoken sentences.

                            But even Schalk admits that this was, at best, a proof of concept. It will be a long time before anyone starts sending fully formed thoughts to a computer, he says—and even longer before anyone finds it really useful. Think about speech-recognition software, which has been around for decades, Schalk says. “It was probably 80 percent accurate in 1980 or something, and 80 percent is a pretty remarkable achievement in terms of engineering. But it’s useless in the real world,” he says. “I still don’t use Siri, because it’s not good enough.”

                            In the meantime, there are far simpler and more functional ways to help people who have trouble speaking. If a patient can move a finger, he can type out messages in Morse code. If a patient can move her eyes, she can use eye-tracking software on a smartphone. “These devices are dirt cheap,” Schalk says. “Now you want to replace one of these with a $100,000 brain implant and get something that’s a little better than chance?”
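The Morse option really is that simple: a single switch distinguishing short from long presses is enough to spell any message. A minimal decoder sketch (letters only, for illustration):

```python
# International Morse code table, letters only.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(signal):
    """Decode space-separated Morse letters; ' / ' marks a word break."""
    return " ".join(
        "".join(MORSE.get(letter, "?") for letter in word.split())
        for word in signal.split(" / ")
    )

print(decode(".... . .-.. .-.. --- / .-- --- .-. .-.. -.."))  # HELLO WORLD
```

In practice the hard part is not the table but the input timing — distinguishing a patient's deliberate dots from dashes — which is exactly what makes even this "dirt cheap" option slow compared with ordinary speech.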

                            Brain remaps itself in child with double hand transplant

The first child to undergo a successful hand transplant is also the first child in whom scientists have detected massive changes in how sensations from the hands are represented in the brain. The brain reorganization is thought to have begun six years before the transplant, when the child had both hands amputated because of a severe infection during infancy. Notably, after he received transplanted hands, the patient's brain reverted toward a more typical pattern.

                            Each area of the body that receives nerve sensations sends signals to a corresponding site in the brain. The spatial pattern in which those signals activate the brain's neurons is called somatosensory representation -- particular parts of the brain reflect specific parts of the body.

                            "We know from research in nonhuman primates and from brain imaging studies in adult patients that, following amputation, the brain remaps itself when it no longer receives input from the hands," said first author William Gaetz, PhD, a radiology researcher in the Biomagnetic Imaging Laboratory at Children's Hospital of Philadelphia (CHOP). "The brain area representing sensations from the lips shifts as much as 2 centimeters to the area formerly representing the hands."

                            This brain remapping that occurs after upper limb amputation is called massive cortical reorganization (MCR). "We had hoped to see MCR in our patient, and indeed, we were the first to observe MCR in a child," said Gaetz. "We were even more excited to observe what happened next -- when the patient's new hands started to recover function. For our patient, we found that the process is reversible."

                            Researchers from Children's Hospital of Philadelphia and the Perelman School of Medicine at the University of Pennsylvania published their findings today in the Annals of Clinical and Translational Neurology. Their case report described Zion Harvey, now 10 years old, who received worldwide media coverage two years ago as the first child to undergo a successful hand transplant.

A 40-member team led by L. Scott Levin, MD, FACS, chairman of Orthopaedic Surgery and a professor of Plastic Surgery at Penn Medicine, and director of the Hand Transplantation Program at CHOP, performed that milestone surgery in July 2015 at CHOP. "Zion has been a child of many firsts here at Penn Medicine and Children's Hospital of Philadelphia, and across the world," said Levin, senior author of the paper. He added, "With the changes observed in his brain, which our collaborative team has been closely evaluating since his transplant two years ago, Zion is now the first child to exhibit brain mapping reorientation. This is a tremendous milestone not only for our team and our research, but for Zion himself. It is yet another marker of his amazing progress, and continued advancement with his new limbs."

                            The researchers used magnetoencephalography (MEG), which measures magnetic activity in the brain, to detect the location, signal strength and timing of the patient's responses to sensory stimuli applied lightly to his lips and fingers. They performed MEGs four times in the year following the bilateral hand transplant, performing similar tests on five healthy children who served as age-matched controls.

                            At the first two visits, the patient's finger tips did not respond to tactile stimulation -- being touched with a thin filament. When experimenters touched the patient's lips, the MEG signal registered in the hand area of the brain's cortex, but with a delay of 20 milliseconds compared to controls. At the two later visits, MEG signals from lip stimulation had returned to the lip region of the brain, with a normal response time -- an indication that brain remapping was reverting to a more normal pattern.

                            When experimenters touched the patient's fingertips in the two later visits, the MEG signals appeared in the hand region of the brain, with a shorter delay in response time from visit 3 to visit 4, but with higher-than-normal signal strength. "The sensory signals are arriving in the correct location in the brain, but may not yet be getting fully integrated into the somatosensory network," said Gaetz. "We expect that over time, these sensory responses will become more age-typical."

                            Gaetz added, "These results have raised many new questions and generated excitement about brain plasticity, particularly in children. Some of those new questions include, what is the best age to get a hand transplant? Does MCR always occur after amputation? How does brain mapping look in people born without hands? Would we see MCR reverse in an adult, as we did in this patient? We are planning new research to investigate some of these questions."

                            In the meantime, follow-up studies of this patient provide encouraging details on his functional abilities. "Our follow-up studies 18 months after this transplant showed that he is able to write, dress and feed himself more independently than before his operation -- important considerations in improving his quality of life," said Levin.

                            Neuroscience For Kids

Traci: I was wondering about the experiment with a baby and a ringing bell. When the baby got older and heard the bell he did the same thing as when he was younger. Do you know the name of the baby?

                            Answer: You may be thinking of Baby "Albert." Albert was an 11-month-old boy who was in an experiment conducted by American psychologist John B. Watson. Watson was doing an experiment on emotional response conditioning. At first, Albert liked playing with a white rat. Later when Albert saw a rat, the experimenters made a loud noise. This frightened Albert. After a few more times of pairing the rat with the loud noise, Albert became frightened of just seeing the rat.

                            Later on, Albert's fear generalized to anything furry, such as a rabbit, a fur coat and dogs. This "experiment" was done many years ago (Watson, J.B. and Rayner, R. Conditioned emotional reactions. Journal of Experimental Psychology, 3:1-14, 1920) and would be considered unethical these days.

                            H. and C. Approximately how many hairs are on one human head?

                            Answer: According to The Handy Science Answer Book (1994) compiled by the Science and Technology Department of the Carnegie Library of Pittsburgh:

                            An average person has about 100,000 hairs on their scalp.

                            Jerry H. What mammal has the largest brain?

                            Answer: The mammal with the largest brain is the sperm whale. The brain of the sperm whale can weigh as much as 20 pounds. Even though the blue whale has a larger body size, the blue whale brain is about 5 pounds lighter than that of the sperm whale.

                            Stephanie S. I heard something on the news a while ago. Here's the main plot: Patients are anesthetized and are brought into surgery. However during the surgery, the anesthesia wears off and the patients feel everything that's happening to them. BUT they can't say anything or move anything because they're paralyzed for some reason. I think the term for this is "surgical awareness." Have you heard of this?

                            Answer: (from Dr. Chris B., anesthesiologist and Neuroscientist Network member) Awareness during surgery and anesthesia does occur, but it is an extremely rare event. The anesthetic does not "wear off" as the story may have suggested. (As an aside, in my opinion, medical and science stories are reported poorly by the media and are frequently inaccurate because the media is selling sensation, not fact. Physicians and scientists are also culpable because they are willing to make their work sound more dramatic, important or sensational than it really is.)

                            Anesthetics are given continuously during the surgery, but very rarely the amount given may not be sufficient to produce complete unconsciousness. This occurs for two reasons: 1) By far the most common reason is that the patient does not tolerate the anesthetic (all anesthetics depress the cardiovascular system) and the anesthesiologist has to turn the anesthetic down to prevent it from depressing the blood pressure to dangerous levels. This occurs most commonly in patients who are victims of severe trauma and are rapidly losing large volumes of blood. 2) Awareness can also occur in patients with a history of alcoholism or sedative/hypnotic abuse (e.g., Valium, barbiturates, sleeping pills) because their brains are "resistant" to the sedative effects of the anesthetic.

                            It should be noted that awareness is distressing, but patients usually state that pain was not a problem for them, rather it was the distress of being aware but unable to move for some reason they do not understand. The reason they can't move is that they were given a "muscle relaxant" which temporarily paralyzes the muscles. This is done to make it easier for the surgeons to spread muscles (e.g., abdominal muscles) thus making it easier to expose the surgical site. At the end of surgery, before the patient is awakened, the muscle relaxant is reversed.

                            This question affords another interesting opportunity to see how the media and business are increasingly intertwined. The story you saw was most likely the result of an ADVERTISING campaign for a device made by ASPECT Medical (called the BIS) which they claim is capable of detecting awareness. To sell this product they need to create a market. Anesthesiologists have no interest in it because awareness is not a significant problem (i.e., it is extremely rare and thus there is not much reason to spend money using this device on millions of patients who are not at risk of awareness) and this device has NEVER BEEN PROVED TO DETECT AWARENESS. Thus, to make money the company is basically marketing it to patients by first telling them (through "stories" like the one you saw) that awareness is a terrible problem (and needlessly scaring them) but fortunately they have a solution. Consequently, patients come to their anesthesiologists requesting that they use this unproved, useless, expensive device during their anesthetic. Anesthesiologists are thus "compelled" to buy this instrument to make their patients content. Consequently, medical costs go up, but that is not a problem for ASPECT because they reap the financial rewards.

                            C., L. and E. Can you give us some interesting facts about the nervous system? We're doing a school project.

                            Answer: Here are several pages of interesting facts that should help:

                            Sascha What is the sickness that makes you age faster than normal?

                            Answer: The disorder you are probably thinking of is called Werner's Syndrome. For more about this disorder, please see:

                            M.L. I've been searching for a couple of weeks for a story I've heard about but can't find a source for. Some researcher thought that curare was an "anesthetic" so he tried it on himself. Although paralyzed, he could still feel everything. Do you know this anecdote?

                            Answer Perhaps this is the paper you were looking for:

                            Smith, S.M., Brown, H.O., Toman, J.E.P., and Goodman, L.S., The Lack of Cerebral Effects of d-Tubocurarine, Anesthesiology, Vol. 8, No. 1, (January, 1947) pp. 1-14.

                            M.J. Why do people taking MAO-inhibitors have to be on a tyramine restricted diet?

                            Answer: MAO (monoamine oxidase) is an enzyme that breaks down the class of neurotransmitters called the catecholamines. MAO inhibitors are drugs that block the action of MAO and raise the catecholamine content within neurons. These drugs are used to treat depression.

                            Tyramine is an amine (derived from the amino acid tyrosine) found in foods such as cheese, fish and alcoholic drinks. Tyramine activates the sympathetic nervous system. Normally, tyramine is broken down by MAO. Therefore, in the presence of MAO inhibitors, the action of tyramine is intensified and prolonged. This may result in dangerous hypertension and even cerebral hemorrhages.

                            Carolyn. How many muscles are in the human body?

                            Answer: There are over 600 skeletal muscles in the human body. Anatomists disagree on the exact number of muscles. Skeletal muscle makes up about 40% of total body weight.

                            K.H.: Do you have any information about sleep deprivation in teenagers and school start times?

                            Answer: Sleep deprivation IS a huge problem not only for students, but for adults as well. I will refer you to some sources of information concerning later school start times and some schools that have made the switch to later start times. Please see:

                            Phil W. What is "reuptake?"

                            Answer: Reuptake refers to the process by which a neurotransmitter is transported back into the synaptic terminal of the neuron that released it. After a neurotransmitter is released, it diffuses into the synaptic cleft. One mechanism that stops the action of a neurotransmitter is transport of the neurotransmitter out of the synaptic cleft and back into the terminal.

                            Robert A. Who discovered diazepam?

                            Answer: Dr. Leo H. Sternbach of Hoffmann-La Roche (or just Roche) discovered diazepam. From L.H. Sternbach's chapter titled "The Discovery of CNS Active 1,4-Benzodiazepines" in the book The Benzodiazepines: From Molecular Biology to Clinical Practice edited by E.Costa, Raven Press, New York, 1983:

                            Near the end of 1959, we found a product that was, in most of the tests, 3 to 10 times as potent as chlordiazepoxide. We hoped that this superior potency would be associated with other advantages in its clinical spectrum of activity and selected it for a thorough evaluation. The pharmacological and toxicological data looked very promising: the clinical results were equally encouraging and led ultimately near the end of 1963 to the introduction of diazepam, under the trade name Valium.

                            This information is confirmed on the Hoffmann-La Roche web page (scroll down a bit).

                            T. Why do you sometimes see light spots or "stars" when you stand up fast after sitting down for a long time?

                            Answer: (from Dr. Chris B., Neuroscientist Network member) The phenomenon results from hypoperfusion of the brain, particularly the occipital cortex. But I do not know why one sees "stars" as opposed to something else. Hypoperfusion results because standing up too rapidly can result in a decrease in venous return so the heart is not as full as necessary to maintain adequate cardiac output.

                            Answer: (from Dr. Ed F., Neuroscientist Network member) Orthostatic hypotension, decreased blood flow to the brain because of a gravitational pooling of the blood in the lower extremities. The baroreceptor reflex minimizes this as it senses the decreased blood pressure in the aortic arch and responds by sending a signal, through the brainstem, to increase sympathetic tone in the blood vessels of the lower legs. This results in a vasoconstriction and helps force more blood to the upper part of the body.

                            This reflex is reduced in the elderly and by many medications. That is why people, especially the elderly and those on heart or blood pressure medications, antidepressants or any CNS depressant, should get up slowly from a lying or sitting position.

                            Bonnie M. What is the effect of temperature on the shape of the action potential?

                            Answer: The effect of temperature is mainly on the ionic permeability of the neuronal membrane. Specifically, sodium channels open and close faster at higher temperatures. Reductions in temperature lengthen the action potential and slow conduction velocity. These are the classic experiments of Hodgkin and Katz (1949).
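                            To make this temperature dependence concrete, here is a toy calculation. It assumes a Q10 (the factor by which a rate changes per 10 degrees Celsius) of about 3 for channel gating; the answer above does not give a specific value, so this number is purely illustrative.

```python
def scale_rate(k1, t1, t2, q10=3.0):
    """Scale a gating rate k1 measured at temperature t1 (deg C) to t2 (deg C),
    assuming the rate changes q10-fold for every 10 degree change."""
    return k1 * q10 ** ((t2 - t1) / 10.0)

# Cooling from 20 to 10 deg C slows gating about threefold, which
# lengthens the action potential and slows conduction velocity.
print(scale_rate(1.0, 20.0, 10.0))  # ~0.33
```

Slower channel kinetics at low temperature mean the membrane takes longer to depolarize and repolarize, which is why cooling broadens the action potential.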

                            Z.Z. How many bones are in the human body?

                            Answer: Babies are born with about 300 to 350 bones. As people get older, some of these bones fuse together. Most adults have 206 bones:

                            22 bones in the skull (8 cranial, 14 facial)

                            6 bones in the ears (3 on each side)

                            1 hyoid bone (in the neck)

                            26 bones in the vertebral column (spinal column)

                            25 bones in the chest (1 sternum, 12 pairs of ribs)

                            64 bones in the upper limbs (shoulder, arm, wrist, hand and fingers)

                            62 bones in the lower limbs (hip, pelvis, leg, knee, ankle, foot, toes)
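                            As a quick arithmetic check, the standard adult per-region counts (the itemized regions above, together with the 22 skull bones and the single hyoid bone that complete the usual tally) sum to 206. A minimal sketch in Python:

```python
# Standard adult bone counts by region; the numbers sum to 206.
bone_counts = {
    "skull": 22,             # 8 cranial + 14 facial
    "ear ossicles": 6,       # 3 on each side
    "hyoid": 1,
    "vertebral column": 26,  # 24 vertebrae + sacrum + coccyx
    "chest": 25,             # 1 sternum + 12 pairs of ribs
    "upper limbs": 64,
    "lower limbs": 62,
}
print(sum(bone_counts.values()))  # 206
```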

                            Laura M. What is the difference between the cerebrum and the cerebral cortex?

                            Answer: The cerebrum refers to the entire cerebral hemispheres. The cerebral cortex is the outermost part of the cerebrum.

                            G.J. Do you think exercise is good for the brain?

                            Answer: Yes, physical exercise does appear to be good for the brain. There have been several studies showing that exercise is beneficial to the brain. In fact, a recent experiment in mice showed that running can increase the number of nerve cells in the brain. For a summary of this research, see:

                            Debbie G. I have always heard a full moon will affect behaviors but your page contradicts that myth. I have been teaching elementary school for 19 years and I was wondering if there have been any case studies on how a full moon affects children's behaviors. I don't keep track of when it's time for a full moon, but I can usually tell by the way my students act.

                            Answer: In my review of the literature on the effects of the full moon on behavior, I found that most studies show no relationship between the phase of the moon and abnormal behavior.

                            In all of the background research and literature searches I conducted for the Moonstruck article I did not find any papers that examined the correlation between the phase of the moon and children's behavior. As I discussed in the article, there are some problems in the design of these studies in that they only determine that a correlation does or does not exist between two variables (i.e., phase of the moon and a change in behavior). They do not prove that the phase of the moon CAUSES a particular behavior.

                            Mark W. What causes "ringing in the ears?"

                            Answer: Ringing in the ears is called "tinnitus." All of the causes are not well understood. Some forms of tinnitus are caused by problems in the inner ear, such as damaged hair cells. However, some forms may NOT have a peripheral origin. In other words, the "ringing" may be in the brain, NOT the ear. For more about tinnitus, please see:

                            J.M. What is the function of dopamine in the brain?

                            Answer: Dopamine is a neurotransmitter (a catecholamine type neurotransmitter). It is found in many places in the central nervous system and has several functions including:

                            1. Movement: dopamine is produced in the substantia nigra (a part of the basal ganglia). In Parkinson's disease, dopamine neurons in the substantia nigra die. This disorder is characterized by tremor, rigidity and slowness of movement. When the dopamine is restored in the brain by giving L-dopa, the movement problems in many cases are reduced.

                            2. Attention: there is some evidence that dopamine is altered in people with attention deficit disorders.

                            3. Emotional behavior: an overactive dopamine system may underlie schizophrenia. Dopamine-blocking drugs reduce the symptoms of schizophrenia.

                            C. Are there five basic tastes? I looked it up on Ask Jeeves and it only gave me 4 (sweet, sour, salty and bitter). What is the other one?

                            Answer: The 5th basic taste is called "umami." Umami is a taste that occurs when foods with glutamate (like MSG) are eaten. More information on umami:

                            Sarah N. Does an action potential go in only one direction?

                            Answer: In "normal" cases, yes, the action potential goes in only one direction: toward the axon terminal. However, an action potential will spread in BOTH directions IF it is started in the middle of an axon. This can be done by electrically stimulating the middle of an axon. This is not the normal way action potentials are triggered. Rather, an action potential usually starts at the axon hillock and sequentially depolarizes the neuronal membrane away from where it started. That is why, in normal situations, it travels in only one direction.

                            A.N. How did neurotransmitters get their specific names? Such as, why is dopamine called dopamine? serotonin? norepinephrine?

                            Answer (By Neuroscientist Network Member Dr. P.):

                            a) Background
                            Catecholamines: This name refers to all organic compounds that contain a catechol nucleus (a benzene ring with two adjacent hydroxyl residues), a side chain of two carbon atoms (the β-carbon is closest to the ring, the α-carbon is distal), and an amine (NH2) group bound to the α-carbon. The word "catechol" is derived from "catechin," a crystalline substance extracted from the spiny Asian tree "catechu" (Acacia catechu) which is used in the preparation of tannins and other brown dyes. Catechin and catechol are synonymous. In practice, the term catecholamine refers to dopamine (DA, dihydroxyphenylethylamine) and its metabolic products, norepinephrine (NE) and epinephrine (E).

                            b) Neurotransmitters
                            i. Dopamine (DA): The easiest explanation for dopamine's name is that it is a selective compression of its chemical name, dihydroxyphenylethylamine.

                            To better understand the nomenclature, we can start a couple of steps back in the pathway that leads to the formation of DA. The amino acid phenylalanine or tyrosine can be the starting compound for the synthetic pathway. If the starting compound is phenylalanine (a compound similar to the catechol structure except that there are no hydroxyl groups bound to the benzene ring and there is a carboxyl group (COOH) bound to the same carbon that carries the amine group), the enzyme phenylalanine hydroxylase adds a hydroxyl group to the benzene ring. The product is tyrosine, which can also be provided directly in the diet. The next step in the pathway involves the enzyme tyrosine hydroxylase, the rate-limiting step in the entire process, which adds a second hydroxyl group to the aromatic ring. The resultant compound is "DOPA" (dihydroxyphenylalanine), a compound with a catechol backbone as described above. The final step in the synthesis of dopamine involves the removal of the carboxyl group from the two-carbon side chain of DOPA by the enzyme DOPA decarboxylase. Thus dopamine is composed of the basic catechol backbone (dihydroxyphenylethylamine) with no substitutions on the two-carbon side chain.

                            ii. Norepinephrine (NE): Once we've learned how dopamine is formed, the related catechol compounds fall easily into place. Norepinephrine is formed from dopamine through the activity of the enzyme dopamine-β-hydroxylase. Norepinephrine is simply dopamine with a hydroxyl group added to the β-carbon of the two-carbon side chain. See Epinephrine for the word derivation.

                            iii. Epinephrine (E): The enzyme phenylethanolamine-N-methyltransferase adds a methyl group to the amine (NH2) group bound to the α-carbon of norepinephrine.
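                            The synthetic pathway described above can be summarized as an ordered list of (substrate, enzyme, product) steps. Here is a minimal sketch in Python; the compound and enzyme names are taken from the answer, while the tuple representation is simply one convenient way to encode the chain:

```python
# The catecholamine synthetic pathway, one (substrate, enzyme, product)
# tuple per enzymatic step, in order.
PATHWAY = [
    ("phenylalanine", "phenylalanine hydroxylase", "tyrosine"),
    ("tyrosine", "tyrosine hydroxylase", "DOPA"),  # rate-limiting step
    ("DOPA", "DOPA decarboxylase", "dopamine"),
    ("dopamine", "dopamine-beta-hydroxylase", "norepinephrine"),
    ("norepinephrine", "phenylethanolamine-N-methyltransferase", "epinephrine"),
]

# Print the chain; each step's product is the next step's substrate.
for substrate, enzyme, product in PATHWAY:
    print(f"{substrate} --[{enzyme}]--> {product}")
```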

                            The derivation of the names epinephrine and norepinephrine is most likely related to the site of the highest concentration of these substances: the adrenal glands. Because the adrenals sit atop the kidneys, the word epinephrine can be parsed logically: "epi-" means "upon or close to," "nephr" is a contraction of "nephro," a prefix designating the kidney, and "-ine" is a suffix given to many chemical compounds. The prefix "nor" designates an unaltered parent compound. This suggests that norepinephrine was isolated subsequent to epinephrine and, upon discovery of the relationship, was named accordingly. The parallel nomenclature of "adrenaline" and "noradrenaline" provides a more obvious derivation.

                            iv. Serotonin: Serotonin was named, shortly after its discovery, for its ability to cause powerful contractions of smooth muscle. It was considered a major component of the serum responsible for vasoconstriction and high blood pressure.

                            v. Acetylcholine: The name is straightforward: acetylcholine is formed as the product of the enzyme choline acetyltransferase, which transfers an acetyl group to choline.

                            K.B. How much sleep does a third grader need?

                            Answer: According to the Sleep Well web site (Stanford University Sleep Center):

                            Adolescents need 9 hours and 15 minutes of sleep. Children need 10 hours and adults need 8 1/4 hours.