Is there any evidence of “learning” on a single neuron, or a network?

In the simplest neural networks (the only kind I have experience with), the learning process is the update of a function's parameters so that the overall error between the predictions and the labels is minimized.

Once the function is trained, we feed it data and it tells us the label. The idea is pretty simple.
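To make that notion of learning concrete, here is a minimal sketch (with illustrative synthetic data, not tied to any real system): logistic regression trained by gradient descent, where "learning" is nothing but repeated parameter updates that shrink the error on labelled inputs.

```python
# Minimal sketch of "learning as parameter update": logistic regression
# trained by gradient descent on labelled data. All numbers are
# illustrative, not from any experiment.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # 200 inputs, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # "labels"

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted probability
    grad_w = X.T @ (p - y) / len(y)            # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                           # the "learning" step
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The question in the post is whether anything structurally analogous to the repeated updates of `w` and `b` has been observed in biological neurons.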

If we want to push this analogy forward, there should be evidence of how a set of exposures to "labelled" inputs changes either the neurons or the connections between them.

I'm not referring so much to models, but to structural changes observed when a brain learns to put a label on a perception.

I've followed a few theories from computer scientists like Marvin Minsky (The Society of Mind), but not much of the actual evidence.

Is there such evidence? Is there any informal, ground-level article on this you could point me to?


Neuroscientists decipher brain's noisy code

By analyzing the signals of individual neurons in animals undergoing behavioral tests, neuroscientists at Rice University, Baylor College of Medicine, the University of Geneva and the University of Rochester have deciphered the code the brain uses to make the most of its inherently "noisy" neuronal circuits.

The human brain contains about 100 billion neurons, and each of these sends signals to thousands of other neurons each second. Understanding how neurons work, both individually and collectively, is important to better understand how humans think, as well as to treat neurological and psychiatric disorders like Alzheimer's disease, Parkinson's disease, autism, epilepsy, schizophrenia, depression, traumatic brain injury and paralysis.

"If the brain could always count on receiving the same sensory response to the same stimulus, it would have an easier time," said neuroscientist Xaq Pitkow, lead author of a new study this week in Neuron. "But noise is always there in the brain: studies have repeatedly shown that neurons give a variety of responses to the same stimulus."

Pitkow, assistant professor of neuroscience at Baylor and assistant professor of electrical and computer engineering at Rice, said "noise" can be described as anything that changes neural activity in a way that doesn't depend on the task the brain wants to accomplish.

Not only are neural responses noisy, but each neuron's noise is correlated with the noise in thousands of other neurons. That means that something that affects the output of one neuron may be amplified to affect many more. Because of these correlations, it is extraordinarily difficult for scientists to accurately model how small groups of neurons will affect the way a person or animal reacts to a given stimulus.

Given both these correlated responses and the inherently noisy nature of neuronal signals, scientists have struggled to explain a seeming paradox that was first observed in experiments more than 25 years ago.

"When neuroscientists first analyzed the output of individual neurons, they were surprised to find that the activity of just a single neuron sometimes predicted behavior in certain tasks," Pitkow said.

This perplexing find has turned up in numerous experiments, but neuroscientists have yet to explain it.

"A lot of people have studied this and offered up different kinds of models that make all sorts of assumptions," Pitkow said. "By integrating all of those ideas and applying some analytical techniques, we found there were two different ways this could happen."

He said one possibility is that many neurons are sharing the same information, processing it independently and arriving at the same answer. The other possibility is that each neuron is using different information and casting its vote for a slightly different answer but the brain is doing a poor job of coming to a consensus with the different votes.

"The first model is a bit like trying to find a needle in a haystack, and the second is like trying to find a needle on a clean floor while looking backward through a pair of binoculars," Pitkow said. "Each piece of straw looks like a needle, which makes the haystack test very difficult. On the other hand, a needle should really stand out on a clean floor, but it will be hard to find with a bad searching method."

In each case, the neurons are correlated with one another, "but in the first instance the noise correlations can never be removed, and in the second they could and should be removed but they're not," Pitkow said. "And each of these scenarios has very different consequences for the brain's code, how it represents information. In terms of information theory, if the brain has a lot of information and it is not doing a good job of using it, there are very different implications than if all the neurons are correlated and they're all informative in the same way."
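The two scenarios can be caricatured in a toy simulation (a hedged sketch of my own, not the models from the Neuron paper). In both, a single neuron predicts the stimulus almost as well as the population's "decision" does, which is the paradox described above, even though the reasons differ.

```python
# Toy illustration (not the paper's actual models) of the two scenarios:
# 1) all neurons share the same information (common noise averaging
#    cannot remove); 2) neurons carry independent information, but the
#    readout consults only one of them (a poor consensus).
import numpy as np

rng = np.random.default_rng(1)
trials, n = 5000, 50
s = rng.choice([-1.0, 1.0], size=trials)       # stimulus: left (-1) or right (+1)

# Scenario 1: shared information, correlated noise.
shared = rng.normal(size=trials)
r1 = s[:, None] + shared[:, None] + rng.normal(scale=0.2, size=(trials, n))
dec1 = np.sign(r1.mean(axis=1))                # averaging cannot beat shared noise

# Scenario 2: independent information, suboptimal readout.
r2 = s[:, None] + rng.normal(size=(trials, n))
dec2 = np.sign(r2[:, 0])                       # readout ignores 49 of 50 neurons

for name, r, dec in [("shared info", r1, dec1), ("poor readout", r2, dec2)]:
    acc_dec = np.mean(dec == s)
    acc_single = np.mean(np.sign(r[:, 0]) == s)
    print(f"{name}: decision accuracy {acc_dec:.2f}, "
          f"single-neuron accuracy {acc_single:.2f}")
```

In both printed cases the single neuron's accuracy nearly matches the decision's, so observing that alone cannot distinguish the scenarios; the study's contribution was finding measurements that can.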

To determine which of these scenarios is at play in the brain, Pitkow and colleagues developed two mathematical models, one for each scenario. The models described how information and noise would flow through the network in the two opposing cases.

The team tested each model against the activity of single neurons in monkeys that were undergoing perceptual tests to measure how accurately they could perceive slight movements to the left or right. The experimenters found that some neurons predicted the animals' guesses about whether they were moving left or right.

"When we examined the output, we found that the monkeys' brains were not throwing away information," Pitkow said. "They were using each neuron's information very effectively. And we also saw that even though there were many neurons involved, the guess of any individual neuron was only slightly worse than the animal's actual guess during the test. These two pieces of evidence together indicate the neurons mostly share the same information."

But if every neuron is doing the same processing, why have so many? It's an obvious question, Pitkow said, but it's beyond the scope of what he and his colleagues could address in the current study.

"We didn't explore the value of redundancy in this study, but we are very interested in that question," Pitkow said. He pointed out that the vestibular sensors, the part of the inner ear dedicated to the sense of balance, contain only about 6,000 of the brain's 100 billion neurons. Even those few thousand might be redundant, which would mean that the rest of the neurons they contact also are redundant.

"One intriguing possibility that we are looking into is that redundancy allows the brain to reformat information and approach complex problems from many different angles," he said.


Implications for the neurobiology of learning and memory

If single cells can learn then they must be using a non-synaptic form of memory storage. The idea that intracellular molecules store memories has a long history, mainly in the study of multicellular organisms. We have already mentioned McConnell’s studies of planarians; similar ideas were espoused by Georges Ungar based on his studies of rodents (Ungar and Irwin, 1967; Ungar et al., 1968). These studies indicated that memories could be transferred from one organism to another by injection or ingestion of processed brain material. Clearly no synaptic information could survive such processing, so transfer could presumably only occur if the memory substrate was molecular. However, these findings were the subject of much controversy. The failure of careful attempts to replicate them led to a strong consensus against their validity, and this line of research eventually died out (Byrne et al., 1966; Travis, 1980; Smith, 1974; Setlow, 1997). Nonetheless, several lines of recent work have revisited these studies (Smalheiser et al., 2001; Shomrat and Levin, 2013). For example, Bédécarrats et al., 2018 showed that long-term sensitization of the siphon-withdrawal reflex in Aplysia could be transferred by injection of RNA from a trained animal into an untrained animal. This study further showed that this form of transfer was mediated by increased excitability of sensory (but not motor) neurons, and depended on DNA methylation, although the study did not establish either RNA or DNA methylation as the engram storage mechanism. In another line of work, Dias and Ressler, 2014 showed that fear conditioning in rodents could be transferred from parents to offspring, an effect that was associated with changes in DNA methylation. These studies not only revive the molecular memory hypothesis, but also point towards specific intracellular mechanisms.

The significance of DNA methylation lies in the fact that DNA methylation state can control transcription. Thus, the set of proteins expressed in a cell can be altered by changes in DNA methylation, which are known to occur in an experience-dependent manner. For example, after fear conditioning, the methylation states of 9.2% of genes in the hippocampus of rats were found to be altered (Duke et al., 2017). As first pointed out by Crick, 1984, and later elaborated by Holliday, 1999, DNA methylation is a potentially stable medium for heritable memory storage, because the methylation state will persist in the face of DNA replication, thanks to the semi-conservative action of DNA methyltransferases. A related idea, put forward independently in Lisman et al., 2018, is that a stable memory could arise from the tug-of-war between enzymatic phosphorylation and dephosphorylation. In essence, the idea is to achieve stability through change: a molecular substrate maintains its activation state by means of continual enzymatic activity. Crick and Lisman suggested that this could solve the problem of molecular turnover that vexes synaptic theories of memory. Consistent with this hypothesis, inhibition of DNA methyltransferase disrupts the formation and maintenance of memory, although it remains to be seen whether methylation states themselves constitute the engram (Miller and Sweatt, 2007; Miller et al., 2010). The proposals of Crick and Lisman apply generally to enzymatic modification processes (e.g. acetylation or glycosylation) acting on macromolecules, provided that the biochemical dynamics can generate the appropriate stable states (Prabakaran et al., 2012).
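The "stability through change" idea can be illustrated with a toy bistable switch (illustrative rates, not measured biochemistry): a kinase that cooperatively activates itself, opposed by a constant phosphatase, has two stable activation states, so a transient "learning" pulse is retained indefinitely despite continual enzymatic turnover.

```python
# Toy model of the Crick/Lisman tug-of-war: cooperative
# autophosphorylation versus constant dephosphorylation yields a
# bistable switch. Parameters are illustrative, not measured values.

def simulate(x0, a=1.0, K=0.5, b=0.4, dt=0.01, steps=10_000):
    """x = fraction of substrate phosphorylated; simple Euler integration."""
    x = x0
    for _ in range(steps):
        autophos = a * x**2 / (K**2 + x**2) * (1.0 - x)  # cooperative kinase
        dephos = b * x                                   # phosphatase activity
        x += dt * (autophos - dephos)
    return x

low = simulate(0.05)   # a sub-threshold perturbation: decays to the naive state
high = simulate(0.30)  # a transient pulse past threshold: locks the switch "on"
print(f"low start -> {low:.2f}, high start -> {high:.2f}")
```

Both final states are fixed points maintained by ongoing enzymatic activity, which is why such a switch survives molecular turnover, and also why (as the next paragraph notes) it requires metabolic energy, unlike DNA-based storage.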

An important distinction between the forms of dynamical information storage proposed by Crick and Lisman and the storage provided by DNA is that the latter is largely stable in the absence of enzymatic activity, under conditions of thermodynamic equilibrium. In contrast, the former typically relies on enzymatic activity and is only stable if driven away from thermodynamic equilibrium by chemical potential differences generated by core metabolic processes. In other words, the latter may accurately retain information in the absence of a cell over a substantially longer period than the former, which may lose information rapidly in the absence of supporting enzymatic activity.

Another candidate medium for intracellular memory storage is histone modification. In eukaryotes, DNA is wrapped around nucleosomes, composed of histone proteins, to form chromatin. Gene transcription can be controlled by changes in the modification state (acetylation, methylation, ubiquitination, etc.) of these histones. In the cell biology literature, an influential hypothesis posits the existence of a histone ‘code’ (Jenuwein and Allis, 2001; Turner, 2002) or ‘language’ (Lee et al., 2010) that stores information non-genetically, although the nature of that information has been a matter of debate (Sims and Reinberg, 2008; Henikoff and Shilatifard, 2011). Early work demonstrated that learning was accompanied by increased histone acetylation in the rat hippocampus (Schmitt and Matthies, 1979), and more recent work has established that memory can be enhanced by increases in histone acetylation (Levenson et al., 2004; Vecsey et al., 2007; Stefanko et al., 2009). Bronfman et al., 2016 provide an extensive survey of the molecular correlates of learning and memory.

In parallel with these findings, molecular biologists grappling with the information processing that takes place within the organism have begun to suggest that signaling networks may implement forms of learning (Koseska and Bastiaens, 2017; Csermely et al., 2020). In this respect, Koshland’s studies of habituation of signaling responses in PC12 cells, a mammalian cell line of neuroendocrine origin, are especially resonant (McFadden and Koshland, 1990). Koshland’s work was undertaken in full awareness of learning studies conducted by Kandel and Thompson in animals, but his pioneering efforts have not been explored further. This reflects, perhaps, the intellectual distance between cognitive science and molecular biology, which the present paper seeks to bridge. The information processing demands on a single-celled organism, which must fend for itself, are presumably quite different from those confronting a single cell within a multi-cellular organism during development and homeostasis, so what role learning plays within the organism remains a tantalizing open question.

Beatrice Gelber, though she could not have known about the specifics of DNA methylation or histone modification, was uncannily prophetic about these developments:

"This paper presents a new approach to behavioral problems which might be called molecular biopsychology… Simply stated, it is hypothesized that the memory engram must be coded in macromolecules… As the geneticist studies the inherited characteristics of an organism the psychologist studies the modification of this inherited matrix by interaction with the environment. Possibly the biochemical and cellular physiological processes which encode new responses are continuous throughout the phyla (as genetic codes are) and therefore would be reasonably similar for a protozoan and a mammal." (Gelber, 1962a, p. 166).

The idea that intracellular mechanisms of memory storage might be conserved across phyla is tantalizing yet untested. The demise of behavioral studies in Paramecia and other ciliates has meant that, despite the wealth of knowledge about ciliate biology, we still know quite little about the molecular mechanisms underlying Gelber’s findings. Nonetheless, we do know that many intracellular pathways that have been implicated in multicellular memory formation exist in ciliates (Table 1). For example, ciliates express calmodulin, MAP kinases, and voltage-gated calcium channels, in addition to utilizing various epigenetic mechanisms that might be plausible memory substrates, such as DNA methylation and histone modification. In like manner, key molecular components of neurons and synapses emerged in organisms without nervous systems, including unicellular organisms (Ryan and Grant, 2009; Arendt, 2020). We believe it is an ideal time to revisit the phylogenetic origins of learning experimentally and theoretically.


Abstract

What any sensory neuron knows about the world is one of the cardinal questions in Neuroscience. Information from the sensory periphery travels across synaptically coupled neurons as each neuron encodes information by varying the rate and timing of its action potentials (spikes). Spatiotemporally correlated changes in this spiking regimen across neuronal populations are the neural basis of sensory representations. In the somatosensory cortex, however, spiking of individual (or pairs of) cortical neurons is only minimally informative about the world. Recent studies showed that one solution neurons implement to counteract this information loss is adapting their rate of information transfer to the ongoing synaptic activity by changing the membrane potential at which a spike is generated. Here we first introduce the principles of information flow from the sensory periphery to the primary sensory cortex in a model sensory (whisker) system, and subsequently discuss how the adaptive spike threshold gates the intracellular information transfer from the somatic post-synaptic potential to action potentials, controlling the information content of communication across somatosensory cortical neurons.
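One way to picture an adaptive spike threshold (a generic sketch with made-up parameters, not the specific model the abstract refers to): if the threshold tracks the recent membrane potential, a fast depolarization outruns it and triggers spikes, while a slow ramp to the same level never crosses it, so spiking carries information about change rather than absolute level.

```python
# Generic sketch of an adaptive spike threshold (illustrative parameters):
# the threshold rises with the recent membrane potential, so only fast
# depolarizations cross it.
import numpy as np

def count_spikes(stimulus, dt=1.0, tau_v=10.0, tau_th=30.0, th0=1.0):
    v, th, spikes = 0.0, th0, 0
    for I in stimulus:
        v += dt * (-v + I) / tau_v                # leaky membrane integration
        th += dt * (th0 + 0.8 * v - th) / tau_th  # threshold adapts toward potential
        if v > th:
            spikes += 1
            v = 0.0                               # reset after a spike
    return spikes

t = np.arange(0, 500)
fast = np.where((t > 100) & (t < 150), 3.0, 0.0)  # abrupt step input
slow = (t / 500.0) * 3.0                          # slow ramp to the same level
print("fast step spikes:", count_spikes(fast))
print("slow ramp spikes:", count_spikes(slow))
```

The step outruns the slowly adapting threshold and produces spikes; the ramp lets the threshold keep pace and produces none, a crude analogue of how an adaptive threshold can gate which somatic potentials reach the axon as spikes.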


A Lion in the Undergrowth

The men of old, reported Socrates, saw madness as a gift that provides knowledge or inspiration. “It was when they were mad that the prophetess at Delphi and the priestesses at Dodona achieved so much . . . when sane they did little or nothing.” Today, insanity can still bring the gift of knowledge, but in a different manner. Much of what we know about the brain comes from seeing what happens when it is damaged, or affected in unusual ways. If the Delphic seer were to turn up tomorrow, neuroscientists would whisk her straight off into a brain scanner.

V. S. Ramachandran, a professor of neuroscience and psychology at the University of California, San Diego, has done as much as anyone to reveal the workings of the mind through the malfunctions of the brain. We meet some mighty strange malfunctions in his new book, “The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human.” There is a man who, after a head injury, cannot recognize or respond to people when he sees them, but can happily chat on the phone. We meet a woman who laughs when she should be yelping in pain. There are patients with Capgras syndrome, who come to believe that people who are close to them (or, in one case, the patient’s poodle) are imposters. We meet unfortunates with an intense desire to have their own healthy limbs amputated, others who are paralyzed on one side but insist against all evidence that they are not, and, in Cotard’s syndrome, people who sincerely believe they are dead.

Ramachandran weaves such tales together to build a picture of the specialized areas of the brain and the pathways between them, drawing his map by relating particular types of damage to their corresponding mental deficits. A recurring theme is the way in which many delusions appear to result from the brain trying to make sense of signals that have gone haywire. For example, in the case of a young man who awoke from a coma after a car crash believing that his mother was an imposter, Ramachandran believes that there was damage to a neural route that takes visual information to his amygdala (a part of the brain involved in investing objects with emotional significance). As a result, he suggests, the sight of the young man’s mother did not produce its usual emotional buzz, and his brain coped with this anomaly by rationalizing it as the presence of someone who looked like his mother but was in fact not her.

Ramachandran’s main thesis, though he often strays from it, is that networks of brain cells known as mirror neurons, which were discovered in monkeys in the early 1990s, played a uniquely important part in human evolution. These cells appear to become active in a creature’s brain not only when certain actions are performed by the creature itself but also when the creature observes its fellows performing the same actions. Ramachandran believes that mirror neurons somehow enable us to understand the minds of others, to learn by imitation and to feel empathy, and are perhaps involved in self-awareness. Some dramatic surge in the development of mirror neurons, he argues, explains the birth of distinctively human mental abilities and culture about 150,000 years ago. He also suggests that autism involves some defect in the functioning of mirror neurons.

Much of “The Tell-Tale Brain,” however, is a general tour of neuroscience. There are lively treatments of three areas in which Ramachandran has himself done pioneering work: visual perception, pain in amputated “phantom” limbs, and synesthesia — a family of benign syndromes in which the senses become commingled, as when, for example, letters and numbers that are printed in black and white are perceived as colored. Ramachandran explains how some brains may develop this ability (which seems to be more common among artists than in the general population), and explores its possible connection to the ability to understand metaphor.

There is an intriguing discussion of what Ramachandran calls the “peekaboo principle” — the idea that you can sometimes make something more pleasing by rendering it less visible. He notes that “we prefer this sort of concealment because we are hard-wired to love solving puzzles, and perception is more like puzzle-solving than most people realize.” This, in his view, helps to explain why the sight of a partially clothed person is “often” more attractive than the sight of a completely naked one. But how “often” is this actually true? And if we love solving perceptual puzzles so much, how come we don’t always prefer such concealment? Straight adult males, for example, do not always prefer a picture of a woman in a skimpy bikini to a topless shot. And since they do not always prefer it, how can Ramachandran’s theory be what explains the cases in which they do? (There is perhaps room here for some fruitful scientific cooperation between Playboy Enterprises and Ramachandran’s lab.) A similar problem arises with the ingenious theory Ramachandran offers to account for the appeal of abstract art, which he links to the hard-wired appeal of “ultranormal stimuli.” Since people do not in fact universally prefer abstract to representational art, the theory appears to explain either too much or too little.

Because Ramachandran is an exceptionally inventive researcher who tosses off suggestions at a dizzying pace, readers may sometimes lose track of what is firmly established, what is tentative and what is way out there. His fondness for evolutionary explanations can be particularly freewheeling. For example, he relates the color-matching of clothing and accessories to the experiences of our ancestors when they spotted a lion in the undergrowth by realizing that those yellow patches in between the leaves are parts of a single dangerous object. One wonders if there really is much solid evidence for this charming piece of historical reconstruction, and why, if it is correct, people don’t run away screaming when approached by women with matching shoes and handbags.

Although Ramachandran admits that his account of the significance of mirror neurons is speculative, he doesn’t let on just how controversial it is. In the past four years, a spate of studies has dented every part of the mirror-neuron story. Doubt has been cast on the idea that imitation and the understanding of actions depend on mirror neurons, and on the theory that autism involves a defect in these systems of cells. It has even been claimed that the techniques used to detect the activity of mirror neurons have been widely misinterpreted. Ramachandran may have good reason to discount these skeptical studies, but he surely should have mentioned them.

Even if mirror neurons turn out not to be quite as important as Ramachandran thinks — he has elsewhere predicted that they will do for psychology what DNA did for biology — the book is packed with other evidence that neuroscience has made illuminating progress in recent years. Reading such accounts of exactly what our brains get up to is apt to leave one with the disconcerting thought that they are often a lot cleverer than their owners realize.


Free will? Analysis of worm neurons suggest how a single stimulus can trigger different responses

Even worms have free will. If offered a delicious smell, for example, a roundworm will usually stop its wandering to investigate the source, but sometimes it won't. Just as with humans, the same stimulus does not always provoke the same response, even from the same individual. New research at Rockefeller University, published online in Cell, offers a new neurological explanation for this variability, derived by studying a simple three-cell network within the roundworm brain.

"We found that the collective state of the three neurons at the exact moment an odor arrives determines the likelihood that the worm will move toward the smell. So, in essence, what the worm is thinking about at the time determines how it responds," says study author Cori Bargmann, Torsten N. Wiesel Professor, head of the Lulu and Anthony Wang Laboratory of Neural Circuits and Behavior. "It goes to show that nervous systems aren't passively waiting for signals from outside, they have their own internal patterns of activity that are as important as any external signal when it comes to generating a behavior."

The researchers went a step deeper to tease out the dynamics within the network. By changing the activity of the neurons individually and in combination, first author Andrew Gordus, a research associate in the lab, and his colleagues could pinpoint each neuron's role in generating variability in both brain activity and the behavior associated with it.

The human brain has 86 billion neurons and 100 trillion synapses, or connections, among them. The brain of the microscopic roundworm Caenorhabditis elegans, by comparison, has 302 neurons and 7,000 synapses. So while the worm's brain cannot replicate the complexity of the human brain, scientists can use it to address tricky neurological questions that would be nearly impossible to broach in our own brains.

Worms spend their time wandering, looking for decomposing matter to eat. And when they smell it, they usually stop making random turns and travel straight toward the source. This change in behavior is initially triggered by a sensory neuron that perceives the smell and feeds that information to the network the researchers studied. As the worms pick up the alluring fruity smell of isoamyl alcohol, the neurons in the network transition into a low activity state that allows them to approach the odor. But sometimes the neurons remain highly active, and the worm continues to wander around -- even though its sensory neuron has detected the odor.

By recording the activity of these neurons, Gordus and colleagues found that there were three persistent states among the three neurons: All were off, all were on, or only one, called AIB, was on. If all were off, then, when the odor signal arrived, they stayed off. If all were on, they often, but not always, shut off. And, in the third and most telling scenario, if AIB alone was active when the odor arrived, everything shut off. "This means that for AIB, context matters. If it's on alone, its activity will drop when odor is added, but if it's on with the rest of the network, it has difficulty dropping its activity with the others," Gordus says.

AIB is the first neuron in the network to receive the signal, which it then relays to the other two network members, known as RIM and AVA. AVA sends out the final instruction to the muscles. When the researchers shut off RIM and AVA individually and together, they found AIB's response to the odor signal improved. This suggests that input from these two neurons competes with the sensory signal as it feeds down through the network.
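The state-dependence described above can be summarized as a small lookup model. This is a hedged caricature: the 70% shut-off probability and the fall-through rule for other states are my own illustrative assumptions; the article says only "often, but not always."

```python
# Toy state model of the three-neuron network's odor response, as
# described in the article. The 0.7 probability is an illustrative
# assumption, not a reported figure.
import random

def odor_response(state, rng=random):
    """state: set of active neurons among {'AIB', 'RIM', 'AVA'}.
    Returns the network state after the odor arrives."""
    if not state:                        # all off: stays off
        return set()
    if state == {"AIB"}:                 # AIB alone on: everything shuts off
        return set()
    if state == {"AIB", "RIM", "AVA"}:   # all on: often, but not always, shuts off
        return set() if rng.random() < 0.7 else state
    return state                         # other states: unchanged (assumption)

rng = random.Random(0)
trials = [odor_response({"AIB", "RIM", "AVA"}, rng) for _ in range(1000)]
frac_off = sum(1 for s in trials if not s) / len(trials)
print(f"all-on network shut off on {frac_off:.0%} of odor presentations")
```

The point of the sketch is that the same input yields different outputs depending on the network's state at the moment the odor arrives, which is the variability the study set out to explain.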

Scaled up to account for the more nuanced behaviors of humans, the research may suggest ways in which our brains process competing motivations. "For humans, a hungry state might lead you to walk across the street to a delicious-smelling restaurant. However, a competing aversion to the cold might lead you to stay indoors," he says.

In the worm experiments, the competition between neurons was influenced by the state of the network. There is plenty of evidence suggesting network states have a similar impact on animals with much larger and more complex brains, including us, says Bargmann, who is also a Howard Hughes Medical Institute investigator. "In a mammalian nervous system, millions of neurons are active all the time. Traditionally, we think of them as acting individually, but that is changing. Our understanding has evolved toward seeing important functions in terms of collective activity states within the brain."


Learning From Mistakes: Neurons Which Catch Our Mistakes and Correct Our Behavior Identified

Summary: Researchers identified specific neurons in the medial prefrontal cortex, called self monitoring error neurons, that fire immediately after people make a mistake.

Everyone makes little everyday mistakes out of habit–a waiter says, “Enjoy your meal,” and you respond with, “You, too!” before realizing that the person is not, in fact, going to be enjoying your meal. Luckily, there are parts of our brains that monitor our behavior, catching errors and correcting them quickly.

A Caltech-led team of researchers has now identified the individual neurons that may underlie this ability. The work provides rare recordings of individual neurons located deep within the human brain and has implications for psychiatric diseases like obsessive-compulsive disorder.

The work was a collaboration between the laboratories of Ralph Adolphs, Bren Professor of Psychology, Neuroscience, and Biology, Allen V. C. Davis and Lenabelle Davis Leadership Chair, and director of the Caltech Brain Imaging Center of the Tianqiao and Chrissy Chen Institute for Neuroscience; and Ueli Rutishauser, associate professor of neurosurgery, neurology, and biomedical sciences, and Board of Governors Chair in Neurosciences at Cedars-Sinai Medical Center.

“Many people know the feeling of making a mistake and quickly catching oneself–for example, when you are typing and press the wrong key, you can realize you made a mistake without even needing to see the error on the screen,” says Rutishauser, who is also a visiting associate in Caltech’s Division of Biology and Biological Engineering. “This is an example of how we self-monitor our own split-second mistakes. Now, with this research, we know which neurons are involved in this, and we are starting to learn more about how the activity of these neurons helps us change our behavior to correct errors.”

In this work, led by Caltech graduate student Zhongzheng (Brooks) Fu, the researchers aimed to get a precise picture of what happens on the level of individual neurons when a person catches themselves after making an error. To do this, they studied people who have had thin electrodes temporarily implanted into their brains (originally to help localize epileptic seizures). The work was done in collaboration with neurosurgeon Adam Mamelak, professor of neurosurgery at Cedars-Sinai, who has conducted such electrode implantations for clinical monitoring of epilepsy for over a decade and closely collaborated on the research studies.

While neural activity was measured in their medial frontal cortex (MFC), a brain region known to be involved in error monitoring, the epilepsy patients were given a so-called Stroop task to complete. In this task, a word is displayed on a computer screen, and the patients are asked to identify the color of the text. Sometimes, the text and the color are the same (the word “green” for example, is shown in green). In other cases, the word and the color are different (“green” is shown in red text). In the latter case, the correct answer would be “red,” but many people make the error of saying “green.” These are the errors the researchers studied.
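A Stroop trial as described reduces to a word/ink pair in which the ink color, never the word, is the correct response. A minimal sketch (class and field names are my own, purely illustrative):

```python
# Minimal representation of a Stroop trial: respond with the ink color,
# not the word; incongruent trials are where habitual errors arise.
from dataclasses import dataclass

@dataclass
class StroopTrial:
    word: str   # the text shown on screen
    ink: str    # the color the text is printed in

    @property
    def congruent(self) -> bool:
        return self.word == self.ink

    def correct(self, response: str) -> bool:
        return response == self.ink   # the ink color is always the answer

trial = StroopTrial(word="green", ink="red")
print(trial.congruent)          # False: an incongruent trial
print(trial.correct("green"))   # False: the habitual word-reading error
print(trial.correct("red"))     # True
```

The errors the researchers studied are precisely responses like `"green"` on such incongruent trials, where reading the word wins out over naming the ink.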

The measurements allowed the team to identify specific neurons in the MFC, called self-monitoring error neurons, that would fire immediately after a person made an error, well before they were given feedback about their answer.

For decades, scientists have studied how people self-detect errors using electrodes placed on the surface of the skull that measure the aggregate electrical activity of thousands of neurons. These so-called electroencephalograms reveal that one particular brainwave signature, called the error-related negativity (ERN), is commonly seen on the skull over the MFC right after a person makes an error. In their experiments, Fu and his colleagues simultaneously measured the ERN as well as the firing of individual error neurons.

Image caption: Researchers in the Adolphs laboratory at Caltech have discovered that certain types of neurons, called error neurons, are more active when we make a mistake.

They discovered two fundamental new aspects of the ERN. First, an error neuron’s activity level was positively correlated with the amplitude of the ERN: the larger the ERN for a particular error, the more active were the error neurons. This finding reveals that an observation of the ERN (a noninvasive measurement) provides information about the level of activity of error neurons found deep within the brain. Second, they found that this ERN-single-neuron correlation, in turn, predicted whether the person would change their behavior; that is, whether they would slow down and focus more to avoid making an error on their next answer. If the error neurons fired but the brain-wide ERN signature was not seen or was weak, the person might still recognize that they made an error, but they would not modify their behavior for the next task. This suggests that the error neurons need to communicate their error detection to a large brain network in order to influence behavior.

The researchers found further specific evidence for parts of the circuit involved.

“We found error neurons in two different parts of the MFC: the dorsal anterior cingulate cortex (dACC) and the pre-supplementary motor area (pre-SMA),” says Fu. “The error signal appeared in the pre-SMA 50 milliseconds earlier than in the dACC. But only in the dACC was the correlation between the ERN and error neurons predictive of whether a person would modify their behavior. This reveals a hierarchy of processing: an organizational structure of the circuit at the single-neuron level that is important for executive control of behavior.”

The research could also have implications for understanding obsessive-compulsive disorder, a condition in which a person continuously attempts to correct perceived “errors.” For example, some individuals with this condition will feel a need to repeatedly check, in a short time period, if they have locked their door. Some people with obsessive-compulsive disorder have been shown to have an abnormally large ERN potential, indicating that their error-monitoring circuitry is overactive. The discovery of error neurons might facilitate new treatments to suppress this overactivity.

The researchers next hope to identify how the information from error neurons flows through the brain in order to produce behavioral changes like slowing down and focusing. “So far, we have identified two brain regions in the frontal cortex that appear to be part of a sequence of processing steps, but, of course, the entire circuit is going to be much more complex than that,” says Adolphs. “One important future avenue will be to combine studies that have very fine resolution, such as this one, with studies using fMRI [functional magnetic resonance imaging] that give us a whole-brain field of view.”

In addition to Fu, Adolphs, Rutishauser, and Mamelak, other co-authors are Caltech scientist Daw-An Wu, Ian Ross of the Huntington Memorial Hospital in Pasadena, and Jeffrey Chung of Cedars-Sinai Medical Center. Funding was provided by the National Institutes of Health, the National Science Foundation, and the McKnight Endowment Fund for Neuroscience.




Neural Representation

The neural representation of objects at this larger scale, as measured by PET or fMRI, reveals a degree of organization in extrastriate areas that mirrors the selectivity observed at the single-neuron level. That is, much as individual neurons respond preferentially to individual objects, localized regions of the visual cortex respond preferentially to classes of objects. Moreover, as with neurophysiological methods, the clearest preferences are obtained with highly-familiar object classes. A region of the visual cortex known as mid-fusiform gyrus (midFG) shows significantly higher activity when observers view faces as compared to when they view common objects (Kanwisher 2000 ). Similar selectivity for small regions of the cortex near the midFG has been found for letters, places, houses, and chairs. Thus, object classes appear to be represented minimally by large numbers of neurons in localized regions of the extrastriate cortex.

However, even this level of analysis may be misleading. Evidence for localized category-selective cortical regions comes from neuroimaging methods that compare the activation observed for one object class, e.g., faces, to a second object class, e.g., flowers. In truth, viewing any class of objects produces a pattern of activity across much of the ventral temporal cortex that is different from the activation pattern obtained for any other class. These differences, however, are often subtle and relatively small compared to the large differences seen between highly-overlearned categories such as faces and less familiar objects. Such differences may be critical elements in the complete neural code for objects; if so, objects and classes of objects may be represented as large-scale networks of neurons distributed over much of the visual cortex.


Describe the differences between rate coding and temporal coding, and give examples of each in the nervous system.

The question of "the nature of the neural code" is obviously an old one, but its particular formulation with alternatives called "rate coding" and "temporal coding" is of a more recent vintage – the early 90s. The attempt to resolve this question kick-started the careers of many a famous neuroscientist, including Clay Reid and Yang Dan. Of course, in its most straightforward interpretation, the question has no single answer: presumably, the nervous system uses different codes when they are called for by the statistics of the stimulus or the nature of the task.

The question has, however, taken on something of a "litmus test" quality. Staunch defenders of the 'rate code' model consider those damn temporal-code nutjobs to be dilettantish gadflies (or worse, theorists!) who "can't have ever done physiology" (JL Gallant, personal correspondence). Partisans of temporal coding think the sclerotic orthodoxy of rate coding is based on insufficient appreciation of the precision and capability of neural systems (ask, e.g., Tony Bell) and a child-like or perhaps neurotic desire for the world to be simple, understandable, and even linear.

The debate has taken on this quasi-political character in part because the evidence is inconclusive, so its resolution comes down to a matter of faith – a question less of what the data tell us about the structure of the world than of what world you believe we live in, or of what motivates you to do neuroscience. This kind of scientific debate is precisely the kind that philosopher of science Thomas Kuhn identifies as the breeding ground of scientific revolutions in his classic work, The Structure of Scientific Revolutions.

This debate over a major neuroscientific paradigm could be, at least partially, resolved within the decade: the attraction of spatiotemporally-modulated optogenetic stimulation is that it will allow for coding theories to be causally tested, rather than merely inferred from correlations. What a time to be alive.

The Rate Code

As stated above, the rate code is the orthodox view of neural coding. Each cell has a stimulus or task feature that it "likes", and when that feature is present or salient, the neuron fires spikes with a (generally stochastic, Poisson) rate that is monotonically (e.g., proportionally) related to the intensity or salience of that feature. This model works well with stimuli that vary slowly in time, since the noisiness of spikes requires integration over some time window.
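The rate-code picture above can be sketched in a few lines: a toy neuron whose Poisson spike count grows in proportion to stimulus intensity, decoded by averaging counts over repeated trials. The gain, window, and intensity values below are illustrative assumptions, not measurements from any experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_count(stimulus_intensity, gain=20.0, window_s=0.5):
    """Toy rate code: expected count = gain * intensity * window.

    'gain' and 'window_s' are illustrative parameters, chosen only
    to make the example concrete.
    """
    expected = gain * stimulus_intensity * window_s
    return rng.poisson(expected)

def decode_intensity(counts, gain=20.0, window_s=0.5):
    """Invert the (linear) rate map by averaging over trials."""
    return np.mean(counts) / (gain * window_s)

# Noisy single trials, but the trial average recovers the intensity (~0.8).
trials = [poisson_spike_count(0.8) for _ in range(200)]
estimate = decode_intensity(trials)
```

Note how the decode needs many trials (or, equivalently, a long integration window): this is exactly the slow-stimulus limitation mentioned above.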

Since it is the prevailing view, empirical examples of this coding scheme are abundant: it is the basic model of Hubel and Wiesel, it comprises the output of sensory transduction pathways in many systems, and it is the view that (most) simulated neural networks operate with.

One of my favorite objections to this model comes from Bruno Olshausen's ass-kicking paper What is the Other 85% of V1 Doing?, which methodically argues that Hubel-Wiesel rate coding can account for, at most, what 15% of the cells in V1 are doing. Central to the argument is a metabolic fact: if the population parameters derived from electrophysiological experiments are correct, a rate-coding V1 would consume an order of magnitude more energy than the brain's metabolic budget allows. This is in agreement with more careful estimates of the distribution of baseline firing rates in cortical populations, which put even the slowest-firing neurons of the H-W rate model in the top 15% of cells.

The Temporal Code(s)

The basic idea of the temporal code is as simply stated as that of the rate code: information about the stimulus or action is contained in the relative timing of spikes, not just in, or instead of in, the rate of those spikes. The meaning, however, of "relative timing" is flexible and so there are actually multiple "temporal codes", which range from merely extreme examples of rate coding to incompatible schemes.

The more incompatible temporal codes are also, generally, more complex than rate coding (hence temporal coding's popularity with theorists and computationalists), so the evidence for them leans more heavily on correlation (more precisely, on calculations of mutual information) than on causation. Another avenue of argument against rate coding is to point to extremely low jitter, or variance around a mean, in spike timings across multiple presentations of the same stimulus, in systems like the retina. Achieving such precise timing is expensive and unnecessary for rate coding, the argument goes, so it can only have evolved to support a temporal code.
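The low-jitter argument can be made concrete: jitter is just the standard deviation of spike timing across repeated trials. The two latency distributions below are simulated with assumed spreads (1 ms vs. 15 ms), purely to illustrate what a precise versus a sloppy system would look like under this measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated first-spike latencies (ms) across 100 presentations of the
# same stimulus; the 1 ms and 15 ms spreads are assumed illustrative values.
precise_latencies = 25.0 + rng.normal(0.0, 1.0, size=100)   # temporal-code-like
sloppy_latencies  = 25.0 + rng.normal(0.0, 15.0, size=100)  # rate-code-compatible

def jitter(latencies_ms):
    """Jitter = standard deviation of spike timing across trials."""
    return float(np.std(latencies_ms))

low_jitter = jitter(precise_latencies)
high_jitter = jitter(sloppy_latencies)
```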

"True" Temporal Coding

The original formulation of temporal coding took inspiration from digital computers and the all-or-none nature of the action potential: what if neurons encoded information as bitstreams of timed spikes (…010110011…) or perhaps with short binary 'words' of the same (…010 110 011 …)? The potential information-carrying capacity of such a code was as much a draw as its philosophically-appealing connection to human methods for information transmission and concomitant mathematical pedigree.

Unfortunately for Shannon and Turing, human brains are not computers – there is no central clock to sync processing and no obvious way to chunk time so that such a coding scheme can operate. Furthermore, such codes operate most efficiently when 1's and 0's appear equally frequently – an expensive proposition, metabolically speaking, especially with the time windows necessary to achieve high-bandwidth, low-latency transmission of information.

Weaker versions of this "true" or "vanilla" temporal coding scheme are possible: against a background of silence, cells could transmit information as bursts of spikes with precisely-timed inter-spike intervals. There are also three alternative, simpler temporal codes, listed below. This view of temporal coding, however, is what is usually meant when the term is used in isolation and without further explanation.

EXAMPLE: Low jitter and high specificity for binary words in the LGN. Reinagel and Reid, 2000. Temporal Coding of Visual Information in the Thalamus. J Neurosci.
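A minimal sketch of the binary-word scheme described above, assuming an arbitrary 3 ms bin and 3-bin words (there is, as noted, no biological clock that fixes either choice, which is part of the objection to this code):

```python
import numpy as np

def spikes_to_words(spike_times_ms, bin_ms=3.0, word_len=3, total_ms=90.0):
    """Discretize a spike train into binary bins, then group bins into 'words'.

    bin_ms and word_len are illustrative choices, not biologically given.
    """
    n_bins = int(total_ms / bin_ms)
    bins = np.zeros(n_bins, dtype=int)
    for t in spike_times_ms:
        idx = int(t / bin_ms)
        if idx < n_bins:
            bins[idx] = 1  # all-or-none: a bin is 1 if it contains >= 1 spike
    n_words = n_bins // word_len
    words = bins[: n_words * word_len].reshape(n_words, word_len)
    return ["".join(map(str, w)) for w in words]

# Spikes at 1.0, 7.5, and 8.0 ms land in bins 0 and 2 of the first word.
words = spikes_to_words([1.0, 7.5, 8.0, 40.0], bin_ms=3.0, word_len=3, total_ms=27.0)
```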

Sparse Temporal Coding

In this framework, cells fire a single spike, or a small burst, to represent the presence of a transient stimulus. If your definition of "rate" is fluid enough, this can be considered a rate code with a very small time window and a binary function mapping the rate to the stimulus.

EXAMPLE: Rodent S1 represents whisker "stick-slip" events with bursts of action potentials. Jadhav, Wolfe, and Feldman, 2009. Sparse Temporal Coding of Elementary Tactile Features During Active Whisker Sensation. Nat Neurosci.
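The "rate code with a very small time window and a binary function" reading can be sketched as follows; the window size and burst threshold are illustrative assumptions, not values from the cited study.

```python
import numpy as np

def detect_events(spike_times_ms, window_ms=10.0, total_ms=100.0, threshold=2):
    """Sparse/binary readout: each short window reports only whether a burst
    (>= threshold spikes) occurred — a degenerate 'rate' with two levels.

    window_ms and threshold are illustrative, not from any experiment.
    """
    n_windows = int(total_ms / window_ms)
    counts = np.zeros(n_windows, dtype=int)
    for t in spike_times_ms:
        idx = int(t / window_ms)
        if idx < n_windows:
            counts[idx] += 1
    return (counts >= threshold).astype(int)

# A single burst near 35 ms signals one transient 'stick-slip'-like event.
events = detect_events([34.0, 35.5, 36.2], window_ms=10.0, total_ms=100.0)
```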

Phase-Locked Temporal Coding

One answer to the question "relative to what?", which arises when we define temporal coding in terms of "relative spike timings" is "relative to the phase of a network oscillation". This view is obviously controversial, since many neuroscientists think that oscillations are epiphenomenal – i.e. the result, not the cause, of critical computational processes.

One of the most prominent phase-coding hypotheses tied grid cells to theta rhythms. The proposal was that an animal's distance from the place field of a neuron is encoded by the time at which the place cell fires relative to the beginning of the theta cycle; this phenomenon is known as theta phase precession (Skaggs, W. E., & McNaughton, B. L. (1996). Theta Phase Precession in Hippocampal Neurons. Hippocampus, 6, 149-172). As a general theory of place cell function in all animals, the idea is contested (see "Grid cells without theta oscillations in the entorhinal cortex of bats"), but in rodents, the finding is robust.

A similar, but more general, idea has also been proposed for temporal coding by the phase of the cortical gamma cycle during which a neuron fires: neurons fire during the same phase if the features that they represent are properties of a coherent object (see "The gamma cycle").

EXAMPLE: Rich information is present in phase-locked gamma oscillations in retina and LGN. Koepsell, Wang, Sommer et al., 2009. Retinal Oscillations Carry Visual Information to Cortex. Front Syst Neurosci.
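A minimal sketch of phase-locked coding, assuming an idealized 8 Hz theta oscillation: each spike is assigned a phase within the ongoing cycle, and phase precession appears as spikes arriving at earlier phases on successive cycles. All values here are toy assumptions.

```python
import numpy as np

def spike_phase(spike_time_s, osc_freq_hz=8.0):
    """Phase of a spike relative to an idealized ongoing oscillation.

    The 8 Hz 'theta' frequency is an illustrative assumption; phase is
    measured from the start of each cycle, in radians [0, 2*pi).
    """
    cycle = 1.0 / osc_freq_hz
    return 2.0 * np.pi * ((spike_time_s % cycle) / cycle)

# Phase precession caricature: on successive cycles the spike arrives
# earlier within the cycle, so its phase decreases monotonically.
cycle = 1.0 / 8.0
spikes = [0.5 * cycle, 1.0 * cycle + 0.4 * cycle, 2.0 * cycle + 0.3 * cycle]
phases = [spike_phase(t) for t in spikes]
```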

Population Temporal Coding

All the other codes discussed here have been single-cell codes, or answers to the question: how does one cell represent information through the pattern of its spiking? How multiple cells represent information is a whole separate question (literally, it's the next one on the list!). Population temporal coding is an answer to that general question. Any of the models discussed under that question, considered generally, could be combined with temporal coding, though they are generally discussed in terms of rate codes. The combination looks something like this: the relative timing of pairs of neurons (or pairs of assemblies) in a population encodes information. This is an especially compelling theory for how neurons encode information about stimulus timing.

EXAMPLE: Spikes in the retina can be used to rapidly discriminate flickered stimuli. Gollisch and Meister, 2008. Rapid Neural Coding in the Retina with Relative Spike Latencies. Science.
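A sketch of the relative-latency idea, in the spirit of (but not reproducing) the Gollisch and Meister result: with assumed first-spike times, two stimuli that evoke identical spike counts are distinguishable from latency differences alone, with no need for an absolute stimulus-onset reference.

```python
import numpy as np

def relative_latencies(first_spikes_ms, reference_idx=0):
    """Population temporal readout: each cell's first-spike time relative
    to a reference cell. Only differences matter, which is the appeal —
    no external clock or onset marker is required.
    """
    first_spikes = np.asarray(first_spikes_ms, dtype=float)
    return first_spikes - first_spikes[reference_idx]

# Assumed toy latencies: same three spikes, different orderings across cells.
stim_a = relative_latencies([12.0, 15.0, 20.0])
stim_b = relative_latencies([12.0, 20.0, 15.0])
```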