Adaptation in single neurons provides memory for language processing
To understand language, we have to remember the words that were uttered and combine them into an interpretation. How does the brain retain information long enough to accomplish this, given that neuronal firing events are so short-lived? Hartmut Fitz from the Max Planck Institute for Psycholinguistics and his colleagues propose a neurobiological explanation that bridges this gap: neurons adapt their spike rate based on recent experience, and this adaptation provides memory for sentence processing.
Did the man bite the dog, or was it the other way around? When processing an utterance, words need to be assembled into the correct interpretation within working memory. One aspect of comprehension is to establish ‘who did what to whom’. This process of unification takes much longer than basic events in neurobiology, like neuronal spikes or synaptic signaling. Hartmut Fitz, lead investigator at the Neurocomputational Models of Language group at the Max Planck Institute for Psycholinguistics, and his colleagues propose an account where adaptive features of single neurons supply memory that is sufficiently long-lived to bridge this temporal gap and support language processing.
Model comparisons
Together with researchers Marvin Uhlmann, Dick van den Broek, Peter Hagoort, Karl Magnus Petersson (all Max Planck Institute for Psycholinguistics) and Renato Duarte (Jülich Research Centre, Germany), Fitz studied working memory in spiking networks through an innovative combination of experimental language research and methods from computational neuroscience.
In a sentence comprehension task, simulated circuits of biologically realistic neurons and synapses were exposed to sequential language input which they had to map onto the semantic relations that characterize the meaning of an utterance. For example, ‘the cat chases a dog’ means something different from ‘the cat is chased by a dog’, even though the two sentences contain similar words. The various cues to meaning need to be integrated within working memory to derive the correct message. The researchers varied the neurobiological features of these simulated networks and compared the performance of the resulting model variants. This allowed them to pinpoint which features implemented the memory capacity required for sentence comprehension.
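The study itself used detailed biophysical simulations; purely as an illustration of this comparison logic, the following Python sketch (all parameters, network sizes, and the toy serial-recall task are hypothetical, not taken from the paper) builds a small network of leaky integrate-and-fire neurons with an optional spike-frequency-adaptation current, drives it with short symbol sequences, and trains a linear readout to recover an earlier symbol from the network state, so that variants with and without adaptation can be compared on the same task.

```python
# Hypothetical sketch (not the authors' code): compare two spiking-network variants,
# with and without spike-frequency adaptation, on a toy serial-recall task.
import numpy as np

rng = np.random.default_rng(0)

N = 300            # recurrent neurons
N_SYM = 4          # size of the toy symbol alphabet
T_SYM = 50         # simulation steps (ms) per symbol
SEQ_LEN = 3        # symbols per input sequence
DT = 1.0           # integration step in ms
TAU_M, TAU_A, TAU_R = 20.0, 400.0, 30.0   # membrane, adaptation, readout-trace time constants
V_TH, BETA = 1.0, 0.2                     # spike threshold, per-spike adaptation increment

W_in = rng.normal(0.0, 1.5, (N, N_SYM))            # input projection for each symbol
W_rec = rng.normal(0.0, 0.4 / np.sqrt(N), (N, N))  # weak random recurrent weights

def run_sequence(seq, adapt):
    """Simulate one symbol sequence; return a low-pass-filtered spike trace (the 'state')."""
    v = np.zeros(N)       # membrane potentials
    a = np.zeros(N)       # adaptation variables (raise each neuron's effective threshold)
    trace = np.zeros(N)   # filtered spikes, read out at the end of the sequence
    spikes = np.zeros(N)
    for sym in seq:
        for _ in range(T_SYM):
            drive = W_in[:, sym] + W_rec @ spikes
            v += DT / TAU_M * (drive - v)            # leaky integration
            spikes = (v >= V_TH + a).astype(float)   # adaptation raises the threshold
            v[spikes > 0] = 0.0                      # reset after a spike
            if adapt:                                # slow adaptation: builds up with firing,
                a = a * (1 - DT / TAU_A) + BETA * spikes   # decays over hundreds of ms
            trace = trace * (1 - DT / TAU_R) + spikes
    return trace

def make_dataset(n, adapt):
    """Network states at the end of each sequence, labelled with the FIRST symbol."""
    seqs = rng.integers(0, N_SYM, size=(n, SEQ_LEN))
    X = np.stack([run_sequence(s, adapt) for s in seqs])
    return X, seqs[:, 0]

def recall_accuracy(adapt):
    """Train a ridge-regression readout on one-hot targets; report held-out accuracy."""
    Xtr, ytr = make_dataset(300, adapt)
    Xte, yte = make_dataset(100, adapt)
    W = np.linalg.solve(Xtr.T @ Xtr + 1e-2 * np.eye(N), Xtr.T @ np.eye(N_SYM)[ytr])
    return float(np.mean(np.argmax(Xte @ W, axis=1) == yte))

for adapt in (False, True):
    print(f"adaptation={adapt}: first-symbol recall accuracy = {recall_accuracy(adapt):.2f}")
```

The accuracy numbers such a toy prints are not meaningful in themselves; the point is the comparison logic of toggling one neurobiological feature at a time and measuring its effect on a memory-dependent readout.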
Towards a computational neurobiology of language
They found that working memory for language processing can be provided by the down-regulation of neuronal excitability in response to external input. “This suggests that working memory could reside within single neurons, which contrasts with other theories where memory is either due to short-term synaptic changes or arises from network connectivity and excitatory feedback,” says Fitz.
Their model shows that this neuronal memory is context-dependent and sensitive to serial order, which makes it ideally suited for language. Additionally, the model was able to establish binding relations between words and semantic roles with high accuracy.
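As a loose, single-neuron illustration of why such adaptation is order-sensitive (again a hypothetical sketch, not the authors' model): because the adaptation variable decays slowly, an input that arrived earlier leaves a more decayed trace than one that arrived later, so the same two inputs presented in different orders leave the neuron in different states.

```python
# Hypothetical single-neuron sketch (not the authors' model): the slowly decaying
# adaptation variable of an adaptive integrate-and-fire neuron retains a trace of
# what arrived and in which order it arrived.

DT, TAU_M, TAU_A = 1.0, 20.0, 500.0   # ms: step size, membrane and (much slower) adaptation time constants
V_TH, BETA = 1.0, 0.3                 # spike threshold and per-spike adaptation increment

def final_adaptation(pulses, pulse_ms=100):
    """Drive one adaptive neuron with a sequence of constant-current pulses and
    return its adaptation variable at the end (the neuron's lingering 'memory trace')."""
    v = a = 0.0
    for amp in pulses:
        for _ in range(int(pulse_ms / DT)):
            v += DT / TAU_M * (amp - v)    # leaky integration of the input current
            if v >= V_TH + a:              # adaptation raises the effective threshold
                v = 0.0                    # reset after a spike ...
                a += BETA                  # ... and increment adaptation
            a -= DT / TAU_A * a            # adaptation decays slowly, outliving each spike
    return a

weak, strong = 1.3, 2.5                    # two input amplitudes standing in for two 'words'
print("strong then weak:", round(final_adaptation([strong, weak]), 3))
print("weak then strong:", round(final_adaptation([weak, strong]), 3))
# The two orderings leave different adaptation levels behind: the earlier pulse's
# contribution has decayed more than the later one's, so the neuron's state reflects
# serial order, not just which inputs occurred.
```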