Using non-invasive brain recordings to characterize activity in deep structures
Many neurophysiological processes, such as memory, sensory perception and emotion, as well as diseases including Alzheimer’s, depression and autism, are mediated by brain regions located deep beneath the cerebral cortex. Techniques to non-invasively image millisecond-scale activity in these deep brain regions are limited. Now, an international team including A*STAR researchers has shown that magnetoencephalography (MEG) and electroencephalography (EEG) can be used to characterize fast timescale activity in these deep brain structures.
In a recent breakthrough, A*STAR’s Pavitra Krishnaswamy, working with a research team spanning the United States, Sweden, and Finland, developed a statistical machine learning approach that resolves deep brain activity with high temporal and spatial resolution. Using simulated test cases and experimental MEG/EEG recordings from healthy volunteers, the researchers demonstrated that their approach accurately maps this deep brain activity amid concurrent activity in cortical structures.
Deep brain activity typically generates weak MEG/EEG signals that are easily drowned out by louder signals arising from cortical activity. Therefore, characterizing the deep brain sources becomes akin to ‘picking out needles in a haystack’. Rather than solely relying on how ‘loud’ the brainwaves are, the team leveraged the fact that deep brain activity generates distinct spatial patterns across multiple MEG/EEG sensors positioned over the head.
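The toy simulation below illustrates the point. It is only a sketch, using a made-up two-dimensional sensor geometry and a crude 1/r² falloff rather than any real MEG/EEG forward model: a deep source registers far more weakly at the sensor array than a superficial cortical source, yet the spatial pattern it produces across the sensors still has its own shape.

```python
"""Toy 'needles in a haystack' illustration (made-up geometry and a crude
1/r^2 falloff, not the study's forward model): a deep source is far weaker
at the sensor array than a superficial cortical source, yet the two produce
distinguishable spatial patterns."""
import numpy as np

# 64 sensors on a 10 cm ring around a simplified 2-D "head".
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
sensors = 0.10 * np.column_stack([np.cos(angles), np.sin(angles)])

def sensor_pattern(source_xy):
    """Signal each sensor sees from a point source (simplified 1/r^2 decay)."""
    d = np.linalg.norm(sensors - source_xy, axis=1)
    return 1.0 / d**2

cortical = sensor_pattern(np.array([0.08, 0.00]))  # superficial source, 8 cm out
deep = sensor_pattern(np.array([0.01, 0.01]))      # source near the center

print(f"peak amplitude ratio, cortical vs deep: {cortical.max() / deep.max():.0f}x")

# The two topographies differ in shape, not just size: the cortical pattern
# is sharply peaked over nearby sensors, while the deep pattern is spread
# almost evenly around the ring.
cos_sim = cortical @ deep / (np.linalg.norm(cortical) * np.linalg.norm(deep))
print(f"cosine similarity of the two patterns: {cos_sim:.2f}")
```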
The concept of sparsity, the observation that only limited subsets of neurons across the brain ‘fire’ in sequential and coordinated patterns, also underpins the team’s approach. If all of the cerebral cortex were active at the same instant, says Krishnaswamy, the cortical signal would “completely dwarf” that of the deep brain. However, when only a limited number of cortical regions are simultaneously active, it becomes possible to train an algorithm to ‘see through’ to the deep brain. “When just a limited portion of the cortex is active, even though it appears louder than ongoing deep brain activity, it is possible to transform the data into a space where the deeper signals also have a distinct ‘voice’,” Krishnaswamy explains.
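The team’s published method is more elaborate than this, but the short sketch below conveys the sparsity idea with a generic L1-penalized (Lasso) inverse on a hypothetical lead-field matrix; the dimensions, source indices, and regularization value are all made up for illustration. When only a few sources are active at once, depth-weighting the lead-field columns and penalizing non-sparse solutions lets a weak deep source be estimated alongside the much louder cortical ones.

```python
"""Sketch of a sparsity-promoting inverse (a generic L1/Lasso penalty, not the
team's published algorithm) on a made-up lead field: when only a few sources
are active at once, a weak 'deep' source can be recovered alongside much
louder 'cortical' ones."""
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n_sensors, n_cortical, n_deep = 60, 180, 20
n_sources = n_cortical + n_deep

# Hypothetical lead field: deep-source columns are scaled down to mimic their
# weak coupling to the sensors.
G = rng.standard_normal((n_sensors, n_sources))
G[:, n_cortical:] *= 0.05

# Ground truth: a sparse pattern of two cortical sources plus one deep source.
x_true = np.zeros(n_sources)
x_true[[10, 50]] = 1.0           # cortical
x_true[n_cortical + 10] = 1.0    # deep (index 190)
y = G @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Depth weighting: normalize lead-field columns so the sparsity penalty does
# not automatically silence the weakly coupled deep sources.
norms = np.linalg.norm(G, axis=0)
fit = Lasso(alpha=0.002, fit_intercept=False, max_iter=100_000).fit(G / norms, y)
x_hat = fit.coef_ / norms

top3 = np.argsort(np.abs(x_hat))[::-1][:3]
print("true active sources:      ", np.flatnonzero(x_true))
print("largest estimated sources:", np.sort(top3), "(indices >= 180 are deep)")
```

The depth-weighting step is the part most directly tied to the quote above: normalizing the columns is one simple way of transforming the data into a space where the weak deep signals get a comparable ‘voice’ in the fit.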