Project description:The neural encoding of sensory stimuli is subject to the brain's internal circuit dynamics. Recent work has demonstrated that the resting brain exhibits widespread, coordinated activity that plays out over multisecond timescales in the form of quasi-periodic spiking cascades. Here we demonstrate that these intrinsic dynamics persist during the presentation of visual stimuli and markedly influence the efficacy of feature encoding in the visual cortex. During periods of passive viewing, the sensory encoding of visual stimuli was determined by a quasi-periodic cascade cycle evolving over several seconds. Within this cycle, high-efficiency encoding occurred during peak arousal states, alternating in time with hippocampal ripples, which were most frequent in low arousal states. During bouts of active locomotion, however, these arousal dynamics were abolished: the brain settled into a state in which visual coding efficiency stayed high and ripples were absent. We hypothesize that the brain's observed dynamics during awake, passive viewing reflect an adaptive cycle of alternating exteroceptive sensory sampling and internal mnemonic function.
Project description:Suppressing responses to distractor stimuli is a fundamental cognitive function, essential for performing goal-directed tasks. A common framework for the neuronal implementation of distractor suppression is the attenuation of distractor stimuli from early sensory to higher-order processing. However, the localization and mechanisms of this attenuation are poorly understood. We trained mice to selectively respond to target stimuli in one whisker field and ignore distractor stimuli in the opposite whisker field. During expert task performance, optogenetic inhibition of whisker motor cortex (wMC) increased the overall tendency to respond and the detection of distractor whisker stimuli. Within sensory cortex, optogenetic inhibition of wMC enhanced the propagation of distractor stimuli into target-preferring neurons. Single unit analyses revealed that wMC decorrelates target and distractor stimulus encoding in target-preferring primary somatosensory cortex (S1) neurons, which likely improves selective target stimulus detection by downstream readers. Moreover, we observed proactive top-down modulation from wMC to S1, through the differential activation of putative excitatory and inhibitory neurons before stimulus onset. Overall, our studies support a contribution of motor cortex to sensory selection, suppressing behavioral responses to distractor stimuli by gating distractor stimulus propagation within sensory cortex.
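A minimal sketch of how the decorrelation described above might be quantified, in Python with synthetic data; the neuron count, rates and response vectors below are placeholders, not the study's data or pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 120
    # Hypothetical trial-averaged evoked responses per S1 neuron (spikes/s),
    # e.g. compared between control and wMC-inhibition conditions.
    target_resp = rng.normal(5.0, 2.0, n_neurons)
    distractor_resp = 0.3 * target_resp + rng.normal(2.0, 2.0, n_neurons)

    def encoding_similarity(a, b):
        # Pearson correlation of population response vectors; values near 0
        # indicate decorrelated (well-separated) stimulus representations.
        return np.corrcoef(a, b)[0, 1]

    print(f"target/distractor encoding similarity: "
          f"{encoding_similarity(target_resp, distractor_resp):.2f}")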
Project description:The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to diverse visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating they are more like big retinas than little brains.
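As an illustration only (the study's manifold embedding technique is bespoke and not reproduced here), a stand-in analysis using t-SNE plus a silhouette score conveys the discrete-versus-continuous question: high silhouette values suggest discrete functional groups (retina-like), low values a continuum (V1-like). All data below are synthetic.

    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(1)
    # Rows: neurons; columns: responses to a battery of stimuli (synthetic).
    responses = np.vstack([rng.normal(mu, 1.0, (50, 20)) for mu in (0, 3, 6)])

    # Embed the neuron-by-stimulus response matrix into two dimensions.
    embedding = TSNE(n_components=2, perplexity=30,
                     random_state=1).fit_transform(responses)
    labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(embedding)
    print(f"silhouette: {silhouette_score(embedding, labels):.2f}")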
Project description:Visual prostheses are implantable medical devices that can provide some degree of vision to individuals who are blind. This challenging research field, spanning ophthalmology and basic science, has progressed to the point where several devices are already commercially available. At present, however, these devices restore only very limited vision, with relatively low spatial resolution. Furthermore, many open scientific and technical challenges must still be solved to achieve the therapeutic benefits envisioned by these new technologies. This paper provides a brief overview of significant developments in this field and introduces some of the technical and biological challenges that still need to be overcome to optimize therapeutic success, including the long-term viability and biocompatibility of stimulating electrodes, the selection of appropriate patients for each artificial vision approach, a better understanding of brain plasticity, and the development of rehabilitative strategies tailored to each patient.
Project description:Time courses of neural responses underlie real-time sensory processing and perception. How these temporal dynamics change may be fundamental to how sensory systems adapt to different perceptual demands. By simultaneously recording from hundreds of neurons in mouse primary visual cortex, we examined neural population responses to visual stimuli at sub-second timescales during different behavioural states. We discovered that during active behavioural states characterised by locomotion, single neurons shift from transient to sustained response modes, facilitating the rapid emergence of visual stimulus tuning. Differences in single-neuron response dynamics were associated with changes in the temporal dynamics of neural correlations, including faster stabilisation of stimulus-evoked changes in the structure of correlations during locomotion. Using Factor Analysis, we examined the temporal dynamics of latent population responses and discovered that trajectories of population activity make more direct transitions between baseline and stimulus-encoding neural states during locomotion. This could be partly explained by dampening of the oscillatory dynamics present during stationary behavioural states. Functionally, these changes in temporal response dynamics collectively enabled faster, more stable and more efficient encoding of new visual information during locomotion. These findings reveal a principle of how sensory systems adapt to perceptual demands, whereby flexible neural population dynamics govern the speed and stability of sensory encoding.
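A hedged sketch of this style of latent-trajectory analysis, using scikit-learn's FactorAnalysis on synthetic spike counts; the shapes, firing rates and artificial "stimulus onset" are invented for illustration, not the study's recordings:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)
    n_trials, n_bins, n_neurons = 40, 60, 200
    counts = rng.poisson(2.0, (n_trials, n_bins, n_neurons)).astype(float)
    counts[:, 20:, :50] += 3.0  # crude stimulus onset in a subpopulation

    # Project binned counts into a low-dimensional latent space.
    fa = FactorAnalysis(n_components=5)
    latent = fa.fit_transform(counts.reshape(-1, n_neurons))
    trajectories = latent.reshape(n_trials, n_bins, 5)

    # Trial-averaged latent trajectory; how directly it moves from baseline
    # to the stimulus-encoding state could then be quantified, e.g. as path
    # length versus net displacement.
    mean_traj = trajectories.mean(axis=0)
    print(mean_traj.shape)  # (60, 5)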
Project description:Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context have investigated how sounds change response amplitude and reset oscillatory phase in visual cortex. However, recent studies in macaque monkeys show that the congruence of audio-visual stimuli also modulates the amount of stimulus information carried by the spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2, with a similar trend for V3. This effect was associated with reduced inter-trial reliability of the patterns (i.e. higher levels of noise) but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.
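A minimal sketch of the kind of cross-validated pattern decoding described above, on synthetic voxel patterns; in the study this would be run per visual area and per sound condition (absent, congruent, incongruent) and the accuracies compared. The classifier choice and all sizes here are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_voxels, n_stimuli = 80, 300, 4
    labels = np.tile(np.arange(n_stimuli), n_trials // n_stimuli)
    # Voxel patterns: noise plus a weak stimulus-dependent signal.
    patterns = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + labels[:, None] * 0.1

    acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
    print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_stimuli:.2f})")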
Project description:A fundamental challenge in neuroengineering is determining an artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it plays a crucial role in prosthetic devices that restore sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is downsampling images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance than learning-free downsampling methods. We validated a learning-based approach (actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images eliciting neuronal responses in silico and ex vivo with higher neuronal reliability than those produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel weights. This methodological approach might guide future artificial image encoding strategies for visual prostheses. Ultimately, this framework could also be applicable to encoding strategies in other sensory prostheses, such as cochlear implants or limb prostheses.
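A conceptual sketch of an actor-model training loop in PyTorch. The toy frozen "retina model" below merely stands in for the network fitted to ex-vivo recordings, and every architectural choice (kernel size, layer widths, the use of MSE against full-resolution responses) is an assumption for illustration, not the published implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Actor(nn.Module):
        # Learnable downsampler: one strided convolution plus a contrast gain.
        def __init__(self, factor=4):
            super().__init__()
            self.conv = nn.Conv2d(1, 1, kernel_size=factor, stride=factor)
            self.contrast = nn.Parameter(torch.ones(1))

        def forward(self, x):
            return self.contrast * self.conv(x)

    # Frozen stand-in for the model mapping images to RGC responses.
    retina_model = nn.Sequential(
        nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 16), nn.Softplus())
    for p in retina_model.parameters():
        p.requires_grad_(False)

    actor = Actor(factor=4)
    opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    for step in range(100):
        imgs = torch.rand(16, 1, 64, 64)  # placeholder image batch
        target = retina_model(imgs)       # responses to full-resolution input
        loss = F.mse_loss(retina_model(actor(imgs)), target)
        opt.zero_grad()
        loss.backward()
        opt.step()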
Project description:Visual processing depends on sensitive and balanced synaptic neurotransmission. Extracellular matrix proteins in the cellular environment are key modulators of synaptogenesis and synaptic plasticity. In the present study, we provide evidence that the combined loss of the four extracellular matrix components brevican, neurocan, tenascin-C and tenascin-R in quadruple knockout mice leads to severe retinal dysfunction and diminished visual motion processing in vivo. Remarkably, impaired visual motion processing was accompanied by a developmental loss of cholinergic direction-selective starburst amacrine cells. Additionally, we noted an imbalance of inhibitory and excitatory synaptic signaling in the quadruple knockout retina. Collectively, the study offers novel insights into the functional importance of four key extracellular matrix proteins for retinal function, visual motion processing and synaptic signaling.
Project description:A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age-matched normal-hearing (NH) controls. While we recorded the high-density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task in which sets of letters and numbers were presented visually and recalled at a later time. Results suggested that CI users' behavioural working memory performance was comparable to that of NH controls. However, CI users showed more pronounced neural activity during visual stimulus encoding, including stronger visual-evoked activity in auditory and visual cortices, larger modulations of neural oscillations and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret these differences in the neural correlates of visual stimulus processing in CI users through the lens of cross-modal and intramodal plasticity.
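As an illustrative sketch only (synthetic signals; the channel names, band limits and parameters are assumptions, not the study's pipeline), frontotemporal connectivity of the kind reported above is often summarized as spectral coherence between channel pairs:

    import numpy as np
    from scipy.signal import coherence

    fs = 250.0                      # assumed EEG sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(4)
    shared = np.sin(2 * np.pi * 10 * t)          # shared 10 Hz rhythm
    frontal = shared + rng.normal(0, 1, t.size)
    temporal = 0.8 * shared + rng.normal(0, 1, t.size)

    # Magnitude-squared coherence, averaged over the alpha band (8-12 Hz).
    f, coh = coherence(frontal, temporal, fs=fs, nperseg=512)
    alpha = (f >= 8) & (f <= 12)
    print(f"alpha-band coherence: {coh[alpha].mean():.2f}")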