Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.
ABSTRACT: The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail party"-like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibitory control, and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking (MOT) task, in which observers were required to track target dots (n = 1–5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory than non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups; however, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task.
Overall, these findings confirm a relationship between musicianship and cognitive factors, including domain-general selective attention and working memory, in solving the "cocktail party problem".
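At its core, the stepwise regression reported above is a multiple linear regression with two predictors. The sketch below shows only that core on invented, noise-free data: the variable names, sample values, and generating coefficients are assumptions for illustration, not the study's data or analysis code, and a full stepwise procedure would additionally select predictors by incremental fit.

```python
# Illustrative only: a two-predictor ordinary least squares fit of the kind
# underlying the abstract's stepwise regression. All data below are invented.

def solve_3x3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def ols_two_predictors(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 via the normal equations (X^T X) b = X^T y."""
    n = len(y)
    s12 = sum(a * b for a, b in zip(x1, x2))
    XtX = [[n,       sum(x1),                sum(x2)],
           [sum(x1), sum(a * a for a in x1), s12],
           [sum(x2), s12,                    sum(b * b for b in x2)]]
    Xty = [sum(y),
           sum(a * yi for a, yi in zip(x1, y)),
           sum(b * yi for b, yi in zip(x2, y))]
    return solve_3x3(XtX, Xty)

# Hypothetical predictors: musician status (0/1) and MOT tracking accuracy.
musician = [0, 0, 0, 1, 1, 1, 0, 1]
mot      = [0.5, 0.6, 0.7, 0.6, 0.8, 0.9, 0.8, 0.7]
# Noise-free scores generated from b0=2, b1=3, b2=4, so the fit recovers them.
score    = [2.0 + 3.0 * m + 4.0 * a for m, a in zip(musician, mot)]

b0, b1, b2 = ols_two_predictors(musician, mot, score)
```

With noise-free data the fit recovers the generating coefficients exactly; in practice one would also report standard errors and each predictor's incremental R², which is what a stepwise procedure evaluates when deciding which predictors to retain.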
Project description: Studies suggest that long-term music experience enhances the brain's ability to segregate speech from noise. However, evidence for musicians' "speech-in-noise (SIN) benefit" is based largely on performance in simple figure-ground tasks rather than in competitive, multi-talker scenarios that offer realistic spatial cues for segregation and engage binaural processing. We aimed to investigate whether musicians show perceptual advantages in cocktail party speech segregation in a competitive, multi-talker environment. We used the coordinate response measure (CRM) paradigm to measure speech recognition and localization performance in musicians vs. non-musicians in a simulated 3D cocktail party environment in an anechoic chamber. Speech was delivered through a 16-channel speaker array distributed around the horizontal soundfield surrounding the listener. Participants recalled the color, number, and perceived location of target callsign sentences. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (0, 1, 2, 3, 4, 6, or 8 competing talkers). Musicians obtained faster and more accurate speech recognition amid up to eight simultaneous talkers and showed less decline in performance with increasing numbers of interfering talkers than their non-musician peers. Correlations linked listeners' years of musical training to CRM recognition and working memory, and better working memory was associated with better speech streaming. Basic (QuickSIN) but not more complex (speech streaming) SIN processing was still predicted by music training after controlling for working memory. Our findings confirm a relationship between musicianship and naturalistic cocktail party speech streaming but also suggest that cognitive factors at least partially drive musicians' SIN advantage.
Project description: Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.
Project description: Because musicians are trained to discern sounds within complex acoustic scenes, such as an orchestra playing, it has been hypothesized that musicianship improves general auditory scene analysis abilities. Here, we compared musicians and non-musicians in a behavioural paradigm using ambiguous stimuli, combining performance, reaction times and confidence measures. We used 'Shepard tones', for which listeners may report either an upward or a downward pitch shift for the same ambiguous tone pair. Musicians and non-musicians performed similarly on the pitch-shift direction task. In particular, both groups were at chance for the ambiguous case. However, groups differed in their reaction times and judgements of confidence. Musicians responded to the ambiguous case with long reaction times and low confidence, whereas non-musicians responded with fast reaction times and maximal confidence. In a subsequent experiment, non-musicians displayed reduced confidence for the ambiguous case when pure-tone components of the Shepard complex were made easier to discern. The results suggest an effect of musical training on scene analysis: we speculate that musicians were more likely to discern components within complex auditory scenes, perhaps because of enhanced attentional resolution, and thus discovered the ambiguity. For untrained listeners, stimulus ambiguity was not available to perceptual awareness. This article is part of the themed issue 'Auditory and visual scene analysis'.
Project description: In the past 20 years, there has been growing research interest in the association between video games and cognition. Although many studies have found that video game players are better than non-players in multiple cognitive domains, other studies failed to replicate these results. Until now, the vast majority of studies defined video game players based on the number of hours an individual spent playing video games, with relatively few studies focusing on video game expertise using performance criteria. In the current study, we sought to examine whether individuals who play video games at a professional level in the esports industry differ from amateur video game players in their cognitive and learning abilities. We assessed 14 video game players who play in a competitive league (Professional) and 16 casual video game players (Amateur) on a set of standard neuropsychological tests evaluating processing speed, attention, memory, executive functions, and manual dexterity. We also examined participants' ability to improve performance on a dynamic visual attention task that required tracking multiple objects in three dimensions (3D-MOT) over five sessions. Professional players showed the largest performance advantage relative to Amateur players in a test of visual spatial memory (Spatial Span), with more modest benefits in a test of selective and sustained attention (d2 Test of Attention) and a test of auditory working memory (Digit Span). Professional players also showed better speed thresholds in the 3D-MOT task overall, but the rate of improvement with training did not differ between the two groups. Future longitudinal studies of elite video game experts are required to determine whether the observed performance benefits of professional gamers may be due to their greater engagement in video game play, or due to pre-existing differences that promote achievement of high performance in action video games.
Project description: Early auditory deprivation may drive the auditory cortex into cross-modal processing of non-auditory sensory information. In a recent study, we showed that early deaf subjects exhibited increased activation in the superior temporal gyrus (STG) bilaterally during visual spatial working memory; however, the changes in the organization of the STG-related spontaneous functional network, and their cognitive relevance, are still unknown. To clarify this issue, we applied resting state functional magnetic resonance imaging in 42 early-deaf (ED) subjects and 40 hearing controls (HC). We also acquired visual spatial and numerical n-back working memory (WM) measures in these subjects. Compared with hearing subjects, the ED group exhibited faster reaction times in visual WM tasks in both the spatial and numerical domains. Furthermore, ED subjects exhibited significantly increased functional connectivity between the STG (especially of the right hemisphere) and the bilateral anterior insula and dorsal anterior cingulate cortex. Finally, the functional connectivity of the STG could predict visual spatial WM performance, even after controlling for numerical WM performance. Our findings suggest that early auditory deprivation can strengthen the spontaneous functional connectivity of the STG, which may contribute to the cross-modal involvement of this region in visual working memory.
Project description: This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Addition Test (PASAT) were also administered to measure speech-in-noise ability, working memory, and attentional capacities of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.
Project description: Individual alpha peak frequency (IAPF), the discrete frequency with the highest power value in the alpha oscillation range of the electroencephalogram, is a stable neurophysiological marker and is closely associated with various cognitive functions, including aspects of attention and working memory. However, the relationship between IAPF and attentional performance, as well as the effects of engaging attention on IAPF, are unknown. Here, we examined whether IAPF values were associated with attentional performance by evaluating accuracy during the performance of a multiple object tracking (MOT) task, a well-established paradigm for investigating goal-driven attention in dynamic environments, and whether engagement in the task affected IAPF values. In total, 18 elite players and 20 intermediate players completed the study. Resting electroencephalogram recordings were obtained for 120 s while players kept their eyes open and an additional 120 s while players' eyes were closed, before and again after performing the MOT task. Tracking accuracy in the MOT task and IAPF values before and after the MOT task were analyzed. As expected, tracking accuracies were higher in elite players than in intermediate-level players. Baseline IAPF values were significantly and positively correlated with the accuracy of target tracking in the MOT task. IAPF values were higher in elite than intermediate players in both the eyes-open and eyes-closed conditions, and both before and after MOT task performance. Interindividual IAPF values did not differ before and after the MOT task. These findings indicate that IAPF is a stable marker, without intraindividual changes associated with engagement in the MOT task. Elite players had higher IAPF values and exhibited more accurate MOT performance than intermediate-level players; thus, baseline IAPF values may be useful to predict attentional performance in the MOT task among athletes.
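The IAPF definition used above (the discrete frequency with the highest power in the alpha range) can be illustrated computationally. The sketch below builds a naive DFT periodogram of a synthetic two-second signal and picks the alpha-band peak; the sampling rate, band edges (8-13 Hz), and the pure 10 Hz "alpha" sinusoid are assumptions for illustration, not the study's recording parameters or analysis pipeline.

```python
# Illustrative only: reading an individual alpha peak frequency (IAPF) off a
# power spectrum. The "EEG" below is a synthetic 10 Hz sinusoid, not real data.
import math

def power_spectrum(signal, fs):
    """Naive DFT periodogram: returns (frequency, power) pairs up to Nyquist."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, (re * re + im * im) / n))
    return spectrum

def iapf(signal, fs, band=(8.0, 13.0)):
    """Discrete frequency with the highest power inside the alpha band."""
    in_band = [(f, p) for f, p in power_spectrum(signal, fs)
               if band[0] <= f <= band[1]]
    return max(in_band, key=lambda fp: fp[1])[0]

fs = 128                                 # assumed sampling rate (Hz)
t = [i / fs for i in range(fs * 2)]      # 2 s of data -> 0.5 Hz resolution
eeg = [math.sin(2 * math.pi * 10.0 * ti) for ti in t]  # pure 10 Hz "alpha"
peak = iapf(eeg, fs)                     # -> 10.0
```

For real EEG one would typically estimate the spectrum with Welch's method and interpolate around the peak rather than take a raw DFT bin; this fixed-bin version is just the definitional computation.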
Project description: Age-related hearing loss has been associated with increased recruitment of frontal brain areas during speech perception to compensate for the decline in auditory input. This additional recruitment may bind resources otherwise needed for understanding speech. However, it is unknown how increased demands on listening interact with increasing cognitive demands when processing speech in age-related hearing loss. The current study used a full-sentence working memory task manipulating demands on working memory and listening, and studied participants with untreated mild-to-moderate hearing loss (n = 20) and normal-hearing age-matched participants (n = 19) with functional MRI. On the behavioral level, we found a significant interaction of memory load and listening condition; this was, however, similar for both groups. Under low, but not high, memory load, listening condition significantly influenced task performance. Similarly, under easy but not difficult listening conditions, memory load had a significant effect on task performance. On the neural level, as measured by the BOLD response, we found increased responses under high compared to low memory load conditions in the left supramarginal gyrus, left middle frontal gyrus, and left supplementary motor cortex, regardless of hearing ability. Furthermore, we found increased responses in the bilateral superior temporal gyri under easy compared to difficult listening conditions. We found no group differences nor interactions of group with memory load or listening condition. This suggests that memory load and listening condition interacted on a behavioral level; however, only the increased memory load was reflected in increased BOLD responses in frontal and parietal brain regions. Hence, when evaluating listening abilities in elderly participants, memory load should be considered, as it might interfere with the assessed performance.
We could not find any further evidence that BOLD responses for the different memory and listening conditions are affected by mild to moderate age-related hearing loss.
Project description: This study compared 30 older musicians and 30 age-matched non-musicians to investigate the association between lifelong musical instrument training and age-related cognitive decline and brain atrophy (musicians: mean age 70.8 years, musical experience 52.7 years; non-musicians: mean age 71.4 years, no or less than 3 years of musical experience). Although previous research has demonstrated that young musicians have larger gray matter volume (GMV) in the auditory-motor cortices and cerebellum than non-musicians, little is known about older musicians. Music imagery in young musicians is also known to share a neural underpinning [the supramarginal gyrus (SMG) and cerebellum] with music performance. Thus, we hypothesized that older musicians would show superiority to non-musicians in some of the abovementioned brain regions. Behavioral performance, GMV, and brain activity, including functional connectivity (FC) during melodic working memory (MWM) tasks, were evaluated in both groups. Behaviorally, musicians exhibited a much higher tapping speed than non-musicians, and tapping speed was correlated with executive function in musicians. Structural analyses revealed larger GMVs in both sides of the cerebellum of musicians, and importantly, this was maintained until very old age. Task-related FC analyses revealed that musicians possessed greater cerebellar-hippocampal FC, which was correlated with tapping speed. Furthermore, musicians showed higher activation in the SMG during MWM tasks; this was correlated with earlier commencement of instrumental training. These results indicate advantages or heightened coupling in brain regions associated with music performance and imagery in musicians. We suggest that lifelong instrumental training highly predicts the structural maintenance of the cerebellum and related cognitive maintenance in old age.
Project description: In light of the high prevalence of hearing loss and cognitive impairment in the aging population, it is important to know how cognitive tests should be administered for older adults with hearing loss. The purpose of the present study is to examine this question with a cognitive screening test and a working memory test. Specifically, we asked the following questions in two experiments. First, does a controlled amplification method affect cognitive test scores? Second, does test modality (visual vs. auditory) impact cognitive test scores? Three test administration conditions were created for both the Montreal Cognitive Assessment (MoCA) and the working memory test (a word recognition and recall test): auditory amplified, auditory unamplified, and visual. The auditory administration was implemented through a computer program to control for presentation level, while the visual condition was implemented through timed computer slides. Data were collected from older individuals with mild-to-moderate sensorineural hearing loss. We did not find any effect of amplification or test modality on the total score of the cognitive screening test (i.e., MoCA). Amplification improved working memory performance as measured by word recall performance, but test modality (auditory vs. visual) did not. These results are consistent with literature in demonstrating a downstream effect of audibility on working memory performance. From a clinical perspective, these findings are informative for developing clinical administration protocols of these tests for older individuals with hearing loss.