Feeling the beat: premotor and striatal interactions in musicians and nonmusicians during beat perception.
ABSTRACT: Little is known about the underlying neurobiology of rhythm and beat perception, despite its universal cultural importance. Here we used functional magnetic resonance imaging to study rhythm perception in musicians and nonmusicians. Three conditions varied in the degree to which external reinforcement versus internal generation of the beat was required. The "volume" condition strongly externally marked the beat with volume changes, the "duration" condition marked the beat with weaker accents arising from duration changes, and the "unaccented" condition required the beat to be entirely internally generated. In all conditions, beat rhythms compared with nonbeat control rhythms revealed putamen activity. The presence of a beat was also associated with greater connectivity between the putamen and the supplementary motor area (SMA), the premotor cortex (PMC), and auditory cortex. In contrast, the type of accent within the beat conditions modulated the coupling between premotor and auditory cortex, with greater modulation for musicians than nonmusicians. Importantly, the response of the putamen to beat conditions was not attributable to differences in temporal complexity between the three rhythm conditions. We propose that a cortico-subcortical network including the putamen, SMA, and PMC is engaged for the analysis of temporal sequences and prediction or generation of putative beats, especially under conditions that may require internal generation of the beat. The importance of this system for auditory-motor interaction and development of precisely timed movement is suggested here by its facilitation in musicians.
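The connectivity claim above comes down to comparing how strongly a seed region's fMRI time course covaries with target regions under beat versus nonbeat conditions. As a rough, hypothetical illustration (synthetic data, made-up variable names, not the authors' pipeline), the Python sketch below computes condition-wise correlations between a putamen seed and an SMA target:

```python
# Illustrative sketch of a condition-wise seed-to-target connectivity comparison.
# All data and variable names are synthetic/hypothetical; this is not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(0)

n_vols = 200                                    # number of fMRI volumes
beat_blocks = rng.random(n_vols) > 0.5          # hypothetical beat vs. nonbeat labels per volume

shared = rng.standard_normal(n_vols)            # signal shared by seed and target
putamen = shared + 0.5 * rng.standard_normal(n_vols)                                 # hypothetical putamen seed
sma = np.where(beat_blocks, 0.8, 0.2) * shared + 0.5 * rng.standard_normal(n_vols)   # coupling stronger during beat

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

r_beat = corr(putamen[beat_blocks], sma[beat_blocks])
r_nonbeat = corr(putamen[~beat_blocks], sma[~beat_blocks])
print(f"putamen-SMA r (beat): {r_beat:.2f}, r (nonbeat): {r_nonbeat:.2f}")
```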
Project description:The idea that musical training improves speech perception in challenging listening environments is appealing and of clinical importance, yet the mechanisms of any such musician advantage are not well specified. Here, using functional magnetic resonance imaging (fMRI), we found that musicians outperformed nonmusicians in identifying syllables at varying signal-to-noise ratios (SNRs), which was associated with stronger activation of the left inferior frontal and right auditory regions in musicians compared with nonmusicians. Moreover, musicians showed greater specificity of phoneme representations in bilateral auditory and speech motor regions (e.g., premotor cortex) at higher SNRs and in the left speech motor regions at lower SNRs, as determined by multivoxel pattern analysis. Musical training also enhanced the intrahemispheric and interhemispheric functional connectivity between auditory and speech motor regions. Our findings suggest that improved speech-in-noise perception in musicians relies on stronger recruitment of, finer phonological representations in, and stronger functional connectivity between auditory and frontal speech motor cortices in both hemispheres, regions involved in bottom-up spectrotemporal analyses and top-down articulatory prediction and sensorimotor integration, respectively.
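The phoneme-specificity result relies on multivoxel pattern analysis, i.e., asking whether a classifier can decode stimulus identity from distributed voxel patterns. A minimal sketch of that logic, using synthetic data and scikit-learn (not the study's actual features, ROIs, or classifier settings), is:

```python
# Minimal sketch of multivoxel pattern analysis (MVPA): decode phoneme identity from
# voxel patterns within an ROI at one SNR level. Data and dimensions are synthetic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_voxels = 80, 150
labels = np.repeat([0, 1], n_trials // 2)             # two hypothetical phoneme categories
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.4                     # weak category signal in a subset of voxels

# Cross-validated decoding accuracy; above-chance accuracy indicates phoneme-specific information.
acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```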
Project description:Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics-fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry. Critically, the premotor asynchrony effects predicted musicians' perceptual sensitivity to audiovisual asynchrony. Our results suggest that piano practicing fine-tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds. This internal forward model furnishes more precise estimates of the relative audiovisual timings and hence stronger prediction error signals specifically for asynchronous music in a premotor-cerebellar circuitry. Our findings show intimate links between action production and audiovisual temporal binding in perception.
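One common way to quantify a temporal integration (binding) window of the kind reported here is to fit a Gaussian to the proportion of "synchronous" judgments across audiovisual asynchronies and take its width. The sketch below illustrates that idea with invented data; it is not necessarily how the window was estimated in this study:

```python
# Illustrative temporal-binding-window estimate: fit a Gaussian to the proportion of
# "synchronous" judgments across audiovisual asynchronies and report its width.
# The judgment data below are made up.
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([-300, -200, -100, 0, 100, 200, 300])          # audio-visual asynchronies
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.75, 0.30, 0.08])    # hypothetical judgments

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gaussian, soa_ms, p_sync, p0=[1.0, 0.0, 100.0])
print(f"window sd ~ {sigma:.0f} ms (smaller sd = narrower binding window)")
```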
Project description:How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a "beat" (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception.
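A sensitivity measure of the sort described can, for illustration, be expressed as a d' comparing "speeding up" responses to sequences that actually sped up versus slowed down; a larger d' for sequences carrying a strong implied beat would indicate beat sensitivity. The counts below are invented, and the published measure may have been derived differently:

```python
# Rough sketch of one way to index sensitivity to tempo change: a d' computed from
# "speeding up" responses to speeded vs. slowed sequences. Counts are invented.
import numpy as np
from scipy.stats import norm

def dprime(hits, n_signal, false_alarms, n_noise):
    # Log-linear correction avoids infinite z-scores at proportions of 0 or 1.
    hr = (hits + 0.5) / (n_signal + 1)
    far = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hr) - norm.ppf(far)

# "Speeding up" responses to sequences that actually sped up vs. actually slowed down
d_auditory = dprime(hits=42, n_signal=48, false_alarms=8, n_noise=48)
d_visual = dprime(hits=33, n_signal=48, false_alarms=14, n_noise=48)
print(f"d' auditory: {d_auditory:.2f}, d' visual: {d_visual:.2f}")
```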
Project description:Working memory (WM) for auditory information has been thought of as a unitary system, but whether WM for verbal and tonal information relies on the same or different functional neuroarchitectures has remained unknown. This fMRI study examines verbal and tonal WM in both nonmusicians (who are trained in speech, but not in music) and highly trained musicians (who are trained in both domains). The data show that core structures of WM are involved in both tonal and verbal WM (Broca's area, premotor cortex, pre-SMA/SMA, left insular cortex, inferior parietal lobe), although with significantly different structural weightings, in both nonmusicians and musicians. Additionally, musicians activated specific subcomponents only during verbal (right insular cortex) or only during tonal WM (right globus pallidus, right caudate nucleus, and left cerebellum). These results reveal the existence of two WM systems in musicians: A phonological loop supporting rehearsal of phonological information, and a tonal loop supporting rehearsal of tonal information. Differences between groups for tonal WM, and between verbal and tonal WM within musicians, were mainly related to structures involved in controlling, programming and planning of actions, thus presumably reflecting differences in action-related sensorimotor coding of verbal and tonal information.
Project description:The perception of a regular beat is fundamental to music processing. Here we examine whether the detection of a regular beat is pre-attentive for metrically simple, acoustically varying stimuli using the mismatch negativity (MMN), an ERP response elicited by violations of acoustic regularity irrespective of whether subjects are attending to the stimuli. Both musicians and nonmusicians were presented with a varying rhythm with a clear accent structure in which occasionally a sound was omitted. We compared the MMN response to the omission of identical sounds in different metrical positions. Most importantly, we found that omissions in strong metrical positions, on the beat, elicited higher-amplitude MMN responses than omissions in weak metrical positions, not on the beat. This suggests that the detection of a beat is pre-attentive when highly beat-inducing stimuli are used. No effects of musical expertise were found. Our results suggest that for metrically simple rhythms with clear accents, beat processing does not require attention or musical expertise. In addition, we discuss how the use of acoustically varying stimuli may influence ERP results when studying beat processing.
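The core comparison described here amounts to contrasting mean difference-wave amplitude in an MMN latency window for on-beat versus off-beat omissions. A toy version with synthetic epochs and an assumed window is shown below:

```python
# Sketch of the comparison described above: mean ERP amplitude in an MMN window for
# omissions on vs. off the beat, compared across subjects with a paired t-test.
# Epochs and the latency window are synthetic/illustrative.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

n_subjects, n_times = 20, 300
times = np.linspace(-0.1, 0.5, n_times)                 # seconds relative to omission onset
mmn_window = (times >= 0.10) & (times <= 0.20)          # illustrative MMN latency window

# Hypothetical subject-average difference waves (omission minus standard), in microvolts
erp_on_beat = rng.standard_normal((n_subjects, n_times)) * 0.5
erp_off_beat = rng.standard_normal((n_subjects, n_times)) * 0.5
erp_on_beat[:, mmn_window] -= 2.0                       # larger (more negative) MMN on the beat
erp_off_beat[:, mmn_window] -= 1.0

amp_on = erp_on_beat[:, mmn_window].mean(axis=1)
amp_off = erp_off_beat[:, mmn_window].mean(axis=1)
t, p = ttest_rel(amp_on, amp_off)
print(f"on-beat vs off-beat MMN amplitude: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```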
Project description:Listening to music can induce us to tune in to its beat. Previous neuroimaging studies have shown that the motor system becomes involved in perceptual rhythm and timing tasks in general, as well as during preference-related responses to music. However, the role of preferred rhythm and, in particular, of preferred beat frequency (tempo) in driving activity in the motor system remains unknown. The goals of the present functional magnetic resonance imaging (fMRI) study were to determine whether the musical rhythms that are subjectively judged as beautiful boost activity in motor-related areas and, if so, whether this effect is driven by preferred tempo, the underlying pulse people tune in to. On the basis of the subjects' judgments, individual preferences were determined for the different systematically varied constituents of the musical rhythms. Results demonstrate the involvement of premotor and cerebellar areas during preferred compared to non-preferred musical rhythms and indicate that activity in the ventral premotor cortex (PMv) is enhanced by preferred tempo. Our findings support the assumption that the premotor activity increase during preferred tempo is the result of enhanced sensorimotor simulation of the beat frequency. This may serve as a mechanism that facilitates tuning in to the beat of appealing music.
Project description:The rhythmic nature of speech may recruit entrainment mechanisms in a manner similar to music. In the current study, we tested the hypothesis that individuals who display a severe deficit in synchronizing their taps to a musical beat (called beat-deaf here) would also experience difficulties entraining to speech. The beat-deaf participants and their matched controls were required to align taps with the perceived regularity in the rhythm of naturally spoken, regularly spoken, and sung sentences. The results showed that beat-deaf individuals synchronized their taps less accurately than the control group across conditions. In addition, participants from both groups exhibited more inter-tap variability for natural speech than for regularly spoken and sung sentences. The findings support the idea that acoustic periodicity is a major factor in domain-general entrainment to both music and speech. Therefore, a beat-finding deficit may affect periodic auditory rhythms in general, not just those for music.
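The two behavioral measures referred to above, synchronization accuracy and inter-tap variability, can be illustrated as follows with invented tap and beat times (the study's exact scoring may differ):

```python
# Simple sketch of two tapping measures: asynchrony between each tap and its nearest
# beat (synchronization accuracy) and variability of inter-tap intervals.
# Tap and beat times are invented.
import numpy as np

beat_times = np.arange(0.0, 10.0, 0.6)                          # hypothetical beats every 600 ms
tap_times = beat_times + np.random.default_rng(3).normal(0.02, 0.05, beat_times.size)

# Asynchrony of each tap relative to its nearest beat
asynchronies = np.array([tap - beat_times[np.argmin(np.abs(beat_times - tap))]
                         for tap in tap_times])
mean_abs_async = np.abs(asynchronies).mean()

# Inter-tap-interval variability (coefficient of variation)
itis = np.diff(np.sort(tap_times))
iti_cv = itis.std() / itis.mean()

print(f"mean |asynchrony|: {mean_abs_async * 1000:.0f} ms, ITI CV: {iti_cv:.3f}")
```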
Project description:Absolute pitch (AP) is the ability to identify an auditory pitch without prior context. Current theories posit AP involves automatic retrieval of referents. We tested interference in well-matched AP musicians, non-AP musicians, and nonmusicians with three auditory Stroop tasks. Stimuli were one of two sung pitches with congruent or incongruent verbal cues. The tasks used different lexicons: binary concrete adjectives (i.e., words: Low/High), syllables with no obvious semantic properties (i.e., solmization: Do/So), and abstract semiotic labels (i.e., orthographic: C/G). Participants were instructed to respond to pitch regardless of verbal information during electroencephalographic recording. Incongruent stimuli in the word and solmization tasks increased errors and slowed response times (RTs), an effect that was reversed in nonmusicians for the orthographic task. AP musicians made virtually no errors, but their RTs slowed for incongruent stimuli. Frontal theta (4–7 Hz) event-related synchrony was significantly enhanced during incongruence between 350 and 550 ms poststimulus onset in AP musicians, regardless of lexicon or behavior. This effect was found in non-AP musicians and nonmusicians for the word task, while the orthographic task showed a reverse theta congruency effect. Findings suggest theta synchrony indexes conflict detection in AP. High beta (21–29 Hz) desynchrony indexes response conflict detection in non-AP musicians. Alpha (8–12 Hz) synchrony may reflect top-down attention.
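Theta event-related synchrony of the kind reported can be approximated by band-pass filtering in the theta range, taking the Hilbert envelope, and expressing post-stimulus power in the 350-550 ms window relative to baseline. The sketch below does this on a synthetic signal; it is not the study's actual time-frequency pipeline:

```python
# Sketch of theta-band (4-7 Hz) event-related power in the 350-550 ms window, one way to
# index the kind of synchrony effect described above. The EEG signal is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                               # Hz, hypothetical sampling rate
times = np.arange(-0.2, 1.0, 1 / fs)
rng = np.random.default_rng(4)
eeg = rng.standard_normal(times.size)
eeg += np.where((times > 0.35) & (times < 0.55),       # inject a post-stimulus theta burst
                1.5 * np.sin(2 * np.pi * 5.5 * times), 0.0)

b, a = butter(4, [4, 7], btype="bandpass", fs=fs)      # theta band-pass filter
theta_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

window = (times >= 0.35) & (times <= 0.55)
baseline = times < 0.0
ers = 10 * np.log10(theta_power[window].mean() / theta_power[baseline].mean())
print(f"theta event-related synchrony (350-550 ms): {ers:.1f} dB vs baseline")
```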
Project description:Behavioral studies suggest that preference for a beat rate (tempo) in auditory sequences is tightly linked to the motor system. However, from a neuroscientific perspective the contribution of motor-related brain regions to tempo preference in the auditory domain remains unclear. A recent fMRI study (Kornysheva et al.: Hum Brain Mapp 31:48-64) revealed that the activity increase in the left ventral premotor cortex (PMv) is associated with the preference for the tempo of a musical rhythm. The activity increase correlated with how strongly the subjects preferred a tempo. Despite this evidence, it remains uncertain whether interference with activity in the left PMv affects tempo preference strength. Consequently, we conducted an offline repetitive transcranial magnetic stimulation (rTMS) study, in which the cortical excitability in the left PMv was temporarily reduced. As hypothesized, 0.9 Hz rTMS over the left PMv temporarily affected individual tempo preference strength, depending on the individual strength of tempo preference in the control session. Moreover, PMv stimulation temporarily interfered with the stability of individual tempo preference strength within and across sessions. These effects were specific to the preference for tempo, in contrast to the preference for timbre, were confined to the first half of the experiment following PMv stimulation, and could not be explained by an impairment of tempo recognition. Our results corroborate preceding fMRI findings and suggest that activity in the left PMv is part of a network that affects the strength of beat rate preference.
Project description:Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone-language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers.
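Frequency-discrimination performance of the kind compared here is often measured with an adaptive staircase. The toy simulation below runs a 2-down/1-up staircase against a hypothetical listener; the actual task design and threshold estimation used in this work are not specified in the description:

```python
# Toy simulation of a 2-down/1-up adaptive staircase, a common way to estimate a
# frequency-discrimination threshold. The listener model and all parameters are invented.
import numpy as np

rng = np.random.default_rng(5)

def listener_correct(delta_hz, true_jnd=4.0):
    # Hypothetical listener: probability correct rises with the frequency difference.
    p = 0.5 + 0.5 / (1 + np.exp(-(delta_hz - true_jnd)))
    return rng.random() < p

delta, step, correct_in_row, reversals = 20.0, 2.0, 0, []
last_direction = None
while len(reversals) < 8:
    if listener_correct(delta):
        correct_in_row += 1
        if correct_in_row == 2:                  # 2 correct in a row -> make the task harder
            correct_in_row, direction = 0, "down"
            if last_direction == "up":
                reversals.append(delta)
            delta, last_direction = max(delta - step, 0.5), direction
    else:                                        # 1 error -> make the task easier
        correct_in_row, direction = 0, "up"
        if last_direction == "down":
            reversals.append(delta)
        delta, last_direction = delta + step, direction

print(f"estimated threshold ~ {np.mean(reversals):.1f} Hz (mean of reversal points)")
```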