Dataset Information

Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays.


ABSTRACT: The talking face affords multiple types of information. To isolate cortical sites responsible for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0 T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to nonspeech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study.

SUBMITTER: Bernstein LE 

PROVIDER: S-EPMC3120928 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

S-EPMC2929818 | biostudies-other
S-EPMC3415388 | biostudies-other
S-EPMC2896896 | biostudies-literature
S-EPMC3866656 | biostudies-literature
S-EPMC6022678 | biostudies-literature
S-EPMC6729686 | biostudies-literature
S-EPMC5816906 | biostudies-literature