Project description:We empirically investigated the effect of mental imagery on young children's music compositional creativity. Children aged 5 to 8 years participated in two music composition sessions. In the control session, participants based their composition on a motif that they had created using a sequence of letter names. In the mental imagery session, participants were given a picture of an animal and instructed to imagine the animal's sounds and movements before incorporating what they had imagined into their composition. Six expert judges independently rated all compositions for creativity using subjective criteria (consensual assessment). Reliability analyses indicated a high level of agreement among the expert judges. The mental imagery compositions received significantly higher creativity ratings from the expert judges than did the control compositions. These results provide evidence that mental imagery is effective in enhancing young children's music compositional creativity.
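The judge-agreement analysis above can be illustrated with a short sketch. A common consistency index for a compositions-by-judges rating matrix is Cronbach's alpha, treating judges as items; the description does not specify which reliability statistic was used, so the index, the ratings, and the scale below are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Inter-judge consistency for a compositions x judges rating matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of judges
    item_var = ratings.var(axis=0, ddof=1)        # variance of each judge's ratings
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of per-composition totals
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Hypothetical data: 10 compositions rated by 6 judges on a 1-7 scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(10, 1))                       # shared quality signal
ratings = np.clip(base + rng.integers(-1, 2, size=(10, 6)), 1, 7)
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```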
Project description:Traditional AI-planning methods for task planning in robotics require a symbolically encoded domain description. While powerful in well-defined scenarios, and human-interpretable, setting this up requires substantial effort. By contrast, humans solve most everyday planning tasks intuitively, using mental imagery of the different planning steps. Here, we suggest that the same approach can be used for robots, too, in cases that require only limited execution accuracy. In the current study, we propose a novel sub-symbolic method called Simulated Mental Imagery for Planning (SiMIP), which consists of perception, simulated action, success checking, and re-planning performed on 'imagined' images. We show that mental imagery-based planning can be implemented in an algorithmically sound way by combining regular convolutional neural networks with generative adversarial networks. With this method, the robot acquires the capability to generate action plans from the initially perceived scene without symbolic domain descriptions, while the plans remain human-interpretable, unlike those of deep reinforcement learning, an alternative sub-symbolic approach. We create a dataset from real scenes for a packing problem in which different objects must be placed correctly into different target slots, allowing the efficiency and success rate of the algorithm to be quantified.
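A minimal sketch of the perceive/imagine/check/re-plan loop named above may help make the pipeline concrete. The stand-in functions below replace the CNN and GAN components with toy logic on a symbolic toy scene; every name and data structure here is a hypothetical illustration, not the SiMIP implementation.

```python
# Sketch of a SiMIP-style loop: perception, simulated action, success check,
# and a re-planning hook, all operating on an 'imagined' next scene.

def perceive(scene):
    """Stand-in for the CNN perception step: list the free target slots."""
    return [i for i, slot in enumerate(scene["slots"]) if slot is None]

def imagine(scene, obj, slot):
    """Stand-in for the GAN: 'imagine' the scene after placing obj in slot."""
    imagined = {"slots": list(scene["slots"])}
    imagined["slots"][slot] = obj
    return imagined

def check_success(obj, slot, fits):
    """Stand-in for the CNN success check on the imagined image."""
    return fits.get((obj, slot), False)

def simip_plan(scene, objects, fits):
    plan = []
    for obj in objects:
        placed = False
        for slot in perceive(scene):                 # perception
            imagined = imagine(scene, obj, slot)     # simulated action
            if check_success(obj, slot, fits):       # success check
                scene, placed = imagined, True
                plan.append((obj, slot))
                break
        if not placed:                               # re-planning hook
            return None
    return plan

# Toy packing problem: two objects, two slots, one valid assignment each.
scene = {"slots": [None, None]}
fits = {("cube", 0): True, ("ball", 1): True}
print(simip_plan(scene, ["cube", "ball"], fits))     # [('cube', 0), ('ball', 1)]
```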
Project description:The conscious field includes not only representations of external stimuli (e.g., percepts), but also conscious contents associated with internal states, such as action-related intentions (e.g., urges). Although understudied, the latter may provide unique insights into the nature of consciousness. To illuminate these phenomena, in a new experimental paradigm [Reflexive Imagery Task (RIT)], participants were instructed not to subvocalize the names of visually presented objects. Each object was presented for 10 s on a screen. Participants indicated whenever they involuntarily subvocalized the object name. Research has revealed that it is difficult to suppress such subvocalizations, which occur on over 80% of the trials. Can the effect survive if one intentionally generates a competing (internally generated) conscious content? If so, this would suggest that intentional and unintentional contents can co-exist simultaneously in consciousness in interesting ways. To investigate this possibility, in one condition, participants were instructed to reiteratively subvocalize a speech sound ("da, da, da") throughout the trial; this competing content is both self-generated and intentional. Involuntary subvocalizations of object names still arose on over 80% of the trials. One could hypothesize that these subvocalizations occurred during the pauses between the intended speech sounds, but this is inconsistent with the observation that comparable results arose even when participants subvocalized a continuous, unbroken hum ("daaa….") throughout the trial. Regarding inter-content interactions, the continuous hum and the object name seem to co-exist simultaneously in consciousness. This intriguing datum requires further investigation. We discuss the implications of this new paradigm for the study of internally generated conscious contents.
Project description:Directed, intentional imagination is pivotal for self-regulation in the form of escapism and for therapies targeting a wide variety of mental health conditions, such as anxiety and stress disorders, as well as phobias. Clinical application in particular benefits from increasing our understanding of imagination, as well as from non-invasive means of influencing it. To investigate imagination, this study draws on the prior observation that music can influence imagined content during non-directed mind-wandering, as well as the finding that relative orientation within time and space is retained in imagination. One hundred participants performed a directed imagination task that required watching a video of a figure travelling towards a barely visible landmark, and then closing their eyes and imagining a continuation of the journey. During each imagined journey, participants listened either to music or to silence. After the imagined journeys, participants reported vividness, the imagined time passed and distance travelled, as well as the imagined content. Bayesian mixed effects models reveal strong evidence that vividness, sentiment, and the imagined times passed and distances travelled are influenced by the music, and show that aspects of these effects can be modelled through features such as tempo. The results highlight music's potential to support therapies such as Exposure Therapy and Imagery Rescripting, which deploy directed imagination as a clinical tool.
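To make the analysis concrete, here is a minimal sketch of a Bayesian mixed effects model with a per-participant random intercept and fixed effects for condition and tempo, fit with the bambi library on simulated data. The formula, variable names, and data are illustrative assumptions; the study's actual model specification and tooling are not reproduced here.

```python
import numpy as np
import pandas as pd
import bambi as bmb

# Hypothetical per-trial data: vividness ratings, a music/silence condition,
# and a tempo feature for the music trials.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(0, 100, n).astype(str),
    "tempo": rng.uniform(60, 180, n),
    "music": rng.integers(0, 2, n),   # 1 = music, 0 = silence
})
df["vividness"] = 3 + 0.01 * df["tempo"] + 0.5 * df["music"] + rng.normal(0, 1, n)

# Random intercept per participant; fixed effects for condition and tempo.
model = bmb.Model("vividness ~ music + tempo + (1|participant)", df)
idata = model.fit(draws=1000, chains=2)   # posterior samples via MCMC
```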
Project description:The role of attention in perceptual learning has been controversial. Numerous studies have reported that learning does not occur on stimulus features that are irrelevant to a subject's task [1,2] and have concluded that focused attention on a feature is necessary for that feature to be learned. In contrast, another line of studies has shown that perceptual learning occurs even on task-irrelevant features that are subthreshold, and has concluded that attention on a feature is not required to learn it [3-5]. Here we attempt to reconcile these divergent findings by systematically exploring the relation between the signal strength of the motion stimuli used during training and the resultant magnitude of perceptual learning. Our results show that performance improvements occurred only for motion stimuli trained at low, parathreshold coherence levels. The results accord with the hypothesis that weak task-irrelevant signals fail to be 'noticed', and consequently suppressed, by the attention system and are therefore learned, whereas stronger stimulus signals are detected and suppressed [6], and are not learned. These results provide a parsimonious explanation of why task-irrelevant learning is found in some studies but not others, and could offer an important clue to resolving a long-standing controversy.
Project description:Research in attention and action control has produced substantial evidence for feature binding. This study explores the binding of task-irrelevant context features in cued task switching. We predicted that repeating a context feature in trial n retrieves the trial n - 1 episode; consequently, performance should improve when the retrieved features match the features of the current trial. Two experiments (N = 124; N = 96) employing different tasks and materials showed that repeating the task-irrelevant context improved performance when the task and the response repeated. Furthermore, repeating the task-irrelevant context increased task repetition benefits only when the context feature appeared synchronously with cue onset, but not when it appeared with a 300-ms delay (Experiment 1). Similarly, repeating the task-irrelevant context improved performance on task and response repetitions only when the context feature was part of the cue, not when it was part of the target (Experiment 2). Taken together, binding and retrieval processes seem to play a crucial role in task switching, alongside response inhibition processes. In turn, our study provides a better understanding of the binding and retrieval of task-irrelevant features in general, and specifically of how they modulate response repetition benefits in task repetitions.
Project description:Group musical improvisation is thought to be akin to conversation, and has been shown therapeutically to be effective at improving communicativeness, sociability, creative expression, and overall psychological health. Understanding these therapeutic effects requires clarifying the nature of brain activity during improvisational cognition. Some insight regarding brain activity during improvisational music cognition has been gained via functional magnetic resonance imaging (fMRI) and electroencephalography (EEG); however, we have found no reports based on magnetoencephalography (MEG). With the present study, we aimed to demonstrate the feasibility of improvisational music performance experimentation in MEG. We designed a novel MEG-compatible keyboard and used it with experienced musicians (N = 13) in a music performance paradigm to differentiate, spectrally and spatially, spontaneous brain activity during mental imagery of improvisational music performance. Analyses of source activity revealed that mental imagery of improvisational music performance induced greater theta (5-7 Hz) activity in left temporal areas associated with rhythm production and communication, greater alpha (8-12 Hz) activity in left premotor and parietal areas associated with sensorimotor integration, and less beta (15-29 Hz) activity in right frontal areas associated with inhibitory control. These findings support the notion that musical improvisation is conversational, and suggest that the creation of novel auditory content is facilitated by a more internally directed, disinhibited cognitive state.
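The band-wise contrasts above rest on estimating spectral power in the theta, alpha, and beta ranges. Below is a minimal single-channel sketch using a Welch power-spectral-density estimate on a simulated trace; real MEG analyses operate on multi-channel, source-localized data, and all values here are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin, fmax):
    """Mean power spectral density within a frequency band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

# Hypothetical single-channel trace sampled at 1000 Hz, with a 6 Hz (theta)
# oscillation embedded in noise.
fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
trace = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

for name, (lo, hi) in {"theta": (5, 7), "alpha": (8, 12), "beta": (15, 29)}.items():
    print(f"{name}: {band_power(trace, fs, lo, hi):.3f}")
```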
Project description:During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, brain responses during such listening are thought to reflect a comparison between bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate a new, alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1: 20 participants, 10 female; experiment 2: 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening. Significance statement: Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process inducing instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEG recorded while participants listened to and imagined Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.
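The decoding idea can be sketched as a simple backward model: a regularized linear mapping from lagged EEG channels to a model-derived expectation signal, evaluated on held-out data. The data, lag range, and regularization below are hypothetical; the study's actual melodic-structure model and preprocessing are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated stand-ins: 32-channel EEG and an expectation signal that is
# (by construction) partly recoverable from channel 0.
rng = np.random.default_rng(3)
n_samples, n_channels, lags = 5000, 32, 10
eeg = rng.standard_normal((n_samples, n_channels))
expectation = eeg[:, 0] * 0.5 + rng.standard_normal(n_samples) * 0.1

# Lagged design matrix: each row holds the last `lags` samples of every channel.
X = np.stack([np.roll(eeg, lag, axis=0) for lag in range(lags)], axis=1)
X = X.reshape(n_samples, -1)[lags:]   # drop rows contaminated by the wrap-around
y = expectation[lags:]

decoder = Ridge(alpha=1.0).fit(X[:4000], y[:4000])
print("held-out r:", np.corrcoef(decoder.predict(X[4000:]), y[4000:])[0, 1])
```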
Project description:It has been shown that as the cognitive demands of a non-emotional task increase, the amygdala response to task-irrelevant emotional stimuli is reduced. However, it remains unclear whether such effects are due to altered task demands or to the altered perceptual input associated with those demands. Here, we present fMRI data from 20 adult males during a novel cognitive conflict task in which the requirement to scan emotional information was necessary for task performance and was held constant across levels of cognitive conflict. The response to fearful facial expressions was attenuated under high (vs. low) conflict conditions, as indexed by both slower reaction times and reduced right amygdala response. Psychophysiological interaction (PPI) analysis showed that the increased amygdala response to fear in the low-conflict condition was accompanied by increased functional coupling with the middle frontal gyrus, a prefrontal region previously associated with emotion regulation during cognitive task performance. These data suggest that the amygdala response to emotion is modulated as a function of task demands, even when perceptual inputs are closely matched across load conditions. The PPI data also show that, in particular emotional contexts, increased functional coupling of the amygdala with prefrontal cortex can paradoxically occur when executive demands are lower.
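The PPI analysis mentioned above can be illustrated schematically: the interaction regressor is the product of the centered task (psychological) regressor and the seed-region (physiological) time course, entered into a GLM alongside the main effects. The sketch below uses simulated data and omits the HRF deconvolution step that full PPI pipelines include; all values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Block design: alternating 10-sample off/on blocks (psychological regressor).
task = np.repeat([0, 1], 10)[None].repeat(10, axis=0).ravel()
seed = rng.standard_normal(n)            # seed (e.g., amygdala) time course
ppi = (task - task.mean()) * seed        # centered interaction term

# Target region time course with a built-in PPI effect of 0.4.
target = 0.4 * ppi + rng.standard_normal(n) * 0.5
X = np.column_stack([task, seed, ppi, np.ones(n)])   # GLM with main effects
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print("PPI coupling estimate:", round(beta[2], 2))
```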