Emergent Synergistic Grasp-Like Behavior in a Visuomotor Joint Action Task: Evidence for Internal Forward Models as Building Blocks of Human Interactions.
ABSTRACT: Central to a mechanistic understanding of the human mind is clarifying how cognitive functions arise from simpler sensory and motor functions. A longstanding assumption is that the forward models used by sensorimotor control to anticipate actions also incorporate other people's actions and intentions, and thereby give rise to sensorimotor interactions between people and even to more abstract forms of interaction. That is, forward models could support core aspects of human social cognition. To test whether forward models can be used to coordinate interactions, we measured the movements of pairs of participants in a novel joint action task. The participants collaborated to lift an object, each using the fingers of one hand to push against the object from opposite sides, much as a single person would use two hands to grasp the object bimanually. Perturbations of the object were applied at random, as such perturbations are known to affect grasp-specific movement components in common grasping tasks. We found that co-actors quickly learned to make grasp-like movements whose grasp components were, on average, coordinated through observation of the peak deviation and peak velocity of the partner's trajectories. Our data suggest that co-actors recruited pre-existing bimanual grasp programs for their own body to run forward models of their partner's effectors. This is consistent with the long-held assumption that human higher-order cognitive functions may take advantage of sensorimotor forward models to plan social behavior. New and Noteworthy: Taking a sensorimotor-neuroscience approach, our work provides evidence for the long-held view that the coordination of physical as well as abstract interactions between people originates in sensorimotor control processes that form mental representations of people's bodies and actions, called forward models. With a new joint action paradigm and several new analysis approaches, we show that people do coordinate their interactions on the basis of forward models and mutual action observation.
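A note on the kinematic measures: peak velocity and peak deviation summarize, for each reach, how fast the fingertip moved and how far its path bowed away from a straight line to the object. Below is a minimal sketch, in Python, of how such measures could be computed from sampled fingertip trajectories; the array layout, sampling rate, and function name are illustrative assumptions, not the study's actual analysis pipeline.

    import numpy as np

    def peak_velocity_and_deviation(traj, fs=200.0):
        """Illustrative kinematic summary of one fingertip trajectory.

        traj : (N, 3) array of x, y, z positions in metres, sampled at fs Hz.
        Returns the peak tangential velocity (m/s) and the peak deviation (m)
        of the path from the straight line joining its start and end points.
        """
        traj = np.asarray(traj, dtype=float)
        # Tangential speed from finite differences of position.
        speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) * fs
        peak_vel = speed.max()

        # Perpendicular distance of each sample from the start-to-end line.
        start, end = traj[0], traj[-1]
        line = (end - start) / np.linalg.norm(end - start)
        rel = traj - start
        along = rel @ line                  # signed distance along the line
        deviation = np.linalg.norm(rel - np.outer(along, line), axis=1)
        peak_dev = deviation.max()
        return peak_vel, peak_dev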
Project description: What mechanisms distinguish interactive from non-interactive actions? To answer this question, we tested participants while they took turns playing music with a virtual partner: in the interactive joint action condition, the participants played a melody together with their partner by grasping (C note) or pressing (G note) a cube-shaped instrument, alternating in playing one note each. In the non-interactive control condition, players' behavior was not guided by a shared melody, so that the partner's actions and notes were irrelevant to the participant. In both conditions, the participant's and partner's actions were physically congruent (e.g., grasp-grasp) or incongruent (e.g., grasp-point), and the partner's association between actions and notes was coherent with the participant's or reversed. Performance in the non-interactive condition was only affected by physical incongruence, whereas joint action was only affected when the partner's action-note associations were reversed. This shows that task interactivity shapes the sensorimotor coding of others' behaviors, and that joint action is based on active prediction of the partner's action effects rather than on passive action imitation. We suggest that such predictions are based on Dyadic Motor Plans that represent both the agent's and the partner's contributions to the interaction goal, like playing a melody together.
Project description: In the absence of pre-established communicative conventions, people create novel communication systems to successfully coordinate their actions toward a joint goal. In this study, we compare two types of such novel communication systems: sensorimotor communication, in which the kinematics of instrumental actions are systematically modulated, and symbolic communication. We ask which of the two systems co-actors preferentially create when aiming to communicate about hidden object properties such as weight. The results of three experiments consistently show that actors who knew the weight of an object transmitted this weight information to their uninformed co-actors by systematically modulating their instrumental actions, grasping objects of particular weights at particular heights. This preference for sensorimotor communication was reduced in a fourth experiment, in which co-actors could communicate with weight-related symbols. Our findings demonstrate that the use of sensorimotor communication extends beyond the communication of spatial locations to non-spatial, hidden object properties.
Project description: During social interactions, people automatically apply stereotypes in order to rapidly categorize others. Racial differences are among the most powerful cues that drive these categorizations and modulate our emotional and cognitive reactivity to others. We investigated whether implicit racial bias may also shape hand kinematics during the execution of realistic joint actions with virtual in- and out-group partners. Caucasian participants were required to perform synchronous imitative or complementary reach-to-grasp movements with avatars that had different skin color (white and black) but showed identical action kinematics. Results demonstrate that stronger visuo-motor interference (indexed here as hand kinematics differences between complementary and imitative actions) emerged: i) when participants were required to predict the partner's action goal in order to adapt their own movements online; ii) during interactions with the in-group partner, indicating that the partner's racial membership modulates interactive behavior. Importantly, the in-group/out-group effect positively correlated with the implicit racial bias of each participant. Thus, visuo-motor interference during joint action, likely reflecting predictive embodied simulation of the partner's movements, is affected by cultural inter-individual differences.
Project description: When we grasp and lift novel objects, we rely on visual cues and sensorimotor memories to predictively scale our finger forces and exert compensatory torques according to object properties. Recently, it was shown that object appearance, previous force-scaling errors, and previous torque-compensation errors strongly impact our perception of object properties. However, the influence of visual geometric cues on the perception of object torques and weights in a grasp-to-lift task is poorly understood. Moreover, little is known about how visual cues, prior expectations, sensory feedback, and sensorimotor memories are integrated for anticipatory torque control and object perception. Here, 12 young and 12 elderly participants repeatedly grasped and lifted an object while trying to prevent object tilt. Before each trial, we randomly repositioned both the object handle, providing a geometric cue on the upcoming torque, and a hidden weight, adding an unforeseeable torque variation. Before lifting, participants indicated their torque expectations, and after each lift they reported their experience of torque and weight. Mixed-effect multiple regression models showed that visual shape cues governed anticipatory torque compensation, whereas sensorimotor memories played less of a role. In contrast, the external torque and the compensation errors committed at lift-off mainly determined how object torques and weight were perceived. The modest effect of handle position differed for torque and weight perception. Explicit torque expectations were also correlated with anticipatory torque compensation and torque perception. Our main findings generalized across both age groups. Our results suggest a distinct weighting of inputs for action and perception according to their reliability.
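For readers unfamiliar with the analysis, a mixed-effect multiple regression of this kind treats each lift as one observation and each participant as a random-effects grouping factor. A minimal sketch in Python with statsmodels follows; the data file and column names (handle-position cue, previous-trial error, age group) are illustrative assumptions, not the study's actual variables or model specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per lift; column names are hypothetical placeholders:
    #   comp_torque - anticipatory compensatory torque at lift-off
    #   handle_pos  - visible handle position (geometric cue for the upcoming torque)
    #   prev_error  - torque-compensation error on the previous lift (sensorimotor memory)
    #   age_group   - "young" or "elderly"
    #   participant - subject identifier used as the random-effects group
    df = pd.read_csv("lifts.csv")

    # Fixed effects for cue, memory, and age group; random intercept per participant.
    model = smf.mixedlm("comp_torque ~ handle_pos + prev_error + age_group",
                        data=df, groups=df["participant"])
    result = model.fit()
    print(result.summary())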
Project description: If language comprehension requires a sensorimotor simulation, how can abstract language be comprehended? We show that preparation to respond in an upward or downward direction affects comprehension of the abstract quantifiers "more and more" and "less and less", as indexed by an N400-like component. Conversely, the semantic content of the sentence affects the motor potential measured immediately before the upward or downward action is initiated. We propose that this bidirectional link between the motor system and language arises because the motor system implements forward models that predict the sensory consequences of actions. Because the same movement (e.g., raising the arm) can have multiple forward models for different contexts, the models can make different predictions depending on whether the arm is raised, for example, to place an object or raised as a threat. Thus, different linguistic contexts invoke different forward models, and the predictions constitute different understandings of the language.
Project description: Do infants learn to interpret others' actions through their own experience producing goal-directed action, or does some knowledge of others' actions precede first-person experience? Several studies report that motor experience enhances action understanding, but the nature of this effect is not well understood. The present research investigates what is learned during early motoric production, and it tests whether knowledge of goal-directed actions, including an assumption that actors maximize efficiency given environmental constraints, exists before experience producing such actions. Three-month-old infants (who cannot yet effectively reach for and grasp objects) were given novel experience retrieving objects that rested on a surface with no barriers. They were then shown an actor reaching for an object over a barrier and tested for sensitivity to the efficiency of the action. These infants showed heightened attention when the agent reached inefficiently for a goal object; in contrast, infants who lacked successful reaching experience did not differentiate between direct and indirect reaches. Given that the infants could reach directly for objects during training and were given no opportunity to update their actions based on environmental constraints, the training experience itself is unlikely to have provided a basis for learning about action efficiency. We suggest that infants apply a general assumption of efficient action as soon as they have sufficient information (possibly derived from their own action experience) to identify an agent's goal in a given instance.
Project description:"Two route" theories of object-related action processing posit different temporal activation profiles of grasp-to-move actions (rapidly evoked based on object structure) versus skilled use actions (more slowly activated based on semantic knowledge). We capitalized on the exquisite temporal resolution and multidimensionality of Event-Related Potentials (ERPs) to directly test this hypothesis. Participants viewed manipulable objects (e.g., calculator) preceded by objects sharing either "grasp", "use", or no action attributes (e.g., bar of soap, keyboard, earring, respectively), as well as by action-unrelated but taxonomically-related objects (e.g., abacus); participants judged whether the two objects were related. The results showed more positive responses to "grasp-to-move" primed objects than "skilled use" primed objects or unprimed objects starting in the P1 (0-150 ms) time window and continuing onto the subsequent N1 and P2 components (150-300 ms), suggesting that only "grasp-to-move", but not "skilled use", actions may facilitate visual attention to object attributes. Furthermore, reliably reduced N400s (300-500 ms), an index of semantic processing, were observed to taxonomically primed and "skilled use" primed objects relative to unprimed objects, suggesting that "skilled use" action attributes are a component of distributed, multimodal semantic representations of objects. Together, our findings provide evidence supporting two-route theories by demonstrating that "grasp-to-move" and "skilled use" actions impact different aspects of object processing and highlight the relationship of "skilled use" information to other aspects of semantic memory.
Project description: Successful joint actions require precise temporal and spatial coordination between individuals who aim to achieve a common goal. A growing body of behavioral data suggests that, to efficiently couple and coordinate a joint task, the actors have to represent both their own and their partner's actions. However, it is unclear how the motor system is specifically recruited for joint actions. To find out how the goal and the presence of the partner's hand can impact motor activity during joint action, we assessed the functional state of 16 participants' motor cortex during observation and associated motor imagery of joint actions, individual actions, and non-goal-directed actions performed with either one or two hands. As an indicator of the functional state of the motor cortex, we used the reactivity of the rolandic magnetoencephalographic (MEG) beta rhythm following median-nerve stimulation. Motor imagery combined with action observation was associated with activation of the observer's motor cortex, mainly in the hemisphere contralateral to the viewed (and at the same time imagined) hand actions. The motor-cortex involvement was enhanced when the goal of the actions was visible, but also, in the ipsilateral hemisphere, when the partner's hand was visible in the display. During joint action, the partner's action, in addition to the participant's own action, thus seems to be represented in the motor cortex, so that it can be triggered by the mere presence of an acting hand in the peripersonal space.
Project description: We assessed the factors influencing the planning of actions required to manipulate one of two everyday objects with matching dimensions but openings at opposite ends: a cup and a vase. We found that, for cups, measures of movement preparation to reach and grasp the object were influenced by whether the grasp was made to the functional part of the object (the wide opening) and whether the action would end in a supinated as opposed to a pronated grasp. These factors interacted such that effects of hand posture were found only when a less familiar grasp was made to the non-functional part of the cup (the base). These effects were not found with the vase, which has a less familiar location for grasping. We interpret the results in terms of a parallel model of action selection, modulated both by the familiarity of the grasp to a part of the object, which likely reflects object 'affordances', and by the end-state comfort of the action.
Project description: Voluntary actions towards manipulable objects are usually performed with a particular motor goal (i.e., a task-specific object-target-effector interaction) and in a particular social context (i.e., who would benefit from these actions), but the mutual influence of these two constraints has not yet been properly studied. For this purpose, we asked participants to grasp an object and place it on either a small or a large target, thereby varying task difficulty as described by Fitts' law (motor goal). This first action prepared them for a second grasp-to-place action, which was performed under temporal constraints either by the participants themselves or by a confederate (social goal). Kinematic analysis of the first, preparatory grasp-to-place action showed that, while deceleration time was impacted by the motor goal, peak velocity was influenced by the social goal. Movement duration and trajectory height were modulated by both goals, with the effect of the social goal attenuated by that of the motor goal. Overall, these results suggest that both motor and social constraints influence the characteristics of object-oriented actions, with effects that combine in a hierarchical way.
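Fitts' law, invoked here to define the motor goal, predicts that movement time grows with an index of difficulty set by target distance and width. A small worked sketch in Python (Shannon formulation; the intercept and slope values are illustrative, not parameters estimated in this study):

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).

        distance and width share the same units; a (s) and b (s/bit) are
        illustrative intercept and slope values.
        """
        index_of_difficulty = math.log2(distance / width + 1.0)  # bits
        return a + b * index_of_difficulty

    # Same reach distance, small vs. large target: the smaller target has the
    # higher index of difficulty and hence the longer predicted movement time.
    print(fitts_movement_time(distance=0.30, width=0.02))  # ~0.70 s
    print(fitts_movement_time(distance=0.30, width=0.08))  # ~0.44 s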