
Dataset Information


Neuro-computational mechanisms and individual biases in action-outcome learning under moral conflict.


ABSTRACT: Learning to predict action outcomes in morally conflicting situations is essential for social decision-making but poorly understood. Here we tested which forms of Reinforcement Learning Theory capture how participants learn to choose between self-money and other-shocks, and how they adapt to changes in contingencies. We find that choices were better described by a reinforcement learning model based on the current value of separately expected outcomes than by one based on the combined historical values of past outcomes. Participants track the expected values of self-money and other-shocks separately, with substantial individual differences in preference reflected in a valuation parameter balancing their relative weight. This valuation parameter also predicted choices in an independent costly helping task. The expectations of self-money and other-shocks were biased toward the favored outcome; fMRI revealed that this bias is reflected in the ventromedial prefrontal cortex, while the pain-observation network represented pain prediction errors independently of individual preferences.
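The winning model class described above tracks the expected values of the two outcome types separately and combines them via a single valuation parameter at the moment of choice. A minimal sketch of what such a model could look like is given below, assuming Rescorla-Wagner style prediction-error updates and a softmax choice rule; the parameter names (alpha, beta, w) and the exact combination rule are illustrative assumptions, not the authors' published specification.

```python
import numpy as np

def softmax(values, beta):
    """Convert option values into choice probabilities (numerically stable)."""
    v = np.asarray(values, dtype=float) * beta
    v -= v.max()
    p = np.exp(v)
    return p / p.sum()

class SeparateOutcomeRL:
    """Sketch of an RL agent that learns self-money and other-shock
    expectations separately and weighs them with a valuation parameter w."""

    def __init__(self, n_options, alpha=0.3, beta=3.0, w=0.5):
        self.alpha = alpha                    # learning rate (assumed shared)
        self.beta = beta                      # inverse temperature for choice
        self.w = w                            # valuation weight: money vs. pain
        self.q_money = np.zeros(n_options)    # expected self-money per option
        self.q_shock = np.zeros(n_options)    # expected other-shock per option

    def choice_probs(self):
        # Combined decision value: weighted trade-off of the two expectations.
        combined = self.w * self.q_money - (1.0 - self.w) * self.q_shock
        return softmax(combined, self.beta)

    def update(self, option, money_outcome, shock_outcome):
        # Separate prediction errors for each outcome dimension.
        pe_money = money_outcome - self.q_money[option]
        pe_shock = shock_outcome - self.q_shock[option]
        self.q_money[option] += self.alpha * pe_money
        self.q_shock[option] += self.alpha * pe_shock

# Example: one trial with two options, chosen option yields money but also a shock.
agent = SeparateOutcomeRL(n_options=2, w=0.4)
probs = agent.choice_probs()
agent.update(option=0, money_outcome=1.0, shock_outcome=1.0)
```

In this sketch, individual differences in preference correspond to different values of w: a low w down-weights self-money relative to other-shocks, shifting choices toward the more prosocial option.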

SUBMITTER: Fornari L 

PROVIDER: S-EPMC9988878 | biostudies-literature | 2023 Mar

REPOSITORIES: biostudies-literature


Publications

Neuro-computational mechanisms and individual biases in action-outcome learning under moral conflict.

Fornari Laura, Ioumpa Kalliopi, Nostro Alessandra D, Evans Nathan J, De Angelis Lorenzo, Speer Sebastian P H, Paracampo Riccardo, Gallo Selene, Spezio Michael, Keysers Christian, Gazzola Valeria

Nature Communications, 2023-03-06


Similar Datasets

| S-EPMC8881635 | biostudies-literature
| S-EPMC8425597 | biostudies-literature
| S-EPMC6207318 | biostudies-literature
| S-EPMC8821718 | biostudies-literature
| S-EPMC6802597 | biostudies-literature
| S-EPMC6899836 | biostudies-literature
| S-EPMC3948624 | biostudies-literature
| S-EPMC5393307 | biostudies-literature
| S-EPMC6739396 | biostudies-literature
| S-EPMC8651977 | biostudies-literature