Project description:Objective: To investigate how the general public trades off explainability versus accuracy of artificial intelligence (AI) systems and whether this differs between healthcare and non-healthcare scenarios. Materials and methods: Citizens' juries are a form of deliberative democracy eliciting informed judgment from a representative sample of the general public around policy questions. We organized two 5-day citizens' juries in the UK with 18 jurors each. Jurors considered 3 AI systems with different levels of accuracy and explainability in 2 healthcare and 2 non-healthcare scenarios. Per scenario, jurors voted for their preferred system; votes were analyzed descriptively. Qualitative data on the considerations behind their preferences included transcribed audio-recordings of plenary sessions, observational field notes, outputs from small group work, and free-text comments accompanying jurors' votes; qualitative data were analyzed thematically by scenario, per AI system and across AI systems. Results: In healthcare scenarios, jurors favored accuracy over explainability, whereas in non-healthcare contexts they valued explainability either equally to, or more than, accuracy. Jurors' considerations in favor of accuracy concerned the impact of decisions on individuals and society, and the potential to increase the efficiency of services. Reasons for emphasizing explainability included increased opportunities for individuals and society to learn and improve future prospects, and an enhanced ability for humans to identify and resolve system biases. Conclusion: Citizens may value explainability of AI systems in healthcare less than in non-healthcare domains, and less than is often assumed by professionals, especially when weighed against system accuracy. The public should therefore be actively consulted when developing policy on AI explainability.
Project description:The basal ganglia (BG) play a key role in decision-making, preventing impulsive actions in some contexts while facilitating fast adaptations in others. The specific contributions of different BG structures to this nuanced behavior remain unclear, particularly under varying situations of noisy and conflicting information that necessitate ongoing adjustments in the balance between speed and accuracy. Theoretical accounts suggest that dynamic regulation of the amount of evidence required to commit to a decision (a dynamic "decision boundary") may be necessary to meet these competing demands. Through the application of novel computational modeling tools in tandem with direct neural recordings from human BG areas, we find that neural dynamics in the theta band manifest as variations in a collapsing decision boundary as a function of conflict and uncertainty. We collected intracranial recordings from patients diagnosed with either Parkinson's disease (PD) (n = 14) or dystonia (n = 3) in the subthalamic nucleus (STN), globus pallidus internus (GPi), and globus pallidus externus (GPe) during their performance of a novel perceptual discrimination task in which we independently manipulated uncertainty and conflict. To formally characterize whether these task and neural components influenced decision dynamics, we leveraged modified diffusion decision models (DDMs). Behavioral choices and response time distributions were best characterized by a modified DDM in which the decision boundary collapsed over time, but where the onset and shape of this collapse varied with conflict. Moreover, theta dynamics in BG structures modulated the onset and shape of this collapse but differentially across task conditions. In STN, theta activity was related to a prolonged decision boundary (indexed by slower collapse and therefore more deliberate choices) during high conflict situations. Conversely, rapid declines in GPe theta during low conflict conditions were related to rapidly collapsing boundaries and expedited choices, with additional complementary decision bound adjustments during high uncertainty situations. Finally, GPi theta effects were uniform across conditions, with increases in theta associated with a prolongation of decision bound collapses. Together, these findings provide a nuanced understanding of how our brain thwarts impulsive actions while nonetheless enabling behavioral adaptation amidst noisy and conflicting information.
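A minimal Python sketch of the central modeling idea, not the authors' exact model: a diffusion decision model whose boundary collapses over time, with the onset and shape of the collapse as free parameters. The Weibull-style collapse function, the parameter names (b0, lam, k), and all values below are illustrative assumptions.

import numpy as np

def simulate_collapsing_ddm(drift, b0, lam, k, dt=0.001, max_t=3.0, noise=1.0, rng=None):
    # Simulate one trial; returns (choice, response time).
    # drift : mean evidence accumulation rate (signed toward the upper choice)
    # b0    : initial boundary height
    # lam   : time scale of the collapse (larger lam -> later collapse onset)
    # k     : shape of the collapse (larger k -> sharper, more delayed drop)
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while t < max_t:
        # Weibull-style collapsing bound: b(t) = b0 * exp(-(t / lam)**k)
        bound = b0 * np.exp(-(t / lam) ** k)
        if x >= bound:
            return +1, t   # upper-boundary choice
        if x <= -bound:
            return -1, t   # lower-boundary choice
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return 0, max_t        # no decision within the trial window

# Illustrative use: simulate a handful of trials with a relatively slow collapse.
choices_rts = [simulate_collapsing_ddm(drift=0.8, b0=1.5, lam=1.0, k=3.0) for _ in range(5)]
print(choices_rts)

In this illustration, a larger lam delays the collapse (a prolonged boundary, as the abstract relates to STN theta under high conflict), while a smaller lam yields the rapidly collapsing boundary and expedited choices linked to declining GPe theta under low conflict.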
Project description:The literature has been relatively silent about post-conflict processes. However, understanding how humans deal with post-conflict situations is a challenge in our societies. With this in mind, we focus the present study on the rationality of cooperative decision making after an intergroup conflict, i.e., the extent to which groups take advantage of post-conflict situations to obtain benefits from collaborating with the other group involved in the conflict. Based on dual-process theories of thinking and the affect heuristic, we propose that intergroup conflict hinders the rationality of cooperative decision making. We also hypothesize that this rationality improves when groups are involved in an in-group deliberative discussion. Results of a laboratory experiment support the idea that intergroup conflict, which was associated with indicators of the activation of negative feelings (negative affect state and heart rate), has a negative effect on this rationality over time and on both group and individual decision making. Although intergroup conflict leads to sub-optimal decision making, rationality improves when groups and individuals subjected to intergroup conflict make decisions after an in-group deliberative discussion. Additionally, the increased rationality of group decision making after the deliberative discussion is transferred to subsequent individual decision making.
Project description:Animal approach-avoidance conflict paradigms have been used extensively to operationalize anxiety, quantify the effects of anxiolytic agents, and probe the neural basis of fear and anxiety. Results from human neuroimaging studies indicate that a frontal-striatal-amygdala circuit is important for approach-avoidance learning. However, the neural basis of decision-making is much less clear in this context. Thus, we combined a recently developed human approach-avoidance paradigm with functional magnetic resonance imaging (fMRI) to identify the neural substrates underlying approach-avoidance conflict decision-making. Fifteen healthy adults completed the approach-avoidance conflict (AAC) paradigm during fMRI. Analyses of variance were used to compare conflict to nonconflict (avoid-threat and approach-reward) conditions and to compare the levels of reward points offered during the decision phase. Trial-by-trial amplitude modulation analyses were used to delineate brain areas underlying decision-making in the context of approach/avoidance behavior. Conflict trials, as compared to nonconflict trials, elicited greater activation within bilateral anterior cingulate cortex, anterior insula, and caudate, as well as right dorsolateral prefrontal cortex (PFC). Right caudate and lateral PFC activation was modulated by the level of reward offered. Individuals who showed greater caudate activation exhibited less approach behavior. On a trial-by-trial basis, greater right lateral PFC activation was related to less approach behavior. Taken together, these results suggest that the degree of activation within prefrontal-striatal-insula circuitry determines the degree of approach versus avoidance decision-making. Moreover, the degree of caudate and lateral PFC activation was related to individual differences in approach-avoidance decision-making. Therefore, the approach-avoidance conflict paradigm is ideally suited to probe anxiety-related processing differences during approach-avoidance decision-making.
Project description:This study addressed the cognitive impacts of providing correct and incorrect machine learning (ML) outputs in support of an object detection task. The study consisted of five experiments that manipulated the accuracy and importance of mock ML outputs. In each of the experiments, participants were given the T and L task with T-shaped targets and L-shaped distractors. They were tasked with categorizing each image as target present or target absent. In Experiment 1, they performed this task without the aid of ML outputs. In Experiments 2-5, they were shown images with bounding boxes, representing the output of an ML model. The outputs could be correct (hits and correct rejections), or they could be erroneous (false alarms and misses). Experiment 2 manipulated the overall accuracy of these mock ML outputs. Experiment 3 manipulated the proportion of different types of errors. Experiments 4 and 5 manipulated the importance of specific types of stimuli or model errors, as well as the framing of the task in terms of human or model performance. These experiments showed that model misses were consistently harder for participants to detect than model false alarms. In general, as the model's performance increased, human performance increased as well, but in many cases the participants were more likely to overlook model errors when the model had high accuracy overall. Warning participants to be on the lookout for specific types of model errors had very little impact on their performance. Overall, our results emphasize the importance of considering human cognition when determining what level of model performance and types of model errors are acceptable for a given task.
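As a rough Python sketch of the kind of manipulation described for Experiments 2 and 3 (not the study's actual stimulus-generation code), one can generate mock ML outputs with a chosen overall accuracy and a chosen split of errors between misses and false alarms; the function name and parameters below are hypothetical.

import random

def mock_ml_outputs(n_trials=200, p_target=0.5, accuracy=0.9, miss_share=0.5, seed=0):
    # Return a list of (target_present, model_says_target) tuples.
    # accuracy   : target probability that the mock model labels a trial correctly
    # miss_share : fraction of all errors allocated to misses (the rest are false alarms)
    rng = random.Random(seed)
    error_rate = 1.0 - accuracy
    # Split the overall error rate into class-conditional error rates so that
    # roughly miss_share of the errors land on target-present trials (misses)
    # and the remainder on target-absent trials (false alarms).
    p_miss_given_present = min(1.0, error_rate * miss_share / p_target)
    p_fa_given_absent = min(1.0, error_rate * (1.0 - miss_share) / (1.0 - p_target))
    trials = []
    for _ in range(n_trials):
        target_present = rng.random() < p_target
        if target_present:
            model_says_target = rng.random() >= p_miss_given_present  # miss if below threshold
        else:
            model_says_target = rng.random() < p_fa_given_absent      # false alarm if below threshold
        trials.append((target_present, model_says_target))
    return trials

# Illustrative use: a moderately accurate mock model whose errors are mostly misses.
outputs = mock_ml_outputs(accuracy=0.8, miss_share=0.7)
misses = sum(1 for truth, said in outputs if truth and not said)
false_alarms = sum(1 for truth, said in outputs if (not truth) and said)
print(misses, false_alarms)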
Project description:Highlights: We use a simple gambles design in an fMRI study to compare two conditions: ambiguity and conflict. Participants were more conflict averse than ambiguity averse. Ambiguity aversion did not correlate with conflict aversion. Activation in the medial prefrontal cortex correlated with ambiguity level and ambiguity aversion. Activation in the ventral striatum correlated with conflict level and conflict aversion. Studies of decision making under uncertainty generally focus on imprecise information about outcome probabilities ("ambiguity"). It is not clear, however, whether conflicting information about outcome probabilities affects decision making in the same manner as ambiguity does. Here we combine functional magnetic resonance imaging (fMRI) and a simple gamble design to study this question. In this design the levels of ambiguity and conflict are parametrically varied, and ambiguity and conflict gambles are matched on expected value. Behaviorally, participants avoided conflict more than ambiguity, and attitudes toward ambiguity and conflict did not correlate across participants. Neurally, regional brain activation was differentially modulated by ambiguity level and aversion to ambiguity and by conflict level and aversion to conflict. Activation in the medial prefrontal cortex was correlated with the level of ambiguity and with ambiguity aversion, whereas activation in the ventral striatum was correlated with the level of conflict and with conflict aversion. These novel results indicate that decision makers process imprecise and conflicting information differently, a finding that has important implications for basic and clinical research.
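A hypothetical Python illustration of the stimulus logic described above (the actual gamble parameters are not specified here): ambiguity is represented as a single imprecise probability interval and conflict as two precise but disagreeing probability reports sharing the same midpoint, so the two gamble types are matched on expected value while the ambiguity or conflict level is varied parametrically.

def make_gamble_pair(p_mid=0.5, level=0.2, payoff=20.0):
    # Build one ambiguity gamble and one conflict gamble matched on expected value.
    # level : width of the probability interval (ambiguity) or the disagreement
    #         between the two sources (conflict); both are centered on p_mid.
    half = level / 2.0
    ambiguity_gamble = {
        "sources": [(p_mid - half, p_mid + half)],   # one source reporting a probability range
        "payoff": payoff,
        "expected_value": p_mid * payoff,
    }
    conflict_gamble = {
        "sources": [(p_mid - half, p_mid - half),    # two sources giving point estimates
                    (p_mid + half, p_mid + half)],   # that disagree by `level`
        "payoff": payoff,
        "expected_value": p_mid * payoff,
    }
    return ambiguity_gamble, conflict_gamble

# Varying `level` manipulates how imprecise (ambiguity) or how discrepant (conflict)
# the probability information is while expected value stays fixed.
amb, con = make_gamble_pair(level=0.4)
print(amb["expected_value"], con["expected_value"])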
Project description:AI is now an integral part of everyday decision-making, assisting us in both routine and high-stakes choices. These AI models often learn from human behavior, assuming this training data is unbiased. However, we report five studies showing that people change their behavior to instill desired routines into AI, indicating that this assumption is invalid. To demonstrate this behavioral shift, we recruited participants to play the ultimatum game, where they were asked to decide whether to accept proposals of monetary splits made by either other human participants or AI. Some participants were informed that their choices would be used to train an AI proposer, while others did not receive this information. Across five experiments, we found that people modified their behavior to train AI to make fair proposals, regardless of whether they could directly benefit from the AI training. After completing this task once, participants were invited to complete it again but were told their responses would not be used for AI training. People who had previously trained AI persisted with this behavioral shift, indicating that the new behavioral routine had become habitual. This work demonstrates that using human behavior as training data has more consequences than previously thought, since it can lead AI to perpetuate human biases and cause people to form habits that deviate from how they would normally act. Therefore, this work underscores a problem for AI algorithms that aim to learn unbiased representations of human preferences.
Project description:A striking neurochemical form of compartmentalization has been found in the striatum of humans and other species, dividing it into striosomes and matrix. The function of this organization has been unclear, but the anatomical connections of striosomes indicate their relation to emotion-related brain regions, including the medial prefrontal cortex. We capitalized on this fact by combining pathway-specific optogenetics and electrophysiology in behaving rats to search for selective functions of striosomes. We demonstrate that a medial prefronto-striosomal circuit is selectively active in and causally necessary for cost-benefit decision-making under approach-avoidance conflict conditions known to evoke anxiety in humans. We show that this circuit has unique dynamic properties likely reflecting striatal interneuron function. These findings demonstrate that cognitive and emotion-related functions are, like sensory-motor processing, subject to encoding within compartmentally organized representations in the forebrain and suggest that striosome-targeting corticostriatal circuits can underlie neural processing of decisions fundamental for survival.
Project description:Approach-avoidance conflict is observed in the competing motivations towards the benefits and away from the costs of a decision. The current study investigates the action dynamics of response motion during such conflicts in an attempt to characterise their dynamic resolution. An approach-avoidance conflict was generated by varying the appetitive consequences of a decision (i.e., point rewards and shorter participation time) in the presence of simultaneous aversive consequences (i.e., shock probability). Across two experiments, approach-avoidance conflict differentially affected response trajectories. Approach trajectories were less complex than avoidance trajectories and, as approach and avoidance motivations neared equipotentiality, response trajectories were more deflected from the shortest route to the eventual choice. Consistency in the location of approach and avoidance response options reduced variability in performance, enabling more sensitive estimates of dynamic conflict. The time course of competing influences on response trajectories, including trial-to-trial effects and conflict between approach and avoidance, was estimated using regression analyses. We discuss these findings in terms of a dynamic theory of approach-avoidance that we hope will lead to insights of practical relevance in the field of maladaptive avoidance.