Susceptibility to auditory signals during autonomous driving.
ABSTRACT: We investigate how susceptible human drivers are to auditory signals in three situations: when stationary, when driving, and when being driven by an autonomous vehicle. Previous research has shown that human susceptibility is reduced when driving compared to when stationary. However, it is not known how susceptible humans are under autonomous driving conditions. At the same time, good susceptibility is crucial under autonomous driving, as such systems might use auditory signals to communicate a transition of control from the automated vehicle to the human driver. We measured susceptibility using a three-stimulus auditory oddball paradigm while participants experienced three conditions: stationary, autonomous driving, or manual driving. We studied susceptibility through the frontal P3 (fP3) component of the electroencephalography (EEG) event-related potential (ERP). Results show that the fP3 component is reduced in the autonomous compared to the stationary condition, but not as strongly as when participants drove themselves. In addition, the fP3 component is further reduced when the oddball task does not require a response (i.e., in a passive versus an active condition). The implication is that, even in a relatively simple autonomous driving scenario, people's susceptibility to auditory signals is not as high as would be beneficial for responding to auditory stimuli.
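The three-stimulus oddball paradigm described above can be sketched as a stimulus-sequence generator. The proportions of rare stimuli and the constraint that rare stimuli never occur back-to-back are illustrative assumptions, not the study's exact parameters:

```python
import random

def make_oddball_sequence(n_trials=300, p_target=0.1, p_novel=0.1, seed=0):
    """Generate a three-stimulus oddball sequence: frequent standards plus
    rare targets and rare novel sounds, with no two rare stimuli in a row.
    Proportions are illustrative, not taken from the study."""
    rng = random.Random(seed)
    seq = []
    prev_rare = True  # forces the sequence to start with a standard
    for _ in range(n_trials):
        r = rng.random()
        if prev_rare or r < 1 - p_target - p_novel:
            seq.append("standard")
            prev_rare = False
        elif r < 1 - p_novel:
            seq.append("target")
            prev_rare = True
        else:
            seq.append("novel")
            prev_rare = True
    return seq

seq = make_oddball_sequence()
print(seq.count("standard"), seq.count("target"), seq.count("novel"))
```

In an active condition, participants would respond to "target" trials; in a passive condition the same sequence is merely heard, which is where the fP3 reduction is reported.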
Project description: Research on partially automated driving has revealed relevant problems with driving performance, particularly when drivers' intervention is required (e.g., taking over when automation fails). Mental fatigue has commonly been proposed to explain these effects after prolonged automated drives. However, performance problems have also been reported after just a few minutes of automated driving, indicating that other factors may also be involved. We hypothesize that, besides mental fatigue, an underload effect of partial automation may also affect driver attention. In this study, this potential effect was investigated during short periods of partially automated and manual driving at different speeds. Subjective measures of mental demand and vigilance, and performance on a secondary task (an auditory oddball task), were used to assess driver attention. Additionally, modulations of specific attention-related event-related potentials (ERPs; the N1 and P3 components) were investigated. Mental fatigue effects associated with time on task were also evaluated using the same measurements. Twenty participants drove in a fixed-base simulator while performing an auditory oddball task that elicited the ERPs. Six conditions were presented (5-6 min each), combining three speed levels (low, comfortable, and high) and two automation levels (manual and partially automated). The results showed that, when driving partially automated, subjective mental demand scores and P3 amplitudes were lower than in the manual conditions. Similarly, P3 amplitude and self-reported vigilance levels decreased with time on task. Based on previous studies, these findings might reflect a reduction in drivers' attention resource allocation, presumably due to the underload effects of partial automation and to the mental fatigue associated with time on task.
In particular, such underload effects on attention could explain the performance decrements after short periods of automated driving reported in other studies. However, further studies are needed to investigate this relationship in partial automation and at other automation levels.
Project description: There has been much debate recently over the functional role played by the planum temporale (PT) within the context of the dorsal auditory processing stream. Some studies indicate that regions in the PT support spatial hearing and other auditory functions, whereas others demonstrate sensory-motor response properties. This multifunctionality has led to the claim that the PT is performing a common computational pattern matching operation, then routing the signals (spatial, object, sensory-motor) into an appropriate processing stream. An alternative possibility is that the PT is functionally subdivided, with separate regions supporting various functions. We assess this possibility using a within-subject fMRI block design. DTI data were also collected to examine connectivity. There were four auditory conditions: stationary noise, moving noise, listening to pseudowords, and shadowing pseudowords (covert repetition). Contrasting the shadow and listen conditions should activate regions specific to sensory-motor processes, while contrasting the stationary and moving noise conditions should activate regions involved in spatial hearing. Subjects (N = 16) showed greater activation for shadowing in left posterior PT, area Spt, when the shadow and listen conditions were contrasted. The motion vs. stationary noise contrast revealed greater activation in a more medial and anterior portion of left PT. Seeds from these two contrasts were then used to guide the DTI analysis in an examination of connectivity via streamline tractography, which revealed different patterns of connectivity. Findings support a heterogeneous model of the PT, with functionally distinct regions for sensory-motor integration and processes involved in auditory spatial perception.
Project description: To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. To utilize the statistical characteristics of the online driving schedule effectively, a recursive algorithm for the transition probability matrix (TPM) of the power request is derived. RL is applied to calculate and update the control policy at regular intervals, adapting to varying driving conditions. A forward-facing powertrain model is built in detail, including the engine-generator model, battery model, and vehicle dynamics model. The robustness and adaptability of the real-time energy management strategy are validated in simulation through comparison with a stationary control strategy based on an initial TPM generated from a long naturalistic driving cycle. Results indicate that the proposed method achieves better fuel economy than the stationary one and is more effective for real-time control.
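The recursive TPM update described above can be sketched as follows. The number of discretized power-request levels and the toy online trace are illustrative assumptions; the paper's exact discretization and update schedule are not specified here:

```python
import numpy as np

def update_tpm(counts, prev_state, new_state):
    """Recursively update the transition-count matrix and return the
    row-normalized transition probability matrix (TPM). States are
    indices of discretized power-request levels (illustrative)."""
    counts[prev_state, new_state] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows never visited keep a uniform distribution as a placeholder.
    tpm = np.divide(counts, row_sums,
                    out=np.full_like(counts, 1.0 / counts.shape[1]),
                    where=row_sums > 0)
    return tpm

n_levels = 4                              # discretized power-request levels (assumed)
counts = np.zeros((n_levels, n_levels))
power_states = [0, 1, 1, 2, 3, 2, 1, 0]   # toy online power-request trace
for prev, new in zip(power_states, power_states[1:]):
    tpm = update_tpm(counts, prev, new)
print(tpm)
```

Keeping raw counts rather than probabilities is what makes the update recursive and cheap: each new observation touches one cell, and normalization recovers the current TPM for the RL policy update.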
Project description: The evaluation of car drivers' stress condition is gaining interest as research on Autonomous Driving Systems (ADS) progresses. The analysis of the stress response can be used to assess the acceptability of ADS and to compare the driving styles of different autonomous driving algorithms. In this contribution, we present a system based on the analysis of the Electrodermal Activity Skin Potential Response (SPR) signal, aimed at revealing the driver's stress induced by different driving situations. We reduce motion artifacts by processing two SPR signals, recorded from the hands of the subjects, and outputting a single clean SPR signal. Statistical features of signal blocks are fed to a supervised learning algorithm, which classifies between stress and normal driving (non-stress) conditions. We present the results of an experiment using a professional driving simulator, in which a group of people underwent manual and autonomous driving on a highway, facing unexpected events meant to generate stress. The results show that the subjects generally appear more stressed during manual driving, indicating that autonomous driving may be well received by the public. During autonomous driving, however, significant peaks of the SPR signal are evident during unexpected events. The electrocardiogram signal shows that the average heart rate is generally higher in the manual case than in the autonomous case. This further supports our findings, even if it may be due, in part, to the physical activity involved in manual driving.
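The block-feature pipeline described above can be sketched as follows. The feature set (mean, standard deviation, maximum per block) is illustrative, and a nearest-centroid rule stands in for the paper's unspecified supervised learner; the toy data are simulated, not SPR recordings:

```python
import numpy as np

def block_features(signal, block_len):
    """Split a (cleaned) SPR signal into fixed-length blocks and compute
    simple per-block statistics; the feature choice is illustrative."""
    n_blocks = len(signal) // block_len
    blocks = np.asarray(signal[: n_blocks * block_len]).reshape(n_blocks, block_len)
    return np.column_stack([blocks.mean(axis=1), blocks.std(axis=1), blocks.max(axis=1)])

def nearest_centroid_predict(train_X, train_y, X):
    """Minimal stand-in for a supervised classifier: assign each feature
    vector to the class of its nearest centroid."""
    labels = sorted(np.unique(train_y).tolist())
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in labels}
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
    return np.array(labels)[d.argmin(axis=1)]

# Toy data: "stress" segments contain occasional large SPR excursions.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.2, size=400)
stress = rng.normal(0.0, 0.2, size=400) + np.where(rng.random(400) < 0.05, 3.0, 0.0)
X = np.vstack([block_features(normal, 100), block_features(stress, 100)])
y = np.array([0] * 4 + [1] * 4)   # 0 = normal, 1 = stress
pred = nearest_centroid_predict(X, y, X)
print(pred)
```

The design point this illustrates is that classification operates on block-level statistics, not raw samples, which makes the classifier robust to sample-level noise left after artifact removal.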
Project description: The aim of this study is to demonstrate the potential of sensory substitution/augmentation (SS/A) techniques for driver assistance systems in a simulated driving environment. Using a group-comparison design, we examined lane-keeping skill acquisition in a driving simulator that can convey the vehicle's lateral position by changing the binaural balance of auditory white noise delivered to the driver. Lane-keeping accuracy was significantly degraded when the lower visual scene (the proximal part of the road) was occluded, suggesting that it conveys critical visual information necessary for lane keeping. After 40 minutes of training with auditory cueing of vehicle lateral position, lane-keeping accuracy returned to the baseline (normal driving) level, indicating that auditory cueing can compensate for the loss of visual information. Taken together, our data suggest that auditory cueing of vehicle lateral position is sufficient for lane-keeping skill acquisition and that SS/A techniques can potentially be used in the development of driver assistance systems, particularly for situations where immediate, time-sensitive actions are required in response to rapidly changing sensory information. Although this study is the first to apply SS/A techniques to driver assistance, further studies are required to establish the generalizability of the findings to real-world settings.
Project description: Motorsports have become an excellent playground for testing the limits of technology, machines, and human drivers. This paper presents a study that used a professional racing simulator to compare the behavior of human and autonomous drivers in an aggressive driving scenario. A professional simulator offers a close-to-real emulation of the underlying physics and vehicle dynamics, as well as a wealth of clean telemetry data. In the first part of the study, the participants' task was to achieve the fastest lap while keeping the car on the track. We grouped the resulting laps according to performance (lap time), defining driving behaviors at various performance levels. An extensive analysis of vehicle control features obtained from the telemetry data was performed with the goal of predicting driving performance and informing an autonomous system. In the second part of the study, a state-of-the-art reinforcement learning (RL) algorithm was trained to control the brake, throttle, and steering of the simulated racing car. We investigated how the features used to predict driving performance in humans can be used in autonomous driving. Our study investigates human driving patterns with the goal of finding traces that could improve the performance of RL approaches. Conversely, these insights can also be applied to training (professional) drivers to improve their racing line.
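The lap-time grouping step described above can be sketched as a quantile split. The use of terciles (fast / medium / slow) and the toy lap times are illustrative assumptions; the study does not specify its exact grouping scheme here:

```python
import numpy as np

def group_laps_by_performance(lap_times, n_groups=3):
    """Assign each lap to a performance level by lap-time quantiles;
    group 0 contains the fastest laps (grouping scheme is illustrative)."""
    edges = np.quantile(lap_times, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(lap_times, edges)

lap_times = np.array([92.1, 95.4, 90.8, 101.2, 93.3, 99.0, 91.5, 97.7, 104.5])  # seconds, toy data
groups = group_laps_by_performance(lap_times)
print(groups)
```

Once laps are labeled this way, per-group telemetry features (braking points, throttle traces, steering smoothness) can be compared across performance levels and fed to a performance predictor.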
Project description: Driving cessation can exacerbate physical, cognitive, and mental health challenges for some older adults due to loss of independence and social isolation. Fully autonomous vehicles may offer an alternative transport solution, increasing social contact and encouraging independence. However, there are gaps in understanding the impact of older adults' passive role on safe human-vehicle interaction and on their well-being. Thirty-seven older adults (mean age ± SD = 68.35 ± 8.49 years) participated in an experiment in which they experienced fully autonomous journeys, each containing a distinct stop (an unexpected versus an expected event). The autonomous behavior of the vehicle was achieved using the Wizard of Oz approach. Subjective ratings of trust and reliability, and driver state monitoring measures including visual attention strategies (fixation duration and count) and physiological arousal (skin conductance and heart rate), were captured during the journeys. Results revealed that subjective trust and reliability ratings were high after journeys for both types of events. During an unexpected stop, overt visual attention was allocated toward the event, whereas during an expected stop, visual attention was directed toward the human-machine interface (HMI) and distributed across the central and peripheral driving environment. Elevated skin conductance level, reflecting increased arousal, persisted only after the unexpected event. These results suggest that safety-critical events occurring during passive fully automated driving may narrow visual attention and elevate arousal. To improve the in-vehicle user experience for older adults, a driver state monitoring system could examine such psychophysiological indices to evaluate functional state and well-being. This information could then be used to make informed decisions on vehicle behavior and to offer reassurance when arousal is elevated during unexpected events.
Project description: Coordinated attention to information from multiple senses is fundamental to our ability to respond to salient environmental events, yet little is known about brain network mechanisms that guide integration of information from multiple senses. Here we investigate dynamic causal mechanisms underlying multisensory auditory-visual attention, focusing on a network of right-hemisphere frontal-cingulate-parietal regions implicated in a wide range of tasks involving attention and cognitive control. Participants performed three 'oddball' attention tasks involving auditory, visual and multisensory auditory-visual stimuli during fMRI scanning. We found that the right anterior insula (rAI) demonstrated the most significant causal influences on all other frontal-cingulate-parietal regions, serving as a major causal control hub during multisensory attention. Crucially, we then tested two competing models of the role of the rAI in multisensory attention: an 'integrated' signaling model in which the rAI generates a common multisensory control signal associated with simultaneous attention to auditory and visual oddball stimuli versus a 'segregated' signaling model in which the rAI generates two segregated and independent signals in each sensory modality. We found strong support for the integrated, rather than the segregated, signaling model. Furthermore, the strength of the integrated control signal from the rAI was most pronounced on the dorsal anterior cingulate and posterior parietal cortices, two key nodes of the saliency and central executive networks respectively. These results were preserved with the addition of a superior temporal sulcus region involved in multisensory processing. Our study provides new insights into the dynamic causal mechanisms by which the AI facilitates multisensory attention.
Project description: BACKGROUND: Anxious hypervigilance is marked by sensitized sensory-perceptual processes and attentional biases to potential danger cues in the environment. How this is realized at the neurocomputational level is unknown but could clarify the brain mechanisms disrupted in psychiatric conditions such as posttraumatic stress disorder. Predictive coding, instantiated by dynamic causal models, provides a promising framework to ground these state-related changes in the dynamic interactions of reciprocally connected brain areas. METHODS: Anxiety states were elicited in healthy participants (n = 19) by exposure to the threat of unpredictable, aversive shocks while undergoing magnetoencephalography. An auditory oddball sequence was presented to measure cortical responses related to deviance detection, and dynamic causal models quantified deviance-related changes in effective connectivity. Participants were also administered alprazolam (double-blinded, placebo-controlled crossover) to determine whether the cortical effects of threat-induced anxiety are reversed by acute anxiolytic treatment. RESULTS: Deviant tones elicited increased auditory cortical responses under threat. Bayesian analyses revealed that hypervigilant responding was best explained by increased postsynaptic gain in primary auditory cortex activity as well as modulation of feedforward, but not feedback, coupling within a temporofrontal cortical network. Increasing inhibitory gamma-aminobutyric acidergic action with alprazolam reduced anxiety and restored feedback modulation within the network. CONCLUSIONS: Threat-induced anxiety produced unbalanced feedforward signaling in response to deviations in predictable sensory input. Amplifying ascending sensory prediction error signals may optimize stimulus detection in the face of impending threats.
At the same time, diminished descending sensory prediction signals impede perceptual learning and may, therefore, underpin some of the deleterious effects of anxiety on higher-order cognition.
Project description: In social interactions, people have to pay attention to both the 'what' and the 'who'. In particular, expressive changes heard in speech signals have to be integrated with speaker identity, differentiating, e.g., self- and other-produced signals. While previous research has shown that processing of self-related visual information is facilitated compared to non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger's voice are highly prioritized in auditory processing compared to identical changes on the self-voice. Other-voice deviants generate earlier MMN onset responses and involve stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.
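The MMN measure used above is conventionally computed as a deviant-minus-standard difference wave, with onset or peak latency read off that wave. The sketch below uses simulated epochs (trial counts, sampling rate, and the injected negativity are all illustrative assumptions, not the study's data):

```python
import numpy as np

def mmn_difference_wave(standard_epochs, deviant_epochs):
    """MMN difference wave: average deviant ERP minus average standard ERP.
    Epochs are arrays of shape (n_trials, n_samples)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def peak_latency(diff_wave, sfreq, tmin=0.0):
    """Latency (s) of the most negative point of the difference wave."""
    return tmin + diff_wave.argmin() / sfreq

# Toy epochs at an assumed 250 Hz: deviants carry an extra negativity
# centered on sample 50 (~200 ms), mimicking an MMN.
rng = np.random.default_rng(1)
t = np.arange(100)
standard = rng.normal(0, 0.1, size=(60, 100))
deviant = rng.normal(0, 0.1, size=(20, 100)) - 2.0 * np.exp(-((t - 50) ** 2) / 30.0)
diff = mmn_difference_wave(standard, deviant)
print(round(peak_latency(diff, sfreq=250.0), 3))
```

An earlier MMN onset for other-voice deviants, as reported above, would show up as this negativity beginning at an earlier latency in the other-voice difference wave than in the self-voice one.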