A novel framework for discharge uncertainty quantification applied to 500 UK gauging stations.
ABSTRACT: A generalized framework for discharge uncertainty estimation is presented. It allows estimation of place-specific discharge uncertainties for many catchments. Local conditions dominate in determining discharge uncertainty magnitudes.
Project description: Clear communication with patients upon emergency department (ED) discharge is important for patient safety during the transition to outpatient care. Over one-third of patients are discharged from the ED with diagnostic uncertainty, yet there is no established approach for effective discharge communication in this scenario. From 2017 to 2019, the authors developed the Uncertainty Communication Checklist for use in simulation-based training and assessment of emergency physician communication skills when discharging patients with diagnostic uncertainty. This development process followed the established 12-step Checklist Development Checklist framework and integrated patient feedback into 6 of the 12 steps. Patient input was included because it has the potential to improve the patient-centeredness of checklists used to assess clinical performance. Focus group patient participants were drawn from 2 clinical sites: Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, and Northwestern University Hospital, Chicago, Illinois. The authors developed a preliminary instrument based on existing checklists, clinical experience, literature review, and input from an expert panel comprising health care professionals and patient advocates. They then refined the instrument based on feedback from 2 waves of patient focus groups, resulting in a final 21-item checklist. The checklist items assess whether uncertainty was addressed in each step of the discharge communication, covering the following major categories: introduction, test results/ED summary, no/uncertain diagnosis, next steps/follow-up, home care, reasons to return, and general communication skills. Patient input influenced both which items were included and the wording of items in the final checklist.
This patient-centered, systematic approach to checklist development is built upon the rigor of the Checklist Development Checklist and provides an illustration of how to integrate patient feedback into the design of assessment tools when appropriate.
Project description: Motivation: Dynamical models describing intracellular phenomena are increasing in size and complexity as more information is obtained from experiments. These models are often over-parameterized with respect to the quantitative data used for parameter estimation, resulting in uncertainty in the individual parameter estimates as well as in the predictions made from the model. Here we combine Bayesian analysis with global sensitivity analysis (GSA) to give better informed predictions, to point out weaker parts of the model that are important targets for further experiments, and to give guidance on parameters that are essential in distinguishing different qualitative output behaviours. Results: We used approximate Bayesian computation (ABC) to estimate the model parameters from experimental data and to quantify the uncertainty in this estimation (inverse uncertainty quantification), resulting in a posterior distribution for the parameters. This parameter uncertainty was then propagated to a corresponding uncertainty in the predictions (forward uncertainty propagation), and a GSA was performed on the predictions using the posterior distribution as the set of possible values for the parameters. This methodology was applied to a relatively large model relevant for synaptic plasticity, using experimental data from several sources. We could thereby identify the parameters that by themselves contribute most to the uncertainty of the prediction, as well as parameters that are important for separating qualitatively different predictions. This approach is useful both for experimental design and for model building. Availability and implementation: Source code is freely available at https://github.com/alexjau/uqsa. Supplementary information: Supplementary data are available at Bioinformatics online.
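The ABC-plus-propagation pipeline described above can be sketched in miniature. The one-parameter decay model, prior range, tolerance, and data below are invented toy stand-ins, not the paper's synaptic-plasticity model or the uqsa implementation:

```python
# Toy rejection-ABC followed by forward uncertainty propagation.
# Everything here (model, prior, tolerance) is a hypothetical stand-in.
import random, statistics, math

random.seed(0)

def simulate(k, times):
    """Toy model: y(t) = exp(-k * t)."""
    return [math.exp(-k * t) for t in times]

times = [0.5, 1.0, 2.0]
true_k = 0.8
data = simulate(true_k, times)        # pretend these are the observations

def distance(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)))

# Rejection ABC: draw k from a uniform prior, keep draws whose simulated
# output lies within tolerance eps of the data -> posterior sample.
eps = 0.05
posterior = []
for _ in range(20000):
    k = random.uniform(0.0, 3.0)
    if distance(simulate(k, times), data) < eps:
        posterior.append(k)

# Forward uncertainty propagation: push every posterior draw through a new
# prediction (model output at an unobserved time point) and summarize.
predictions = [math.exp(-k * 4.0) for k in posterior]
print(len(posterior), statistics.mean(posterior), statistics.stdev(predictions))
```

A GSA on `predictions` would then apportion this prediction spread among the parameters; with a single parameter the attribution is trivial, which is why the paper's multi-parameter setting is where the method earns its keep.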
Project description: Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian learner perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how each affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeated the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
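The distinction between risk and estimation uncertainty can be made concrete with a Beta-Bernoulli learner for a single bandit arm. This is my own minimal sketch, not the authors' model; the uniform prior and variable names are assumptions:

```python
# For a Bernoulli arm tracked with a Beta posterior: "risk" is the
# irreducible outcome variance at the estimated payoff probability, while
# "estimation uncertainty" is the posterior variance of that probability.
def beta_stats(successes, failures):
    a, b = 1 + successes, 1 + failures                     # Beta(1,1) prior
    mean = a / (a + b)                                     # estimated payoff prob.
    estimation_unc = a * b / ((a + b) ** 2 * (a + b + 1))  # posterior variance
    risk = mean * (1 - mean)                               # outcome variance
    return mean, estimation_unc, risk

# After 2 observations vs after 200: estimation uncertainty shrinks with
# evidence, while risk does not (it depends only on how close the payoff
# probability is to 0 or 1). Unexpected uncertainty would enter as sudden
# jumps in the true probability, which this stationary learner cannot see.
early = beta_stats(1, 1)
late = beta_stats(100, 100)
print(early, late)
```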
Project description: The International Vocabulary of Metrology - Basic and General Concepts and Associated Terms (VIM3, 2.26 measurement uncertainty, JCGM 200:2012) defines uncertainty of measurement as a non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information obtained from performing the measurement. The Clinical and Laboratory Standards Institute (CLSI) has published a very detailed guideline describing the sources contributing to measurement uncertainty as well as different approaches to its calculation (Expression of Measurement Uncertainty in Laboratory Medicine; Approved Guideline, CLSI C51-A, 2012). Many other national and international recommendations and original scientific papers on measurement uncertainty estimation have been published. In Croatia, the estimation of measurement uncertainty is obligatory for accredited medical laboratories. However, since national recommendations are currently not available, each of these laboratories uses a different approach to measurement uncertainty estimation. The main purpose of this document is to describe the minimal requirements for measurement uncertainty estimation. In this way, it will contribute to the harmonization of measurement uncertainty estimation, evaluation, and reporting across laboratories in Croatia. This recommendation is issued by the joint Working Group for Uncertainty of Measurement of the Croatian Society for Medical Biochemistry and Laboratory Medicine and the Croatian Chamber of Medical Biochemists. The document is based mainly on the recommendations of the Australasian Association of Clinical Biochemists (AACB) Uncertainty of Measurement Working Group and is intended for all medical biochemistry laboratories in Croatia.
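As a rough illustration of the kind of top-down estimate such guidelines describe, the sketch below combines long-term within-laboratory imprecision with the uncertainty of a bias estimate and expands with a coverage factor of 2; this follows the common GUM-style recipe in general shape only, and all numbers are invented:

```python
# Illustrative top-down measurement-uncertainty calculation (invented
# numbers, not from any specific guideline worked example).
import math

u_rw = 1.8      # within-lab imprecision (SD of long-term IQC results)
u_bias = 0.9    # standard uncertainty of the bias estimate, same unit

# Combined standard uncertainty: quadrature sum of the two components.
u_combined = math.sqrt(u_rw ** 2 + u_bias ** 2)

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage for a
# result assumed to be approximately normally distributed).
k = 2
U_expanded = k * u_combined

print(round(u_combined, 2), round(U_expanded, 2))
```

A result would then be reported as value ± U_expanded in the measurand's unit, which is exactly the dispersion statement the VIM3 definition calls for.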
Project description: Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled us to examine each form of uncertainty separately. We found representations of unexpected uncertainty in multiple cortical areas, as well as in the noradrenergic brainstem nucleus locus coeruleus. Other distinct cortical regions were found to encode risk, estimation uncertainty, and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning.
Project description: Empirically quantifying tidally influenced river discharge is typically laborious, expensive, and subject to more uncertainty than estimation of upstream river discharge. The tidal stage-discharge relationship is neither monotonic nor necessarily single-valued, so conventional stage-based river rating curves fail in the tidal zone. Herein, we propose an expanded rating curve method that incorporates stage rate-of-change to estimate river discharge under tidal influences across progressive, mixed, and standing waves. This simple and inexpensive method requires (1) stage from a pressure transducer, (2) flow direction from a tilt current meter, and (3) a series of ADP surveys at different flow rates for model calibration. The method was validated using excerpts from 12 tidal USGS gauging stations during baseflow conditions; USGS gauging stations model discharge using a different, more complex and expensive method. Comparison of the new and previous models yielded good R2 correlations (min 0.62, mean 0.87 with S.D. 0.10, max 0.97). The method for modeling tidally influenced discharge during baseflow conditions was then applied de novo to eight intertidal stations in the Mission and Aransas Rivers, Texas, USA. In these same rivers, the model was further expanded to identify and estimate tidally influenced stormflow discharges. The Mission and Aransas examples illustrate the potential scientific and management utility of the applied tidal rating curve method for isolating transient tidal influences and quantifying baseflow and storm discharges to sensitive coastal waters.
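A minimal sketch of the expanded-rating-curve idea, assuming a linear form Q = c0 + c1·h + c2·dh/dt fitted by ordinary least squares to calibration surveys; the paper's actual functional form and calibration procedure may differ, and all data below are synthetic:

```python
# Fit a hypothetical expanded rating curve Q = c0 + c1*h + c2*dh/dt
# to synthetic ADP calibration surveys via the normal equations.

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Synthetic surveys: stage h (m), stage rate dh/dt (m/h), discharge Q (m^3/s),
# generated from Q = 2 + 5*h - 30*dh/dt (rising tide suppresses discharge).
surveys = [(1.0, 0.00, 7.0), (1.2, 0.05, 6.5), (0.9, -0.04, 7.7),
           (1.5, 0.10, 6.5), (1.1, -0.02, 8.1), (1.3, 0.03, 7.6)]

# Normal equations X^T X c = X^T Q with design rows [1, h, dh/dt].
rows = [(1.0, h, dhdt) for h, dhdt, _ in surveys]
q = [Q for _, _, Q in surveys]
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Xtq = [sum(r[i] * qi for r, qi in zip(rows, q)) for i in range(3)]
c0, c1, c2 = solve3(XtX, Xtq)
print(round(c0, 2), round(c1, 2), round(c2, 2))
```

The dh/dt term is what lets a single stage value map to different discharges on the flood versus the ebb, which is exactly where a conventional single-valued rating curve breaks down; the tilt-meter flow direction would resolve the sign in the field.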
Project description: BACKGROUND: Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, arising from measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. METHODS: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov chain Monte Carlo (MCMC) analyses. RESULTS: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets and applied to a real data set on plant traits from rainforest plant species in Northern Australia. Analyses were performed using the free and open-source software OpenBUGS and JAGS. CONCLUSIONS: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework.
We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
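The principle of propagating tree uncertainty, rather than conditioning on one consensus tree, can be illustrated outside BUGS as well. Below is a toy Python sketch of my own construction, not the paper's models: a no-intercept GLS regression repeated over a tiny "posterior sample" of trees, each represented by its Brownian-motion covariance matrix, with the slope pooled across trees:

```python
# Propagate phylogenetic uncertainty by model-averaging a GLS slope over
# a posterior sample of trees (toy data; a full Bayesian treatment would
# instead place the tree set as a prior inside the MCMC, as in the paper).
import statistics

def solve(A, b):
    """Solve a small linear system by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [u - f * v for u, v in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def gls_slope(V, x, y):
    """GLS slope estimate (no intercept) under residual covariance V."""
    Vinv_y = solve(V, y)
    Vinv_x = solve(V, x)
    return sum(xi * zi for xi, zi in zip(x, Vinv_y)) / \
           sum(xi * zi for xi, zi in zip(x, Vinv_x))

# Two posterior trees for three species as Brownian-motion covariance
# matrices (off-diagonal entries = shared branch length; topologies differ:
# species 1+2 are sisters in the first tree, species 2+3 in the second).
trees = [
    [[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 2.0]],
    [[2.0, 0.0, 0.0], [0.0, 2.0, 1.0], [0.0, 1.0, 2.0]],
]
x = [1.0, 2.0, 3.0]   # predictor trait per species (invented)
y = [1.1, 2.3, 2.9]   # response trait per species (invented)

slopes = [gls_slope(V, x, y) for V in trees]
pooled = statistics.mean(slopes)
print([round(s, 3) for s in slopes], round(pooled, 3))
```

The spread of `slopes` across trees is the between-tree component of uncertainty that a single consensus tree silently discards.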
Project description: Uncertainty is inherent to our knowledge about the state of the world yet often not communicated alongside scientific facts and numbers. In the "post-truth" era where facts are increasingly contested, a common assumption is that communicating uncertainty will reduce public trust. However, a lack of systematic research makes it difficult to evaluate such claims. We conducted five experiments, including one preregistered replication with a national sample and one field experiment on the BBC News website (total n = 5,780), to examine whether communicating epistemic uncertainty about facts across different topics (e.g., global warming, immigration), formats (verbal vs. numeric), and magnitudes (high vs. low) influences public trust. Results show that whereas people do perceive greater uncertainty when it is communicated, we observed only a small decrease in trust in numbers and trustworthiness of the source, and mostly for verbal uncertainty communication. These results could help reassure all communicators of facts and science that they can be more open and transparent about the limits of human knowledge.
Project description: Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and our execution of motor commands. We can reduce such errors both by improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking whether uncertainty in estimates affects how subjects correct for errors that arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they fully corrected for the induced error only on trials with low uncertainty about the center of mass, with correction only partial on trials involving more uncertainty. Analysis of subjects' scores revealed that participants corrected for errors just enough to avoid a significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision-making and control processes are fully integrated.
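The effort-penalized loss idea can be sketched numerically. The Gaussian reward shape, the way uncertainty is folded in, and all parameter values below are my own toy assumptions, not the paper's fitted model; the point is only that decision uncertainty flattens the expected reward, so the optimal correction becomes partial:

```python
# Toy expected loss: -expected reward + quadratic effort cost of a
# correction of size c, where decision uncertainty sigma widens (and
# flattens) the reward landscape around the believed target.
import math

def expected_loss(c, p, sigma, tol=0.5, effort_w=0.05):
    """Loss for correcting c of a perturbation p under uncertainty sigma."""
    a = tol ** 2 + sigma ** 2          # reward width inflated by uncertainty
    reward = (tol / math.sqrt(a)) * math.exp(-((p - c) ** 2) / (2 * a))
    return -reward + effort_w * c ** 2

def best_correction(p, sigma):
    """Grid-search the correction size that minimizes expected loss."""
    grid = [i / 100 for i in range(0, int(p * 100) + 1)]
    return min(grid, key=lambda c: expected_loss(c, p, sigma))

p = 2.0                                      # induced visual error
c_low_unc = best_correction(p, sigma=0.1)    # confident about the target
c_high_unc = best_correction(p, sigma=1.0)   # uncertain about the target
print(c_low_unc, c_high_unc)
```

With low uncertainty the optimum is near full correction; with high uncertainty the flattened reward no longer justifies the effort, so the optimum stops well short of p, mirroring the partial corrections reported above.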
Project description: Healthy adults flexibly adapt their learning strategies to ongoing changes in uncertainty, a key feature of adaptive behaviour. However, the developmental trajectory of this ability is as yet unknown, as developmental studies have not incorporated trial-to-trial variation in uncertainty in their analyses or models. To address this issue, we compared adolescents' and adults' trial-to-trial dynamics of uncertainty, learning rate, and exploration in two tasks that assess learning in noisy but otherwise stable environments. In an estimation task, which provides direct indices of trial-specific learning rate, both age groups reduced their learning rate over time as self-reported uncertainty decreased. Accordingly, the estimation data in both groups were better explained by a Bayesian model with a dynamic learning rate (Kalman filter) than by conventional reinforcement-learning models. Furthermore, adolescents' learning rates asymptoted at a higher level, reflecting an over-weighting of the most recent outcome, and the estimated Kalman-filter parameters suggested that this was due to an overestimation of environmental volatility. In a choice task, both age groups became more likely to choose the higher-valued option over time, but this increase in choice accuracy was smaller in the adolescents. In contrast to the estimation task, we found no evidence for a Bayesian expectation-updating process in the choice task, suggesting that estimation and choice tasks engage different learning processes. However, our modeling results for the choice task suggested that both age groups reduced their degree of exploration over time, and that the adolescents explored more overall than the adults. Finally, age-related differences in exploration parameters from fits to the choice data were mediated by participants' volatility parameter from fits to the estimation data.
Together, these results suggest that adolescents overestimate the rate of environmental change, resulting in elevated learning rates and increased exploration, which may help explain developmental changes in learning and decision-making.
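The dynamic-learning-rate account can be illustrated with a scalar Kalman filter, where the Kalman gain plays the role of the trial-specific learning rate. The parameter values are invented; the qualitative point is that a larger assumed volatility q makes the gain asymptote at a higher level, mimicking the adolescent pattern described above:

```python
# Scalar Kalman filter for tracking a stable reward source: the gain
# (learning rate) decays over trials but asymptotes higher when the
# learner assumes more environmental volatility q.
def kalman_gains(n_trials, q, r=1.0, p0=10.0):
    """Return per-trial Kalman gains for volatility q and outcome noise r."""
    p, gains = p0, []
    for _ in range(n_trials):
        p = p + q                  # prior variance grows by assumed volatility
        k = p / (p + r)            # Kalman gain = learning rate this trial
        p = (1 - k) * p            # posterior variance after seeing the outcome
        gains.append(k)
    return gains

low_vol = kalman_gains(100, q=0.01)   # near-correct belief: stable world
high_vol = kalman_gains(100, q=0.5)   # overestimated volatility
print(round(low_vol[-1], 3), round(high_vol[-1], 3))
```

Because the variance recursion does not depend on the outcomes themselves, the gain sequence is fully determined by q and r: both learners start with a high learning rate that decays as uncertainty resolves, but the high-volatility learner levels off at a persistently elevated rate, over-weighting the most recent outcome indefinitely.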