Project description: Background: Recently, the dental age estimation method developed by Cameriere has been widely recognized and accepted. Although machine learning (ML) methods can improve the accuracy of dental age estimation, no ML research exists on the Cameriere method, making this research innovative and meaningful. Aim: The purpose of this research was to use the seven lower left permanent teeth and three models [random forest (RF), support vector machine (SVM), and linear regression (LR)] based on the Cameriere method to predict children's dental age, and to compare the results with the traditional Cameriere age estimation. Subjects and methods: This retrospective study collected and analyzed orthopantomograms of 748 children (356 females and 392 males) aged 5-13 years. Data were randomly divided into training and test datasets in an 80:20 proportion for the ML algorithms. The procedure, starting with randomly creating new training and test datasets, was repeated 20 times. The seven developing permanent teeth on the left mandible (excluding wisdom teeth) were scored using the Cameriere method. The traditional Cameriere formula and the three models (RF, SVM, and LR) were then used to estimate dental age. Prediction accuracy was measured by five indicators: the coefficient of determination (R²), mean error (ME), root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE). Results: The ML models were more accurate than the traditional Cameriere formula. The ME, MAE, MSE, and RMSE values of the SVM model (0.004, 0.489, 0.392, and 0.625, respectively) and the RF model (-0.004, 0.495, 0.389, and 0.623, respectively) were the lowest, indicating the highest accuracy.
In contrast, the ME, MAE, MSE, and RMSE of the European Cameriere formula were 0.592, 0.846, 0.755, and 0.869, respectively, and those of the Chinese Cameriere formula were 0.748, 0.812, 0.890, and 0.943, respectively. Conclusions: Compared with the Cameriere formula, ML methods based on Cameriere's maturation stages were more accurate in estimating dental age. These results support the use of ML algorithms instead of the traditional Cameriere formula.
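The four error indicators reported above are standard and easy to compute; a minimal plain-Python sketch (the ages below are made up for illustration, not study data):

```python
import math

def error_metrics(y_true, y_pred):
    """Compute ME, MAE, MSE, and RMSE between true and predicted ages."""
    diffs = [p - t for t, p in zip(y_true, y_pred)]
    n = len(diffs)
    me = sum(diffs) / n                    # mean error (signed bias)
    mae = sum(abs(d) for d in diffs) / n   # mean absolute error
    mse = sum(d * d for d in diffs) / n    # mean squared error
    rmse = math.sqrt(mse)                  # root mean squared error
    return me, mae, mse, rmse

# Hypothetical chronological vs. estimated ages (years)
true_ages = [6.0, 8.5, 10.0, 12.0]
pred_ages = [6.4, 8.0, 10.5, 11.8]
me, mae, mse, rmse = error_metrics(true_ages, pred_ages)
```

A near-zero ME with a low MAE/RMSE, as reported for the SVM and RF models, indicates estimates that are both unbiased and tightly clustered around the true age.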
Project description: Objective: The study sought to determine whether machine learning can predict the initial inpatient total daily dose (TDD) of insulin from electronic health records more accurately than existing guideline-based dosing recommendations. Materials and methods: Using electronic health records from a tertiary academic center between 2008 and 2020 for 16,848 inpatients receiving subcutaneous insulin who achieved target blood glucose control of 100-180 mg/dL on a calendar day, we trained an ensemble machine learning algorithm consisting of regularized regression, random forest, and gradient-boosted tree models for two-stage TDD prediction. We evaluated the ability to predict which patients require more than 6 units TDD and their point-value TDDs to achieve target glucose control. Results: The method achieves an area under the receiver-operating characteristic curve of 0.85 (95% confidence interval [CI], 0.84-0.87) and an area under the precision-recall curve of 0.65 (95% CI, 0.64-0.67) for classifying patients who require more than 6 units TDD. For these patients, the mean absolute percent error in dose prediction based on standard clinical calculators using patient weight is in the range of 136%-329%; a regression model based on weight improves this to 60% (95% CI, 57%-63%), and the full ensemble model further improves it to 51% (95% CI, 48%-54%). Discussion: Owing to the narrow therapeutic window and wide individual variability, insulin dosing requires adaptive and predictive approaches that can be supported through data-driven analytic tools. Conclusions: Machine learning approaches based on readily available electronic medical records can discriminate which inpatients will require more than 6 units TDD and estimate individual doses more accurately than standard guidelines and practices.
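The two-stage structure (first classify whether the patient needs more than 6 units, then regress a point-value dose only for that group) can be sketched as follows. The classifier, regressor, and the weight-based rule below are hypothetical stand-ins for illustration, not the paper's trained ensemble:

```python
def predict_tdd(features, classifier, regressor, default_low=6.0):
    """Two-stage TDD prediction: stage 1 decides whether the patient is
    likely to need more than 6 units/day; stage 2 estimates the point-value
    dose only for that high-dose group."""
    if classifier(features) <= 0.5:     # stage 1: probability of needing >6 units
        return default_low              # low-dose group: fall back to a small default
    return regressor(features)          # stage 2: point-value TDD estimate

# Hypothetical stand-ins for the trained models (illustration only)
clf = lambda f: 1.0 if f["weight_kg"] > 60 else 0.0   # toy >6-units classifier
reg = lambda f: 0.4 * f["weight_kg"]                  # toy weight-based dose rule

print(predict_tdd({"weight_kg": 80}, clf, reg))   # 32.0
print(predict_tdd({"weight_kg": 50}, clf, reg))   # 6.0
```

Splitting the problem this way lets the regression stage focus on the high-dose patients, where weight-based calculators showed the largest errors.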
Project description: Metasurfaces, combined with artificial intelligence, are motivating many contemporary research studies to revisit established fields such as direction-of-arrival (DOA) estimation. Conventional DOA estimation techniques typically require bulky beam-scanning equipment for signal acquisition or complicated reconstruction algorithms for data postprocessing, making them ineffective for in-situ detection. In this article, we propose a machine-learning-enabled metasurface for DOA estimation. For a given incident signal, a tunable metasurface is switched through a sequence of configurations, generating a series of field intensities at a single receiving probe. The measured data are then processed by a pretrained random forest model to estimate the incident angle. As an illustrative example, we experimentally demonstrate high-accuracy intelligent DOA estimation over a wide range of incident angles, achieving more than 95% accuracy with an error of less than 0.5°. The reported strategy opens a feasible route to intelligent DOA detection in full space and over a wide band, and it may inspire time-saving, equipment-simplified upgrades of traditional applications.
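The core idea, that each incident angle leaves a distinctive fingerprint in the single-probe intensity sequence as the metasurface is cycled, can be illustrated with a toy signal model and a nearest-pattern classifier standing in for the pretrained random forest. Both the cosine intensity model and the 5° candidate grid are assumptions for this sketch:

```python
import math

def simulate_intensities(angle_deg, n_states=8):
    """Toy single-probe field-intensity sequence as the tunable metasurface
    cycles through n_states configurations (illustrative signal model only,
    not the measured response of the actual device)."""
    return [math.cos(math.radians(angle_deg) + 2 * math.pi * k / n_states) ** 2
            for k in range(n_states)]

def estimate_doa(measured, candidate_angles):
    """Return the candidate angle whose simulated intensity sequence is
    closest to the measurement; a nearest-pattern stand-in for the paper's
    pretrained random forest."""
    return min(candidate_angles,
               key=lambda a: sum((m - s) ** 2
                                 for m, s in zip(measured, simulate_intensities(a))))

angles = list(range(0, 91, 5))        # 5-degree candidate grid
probe = simulate_intensities(37)      # signal arriving from 37 degrees
print(estimate_doa(probe, angles))    # 35 (nearest grid angle)
```

A learned model such as a random forest plays the same role as this lookup but generalizes from noisy training measurements instead of requiring an exact forward model.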
Project description: Using the general physical model of the Mach-Zehnder interferometer with photon loss, a fundamental physical setting, we investigate continuous-variable quantum phase estimation based on a machine learning approach and propose an efficient recursive Bayesian estimation algorithm for Gaussian-state phase estimation. With the proposed algorithm, the performance of phase estimation can be improved appreciably. For example, the physical limits on phase estimation precision (the standard quantum limit and the Heisenberg limit) can be reached more efficiently, especially when prior information is employed; the range of the estimated phase parameter can be extended from [0, π/2] to [0, 2π] compared with the conventional approach; and the influence of photon loss on estimation precision can be suppressed dramatically, saturating the lossy bound. In addition, the proposed algorithm can be extended to time-variable or multi-parameter estimation frameworks.
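The recursive Bayesian idea, multiplying a discretized phase posterior by the likelihood of each new measurement and renormalising, can be sketched with a simple binary-outcome interferometer model. The cosine likelihood, visibility value, and simulated measurement record below are all assumptions of this sketch, not the paper's Gaussian-state algorithm:

```python
import math
import random

def phase_posterior(outcomes, grid, visibility=0.9):
    """Recursive Bayesian update of a discretized phase posterior for a toy
    binary-outcome interferometer; visibility < 1 loosely mimics photon loss."""
    post = [1.0 / len(grid)] * len(grid)          # uniform prior over [0, 2*pi)
    for u, theta in outcomes:                     # u: detector outcome, theta: control phase
        for i, phi in enumerate(grid):
            p1 = 0.5 * (1 + visibility * math.cos(phi - theta))
            post[i] *= p1 if u == 1 else 1.0 - p1
        z = sum(post)
        post = [p / z for p in post]              # renormalise after each update
    return post

# Simulate measurements for a hypothetical true phase of 2.0 rad
random.seed(0)
true_phi, vis = 2.0, 0.9
outcomes = []
for _ in range(500):
    theta = random.uniform(0, 2 * math.pi)
    p1 = 0.5 * (1 + vis * math.cos(true_phi - theta))
    outcomes.append((1 if random.random() < p1 else 0, theta))

grid = [2 * math.pi * k / 256 for k in range(256)]
post = phase_posterior(outcomes, grid, vis)
best = max(range(len(grid)), key=lambda i: post[i])
print(round(grid[best], 2))                       # close to the true phase 2.0
```

Because varying the control phase theta breaks the reflection symmetry of a single cosine fringe, the posterior identifies the phase over the full [0, 2π) range rather than only [0, π/2], which mirrors the range extension claimed above.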
Project description: After an association between genetic variants and a phenotype has been established, further study goals comprise the classification of patients according to disease risk or the estimation of disease probability. To accomplish this, different statistical methods are required, and specifically machine-learning approaches may offer advantages over classical techniques. In this paper, we describe methods for the construction and evaluation of classification and probability estimation rules. We review the use of machine-learning approaches in this context and explain some of the machine-learning algorithms in detail. Finally, we illustrate the methodology through application to a genome-wide association analysis on rheumatoid arthritis.
Project description: Gene expression profiles were generated from 199 primary breast cancer patients. Samples 1-176 were used in another study, GEO Series GSE22820, and form the training dataset in this study. Samples 200-222 form a validation set. These data were used to build a machine learning classifier for estrogen receptor (ER) status. RNA was isolated from the 199 primary breast cancer patients, and a machine learning classifier was built to predict ER status using only three gene features.
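A classifier over only three gene features can be very simple. The sketch below uses a majority-of-thresholds rule with hypothetical gene names and cut-offs; the study's actual three genes and decision rule are not shown in this description:

```python
def er_status(expression, thresholds):
    """Toy ER-status call from three gene features: ER-positive when at
    least two of the three (hypothetical) genes exceed their thresholds."""
    votes = sum(expression[g] > t for g, t in thresholds.items())
    return "ER+" if votes >= 2 else "ER-"

# Hypothetical gene identifiers and expression cut-offs (illustration only)
cuts = {"gene_a": 5.0, "gene_b": 3.0, "gene_c": 7.5}
print(er_status({"gene_a": 6.1, "gene_b": 4.2, "gene_c": 2.0}, cuts))  # ER+
```

The appeal of such a small feature set is interpretability and easy transfer to targeted assays, at the cost of ignoring the rest of the expression profile.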
Project description: The decision on when it is appropriate to stop antimicrobial treatment in an individual patient is complex and under-researched. Ceasing too early can drive treatment failure, while excessive treatment risks adverse events. Both under- and over-treatment can promote the development of antimicrobial resistance (AMR). We extracted routinely collected electronic health record data from the MIMIC-IV database for 18,988 patients (22,845 unique stays) who received intravenous antibiotic treatment during an intensive care unit (ICU) admission. We developed a model that utilises a recurrent neural network autoencoder and a synthetic control-based approach to estimate a patient's ICU length of stay (LOS) and mortality outcomes for any given day under the alternative scenarios of stopping vs. continuing antibiotic treatment. Control days, where our model should reproduce the observed labels, showed minimal differences for both the stopping and continuing scenarios, indicating that the estimates are reliable (LOS mean deltas of 0.24 and 0.42 days, with root mean squared errors of 1.93 and 3.76, respectively). Meanwhile, impact days, where we assess the potential effect of the unobserved scenario, showed that stopping antibiotic therapy earlier was associated with a statistically significantly shorter LOS (mean reduction 2.71 days, p-value < 0.01). No impact on mortality was observed. In summary, we have developed a model to reliably estimate patient outcomes under the contrasting scenarios of stopping or continuing antibiotic treatment. Retrospective results are in line with previous clinical studies demonstrating that shorter antibiotic treatment durations are often non-inferior. With additional development into a clinical decision support system, this could be used to support individualised antimicrobial cessation decision-making, reduce the excessive use of antibiotics, and address the problem of AMR.
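The synthetic control idea, estimating a patient's unobserved counterfactual outcome from the observed outcomes of similar "donor" stays, can be sketched with a nearest-donor average. The Euclidean similarity over a small embedding and the toy donor data below are assumptions of this sketch; the paper learns its representation with an RNN autoencoder:

```python
import math

def synthetic_control_los(target_embedding, donors, k=3):
    """Estimate a counterfactual length of stay as the mean LOS of the k
    donor stays most similar to the target (toy Euclidean similarity)."""
    ranked = sorted(donors, key=lambda d: math.dist(d["embedding"], target_embedding))
    return sum(d["los"] for d in ranked[:k]) / k

# Hypothetical donor stays: (embedding, observed LOS in days)
donors = [
    {"embedding": (0.0, 1.0), "los": 3.0},
    {"embedding": (1.0, 0.0), "los": 5.0},
    {"embedding": (2.0, 2.0), "los": 10.0},
    {"embedding": (5.0, 5.0), "los": 20.0},
]
print(synthetic_control_los((0.0, 0.0), donors))  # 6.0
```

Comparing this counterfactual estimate under "stop today" vs. "continue" donor pools is what yields the per-day treatment-effect deltas reported above.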
Project description: Circadian rhythms influence physiology, metabolism, and molecular processes in the human body. Estimation of individual body time (circadian phase) is therefore highly relevant for individual optimization of behavior (sleep, meals, sports), diagnostic sampling, medical treatment, and for treatment of circadian rhythm disorders. Here, we provide a partial least squares regression (PLSR) machine learning approach that uses plasma-derived metabolomics data from one or more samples to estimate dim light melatonin onset (DLMO) as a proxy for the circadian phase of the human body. For this purpose, our protocol was designed to stay close to real-life conditions. We found that a metabolomics approach optimized for either women or men under entrained conditions performed equally well or better than existing approaches using more labor-intensive RNA sequencing-based methods. Although estimation of circadian body time using blood-targeted metabolomics requires further validation in shift work and other real-world conditions, it may offer a robust, feasible technique with relatively high accuracy to aid personalized optimization of behavior and clinical treatment, pending appropriate validation in patient populations.
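PLSR projects the high-dimensional metabolite matrix onto a few latent components chosen for covariance with the outcome. A single-component, NIPALS-style sketch in plain Python shows the mechanics; the toy metabolite values and DLMO times are invented for illustration, and the study's model uses more components and real metabolomics data:

```python
def pls1_fit(X, y):
    """One-component PLS regression: project centred X onto the direction of
    maximal covariance with y, then regress y on the resulting score."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]       # feature means
    ym = sum(y) / n                                             # outcome mean
    Xc = [[X[i][j] - xm[j] for j in range(p)] for i in range(n)]
    yc = [v - ym for v in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]  # covariance weights
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]   # latent scores
    q = sum(t[i] * yc[i] for i in range(n)) / sum(v * v for v in t)  # y-loading
    return xm, ym, w, q

def pls1_predict(model, x):
    xm, ym, w, q = model
    t = sum((x[j] - xm[j]) * w[j] for j in range(len(x)))
    return ym + q * t

# Toy data: rows are samples, columns are metabolite features; y is a
# hypothetical DLMO clock time in hours
X = [[0, 1], [1, 0], [2, 1], [3, 2]]
y = [21.0, 23.0, 25.0, 27.0]
model = pls1_fit(X, y)
print(round(pls1_predict(model, [1.5, 1.0]), 2))  # 24.0 (prediction at the feature mean)
```

PLSR is well suited here because metabolite features are numerous and highly collinear, which defeats ordinary least squares but is exactly what the latent-component projection handles.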
Project description: Aim: The objective of this research was to perform a pilot study developing an automatic analysis of periapical radiographs from patients with and without periodontitis to quantify the percentage of alveolar bone loss (%ABL) on the approximal surfaces of teeth using a supervised machine learning model, namely a convolutional neural network (CNN). Material and methods: A total of 1546 approximal sites from 54 participants on mandibular periapical radiographs were manually annotated (MA) to form a training set (n = 1308 sites), a validation set (n = 98 sites), and a test set (n = 140 sites). The training and validation sets were used to develop the CNN algorithm. The algorithm recognised the cemento-enamel junction, the most apical extent of the alveolar crest, the apex, and the surrounding alveolar bone. Results: For the 140 images in the test set, the CNN scored a mean of 23.1 ± 11.8 %ABL, whilst the corresponding value for MA was 27.8 ± 13.8 %ABL. The intraclass correlation coefficient (ICC) was 0.601 (P < .001), indicating moderate reliability. Subanalyses for various tooth types and bone loss patterns showed that the ICCs remained significant, with the algorithm performing with excellent reliability for %ABL on non-molar teeth (incisors, canines, premolars; ICC = 0.763). Conclusions: A CNN algorithm trained on radiographic images showed diagnostic performance with moderate to good reliability in detecting and quantifying %ABL in periapical radiographs.
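Once the three landmarks (cemento-enamel junction, alveolar crest, apex) are located, %ABL follows from their geometry. The sketch below uses one common definition, the CEJ-to-crest distance as a fraction of the CEJ-to-apex distance; the study's exact measurement geometry may differ, and the coordinates are invented:

```python
import math

def percent_abl(cej, crest, apex):
    """%ABL from three landmark coordinates on one approximal surface:
    distance CEJ -> alveolar crest relative to distance CEJ -> apex."""
    return 100.0 * math.dist(cej, crest) / math.dist(cej, apex)

# Hypothetical landmark pixel coordinates along a root axis
print(percent_abl((0.0, 0.0), (0.0, 3.0), (0.0, 12.0)))  # 25.0
```

Framing the CNN's task as landmark detection rather than direct %ABL regression keeps the output interpretable: a clinician can verify each landmark on the radiograph.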
Project description: HIV/AIDS is an ongoing global pandemic, with an estimated 39 million people infected worldwide. Early detection is anticipated to help improve outcomes and prevent further infections. Point-of-care diagnostics make HIV/AIDS diagnoses available both earlier and to a broader population. Widespread and automated HIV risk estimation can offer objective guidance, supporting providers in making an informed decision when considering patients with high HIV risk for HIV testing or pre-exposure prophylaxis (PrEP). We propose a novel machine learning method that allows providers to use the data from a patient's previous stays at the clinic to estimate their HIV risk. All features available in the clinical data are considered, making the feature set objective and independent of expert opinions. The proposed method builds on association rules derived from the data. The incidence rate ratio (IRR) is determined for each rule. Given a new patient, the mean IRR of all applicable rules is used to estimate their HIV risk. The method was tested and validated on the publicly available clinical database MIMIC-IV, which comprises around 525,000 hospital stays that included a stay in an intensive care unit or emergency department. We evaluated the method using the area under the receiver operating characteristic curve (AUC). The best performance, an AUC of 0.88, was achieved with a model consisting of 53 rules. A threshold value of 0.66 leads to a sensitivity of 98% and a specificity of 53%. The rules were grouped into drug abuse, psychological illnesses (e.g., PTSD), previously known associations (e.g., pulmonary diseases), and new associations (e.g., certain diagnostic procedures). In conclusion, we propose a novel HIV risk estimation method that builds on existing clinical data. It incorporates a wide range of features, leading to a model that is independent of expert opinions.
It supports providers in making informed decisions in the point-of-care diagnostics process by estimating a patient's HIV risk.
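The scoring scheme, an IRR per rule and the mean IRR over applicable rules per patient, can be sketched directly. The stays, features, and labels below are a toy stand-in for the MIMIC-IV data, and the simple rate ratio omits any smoothing the actual method may use:

```python
def irr(rule, stays):
    """Incidence rate ratio of a rule: HIV rate among stays matching the
    rule divided by the rate among stays that do not match."""
    match = [s for s in stays if rule(s)]
    rest = [s for s in stays if not rule(s)]
    return (sum(s["hiv"] for s in match) / len(match)) / \
           (sum(s["hiv"] for s in rest) / len(rest))

def risk_score(stay, scored_rules):
    """Mean IRR over all rules that apply to the stay (0 if none apply)."""
    vals = [v for rule, v in scored_rules if rule(stay)]
    return sum(vals) / len(vals) if vals else 0.0

# Toy stays: feature sets plus an HIV label (illustration only)
stays = [
    {"features": {"drug_abuse"}, "hiv": 1},
    {"features": {"drug_abuse"}, "hiv": 1},
    {"features": {"drug_abuse"}, "hiv": 0},
    {"features": {"ptsd"}, "hiv": 1},
    {"features": {"ptsd"}, "hiv": 0},
    {"features": set(), "hiv": 0},
    {"features": set(), "hiv": 0},
    {"features": set(), "hiv": 0},
]
r_drug = lambda s: "drug_abuse" in s["features"]
r_ptsd = lambda s: "ptsd" in s["features"]
scored = [(r_drug, irr(r_drug, stays)), (r_ptsd, irr(r_ptsd, stays))]
print(round(risk_score({"features": {"drug_abuse", "ptsd"}}, scored), 2))  # 2.42
```

Patients whose mean IRR exceeds a chosen threshold (0.66 in the reported model) would be flagged for HIV testing or a PrEP conversation, which is how the sensitivity/specificity trade-off above arises.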