Project description: Motivation: It has become routine in neuroscience studies to measure brain networks for different individuals using neuroimaging. These networks are typically expressed as adjacency matrices, with each cell containing a summary of connectivity between a pair of brain regions. There is an emerging statistical literature describing methods for the analysis of such multi-network data, in which nodes are common across networks but the edges vary. However, there has been essentially no consideration of the important problem of outlier detection. In particular, for certain subjects, the neuroimaging data are of such poor quality that the network cannot be reliably reconstructed. For such subjects, the resulting adjacency matrix may be mostly zero or exhibit a bizarre pattern not consistent with a functioning brain. These outlying networks may serve as influential points, contaminating subsequent statistical analyses. We propose a simple Outlier DetectIon for Networks (ODIN) method relying on an influence measure under a hierarchical generalized linear model for the adjacency matrices. An efficient computational algorithm is described, and ODIN is illustrated through simulations and an application to data from the UK Biobank. Results: ODIN was successful in identifying moderate to extreme outliers. Removing such outliers can significantly change inferences in downstream applications. Availability and implementation: ODIN has been implemented in both Python and R, and these implementations along with other code are publicly available at github.com/pritamdey/ODIN-python and github.com/pritamdey/ODIN-r, respectively. Supplementary information: Supplementary data are available at Bioinformatics online.
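The abstract does not spell out the influence measure itself; as a rough illustration of the general idea (scoring how strongly each subject's adjacency matrix deviates from a group-level edge-probability model), a minimal sketch might look as follows. Function and variable names are illustrative and this is a simplified stand-in, not the actual ODIN algorithm (see the repositories above for that).

```python
import numpy as np

def network_outlier_scores(adjacency, eps=1e-6):
    """Score each subject's binary adjacency matrix against a group-level
    edge-probability model (a simplified stand-in for an influence measure
    under a hierarchical GLM; NOT the actual ODIN method).

    adjacency: array of shape (n_subjects, n_nodes, n_nodes) with 0/1 edges.
    Returns one deviance-based score per subject.
    """
    # Group-level edge probabilities, estimated as the mean over subjects.
    p = adjacency.mean(axis=0).clip(eps, 1 - eps)
    # Per-subject Bernoulli deviance against the group model, summed over edges.
    dev = -2 * (adjacency * np.log(p) + (1 - adjacency) * np.log(1 - p))
    return dev.sum(axis=(1, 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nets = rng.binomial(1, 0.3, size=(50, 20, 20))
    nets[0] = 0  # a mostly-zero "failed reconstruction" network
    scores = network_outlier_scores(nets)
    # Flag subjects whose score deviates from the median by several MADs.
    mad = np.median(np.abs(scores - np.median(scores)))
    flagged = np.where(np.abs(scores - np.median(scores)) > 5 * mad)[0]
    print(flagged)
```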
Project description: Purpose: Because the manual contouring process is labor-intensive and time-consuming, segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process. Our goal was to develop a synthetic MR (sMR)-aided dual pyramid network (DPN) for rapid and accurate head and neck multi-organ segmentation in order to expedite the treatment planning process. Methods: Forty-five patients' paired CT images, MR images, and manual contours were included as our training dataset. Nineteen OARs were the target organs to be segmented. The proposed sMR-aided DPN method featured a deep attention strategy to effectively segment multiple organs. Its performance was evaluated using five metrics: Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and volume difference. The method was further validated using the 2015 head and neck challenge data. Results: The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC on the 2015 head and neck challenge data. Mean DSC values of 0.91 ± 0.02, 0.73 ± 0.11, 0.96 ± 0.01, 0.78 ± 0.09/0.78 ± 0.11, 0.88 ± 0.04/0.88 ± 0.06, and 0.86 ± 0.08/0.85 ± 0.1 were achieved for the brain stem, chiasm, mandible, left/right optic nerves, left/right parotid glands, and left/right submandibular glands, respectively. Conclusions: We demonstrated the feasibility of sMR-aided DPN for head and neck multi-organ delineation on CT images. Our method outperformed the other methods evaluated on the 2015 head and neck challenge data. The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs.
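For reference, the Dice similarity coefficient used above is the standard overlap metric between a predicted and a ground-truth mask; a minimal NumPy implementation (illustrative only, not the authors' evaluation code) is:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks of the same shape.
    Returns 1.0 for a perfect match and 0.0 for no overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```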
Project description: Conventional dimensionality reduction methods like Multidimensional Scaling (MDS) are sensitive to the presence of orthogonal outliers, leading to significant defects in the embedding. We introduce a robust MDS method, called DeCOr-MDS (Detection and Correction of Orthogonal outliers using MDS), based on the geometry and statistics of simplices formed by data points, which detects orthogonal outliers and subsequently reduces dimensionality. We validate our method using synthetic datasets, and further show how it can be applied to a variety of large real biological datasets, including cancer cell image data, human microbiome project data, and single-cell RNA sequencing data, to address the tasks of data cleaning and visualization.
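As background, classical MDS embeds points from a pairwise distance matrix via an eigendecomposition of the doubly centred Gram matrix. A minimal sketch of this baseline procedure (which DeCOr-MDS builds on; it is not the outlier detection and correction method itself) is:

```python
import numpy as np

def classical_mds(dist, n_components=2):
    """Classical (Torgerson) MDS: embed n points given an (n, n) matrix of
    pairwise distances. This is the standard baseline, not DeCOr-MDS."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (dist ** 2) @ J           # doubly centred Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:n_components]
    lam = np.clip(eigval[order], 0, None)    # drop small negative eigenvalues
    return eigvec[:, order] * np.sqrt(lam)   # coordinates in n_components dims
```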
Project description: This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct conditional priors and use their predictive power for more accurate segmentation, as well as for easy adaptation to the various imaging conditions observed in clinical CT practice. We propose a general framework for multi-organ segmentation that effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate subject-specific priors, which enables the estimation of intensity priors without the need for supervised intensity information. (2) An organ correlation graph is introduced, which defines how the conditional priors are constructed and how the segmentation processes of multiple organs are executed. In our framework, predictor organs, which can be segmented sufficiently accurately using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT datasets from 86 patients, obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients were more than 92% for the liver, spleen, and kidneys, and around 73% and 67% for the pancreas and gallbladder, respectively.
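One way to picture the hierarchical execution the abstract describes is a topological ordering over the organ correlation graph, in which predictor organs are segmented before the organs whose conditional priors depend on them. The graph below is purely illustrative (the dependencies used in the paper may differ):

```python
from graphlib import TopologicalSorter

# Illustrative organ correlation graph: each organ maps to the set of predictor
# organs whose segmentations condition its shape-location prior.
organ_graph = {
    "liver": set(),          # predictor organs with empty sets are segmented first
    "spleen": set(),
    "kidneys": set(),
    "pancreas": {"liver", "spleen"},
    "gallbladder": {"liver"},
}

# Process organs so that every organ is segmented after its predictors.
for organ in TopologicalSorter(organ_graph).static_order():
    deps = ", ".join(sorted(organ_graph[organ])) or "no predictors"
    print(f"segment {organ} (prior conditioned on: {deps})")
```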
Project description: Computed tomography (CT) scans' capabilities for detecting lesions have increased remarkably over the past decades. In this paper, we propose a multi-organ lesion detection (MOLD) approach to better address real-life chest-related clinical needs. MOLD is a challenging task, especially within a large, high-resolution image volume, due to interference from various types of background information and large differences in lesion sizes. Furthermore, the similarity in appearance between lesions and other normal tissues demands more discriminative features. To overcome these challenges, we introduce depth-aware (DA) and skipped-layer hierarchical training (SHT) mechanisms with the novel Dense 3D context enhanced (Dense 3DCE) lesion detection model. The Dense 3DCE framework considers shallow-, medium-, and deep-level features together comprehensively. In addition, equipped with our SHT scheme, the backpropagation process can be supervised under precise control, while the DA scheme effectively incorporates depth-domain knowledge into the network. Extensive experiments have been carried out on the publicly available, widely used DeepLesion dataset, and the results prove the effectiveness of our DA-SHT Dense 3DCE network in the MOLD task.
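The abstract does not detail the Dense 3DCE architecture, but the dense connectivity it refers to, where each layer receives the concatenated feature maps of all earlier layers, can be sketched generically in PyTorch. Channel counts and layer depth below are illustrative assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class Dense3DBlock(nn.Module):
    """Generic densely connected 3D convolution block: each layer sees the
    concatenation of all earlier feature maps (a sketch, not Dense 3DCE)."""
    def __init__(self, in_channels, growth_rate=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(channels, growth_rate, kernel_size=3, padding=1),
                nn.BatchNorm3d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# Example: a batch of single-channel 3D CT patches.
# block = Dense3DBlock(in_channels=1)
# y = block(torch.randn(2, 1, 32, 64, 64))  # -> (2, 1 + 3*16, 32, 64, 64)
```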
Project description: Background: Artificial intelligence (AI) systems designed to detect abnormalities in abdominal computed tomography (CT) could reduce radiologists' workload and improve diagnostic processes. However, development of such models has been hampered by the shortage of large expert-annotated datasets. Here, we used information from free-text radiology reports, rather than manual annotations, to develop a deep-learning-based pipeline for comprehensive detection of abdominal CT abnormalities. Methods: In this multicentre retrospective study, we developed a deep-learning-based pipeline to detect abnormalities in the liver, gallbladder, pancreas, spleen, and kidneys. Abdominal CT exams and related free-text reports obtained during routine clinical practice at three institutions were used for training and internal testing, while data collected from six institutions were used for external testing. A multi-organ segmentation model and an information extraction schema were used to extract organ-specific images and disease information from the CT images and radiology reports, respectively, which were then used to train a multiple-instance learning model for anomaly detection. Its performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score against radiologists' ground-truth labels. Findings: We trained the model for each organ on images selected from 66,684 exams (39,255 patients) and tested it on 300 exams (295 patients) and 600 exams (596 patients) for internal and external validation, respectively. In the external test cohort, the overall AUC for detecting organ abnormalities was 0.886. Whereas models trained on human-annotated labels performed better for the same number of exams, models trained on larger datasets with labels auto-extracted via the information extraction schema significantly outperformed those trained on human-annotated labels. Interpretation: Using disease information from routine clinical free-text radiology reports allows development of accurate anomaly detection models without requiring manual annotations. This approach is applicable to various anatomical sites and could streamline diagnostic processes. Funding: Japan Science and Technology Agency.
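The setup described is multiple-instance learning (MIL): an exam-level (bag) label is inferred from many organ-level instances whose individual labels are unknown. A minimal max-pooling MIL scorer together with the AUC evaluation mentioned above might look like this; it is a simplified sketch under assumed data shapes, not the published pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bag_score(instance_scores):
    """Max-pooling MIL: a bag (exam) is as abnormal as its most abnormal instance."""
    return float(np.max(instance_scores))

def evaluate(bags, labels):
    """AUC of exam-level predictions against ground-truth exam labels."""
    preds = [bag_score(scores) for scores in bags]
    return roc_auc_score(labels, preds)

# Toy example with three exams (instance scores are hypothetical model outputs):
# bags = [np.array([0.1, 0.2]), np.array([0.05, 0.9]), np.array([0.3, 0.4])]
# print(evaluate(bags, labels=[0, 1, 0]))
```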
Project description: State-of-the-art next-generation sequencing, transcriptomics, proteomics, and other high-throughput 'omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review focuses on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets and the variability both within and between variables (or observations), and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and discuss how they can be applied to increase our understanding of biological systems in normal physiological function and disease.
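As one concrete instance of such joint dimension reduction, canonical correlation analysis finds linear combinations of two omics data sets measured on the same samples that are maximally correlated. A minimal scikit-learn sketch with simulated data (the data and dimensions are illustrative) is:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples = 100
shared = rng.normal(size=(n_samples, 2))  # latent structure shared across omics layers
X = shared @ rng.normal(size=(2, 50)) + 0.5 * rng.normal(size=(n_samples, 50))  # e.g. transcriptomics
Y = shared @ rng.normal(size=(2, 30)) + 0.5 * rng.normal(size=(n_samples, 30))  # e.g. proteomics

cca = CCA(n_components=2)
X_scores, Y_scores = cca.fit_transform(X, Y)
# Correlated low-dimensional scores expose the structure shared by both data sets.
print(np.corrcoef(X_scores[:, 0], Y_scores[:, 0])[0, 1])
```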
Project description: Accurate detection of liver lesions from multi-phase contrast-enhanced CT (CECT) scans is a fundamental step for precise liver diagnosis and treatment. However, the analysis of multi-phase contexts is heavily challenged by the misalignment caused by respiration coupled with the movement of organs. Here, we propose an AI system for multi-phase liver lesion segmentation (named MULLET) for precise and fully automatic segmentation of real-patient CECT images. MULLET effectively embeds the important ROIs of CECT images and explores multi-phase contexts by introducing a transformer-based attention mechanism. Evaluated on 1,229 CECT scans from 1,197 patients, MULLET demonstrated significant performance gains in terms of Dice, recall, and F2 score, which are 5.80%, 6.57%, and 5.87% higher than the state of the art, respectively. MULLET has been successfully deployed in real-world settings. The deployed AI web server provides a powerful system to boost clinical workflows of liver lesion diagnosis and could be straightforwardly extended to general CECT analyses.
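For context, the F2 score reported above is the F-beta score with beta = 2, i.e. the weighted harmonic mean of precision and recall that weights recall more heavily. A minimal implementation for binary segmentation masks (illustrative, not the authors' evaluation code) is:

```python
import numpy as np

def precision_recall_fbeta(pred, truth, beta=2.0):
    """Precision, recall, and F-beta (beta=2 emphasises recall) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, fbeta
```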
Project description: Purpose: To evaluate the efficacy of volumetric CT attenuation-based parameters obtained through automated 3D organ segmentation on virtual non-contrast (VNC) images from dual-energy CT (DECT) for assessing hepatic steatosis. Materials and methods: This retrospective study included living liver donor candidates who underwent liver DECT and MRI-determined proton density fat fraction (PDFF) assessment. Using a 3D deep learning algorithm, the liver and spleen were automatically segmented from VNC images (derived from contrast-enhanced DECT scans) and true non-contrast (TNC) images, respectively. Mean volumetric CT attenuation values of each segmented liver (L) and spleen (S) were measured, allowing calculation of the liver attenuation index (LAI), defined as L minus S. Agreement between VNC and TNC parameters for hepatic steatosis, i.e., L and LAI, was assessed using intraclass correlation coefficients (ICC). Correlations between VNC parameters and MRI-PDFF values were assessed using the Pearson correlation coefficient. Their performance in identifying MRI-PDFF ≥ 5% and ≥ 10% was evaluated using receiver operating characteristic (ROC) curve analysis. Results: Of 252 participants, 56 (22.2%) and 16 (6.3%) had hepatic steatosis with MRI-PDFF ≥ 5% and ≥ 10%, respectively. L_VNC and LAI_VNC showed excellent agreement with L_TNC and LAI_TNC (ICC = 0.957 and 0.968) and significant correlations with MRI-PDFF values (r = -0.585 and -0.588, both P < 0.001). L_VNC and LAI_VNC exhibited areas under the ROC curve of 0.795 and 0.806 for MRI-PDFF ≥ 5%, and 0.916 and 0.932 for MRI-PDFF ≥ 10%, respectively. Conclusion: Volumetric CT attenuation-based parameters from VNC images generated by DECT, via automated 3D segmentation of the liver and spleen, have potential for opportunistic hepatic steatosis screening as an alternative to TNC images.
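The liver attenuation index defined above is simply the mean liver attenuation minus the mean spleen attenuation over the segmented volumes. A minimal sketch of computing it from a CT volume (in Hounsfield units) and 3D organ masks, with illustrative array names, is:

```python
import numpy as np

def liver_attenuation_index(ct_hu, liver_mask, spleen_mask):
    """LAI = mean liver attenuation (L) minus mean spleen attenuation (S),
    computed over segmented 3D masks on a CT volume in Hounsfield units."""
    L = ct_hu[liver_mask.astype(bool)].mean()
    S = ct_hu[spleen_mask.astype(bool)].mean()
    return L - S

# Lower L and LAI values indicate more hepatic fat, consistent with the
# negative correlations with MRI-PDFF reported above.
```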
Project description: A novel 3D nnU-Net-based algorithm was developed for fully automated multi-organ segmentation in abdominal CT, applicable to both non-contrast and post-contrast images. The algorithm was trained using dual-energy CT (DECT)-obtained portal venous phase (PVP) and spatiotemporally matched virtual non-contrast images, and tested using a single-energy (SE) CT dataset comprising PVP and true non-contrast (TNC) images. The algorithm showed robust accuracy in segmenting the liver, spleen, right kidney (RK), and left kidney (LK), with mean Dice similarity coefficients (DSCs) exceeding 0.94 for each organ, regardless of contrast enhancement. However, pancreas segmentation demonstrated slightly lower performance, with mean DSCs of around 0.8. In organ volume estimation, the algorithm demonstrated excellent agreement with ground-truth measurements for the liver, spleen, RK, and LK (intraclass correlation coefficients [ICCs] > 0.95), while the pancreas showed good agreement (ICC = 0.792 in SE-PVP, 0.840 in TNC). Accurate volume estimation, within a 10% deviation from ground truth, was achieved in over 90% of cases for the liver, spleen, RK, and LK. These findings indicate the efficacy of our 3D nnU-Net-based algorithm, developed using DECT images, which provides precise segmentation of the liver, spleen, RK, and LK in both non-contrast and post-contrast CT images, enabling reliable organ volumetry, albeit with relatively reduced performance for the pancreas.
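Organ volume from a segmentation mask is the voxel count times the voxel volume, and the "within 10% of ground truth" criterion used above can be checked directly. A minimal sketch with illustrative variable names is:

```python
import numpy as np

def organ_volume_ml(mask, voxel_spacing_mm):
    """Organ volume in millilitres from a binary mask and voxel spacing (mm per axis)."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0

def within_tolerance(pred_vol, true_vol, tol=0.10):
    """True if the predicted volume deviates from ground truth by at most tol (here 10%)."""
    return abs(pred_vol - true_vol) <= tol * true_vol
```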