Project description: Objectives: Gliomas and brain metastases (Mets) are the most common brain malignancies. Their treatment strategies and clinical prognoses differ, so accurate diagnosis of the tumor type is essential. However, the traditional radiomics diagnostic pipeline requires manual annotation and lacks an integrated method for segmentation and classification. To improve the diagnostic process, a computer-aided diagnosis method for gliomas and Mets with automatic lesion segmentation and an ensemble decision strategy was proposed on multi-center datasets. Methods: Overall, preoperative MR images of 1,022 high-grade glioma and 775 Mets patients were used in the study, including contrast-enhanced T1-weighted (T1-CE) and T2-fluid-attenuated inversion recovery (T2-FLAIR) sequences from three hospitals. Two segmentation models, trained on the glioma and Mets datasets respectively, were used to segment tumors automatically. Multiple radiomics features were extracted after automatic segmentation. Several machine learning classifiers were used to measure the impact of the feature selection methods. A weighted soft voting (RSV) model and an ensemble decision strategy based on prior knowledge (EDPK) were introduced into the radiomics pipeline. Accuracy (ACC), sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were used to evaluate classification performance. Results: The proposed pipeline improved the diagnosis of gliomas and Mets, with ACC reaching 0.8950 and AUC reaching 0.9585 after automatic lesion segmentation, higher than those of the traditional radiomics pipeline (ACC: 0.8850, AUC: 0.9450). Conclusion: The proposed model accurately classified glioma and Mets patients using MRI radiomics. The novel pipeline showed great potential for diagnosing gliomas and Mets with high generalizability and interpretability.
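The weighted soft voting step can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation: the classifier choices, the weights, and the synthetic feature matrix are placeholders standing in for the paper's radiomics features and tuned ensemble.

```python
# Minimal sketch of weighted soft voting over several classifiers, assuming
# radiomics features X and binary labels y (glioma vs. Mets). The base
# models and weights below are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the predicted class probabilities, with each base
# model weighted (e.g., by its validation AUC).
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",
    weights=[1.0, 1.5, 2.0],  # hypothetical weights
)
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```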
Project description: Background: Flux variability analysis is often used to determine the robustness of metabolic models under various simulation conditions. However, its use has been somewhat limited by its long computation time compared with other constraint-based modeling methods. Results: We present an open-source implementation of flux variability analysis called fastFVA. This efficient implementation makes large-scale flux variability analysis feasible and tractable, allowing more complex biological questions regarding network flexibility and robustness to be addressed. Conclusions: Networks involving thousands of biochemical reactions can be analyzed within seconds, greatly expanding the utility of flux variability analysis in systems biology.
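For context, flux variability analysis fixes the biomass (or other) objective at its optimum and then minimizes and maximizes each reaction flux subject to the steady-state constraint S·v = 0 and the flux bounds. The sketch below shows this on a toy three-reaction network with scipy; it illustrates the algorithm only and is not the fastFVA implementation.

```python
# Minimal sketch of flux variability analysis (FVA) with scipy, not the
# fastFVA implementation itself. S, bounds, and the objective are toy values.
import numpy as np
from scipy.optimize import linprog

# Toy network: 3 reactions, 2 metabolites at steady state (S @ v = 0).
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0.0, 10.0)] * 3
c_obj = np.array([0.0, 0.0, -1.0])  # maximize flux through reaction 3

# Step 1: solve the base FBA problem for the optimal objective value.
fba = linprog(c_obj, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
z_opt = fba.fun

# Step 2: fix the objective at its optimum and min/max every flux.
A_eq = np.vstack([S, c_obj])
b_eq = np.append(np.zeros(2), z_opt)
for i in range(S.shape[1]):
    c = np.zeros(S.shape[1]); c[i] = 1.0
    vmin = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    vmax = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    print(f"reaction {i}: [{vmin:.2f}, {vmax:.2f}]")
```

Each reaction requires two linear programs, which is what makes efficient implementations like fastFVA valuable on genome-scale networks.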
Project description: A brain tumor magnetic resonance image processing algorithm can help doctors diagnose and treat a patient's condition, which has important application significance in clinical medicine. This paper proposes a network model based on a combination of U-Net and DenseNet to address class imbalance in multi-modal brain tumor image segmentation and the loss of effective features caused by feature fusion in the traditional U-Net. The standard convolution blocks of the encoding and decoding paths of the original network are replaced with dense blocks, which enhances feature propagation. A mixed loss function composed of the binary cross-entropy loss and the Tversky coefficient replaces the original single cross-entropy loss, restraining the influence of irrelevant features on segmentation accuracy. Compared with U-Net, U-Net++, and PA-Net, the proposed algorithm significantly improves segmentation accuracy, reaching Dice coefficients of 0.846, 0.861, and 0.782 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, respectively; the corresponding PPV coefficients reach 0.849, 0.883, and 0.786. Compared with the traditional U-Net, the Dice coefficients of the proposed algorithm are 0.8%, 4.0%, and 1.4% higher, respectively, and the PPV coefficients in the tumor core and enhancing tumor regions increase by 3% and 1.2%, respectively. The proposed algorithm performs best on tumor core segmentation, where its sensitivity reaches 0.924, and it has good research significance and application value.
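A mixed BCE + Tversky loss of the kind described can be sketched as follows. The Tversky parameters and the mixing weight below are illustrative defaults, not the paper's exact settings.

```python
# Minimal sketch of a BCE + Tversky mixed loss in PyTorch. The Tversky
# parameters (alpha, beta) and the mixing weight are illustrative choices,
# not the paper's exact settings.
import torch
import torch.nn.functional as F

def tversky_loss(probs, target, alpha=0.5, beta=0.5, eps=1e-6):
    """1 - Tversky index; alpha penalizes false positives, beta false negatives."""
    p, t = probs.flatten(), target.flatten()
    tp = (p * t).sum()
    fp = (p * (1 - t)).sum()
    fn = ((1 - p) * t).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def mixed_loss(logits, target, weight=0.5):
    """Weighted sum of binary cross-entropy and Tversky loss."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    tv = tversky_loss(torch.sigmoid(logits), target)
    return weight * bce + (1 - weight) * tv

# Usage on a dummy prediction/mask pair:
logits = torch.randn(2, 1, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(mixed_loss(logits, target))
```

Raising beta above alpha penalizes false negatives more heavily, which is one way such a loss counteracts class imbalance in small tumor regions.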
Project description: It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system in order to observe the chemical transformations of interest while, at the same time, sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as "multistate" algorithms, model the system as a linear combination of several chemical bonding topologies in order to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations.
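The multiconfigurational idea can be illustrated in a few lines: the system energy is the lowest eigenvalue of a small Hamiltonian whose diagonal holds the energy of each bonding topology and whose off-diagonal holds their coupling. The numbers in this sketch are hypothetical, not from any real force field.

```python
# Minimal sketch of the multiconfigurational (EVB-style) idea: the system
# energy is the lowest eigenvalue of a small Hamiltonian whose diagonal
# holds the energy of each bonding topology and whose off-diagonal holds
# their coupling. Numbers are illustrative, not from any real force field.
import numpy as np

E1, E2 = -10.0, -9.2   # energies of two bonding topologies (e.g., kcal/mol)
V12 = 1.5              # coupling between the topologies

H = np.array([[E1, V12],
              [V12, E2]])

evals, evecs = np.linalg.eigh(H)
E_ground = evals[0]
weights = evecs[:, 0] ** 2  # contribution of each topology to the state

print(f"ground-state energy: {E_ground:.3f}")
print(f"topology weights:    {weights}")
# Forces follow by the Hellmann-Feynman theorem, F = -c^T (dH/dR) c, which
# is why each topology's energies and forces must be evaluated every step.
```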
Project description: A brain magnetic resonance imaging (MRI) scan of a single individual consists of several slices across the 3D anatomical view, so manual segmentation of brain tumors from magnetic resonance (MR) images is a challenging and time-consuming task. In addition, automated brain tumor classification from an MRI scan is non-invasive, avoiding biopsy and making the diagnostic process safer. Since the late nineties and the beginning of this millennium, the research community has made a tremendous effort to develop automatic brain tumor segmentation and classification methods. As a result, there is ample literature on the area covering segmentation using region growing, traditional machine learning, and deep learning methods. Similarly, much work has been done on classifying brain tumors into their respective histological types, and impressive performance results have been obtained. Considering state-of-the-art methods and their performance, the purpose of this paper is to provide a comprehensive survey of three recently proposed major brain tumor segmentation and classification techniques, namely region growing, shallow machine learning, and deep learning. The established works included in this survey also cover technical aspects such as the strengths and weaknesses of different approaches, pre- and post-processing techniques, feature extraction, datasets, and models' performance evaluation metrics.
Project description: The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D deep neural networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence regarding the process they take to perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for their complete integration into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts in order to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in the process that these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts at the filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty regarding the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.
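One common way to extract internal feature maps of the kind discussed is to register forward hooks on intermediate layers. The sketch below uses a stock torchvision model as a stand-in, since the specific segmentation architectures are not detailed here.

```python
# Minimal sketch of extracting internal feature maps with PyTorch forward
# hooks, a common interpretability technique. A stock torchvision model
# stands in for a segmentation network, which is not specified here.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
features = {}

def hook(name):
    def _store(module, inputs, output):
        features[name] = output.detach()  # save this layer's activation
    return _store

# Attach hooks to the layers whose filters we want to inspect.
model.layer1.register_forward_hook(hook("layer1"))
model.layer3.register_forward_hook(hook("layer3"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, fmap in features.items():
    print(name, tuple(fmap.shape))  # per-filter maps available to visualize
```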
Project description: Image registration and segmentation are two of the most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for the registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results against other recent state-of-the-art methods. Moreover, our proposed framework shows a significant improvement (p < 0.005) in registration performance inside the tumor regions, providing a generic method that does not require any predefined conditions (e.g., absence of abnormalities) for the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
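Relaxing the similarity constraint inside tumor regions can be sketched as masking the voxelwise similarity term with the predicted tumor mask. The toy MSE-based version below is an assumption for illustration; the paper's exact formulation may differ.

```python
# Minimal sketch of relaxing an image-similarity loss inside tumor regions:
# voxels inside the (predicted) tumor mask are down-weighted so that the
# registration is not penalized where correspondence is undefined. A plain
# MSE similarity term is used here for illustration only.
import torch

def masked_similarity_loss(warped, fixed, tumor_mask, relax=0.0):
    """MSE weighted fully outside tumors; `relax` scales the term inside."""
    weight = (1.0 - tumor_mask) + relax * tumor_mask
    diff = (warped - fixed) ** 2
    return (weight * diff).sum() / weight.sum().clamp(min=1e-8)

# Dummy volumes: batch x channel x D x H x W.
warped = torch.randn(1, 1, 8, 32, 32)
fixed = torch.randn(1, 1, 8, 32, 32)
mask = (torch.rand(1, 1, 8, 32, 32) > 0.9).float()  # predicted tumor voxels
print(masked_similarity_loss(warped, fixed, mask, relax=0.1))
```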
Project description: Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection, the identification of seismic events in continuous data, is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes.
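The fingerprint-and-hash idea can be illustrated with a toy sketch: turn waveform windows into compact binary fingerprints, then bucket them with banded locality-sensitive hashing so that similar windows collide. Window sizes, feature choices, and hashing details below are simplified placeholders, not the FAST implementation.

```python
# Minimal sketch of the FAST idea: binary fingerprints of sliding waveform
# windows, grouped by banded locality-sensitive hashing. All parameters are
# illustrative placeholders.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
waveform = rng.standard_normal(20000)  # stand-in for continuous seismic data

def fingerprint(window, keep=32):
    """Binary fingerprint: keep the strongest spectral magnitudes."""
    mag = np.abs(np.fft.rfft(window))
    fp = np.zeros(mag.size, dtype=np.uint8)
    fp[np.argsort(mag)[-keep:]] = 1
    return fp

# Slide over the data and fingerprint each window.
win, hop = 512, 256
fps = [fingerprint(waveform[i:i + win])
       for i in range(0, len(waveform) - win, hop)]

# Banded LSH: windows sharing any identical band land in the same bucket,
# so near-duplicate fingerprints are found without all-pairs comparison.
bands, buckets = 8, defaultdict(list)
for idx, fp in enumerate(fps):
    for b, chunk in enumerate(np.array_split(fp, bands)):
        buckets[(b, chunk.tobytes())].append(idx)

# Candidate pairs are those colliding in at least one bucket.
pairs = {(i, j) for ids in buckets.values() if len(ids) > 1
         for i in ids for j in ids if i < j}
print(f"{len(fps)} windows, {len(pairs)} candidate similar pairs")
```

Avoiding the all-pairs comparison is what gives this approach its speed advantage over autocorrelation, which scales quadratically with the number of windows.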
Project description: Designing mechanical metamaterials is overwhelming for most computational approaches because of the staggering number and complexity of the flexible elements that constitute their architecture, particularly if these elements do not repeat in periodic patterns or collectively occupy irregular bulk shapes. We introduce an approach, inspired by the freedom and constraint topologies (FACT) methodology, that leverages simplified assumptions to enable the design of such materials with ~6 orders of magnitude greater computational efficiency than other approaches (e.g., topology optimization). Metamaterials designed using this approach are called directionally compliant metamaterials (DCMs) because they manifest prescribed compliant directions while possessing high stiffness in all other directions. Since their compliant directions are governed by both macroscale shape and microscale architecture, DCMs can be engineered with the design freedom necessary to achieve arbitrary form and unprecedented anisotropy. Thus, DCMs show promise as irregularly shaped flexure bearings, compliant prosthetics, morphing structures, and soft robots.
Project description: Motivation: It is well known that patterns of differential gene expression across biological conditions are often shared by many genes, particularly those within functional groups. Taking advantage of these patterns can lead to increased statistical power and biological clarity when testing for differential expression in a microarray experiment. The optimal discovery procedure (ODP), which maximizes the expected number of true positives for each fixed number of expected false positives, is a framework aimed at this goal. Storey et al. introduced an estimator of the ODP for identifying differentially expressed genes. However, the computational time of their ODP estimator grows quadratically with the number of genes. Reducing this computational burden is a key step in making the ODP practical for use in a variety of high-throughput problems. Results: Here, we propose a new estimate of the ODP called the modular ODP (mODP). The existing 'full ODP' requires that the likelihood function for each gene be evaluated according to the parameter estimates for all genes. The mODP assigns genes to modules according to a Kullback-Leibler distance and then evaluates the statistic only at the module-averaged parameter estimates. We show that the mODP is relatively insensitive to the choice of the number of modules but dramatically reduces the computational complexity from quadratic to linear in the number of genes. We compare the full ODP algorithm and the mODP on simulated data and on gene expression data from a recent study of Moroccan Amazighs. The mODP and the full ODP algorithm perform very similarly across a range of comparisons. Availability: The mODP methodology has been implemented in EDGE, a comprehensive gene expression analysis software package in R, available at http://genomine.org/edge/.
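The quadratic-to-linear reduction can be caricatured as follows. This sketch assumes Gaussian likelihoods with unit variance (for which the Kullback-Leibler distance between genes reduces to a squared difference of means, so modules can be formed with k-means); it is a toy illustration of the idea, not the EDGE implementation.

```python
# Toy sketch of the modular ODP idea: the full ODP statistic for gene i sums
# its likelihood over every gene's parameter estimates (quadratic in genes);
# the mODP sums over K module-averaged estimates weighted by module size
# (linear). Gaussian likelihoods with unit variance are a simplifying
# assumption for illustration.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import norm

rng = np.random.default_rng(1)
n_genes, K = 2000, 20
mu_hat = rng.standard_normal(n_genes) * 2     # per-gene alternative estimates
x = mu_hat + rng.standard_normal(n_genes)     # one observation per gene (toy)

# Full-ODP-style statistic: alternative likelihoods summed over all genes,
# divided by the null likelihood. O(n^2) overall.
def full_odp(x):
    like = norm.pdf(x[:, None], loc=mu_hat[None, :])  # n x n likelihoods
    return like.sum(axis=1) / norm.pdf(x, loc=0.0)

# mODP-style statistic: cluster the estimates into K modules and evaluate
# only at module averages, weighted by module size. O(n*K) overall.
def modular_odp(x):
    centers, labels = kmeans2(mu_hat[:, None], K, seed=1, minit="++")
    sizes = np.bincount(labels, minlength=K)
    like = norm.pdf(x[:, None], loc=centers.ravel()[None, :])
    return (like * sizes).sum(axis=1) / norm.pdf(x, loc=0.0)

print(np.corrcoef(full_odp(x), modular_odp(x))[0, 1])  # close to 1
```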