Project description:TumorPrism3D software was developed to segment brain tumors through a straightforward, user-friendly graphical interface applied to two- and three-dimensional brain magnetic resonance (MR) images. The MR images of 185 patients (103 males, 82 females) with glioblastoma multiforme were downloaded from The Cancer Imaging Archive (TCIA) to test the software's tumor segmentation performance. Regions of interest (ROIs) corresponding to contrast-enhancing lesions, necrotic portions, and non-enhancing T2 high-signal-intensity components were segmented for each tumor. TumorPrism3D demonstrated high accuracy in segmenting all three tumor components in cases of glioblastoma multiforme, achieving a higher Dice similarity coefficient (DSC) of 0.83-0.91 than 3DSlicer (DSC 0.80-0.84) for segmentation accuracy. Comparative analysis with the widely used 3DSlicer software also showed TumorPrism3D to be approximately 37.4% faster from initial contour drawing to final segmentation mask determination. The semi-automated nature of TumorPrism3D facilitates rapid, reproducible tumor segmentation, offering the potential for quantitative analysis of tumor characteristics and artificial intelligence-assisted segmentation in brain MR imaging.
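The DSC values above measure voxel overlap between a software-produced mask and a reference mask. A minimal illustration of how the metric is computed on binary masks (a generic sketch, not part of TumorPrism3D itself):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D masks standing in for a segmented tumor ROI
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 overlapping
print(round(dice_coefficient(a, b), 2))  # 2*4 / (4+6) = 0.8
```

A DSC of 1.0 means perfect overlap, 0.0 means none; the 0.83-0.91 range reported above indicates strong agreement with the reference segmentations.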
Project description:Brain tumor semantic segmentation is a critical medical image processing task that aids clinicians in diagnosing patients and determining the extent of lesions. Convolutional neural networks (CNNs) have demonstrated exceptional performance in computer vision tasks in recent years. For 3D medical imaging tasks, deep CNNs based on an encoder-decoder structure with skip connections have been widely used. However, CNNs struggle to learn global and long-range semantic information. The transformer, by contrast, has recently found success in natural language processing and computer vision thanks to its self-attention mechanism for global information modeling. For demanding prediction tasks such as 3D medical image segmentation, both local and global features are critical. In this research we propose SwinBTS, a new 3D medical image segmentation approach that combines a transformer, a convolutional neural network, and an encoder-decoder structure, framing 3D brain tumor semantic segmentation as a sequence-to-sequence prediction task. The 3D Swin Transformer is used as the network's encoder and decoder to extract contextual information, while convolutional operations are employed for upsampling and downsampling. Finally, segmentation results are obtained with an enhanced transformer module that we built for improved detail feature extraction. Extensive experimental results on the BraTS 2019, BraTS 2020, and BraTS 2021 datasets reveal that SwinBTS outperforms state-of-the-art 3D algorithms for brain tumor segmentation on 3D MRI scans.
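A key idea behind Swin-style transformers is computing self-attention within small non-overlapping local windows rather than over the whole volume, which keeps the cost tractable in 3D. A hypothetical NumPy sketch of the 3D window-partition step (an illustration of the general technique; the actual SwinBTS implementation will differ):

```python
import numpy as np

def window_partition(volume: np.ndarray, w: int) -> np.ndarray:
    """Split a (D, H, W, C) feature volume into non-overlapping w*w*w windows.

    Returns (num_windows, w**3, C): each window becomes a short token
    sequence over which self-attention would then be computed.
    """
    D, H, W, C = volume.shape
    assert D % w == 0 and H % w == 0 and W % w == 0, "pad the volume first"
    x = volume.reshape(D // w, w, H // w, w, W // w, w, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)   # group the three window-grid axes first
    return x.reshape(-1, w ** 3, C)

feat = np.random.rand(8, 8, 8, 32)         # toy feature volume, 32 channels
windows = window_partition(feat, 4)        # 2*2*2 = 8 windows of 4**3 = 64 tokens
print(windows.shape)                       # (8, 64, 32)
```

Attention inside each 64-token window is far cheaper than attention over all 512 voxel positions; shifting the window grid between successive layers (the "Swin" trick) lets information propagate across window borders.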
Project description:Purpose: Although classical techniques for image segmentation may work well for some images, they may perform poorly or not work at all for others; much depends on the properties of the particular image segmentation task under study. The reliable segmentation of brain tumors in medical images represents a particularly challenging and essential task. For example, some brain tumors may exhibit complex so-called "bottle-neck" shapes, essentially circles with long, indistinct tapering tails known as a "dual tail." Such challenging conditions may not be readily segmented, particularly in the extended tail region or around the "bottle-neck" area, and existing image segmentation techniques often fail in those cases. Methods: Existing research on image segmentation using wormhole and entanglement theory is first analyzed. Next, a random positioning search method based on quantum-behaved particle swarm optimization (QPSO) is improved by using a hyperbolic wormhole path measure for seeding and linking particles. Finally, our novel quantum- and wormhole-behaved particle swarm optimization (QWPSO) is proposed. Results: Experimental results show that our QWPSO algorithm can cluster complex "dual tail" regions into groupings with greater adaptability than conventional QPSO. The experiments also demonstrate improved operational efficiency and segmentation accuracy compared with current competing reference methods. Conclusion: Our QWPSO method appears extremely promising for isolating smeared or indistinct regions of the complex shapes typical of medical image segmentation tasks. The technique is especially advantageous for segmentation in the "bottle-neck" and "dual tail"-shaped regions appearing in brain tumor images.
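For context, the baseline QPSO that QWPSO builds on can be sketched as follows. In QPSO, each particle is resampled around an attractor (a random mix of its personal best and the global best) with a spread proportional to its distance from the swarm's mean best position. This is a generic textbook-style QPSO on a toy objective, not the authors' wormhole-augmented variant:

```python
import numpy as np

def qpso(objective, dim=2, n_particles=20, iters=200, beta=0.75, seed=0):
    """Minimal quantum-behaved PSO (standard QPSO, not the paper's QWPSO)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    pbest = x.copy()                                    # personal best positions
    pcost = np.array([objective(xi) for xi in x])
    gbest = pbest[pcost.argmin()].copy()                # global best position
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                      # mean of personal bests
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest             # per-particle attractor
        u = rng.uniform(1e-12, 1, size=(n_particles, dim))
        sign = rng.choice([-1.0, 1.0], size=(n_particles, dim))
        # "Quantum" position update: sample around p with |mbest - x| spread
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        cost = np.array([objective(xi) for xi in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

best, best_cost = qpso(lambda v: np.sum(v ** 2))        # sphere test function
print(best_cost)                                        # close to 0
```

In the segmentation setting, the objective would instead score candidate cluster centres against image intensities; QWPSO further replaces the straight-line particle geometry with a hyperbolic wormhole path measure.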
Project description:Gliomas are the most common primary brain malignancies. Accurate and robust tumor segmentation and prediction of patients' overall survival are important for diagnosis, treatment planning and risk factor identification. Here we present a deep learning-based framework for brain tumor segmentation and survival prediction in glioma, using multimodal MRI scans. For tumor segmentation, we use an ensemble of three different 3D CNN architectures combined by a majority rule, which effectively reduces model bias and boosts performance. For survival prediction, we extract 4,524 radiomic features from the segmented tumor regions; a decision tree with cross-validation is then used to select potent features. Finally, a random forest model is trained to predict the overall survival of patients. In the 2018 MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), our method ranked 2nd and 5th out of 60+ participating teams on the survival prediction and segmentation tasks respectively, achieving a promising 61.0% accuracy in classifying short-survivors, mid-survivors and long-survivors.
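The majority-rule fusion of the three CNN outputs can be illustrated with a toy voxel-wise vote over binary masks (a generic sketch; the actual pipeline operates on multi-class 3D volumes):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks from several models by strict majority rule."""
    stack = np.stack([m.astype(np.uint8) for m in masks])
    # A voxel is foreground iff more than half the models mark it foreground
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

m1 = np.array([[1, 1, 0], [0, 0, 1]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 1], [0, 0, 0]])
print(majority_vote([m1, m2, m3]))
# [[1 1 0]
#  [0 0 1]]
```

Because each architecture makes partly independent errors, voting suppresses voxels that only one model mislabels, which is the bias-reduction effect mentioned above.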
Project description:Accurate segmentation of the subcortical structures is frequently required in neuroimaging studies. Most existing methods use only a T1-weighted MRI volume to segment all supported structures and usually rely on a database of training data. We propose a new method that can use multiple image modalities simultaneously and a single reference segmentation for initialisation, without the need for a manually labelled training set. The method models intensity profiles in multiple images around the boundaries of the structure after nonlinear registration. It is trained using a set of unlabelled training data, which may be the same images that are to be segmented, and it can automatically infer the location of the physical boundary using user-specified priors. We show that the method produces high-quality segmentations of the striatum, which is clearly visible on T1-weighted scans, and the globus pallidus, which has poor contrast on such scans. The method compares favourably to existing methods, showing greater overlap with manual segmentations and better consistency.
Project description:Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation. Methods: We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods as applied to magnetic resonance imaging techniques. In particular, we synthesised each method according to the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as the Dice score). Statistical tests: We compared the median Dice score in segmenting the whole tumour, tumour core and enhanced tumour. Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in segmentation algorithms, whereas perfusion-weighted and diffusion-weighted magnetic resonance imaging see only limited use.
Moreover, we found that the U-Net deep learning architecture is cited the most and achieves high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. Conclusion: U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so that training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where limited datasets are available.
Project description:It is often difficult to accurately segment brain magnetic resonance (MR) images affected by intensity inhomogeneity and noise. This paper introduces a novel level set method for simultaneous brain MR image segmentation and intensity inhomogeneity correction. To reduce the effect of noise, novel anisotropic spatial information, which can preserve more details of edges and corners, is proposed by incorporating the inner relationships among neighboring pixels. The proposed energy function then uses the multivariate Student's t-distribution to fit the intensity distribution of each tissue. Furthermore, the proposed model utilizes hidden Markov random fields to model the spatial correlation between neighboring pixels/voxels. The means of the multivariate Student's t-distribution can be adaptively estimated by multiplying in a bias field to reduce the effect of intensity inhomogeneity. Finally, we reconstructed the energy function to be convex and minimized it using the Split Bregman method, which allows our framework to start from a random initialization and thereby enables fully automated applications. Our method obtains the final result in less than 1 second for a 2D image of size 256 × 256 and in less than 300 seconds for a 3D image of size 256 × 256 × 171. The proposed method was compared to other state-of-the-art segmentation methods on both synthetic and clinical brain MR images and improved the accuracy of the results by more than 3%.
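The Student's t-distribution is used here because its heavy tails make the intensity model far less sensitive to outlier voxels than a Gaussian. A generic univariate location-scale log-density illustrates the family (an illustration of the distribution only, not the paper's full multivariate energy function):

```python
import math

def student_t_logpdf(x: float, mu: float, sigma: float, nu: float) -> float:
    """Log-density of a location-scale Student's t with nu degrees of freedom."""
    z = (x - mu) / sigma
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(sigma)
            - (nu + 1) / 2 * math.log1p(z * z / nu))

# An intensity outlier 5 scale units from the tissue mean is far less
# "surprising" under a t-distribution than under a standard Gaussian:
t_tail = student_t_logpdf(5.0, 0.0, 1.0, 3.0)                    # ≈ -5.47
gauss_tail = -0.5 * 5.0 ** 2 - 0.5 * math.log(2 * math.pi)       # ≈ -13.42
print(t_tail, gauss_tail)
```

In a mixture model this means a few noisy voxels barely shift the estimated tissue means, which is exactly the robustness the energy function relies on; as nu grows, the t-distribution approaches the Gaussian.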
Project description:Image registration and segmentation are two of the most studied problems in medical image analysis. Deep learning algorithms have recently gained considerable attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated our formulation both quantitatively and qualitatively on registration and segmentation problems using two publicly available datasets (BraTS 2018 and OASIS 3), reporting results competitive with other recent state-of-the-art methods. Moreover, our proposed framework yields significant improvement (p < 0.005) in registration performance inside the tumor locations, providing a generic method that does not require any predefined conditions (e.g., absence of abnormalities) about the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
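Relaxing the similarity term inside tumor regions can be sketched with a toy masked loss, where intensity mismatch is simply ignored wherever the predicted tumor mask is on (a simplified stand-in for the paper's actual formulation):

```python
import numpy as np

def masked_similarity_loss(moving: np.ndarray, fixed: np.ndarray,
                           tumor_mask: np.ndarray) -> float:
    """Mean squared intensity error with the similarity constraint relaxed
    (here: dropped entirely) inside the predicted tumor mask."""
    healthy = ~tumor_mask.astype(bool)          # only healthy tissue contributes
    diff = (moving - fixed)[healthy]
    return float(np.mean(diff ** 2))

fixed = np.zeros((4, 4))
moving = np.zeros((4, 4)); moving[0, 0] = 10.0          # large mismatch at one voxel
mask = np.zeros((4, 4), dtype=bool); mask[0, 0] = True  # ...flagged as tumor
print(masked_similarity_loss(moving, fixed, mask))      # 0.0: mismatch ignored
```

The rationale: a tumor present in one volume and absent in the other should not drag the deformation field toward a false intensity match, so the segmentation output gates the registration loss.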
Project description:Glioma grading is critical in treatment planning and prognosis. This study addresses the issue through MRI-based classification, aiming to develop an accurate model for glioma diagnosis. We employed a deep learning pipeline with three essential steps: (1) MRI images were segmented using preprocessing approaches and a UNet architecture, (2) brain tumor regions were extracted using the segmentation, and (3) high-grade and low-grade gliomas were classified using VGG and GoogleNet implementations. Among the additional preprocessing techniques used in conjunction with the segmentation task, the combination of data augmentation and window setting optimization was the most effective, yielding Dice coefficients of 0.82, 0.91 and 0.72 for enhancing tumor, whole tumor and tumor core, respectively. While most of the proposed models achieve comparable accuracies of about 93% on the testing dataset, the pipeline of VGG combined with UNet segmentation obtains the highest accuracy of 97.44%. In conclusion, the presented architecture illustrates a realistic model for detecting gliomas; moreover, it emphasizes the significance of data augmentation and segmentation in improving model performance.
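One subtlety of data augmentation for segmentation is that every geometric transform must be applied identically to the image and its label map. A minimal flip-based sketch (a generic example; the study's exact augmentation pipeline is not specified here):

```python
import numpy as np

def augment_flips(image: np.ndarray, mask: np.ndarray, rng=None):
    """Randomly flip an image/mask pair along each spatial axis.

    The same flip is applied to the image and its segmentation mask so
    that labels stay aligned with anatomy.
    """
    rng = rng or np.random.default_rng()
    for axis in range(image.ndim):
        if rng.random() < 0.5:
            image = np.flip(image, axis=axis)
            mask = np.flip(mask, axis=axis)
    return image.copy(), mask.copy()

img = np.arange(16, dtype=float).reshape(4, 4)
seg = (img > 7).astype(np.uint8)                        # toy "tumor" label map
aug_img, aug_seg = augment_flips(img, seg, np.random.default_rng(0))
# Labels remain aligned with the transformed image:
assert np.array_equal((aug_img > 7).astype(np.uint8), aug_seg)
```

Intensity transforms (e.g. window setting optimization, as used above) are applied to the image only, since they do not move anatomy relative to the labels.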