Graph-regularized 3D shape reconstruction from highly anisotropic and noisy images.
ABSTRACT: Analysis of microscopy images can provide insight into many biological processes. One particularly challenging problem is the segmentation of cell nuclei in highly anisotropic and noisy 3D image data. Manually localizing and segmenting every nucleus is very time-consuming and remains a bottleneck in large-scale biological experiments. In this work, we present a tool for the automated segmentation of cell nuclei from 3D fluorescence microscopy data. Our tool builds on state-of-the-art image processing and machine learning techniques and provides a user-friendly graphical user interface. We show that our tool is as accurate as manual annotation while greatly reducing the time required for annotation.
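As a concrete illustration of the kind of pipeline such a tool automates (not the tool's actual algorithm, which the abstract does not detail), the following sketch segments nuclei in an anisotropic stack with scikit-image; the voxel spacing, smoothing scale, and function names are all assumptions.

```python
# Hedged sketch: threshold + seeded watershed for anisotropic 3D nuclei.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(volume, spacing=(2.0, 0.5, 0.5)):
    """volume: (z, y, x) stack; spacing: assumed voxel size in microns."""
    # Equal *physical* smoothing: sigma in voxels = sigma_um / voxel size.
    sigma_um = 1.0
    smooth = gaussian(volume.astype(float), sigma=[sigma_um / s for s in spacing])
    mask = smooth > threshold_otsu(smooth)
    # Distance transform in physical units so the anisotropy is respected.
    dist = ndi.distance_transform_edt(mask, sampling=spacing)
    # One seed per local maximum; the watershed then splits touching nuclei.
    peaks = peak_local_max(dist, min_distance=5, labels=mask.astype(int))
    seeds = np.zeros(mask.shape, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, seeds, mask=mask)
```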
Project description: Motivation: Progress in 3D electron microscopy (EM) imaging has greatly facilitated high-throughput data acquisition in neuroscience research. Correspondingly, automated image analysis methods are needed that keep pace with the rate at which data are produced. One such example is automated EM image segmentation for neurite reconstruction. However, the efficiency and reliability of current methods still lag far behind human performance. Results: Here, we propose DeepEM3D, a deep learning method for segmenting 3D anisotropic brain electron microscopy images. In this method, the deep learning model can efficiently build feature representations and incorporate sufficient multi-scale contextual information. We propose combining novel boundary map generation methods with optimized model ensembles to address the inherent challenges of segmenting anisotropic images. We evaluated our method by participating in the 3D Segmentation of Neurites in EM Images (SNEMI3D) challenge. Our submission is ranked #1 on the current leaderboard as of Oct 15, 2016. More importantly, our result was very close to human-level performance in terms of the challenge evaluation metric: a Rand error of 0.06015 versus the human value of 0.05998. Availability and Implementation: The code is available at https://github.com/divelab/deepem3d/. Contact: firstname.lastname@example.org. Supplementary information: Supplementary data are available at Bioinformatics online.
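The networks themselves live in the linked repository; purely as a hedged sketch of the fusion-plus-post-processing idea (ensemble-averaged boundary maps turned into a labeling), one might write something like the following, where model_outputs is a hypothetical list of boundary-probability volumes in [0, 1].

```python
# Sketch only: fuse an ensemble of boundary maps, then grow regions.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def fuse_and_segment(model_outputs, seed_thresh=0.1, mask_thresh=0.9):
    boundary = np.mean(model_outputs, axis=0)      # ensemble average
    seeds, _ = ndi.label(boundary < seed_thresh)   # confident interior blobs
    # Grow seeds through the boundary map; high-probability walls stop growth.
    return watershed(boundary, seeds, mask=boundary < mask_thresh)
```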
Project description: BACKGROUND: The segmentation of a 3D image is a task that can hardly be automated in certain situations, notably when the contrast is low and/or the distance between elements is small. Existing supervised methods require a large amount of user input, e.g. delineating the domain in all planar sections. RESULTS: We present FitEllipsoid, a supervised segmentation tool that fits ellipsoids to 3D images with a minimal amount of interaction: the user clicks on a few points on the boundary of the object in three orthogonal views. The quantitative geometric results of the segmentation can be exported as a CSV file or as a binary image. The core of the code is based on an original computational approach that fits ellipsoids to point clouds in an affine-invariant manner. The plugin is validated by segmenting a large number of 3D nuclei in tumor spheroids, allowing the distribution of their shapes to be analyzed. User experiments show that large collections of nuclei can be segmented with high accuracy, much faster than with more traditional 2D slice-by-slice delineation approaches. CONCLUSIONS: We designed user-friendly software, FitEllipsoid, for segmenting hundreds of ellipsoidal shapes in a supervised manner. It may be used directly to analyze biological samples, or to generate the segmentation databases needed to train learning algorithms. The algorithm is distributed as an open-source plugin for the image analysis software Icy. We also provide a MATLAB toolbox, available on GitHub.
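For intuition only, here is the textbook algebraic quadric fit in NumPy; note that FitEllipsoid's actual contribution is an affine-invariant fitting procedure, which this plain least-squares version is not.

```python
# Textbook algebraic fit, NOT FitEllipsoid's affine-invariant algorithm.
# Fits c . v = 0 with v = (x^2, y^2, z^2, xy, xz, yz, x, y, z, 1) in the
# least-squares sense to user-clicked boundary points.
import numpy as np

def fit_quadric(points):
    x, y, z = points.T                       # points: (N, 3), N >= 9
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z,
                         np.ones_like(x)])
    # Coefficients = right-singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]                            # 10 quadric coefficients
```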
Project description: Segmentation is a fundamental problem that dominates the success of microscopic image analysis. Despite almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation), a MATLAB/C++-based segmentation tool tailored for counting cells and measuring fluorescence intensity in 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Its accuracy and ease of use will thus allow MINS to be adopted for routine single-cell-level image analyses.
Project description: Segmenting cell nuclei within microscopy images is a ubiquitous task in biological research and clinical applications. Unfortunately, segmenting low-contrast, overlapping objects that may be tightly packed is a major bottleneck for standard deep learning-based models. We report a Nuclear Segmentation Tool (NuSeT) based on deep learning that accurately segments nuclei across multiple types of fluorescence imaging data. Using a hybrid network consisting of a U-Net and a Region Proposal Network (RPN), followed by a watershed step, we have achieved superior performance in detecting and delineating nuclear boundaries in 2D and 3D images of varying complexity. By using foreground normalization and additional training on synthetic images containing non-cellular artifacts, NuSeT improves nuclear detection and reduces false positives. NuSeT addresses common challenges in nuclear segmentation such as variability in nuclear signal and shape, limited training sample size, and sample preparation artifacts. Compared to other segmentation models, NuSeT consistently fares better in generating accurate segmentation masks and assigning boundaries for touching nuclei.
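One plausible reading of foreground normalization (an assumption on our part, not NuSeT's published recipe) is to standardize each frame by the statistics of its putative foreground rather than of the whole image, so that dim and bright fields of view land in a comparable intensity range.

```python
# Hedged sketch of the *idea* of foreground normalization.
import numpy as np
from skimage.filters import threshold_otsu

def foreground_normalize(img):
    fg = img > threshold_otsu(img)           # rough foreground estimate
    mu, sd = img[fg].mean(), img[fg].std()
    return (img - mu) / (sd + 1e-8)          # z-score by foreground stats
```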
Project description: BACKGROUND AND PURPOSE: In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are commonly needed to improve segmentation results. These tools must therefore provide faster corrections with fewer interactions, and a user-independent solution to reduce the time between image acquisition and diagnosis. METHODS: We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions, enabling intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated on lumbar muscle and knee joint segmentations from MR images. RESULTS: Experimental results show that full segmentation corrections could be performed within an average correction time of 5.5±3.3 minutes and an average of 56.5±33.1 user interactions, while maintaining the quality of the final segmentation result at an average Dice coefficient of 0.92±0.02 for both anatomies. In addition, for users with different levels of expertise, our method reduced correction times from 38±19.2 to 6.4±4.3 minutes and the number of interactions from 339±157.1 to 67.7±39.6.
Project description: Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume at a single or multiple time points is time-consuming and suffers from intra- and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two-time-point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesions in three steps: (1) cross-sectional lesion segmentation of the two time points; (2) creation of a difference image, which is used to model the lesion evolution; (3) a joint EM lesion segmentation framework that uses the output of steps (1) and (2) to produce the final lesion segmentation. The accuracy (Dice score) and reproducibility (absolute lesion volume difference) of MSmetrix-long are evaluated using two datasets. Results: On the first dataset, the median Dice score between MSmetrix-long and expert lesion segmentation was 0.63 and the Pearson correlation coefficient (PCC) was 0.96. On the second dataset, the median absolute volume difference was 0.11 ml. Conclusions: MSmetrix-long is accurate and consistent in segmenting MS lesions. It also compares favorably with the publicly available longitudinal MS lesion segmentation algorithm of the Lesion Segmentation Toolbox.
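A minimal sketch of step (2), under the assumption that the two FLAIR volumes are already co-registered and skull-stripped (the actual MSmetrix-long preprocessing and EM model are considerably more involved):

```python
# Hedged sketch: a normalized difference image as a lesion-change proxy.
import numpy as np

def lesion_change_map(flair_t1, flair_t2, brain_mask):
    # z-score each time point within the brain mask so intensities match.
    def zscore(v):
        m = v[brain_mask]
        return (v - m.mean()) / m.std()
    diff = zscore(flair_t2) - zscore(flair_t1)
    return diff * brain_mask   # positive values suggest new/growing lesions
```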
Project description: Premise: X-ray microcomputed tomography (microCT) can be used to measure 3D leaf internal anatomy, providing a holistic view of tissue organization. Previously, the substantial time needed to segment multiple tissues limited this technique to small data sets, restricting its utility for phenotyping experiments and, because of low replication, limiting confidence in the inferences drawn from such studies. Methods and Results: We present a Python codebase for random forest machine learning segmentation and 3D leaf anatomical trait quantification that dramatically reduces the time required to process single-leaf microCT scans into detailed segmentations. When trained on six hand-segmented image slices out of the >1500 in a full leaf scan, the model achieves >90% accuracy in background and tissue segmentation. Conclusions: Overall, this 3D segmentation and quantification pipeline can remove one of the major barriers to using microCT imaging in high-throughput plant phenotyping.
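The idea of training a per-scan random forest on a handful of labeled slices can be sketched with scikit-learn; the three intensity features used here are an illustrative assumption, as the published pipeline computes a richer per-voxel feature set.

```python
# Sketch: per-scan random forest trained on a few hand-labeled z-slices.
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    # Illustrative per-voxel features: raw intensity plus two Gaussian blurs.
    feats = [volume,
             ndi.gaussian_filter(volume, sigma=2),
             ndi.gaussian_filter(volume, sigma=8)]
    return np.stack([f.ravel() for f in feats], axis=1)   # (n_voxels, 3)

def segment_with_forest(volume, labels, labeled_slices):
    # labels: integer class image, valid only on the hand-segmented slices.
    X = voxel_features(volume.astype(float))
    per_slice = volume[0].size                             # voxels per z-slice
    rows = np.concatenate([np.arange(z * per_slice, (z + 1) * per_slice)
                           for z in labeled_slices])
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X[rows], labels.ravel()[rows])
    return clf.predict(X).reshape(volume.shape)            # full-scan labels
```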
Project description: Purpose: To describe and evaluate a free, online tool for automatically segmenting optical coherence tomography (OCT) images from different devices and computing summary measures such as retinal thickness. Methods: ReLayer (https://relayer.online) is an online platform to which OCT scans can be uploaded for analysis. Results can be downloaded as plaintext (.csv) files. The segmentation method includes a novel, one-dimensional active contour model designed to locate the inner limiting membrane, inner/outer segment, and retinal pigment epithelium. The method, designed for B-scans from the Heidelberg Engineering Spectralis, was adapted for the Topcon 3D OCT-2000 and OptoVue AngioVue. It was applied to scans from healthy and pathological eyes and validated against segmentation by the manufacturers, the IOWA Reference Algorithms, and manual segmentation. Results: Segmentation of a B-scan took ≤1 second. In healthy eyes, the mean difference in retinal thickness between ReLayer and the reference standard was below the resolution of the Spectralis and 3D OCT-2000, and slightly above that of the AngioVue. In pathological eyes, ReLayer performed similarly to IOWA (P = 0.97) and better than Spectralis (P < 0.001). Conclusions: A free online platform (ReLayer) is capable of segmenting OCT scans with similar speed, accuracy, and reliability to the other tested algorithms, but offers greater accessibility. ReLayer could be a valuable tool for researchers requiring full segmentations, which are often not made available by commercial software. Translational Relevance: A free online platform (ReLayer) provides free, accessible segmentation of OCT images, yielding data often not available from existing commercial software.
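For intuition, a deliberately naive per-A-scan heuristic for the inner limiting membrane is shown below; ReLayer's one-dimensional active contour additionally enforces smoothness across neighboring A-scans, which this sketch does not.

```python
# Naive stand-in, NOT ReLayer's active contour: take the strongest
# dark-to-bright transition along each A-scan as the ILM estimate.
import numpy as np

def ilm_estimate(bscan):
    # bscan: (depth, width) array; columns are A-scans.
    grad = np.diff(bscan.astype(float), axis=0)
    return np.argmax(grad, axis=0)          # one depth index per A-scan
```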
Project description: The subcellular localization of objects, such as organelles, proteins, or other molecules, instructs cellular form and function. Understanding the underlying spatial relationships between objects through colocalization analysis of microscopy images is a fundamental approach used to inform biological mechanisms. We generated an automated and customizable computational tool, the SubcellularDistribution pipeline, to facilitate object-based image analysis of three-dimensional (3D) fluorescence microscopy images. To test the utility of the SubcellularDistribution pipeline, we examined the subcellular distribution of mRNA relative to centrosomes within syncytial Drosophila embryos. Centrosomes are microtubule-organizing centers, and RNA enrichments at centrosomes are of emerging importance. Our open-source and freely available software detected RNA distributions comparably to commercially available image analysis software. The SubcellularDistribution pipeline is designed to guide the user through the complete process of preparing image analysis data for publication, from image segmentation and data processing to visualization.
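The core object-based measurement can be sketched in a few lines of SciPy, with hypothetical inputs (detected RNA spot and centrosome coordinates) and an assumed voxel spacing; the actual pipeline also handles segmentation, normalization, and visualization.

```python
# Sketch: distance from each RNA spot to its nearest centrosome, in microns.
import numpy as np
from scipy.spatial import cKDTree

def spot_to_centrosome_distances(rna_zyx, centrosome_zyx,
                                 spacing=(0.3, 0.1, 0.1)):
    s = np.asarray(spacing)                 # assumed (z, y, x) voxel size, um
    tree = cKDTree(centrosome_zyx * s)      # centrosome centroids in um
    dist, _ = tree.query(rna_zyx * s)       # nearest-neighbor distance per spot
    return dist
```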
Project description: Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use, and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. Compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0%±4.9%, while the 3-D method achieved an overall Dice index of 83.9%±6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
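As a stand-in sketch from the same family of techniques (scikit-image's morphological geodesic active contour rather than the fast-marching level set the authors use), a single 2-D frame could be segmented from a user-supplied seed like this; the seed coordinates and radius are assumptions.

```python
# Stand-in sketch: geodesic active contour grown from a seed point
# (recent scikit-image >= 0.19 API assumed).
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

def segment_frame(frame, seed_rc, iters=200):
    gimage = inverse_gaussian_gradient(frame.astype(float))  # edge-stopping map
    init = disk_level_set(frame.shape, center=seed_rc, radius=10)
    return morphological_geodesic_active_contour(gimage, num_iter=iters,
                                                 init_level_set=init,
                                                 balloon=1)  # expand from seed
```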