LimeSeg: a coarse-grained lipid membrane simulation for 3D image segmentation.
ABSTRACT: BACKGROUND:3D segmentation is often a prerequisite for 3D object display and quantitative measurements. Yet existing voxel-based methods do not directly give information on the object's surface or topology. Spatially continuous approaches such as level sets, active contours and meshes do provide surfaces and a concise shape description, but they are generally not suitable for multiple-object segmentation and/or for objects with an irregular shape, which can hamper their adoption by bioimage analysts. RESULTS:We developed LimeSeg, a computationally efficient and spatially continuous 3D segmentation method. LimeSeg is easy to use and can process many and/or highly convoluted objects. Based on the concept of SURFace ELements ("Surfels"), LimeSeg resembles a highly coarse-grained simulation of a lipid membrane in which a set of particles, analogous to lipid molecules, is attracted to local image maxima. The particles are self-generating and self-destructing, which allows the membrane to evolve towards the contours of the objects of interest. The capabilities of LimeSeg (simultaneous segmentation of numerous non-overlapping objects, segmentation of highly convoluted objects, and robustness on large datasets) are demonstrated on experimental use cases: epithelial cells, brain MRI, and a FIB-SEM dataset of a cellular membrane system, respectively. CONCLUSION:We implemented a new and efficient 3D surface-reconstruction plugin adapted to various sources of images and deployed in the user-friendly and well-known ImageJ environment.
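The particle dynamics described above can be sketched in a few lines. The following is a minimal illustration, not LimeSeg's actual implementation: the function names and the synthetic "shell" image are invented for this example. Each surfel carries a position and a fixed outward normal and climbs the image intensity along that normal until it sits on a local maximum.

```python
import numpy as np

def advect(positions, normals, intensity, step=0.5, probe=1.0):
    """Move each surfel along its normal toward higher image intensity."""
    ahead = intensity(positions + probe * normals)    # sample in front of each surfel
    behind = intensity(positions - probe * normals)   # sample behind it
    # the sign of the directional derivative picks the direction of motion;
    # a surfel sitting on a local maximum stops (sign == 0)
    return positions + step * np.sign(ahead - behind)[:, None] * normals

# synthetic "image": intensity is maximal on a sphere of radius 10
def shell(p, r0=10.0):
    r = np.linalg.norm(p, axis=1)
    return np.exp(-(r - r0) ** 2)

rng = np.random.default_rng(0)
v = rng.normal(size=(200, 3))
normals = v / np.linalg.norm(v, axis=1, keepdims=True)
positions = 5.0 * normals                  # seed the surfels well inside the shell
for _ in range(30):
    positions = advect(positions, normals, shell)
radii = np.linalg.norm(positions, axis=1)  # the surfels settle on the shell, r ~ 10
```

The self-generation/self-destruction step (inserting surfels into gaps and removing crowded ones) would be layered on top of this advection loop, together with membrane-like interactions between neighboring particles.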
Project description:Deformable models are widely used for image segmentation, most commonly to find single objects within an image. Although several methods have been proposed for segmenting multiple objects using deformable models, substantial limitations in their utility remain. This paper presents a multiple-object segmentation method using a novel and efficient object representation for both two and three dimensions. The new framework guarantees object relationships and topology, prevents overlaps and gaps, enables boundary-specific speeds, and has a computationally efficient evolution scheme that is largely independent of the number of objects. Maintaining object relationships, together with the straightforward use of object-specific and boundary-specific smoothing and advection forces, enables the segmentation of objects with multiple compartments, a critical capability in the parcellation of organs in medical imaging. Comparing the new framework with previous approaches shows its superior performance and scalability.
Project description:<h4>Background</h4>Quantitative imaging of epithelial tissues requires bioimage analysis tools that are widely applicable and accurate. When imaging 3D tissues, a common preprocessing step consists of projecting the acquired 3D volume onto a 2D plane that maps the tissue surface. While segmenting the tissue cells is amenable on 2D projections, it is still very difficult and cumbersome in 3D. However, for many specimens and models used in developmental and cell biology, the complex content of the image volume surrounding the epithelium often reduces the visibility of the biological object in the projection, compromising its subsequent analysis. In addition, the projection may distort the geometry of the tissue and can lead to strong artifacts in morphology measurements.<h4>Results</h4>Here we introduce a user-friendly toolbox built to robustly project epithelia onto their 2D surface from 3D volumes and to produce accurate morphology measurements corrected for projection distortion, even for very curved tissues. Our toolbox is built upon two components. LocalZProjector is a configurable Fiji plugin that generates 2D projections and height-maps from potentially large 3D stacks (larger than 40 GB per time-point) by incorporating only the signal from the planes with the highest local variance or mean intensity, even when the image content is complex. DeProj is a MATLAB tool that generates correct morphology measurements by combining the height-map output (such as the one produced by LocalZProjector) with the results of a cell segmentation on the 2D projection, hence effectively deprojecting the 2D segmentation into 3D. In this paper, we demonstrate their effectiveness over a wide range of biological samples.
We then compare its performance and accuracy against similar existing tools.<h4>Conclusions</h4>We find that LocalZProjector performs well even when the volume to project also contains unwanted signal in other layers, and we show that it can process large images without a pre-processing step. We study the impact of the geometrical distortions that the projection induces on morphological measurements: the very large distortions we measure are corrected by DeProj, providing accurate outputs.
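The plane-selection idea can be sketched as follows. This is an illustrative reimplementation of the variance criterion, not LocalZProjector's code; `local_z_project` and the toy stack are names invented here. For each XY position, the value is taken from the z-plane whose local patch has the highest variance, which yields both the 2D projection and the height-map.

```python
import numpy as np

def local_z_project(stack, radius=1):
    """Variance-based projection sketch. stack: (Z, Y, X); returns (projection, height_map)."""
    z_planes = stack.shape[0]
    variance = np.empty(stack.shape, dtype=float)
    for k in range(z_planes):
        # local variance of each pixel's (2*radius+1)^2 neighborhood in plane k
        pad = np.pad(stack[k].astype(float), radius, mode='edge')
        win = np.lib.stride_tricks.sliding_window_view(pad, (2 * radius + 1,) * 2)
        variance[k] = win.var(axis=(-2, -1))
    height_map = variance.argmax(axis=0)                       # best-focus plane per pixel
    projection = np.take_along_axis(stack, height_map[None], axis=0)[0]
    return projection, height_map

# toy stack: texture sits in plane 1 on the left half, plane 2 on the right half
stack = np.zeros((3, 8, 8))
stack[1, :, :4] = np.random.default_rng(1).random((8, 4))
stack[2, :, 4:] = np.random.default_rng(2).random((8, 4))
proj, hmap = local_z_project(stack)
```

The height-map produced this way is exactly the kind of input DeProj combines with a 2D segmentation to recover 3D morphology.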
Project description:Clinical examination of three-dimensional image data of compound anatomical objects, such as complex joints, remains a tedious process demanding the time and expertise of physicians. For instance, automation of the segmentation of the TMJ (temporomandibular joint) has been hindered by its compound three-dimensional shape, multiple overlaid textures, an abundance of surrounding irregularities in the skull, and a virtually omnidirectional range of the jaw's motion, all of which extend the manual annotation process to more than an hour per patient. To address the challenge, we propose a new workflow for the 3D segmentation task: segmenting the empty spaces between all the tissues surrounding the object, the so-called negative volume segmentation. Our approach is an end-to-end pipeline comprising a V-Net for bone segmentation and a 3D volume construction step that inflates the reconstructed bone head in all directions along the normal vectors to its mesh faces. Eventually confined within the skull bones, the inflated surface occupies the entire "negative" space in the joint, effectively providing a geometrical/topological metric of the joint's health. We validate the idea on the CT scans of a 50-patient dataset annotated by experts in maxillofacial medicine, quantitatively compare the asymmetry between the left and right negative volumes, and automate the entire framework for clinical adoption.
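The inflation step can be illustrated with a toy sketch. All names here are hypothetical, and the real pipeline operates on a triangulated bone-head mesh inside a CT volume; the point is only the mechanism: each vertex marches along its outward normal until the next step would land inside bone, so the stopped surface fills the negative space.

```python
import numpy as np

def inflate(vertices, normals, is_bone, step=0.25, max_steps=200):
    """Push each vertex along its normal until it would enter bone."""
    v = vertices.copy()
    active = np.ones(len(v), dtype=bool)          # vertices still moving
    for _ in range(max_steps):
        trial = v[active] + step * normals[active]
        hit = is_bone(trial)                      # True where the trial point is inside bone
        v[active] = np.where(hit[:, None], v[active], trial)
        idx = np.flatnonzero(active)
        active[idx[hit]] = False                  # freeze vertices that reached bone
        if not active.any():
            break
    return v

# toy "bone": everything at radius >= 8 from the joint center
rng = np.random.default_rng(3)
d = rng.normal(size=(100, 3))
n = d / np.linalg.norm(d, axis=1, keepdims=True)  # radial outward normals
out = inflate(1.0 * n, n, lambda p: np.linalg.norm(p, axis=1) >= 8.0)
r = np.linalg.norm(out, axis=1)                   # vertices stop just inside the bone
```

In the toy case the inflated surface converges to a sphere just inside the bone shell; with a real skull the stopping surface is irregular, and its enclosed volume is the negative-volume metric.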
Project description:Object detection based on vehicle-mounted lidar is a key component of the perception system of autonomous vehicles, providing high-precision and highly robust obstacle information for safe driving. However, most algorithms operate on large amounts of point-cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a fast 3D object detection method based on three main steps. First, the ground segmentation by discriminant image (GSDI) method converts point-cloud data into discriminant images for ground-point segmentation, which avoids computing directly on the point cloud and improves the efficiency of ground-point segmentation. Second, an image detector generates the region of interest of the three-dimensional object, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed for the varying density of the point-cloud data, which improves the detection of long-range objects and avoids the over-segmentation produced by traditional algorithms. Experiments show that the algorithm meets the real-time requirements of autonomous driving while maintaining high accuracy.
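The idea behind a distance-adaptive clustering threshold can be sketched as follows. The abstract does not give the DDTC formula, so the linear threshold model and all names below are illustrative assumptions: lidar returns get sparser with range, so two points are linked into one cluster when their separation is below a threshold that grows with their distance from the sensor.

```python
import numpy as np

def adaptive_cluster(points, base=0.3, alpha=0.02):
    """Euclidean clustering whose link threshold grows with range from the sensor."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        labels[i] = current
        stack = [i]
        while stack:                                       # flood-fill the current cluster
            j = stack.pop()
            thr = base + alpha * np.linalg.norm(points[j]) # larger threshold far away
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((d < thr) & (labels < 0)):
                labels[k] = current
                stack.append(int(k))
        current += 1
    return labels

# two nearby objects close to the sensor, one sparsely sampled object far away
pts = np.array([[1.0, 0, 0], [1.1, 0, 0], [3.0, 0, 0], [50.0, 0, 0], [50.8, 0, 0]])
labels = adaptive_cluster(pts)
```

With a fixed threshold, the two far points (0.8 m apart) would be split into two clusters; the adaptive threshold keeps them together while still separating the two near objects.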
Project description:Three-dimensional (3D) shape perception is one of the most important functions of vision. It is crucial for many tasks, from object recognition to tool use, and yet how the brain represents shape remains poorly understood. Most theories focus on purely geometrical computations (e.g., estimating depths, curvatures, symmetries). Here, however, we find that shape perception also involves sophisticated inferences that parse shapes into features with distinct causal origins. Inspired by marble sculptures such as Strazza's The Veiled Virgin (1850), which vividly depict figures swathed in cloth, we created composite shapes by wrapping unfamiliar forms in textile, so that the observable surface relief was the result of complex interactions between the underlying object and the overlying fabric. Making sense of such structures requires segmenting the shape according to its causes, distinguishing whether lumps and ridges are due to the shrouded object or to the ripples and folds of the overlying cloth. Three-dimensional scans of the objects with and without the textile provided ground-truth measures of the true physical surface reliefs against which observers' judgments could be compared. In a virtual painting task, participants indicated which surface ridges appeared to be caused by the hidden object and which were due to the drapery. In another experiment, participants indicated the perceived depth profile of both surface layers. Their responses reveal that they can robustly distinguish features belonging to the textile from those due to the underlying object. Together, these findings demonstrate the operation of visual shape-segmentation processes that parse shapes based on their causal origin.
Project description:Recent advances in automated high-resolution fluorescence microscopy and robotic handling have made possible the systematic and cost-effective study of diverse morphological changes within a large population of cells under a variety of perturbations, e.g., drugs, compounds, metal catalysts, or RNA interference (RNAi). Cell-population-based studies deviate from conventional microscopy studies on a few cells and can provide stronger statistical power for drawing experimental observations and conclusions. However, it is challenging to manually extract and quantify phenotypic changes from the large amounts of complex image data generated. Thus, bioimage informatics approaches are needed to rapidly and objectively quantify and analyze the image data. This paper provides an overview of the bioimage informatics challenges and approaches in image-based studies for drug and target discovery. The concepts and capabilities of image-based screening are first illustrated by a few practical examples investigating different kinds of phenotypic changes caused by drugs, compounds, or RNAi. The bioimage analysis approaches, including object detection, segmentation, and tracking, are then described. Subsequently, the quantitative features, phenotype identification, and multidimensional profile analysis for profiling the effects of drugs and targets are summarized. Moreover, a number of publicly available software packages for bioimage informatics are listed for further reference. It is expected that this review will help readers, including those without bioimage informatics expertise, understand the capabilities, approaches, and tools of bioimage informatics and apply them to advance their own studies.
Project description:Direct comparison of three-dimensional (3D) objects is computationally expensive due to the need to translate, rotate, and scale the objects to evaluate their similarity. In applications of 3D object comparison, identifying specific local regions of objects is often of particular interest. We have recently developed a set of 2D moment invariants based on discrete orthogonal Krawtchouk polynomials for comparing local image patches. In this work, we extend them to 3D and construct 3D Krawtchouk descriptors (3DKDs) that are invariant under translation, rotation, and scaling. The new descriptors can extract local features of a 3D surface from any region of interest. This property enables the comparison of two arbitrary local surface regions from different 3D objects. We present the new formulation of 3DKDs and apply it to the local shape comparison of protein surfaces in order to predict ligand molecules that bind to query proteins.
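As a rough sketch of the construction (this is the standard separable form of weighted Krawtchouk moments; the exact normalization and invariance scheme used by 3DKD may differ), the order-\((n,m,l)\) moment of a voxelized function \(f\) on an \(N \times M \times L\) grid factorizes along the three axes:

```latex
\bar{Q}_{nml} \;=\; \sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\sum_{z=0}^{L-1}
  \bar{K}_n(x; p_x, N-1)\,\bar{K}_m(y; p_y, M-1)\,\bar{K}_l(z; p_z, L-1)\, f(x,y,z)
```

where \(\bar{K}_n(x; p, N-1)\) is the weighted Krawtchouk polynomial of order \(n\). The parameters \(p_x, p_y, p_z \in (0,1)\) shift the polynomials' weight toward a chosen region of the grid, which is what makes the descriptors local and lets them focus on an arbitrary region of interest.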
Project description:Volume-object annotation system (VANO) is a cross-platform image annotation system that enables one to conveniently visualize and annotate 3D volume objects, including nuclei and cells. An application of VANO typically starts with an initial collection of objects produced by a segmentation computation. The objects can then be labeled, categorized, deleted, added, split, merged and redefined. VANO has been used to build high-resolution digital atlases of the nuclei of Caenorhabditis elegans at the L1 stage and of the nuclei of Drosophila melanogaster's ventral nerve cord at the late embryonic stage. Platform-independent executables of VANO, a sample dataset, and a detailed description of both its design and usage are available at research.janelia.org/peng/proj/vano. VANO is open source for co-development.
Project description:There have been recent developments in grippers based on capillary force and condensed water droplets, which are used for manipulating micro-sized objects. Recently, one-finger grippers have been produced that can reliably grip using the capillary force; to release objects, either the van der Waals, gravitational or inertial-force method is used. This article presents methods for reliably gripping and releasing micro-objects using the capillary force. Moisture from the surrounding air is condensed into a thin layer of water on the contact surfaces of the objects. From this thin layer of water, a water meniscus is created between the micro-sized object, the gripper and the releasing surface. Consequently, the water meniscus between the object and the releasing surface produces a capillary force high enough to release the micro-sized object from the tip of the one-finger gripper. In this work, either polystyrene or glass beads with diameters between 5 and 60 µm, or irregularly shaped dust particles of similar sizes, were used. 3D structures made up of micro-sized objects could be constructed using this method. The method is reliable both for releasing during assembly and for gripping when objects are removed from the top of the 3D structure, the so-called "disassembling gripping" process. The release accuracy was better than 0.5 µm.
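For a sense of scale (this estimate is a textbook approximation, not taken from the article), the capillary force of a small water meniscus between a spherical object of radius \(R\) and a flat surface is commonly approximated as

```latex
F_c \;\approx\; 4\pi \gamma R \cos\theta
```

with \(\gamma \approx 72\ \mathrm{mN/m}\) the surface tension of water and \(\theta\) the contact angle. For \(R = 10\ \mu\mathrm{m}\) and a well-wetting contact, this gives a force on the order of \(10\ \mu\mathrm{N}\), several orders of magnitude larger than the weight of such a bead (tens of pN), which is why the meniscus at the releasing surface can dominate the adhesion at the gripper tip and pull the object off.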
Project description:SUMMARY:Segmentation of single cells in microscopy images is one of the major challenges in computational biology. It is the first step of most bioimage analysis tasks, and it is essential for creating training sets for more advanced deep-learning approaches. Here, we propose 3D-Cell-Annotator to solve this task using 3D active surfaces together with shape descriptors as prior information, in a semi-automated fashion. The software uses the convenient 3D interface of the widely used Medical Imaging Interaction Toolkit (MITK). Results on 3D biological structures (e.g. spheroids, organoids and embryos) show that the precision of the segmentation reaches the level of a human expert. AVAILABILITY AND IMPLEMENTATION:3D-Cell-Annotator is implemented in CUDA/C++ as a patch for the segmentation module of MITK. The 3D-Cell-Annotator-enabled MITK distribution can be downloaded at www.3D-cell-annotator.org. It works under 64-bit Windows systems and recent Linux distributions, even on a consumer-level laptop with a CUDA-enabled video card and recent NVIDIA drivers. SUPPLEMENTARY INFORMATION:Supplementary data are available at Bioinformatics online.