Project description: Objectives: Evaluating craniofacial phenotype-genotype correlations prenatally is increasingly important; however, it is subjective and challenging with 3D ultrasound. We developed an automated landmark propagation pipeline using 3D motion-corrected, slice-to-volume reconstructed (SVR) fetal MRI for craniofacial measurements. Methods: A literature review and expert consensus identified 31 craniofacial biometrics for fetal MRI. An MRI atlas with defined anatomical landmarks served as a template for subject registration, auto-labelling, and biometric calculation. We assessed 108 healthy controls and 24 fetuses with Down syndrome (T21) in the third trimester (29-36 weeks gestational age, GA) to identify meaningful biometrics in T21. Reliability and reproducibility were evaluated in 10 random datasets by four observers. Results: Automated labels were produced for all 132 subjects with a 0.03% placement error rate. Seven measurements, including anterior base of skull length and maxillary length, showed significant differences with large effect sizes between the T21 and control groups (ANOVA, p < 0.001). Manual measurements took 25-35 minutes per case, while automated extraction took approximately 5 minutes. Bland-Altman plots showed agreement within manual observer ranges except for mandibular width, which had higher variability. Extended GA growth charts (19-39 weeks), based on 280 control fetuses, were produced for future research. Conclusion: This is the first automated atlas-based protocol using 3D SVR MRI for fetal craniofacial biometrics, and it accurately revealed morphological craniofacial differences in a T21 cohort. Future work should focus on improving measurement reliability, larger clinical cohorts, and technical advancements to enhance prenatal care and phenotypic characterisation.
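A minimal sketch may help make the propagation step concrete. It assumes a single 4x4 affine stands in for the full atlas-to-subject registration (real pipelines typically compose affine and deformable transforms), and the landmark indices in the usage comment are hypothetical placeholders, not the protocol's actual 31-biometric definitions:

```python
import numpy as np

def propagate_landmarks(atlas_landmarks, affine):
    """Map atlas-space landmark coordinates (N, 3) into subject space
    with a 4x4 affine transform (illustrative simplification)."""
    homogeneous = np.hstack([atlas_landmarks, np.ones((len(atlas_landmarks), 1))])
    return (homogeneous @ affine.T)[:, :3]

def biometric_mm(landmarks, idx_a, idx_b):
    """A linear biometric as the Euclidean distance between two
    labelled landmarks, in physical units (mm)."""
    return float(np.linalg.norm(landmarks[idx_a] - landmarks[idx_b]))

# Hypothetical usage: a skull-base length from two landmark indices.
# subject_lm = propagate_landmarks(atlas_lm, subject_affine)
# length = biometric_mm(subject_lm, NASION_IDX, SELLA_IDX)
```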
Project description: An important step in the preprocessing of resting-state functional magnetic resonance images (rs-fMRI) is the separation of brain from non-brain voxels. Widely used imaging tools such as FSL's BET2 and AFNI's 3dSkullStrip accomplish this task effectively in children and adults. In fetal functional brain imaging, however, the presence of maternal tissue around the brain, coupled with the non-standard position of the fetal head, limits the usefulness of these tools. Accurate brain masks are thus generated manually, a time-consuming and tedious process that slows down preprocessing of fetal rs-fMRI. Recently, deep learning-based segmentation models such as convolutional neural networks (CNNs) have been increasingly used for automated segmentation of medical images, including the fetal brain. Here, we propose a computationally efficient end-to-end generative adversarial network (GAN) for segmenting the fetal brain. This method, which we call FetalGAN, yielded whole-brain masks that closely approximated the manually labeled ground truth. FetalGAN performed better than the 3D U-Net model and BET2: FetalGAN, Dice score = 0.973 ± 0.013, precision = 0.977 ± 0.015; 3D U-Net, Dice score = 0.954 ± 0.054, precision = 0.967 ± 0.037; BET2, Dice score = 0.856 ± 0.084, precision = 0.758 ± 0.113. FetalGAN was also faster than the 3D U-Net and the manual method (7.35 s vs. 10.25 s vs. ∼5 min/volume). To the best of our knowledge, this is the first successful implementation of a 3D CNN with a GAN on fetal fMRI brain images, and it represents a significant advance in fully automating the processing of fetal rs-fMRI images.
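For readers reproducing the comparison, the reported overlap metrics are straightforward to compute from binary masks; a minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def dice_and_precision(pred, truth):
    """Dice score and precision for binary brain masks.
    pred, truth: boolean 3D arrays of identical shape; assumes the
    predicted mask is non-empty."""
    tp = np.logical_and(pred, truth).sum()
    dice = 2.0 * tp / (pred.sum() + truth.sum())
    precision = tp / pred.sum()
    return dice, precision
```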
Project description: Perivascular spaces (PVS) are a feature of small vessel disease (SVD) and an important part of the brain's circulation and glymphatic drainage system. Quantitative analysis of PVS on magnetic resonance images (MRI) is important for understanding their relationship with neurological diseases. In this work, we propose a segmentation technique based on 3D Frangi filtering for extraction of PVS from MRI. We used ordered logit models and visual rating scales as alternative ground truth for Frangi filter parameter optimization and evaluation. We optimized and validated our proposed models on two independent cohorts: a dementia sample (N = 20) and patients who previously had mild to moderate stroke (N = 48). The results demonstrate the robustness and generalisability of our segmentation method. Segmentation-based PVS burden estimates correlated well with neuroradiological assessments (Spearman's ρ = 0.74, p < 0.001), supporting the potential of the proposed method.
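The Frangi vesselness filter is available off the shelf, so the core of such a pipeline can be sketched briefly. The sigma range and threshold below are placeholder values, not the optimized parameters from this work (which were tuned against ordered logit models and visual ratings):

```python
import numpy as np
from skimage.filters import frangi

def pvs_candidate_mask(volume, threshold=0.5):
    """Enhance thin tubular structures in a 3D MRI volume and
    threshold the response to obtain a candidate PVS mask."""
    vesselness = frangi(volume,
                        sigmas=np.arange(0.5, 2.5, 0.5),  # placeholder scales
                        black_ridges=False)  # PVS appear bright on T2w
    return vesselness > threshold * vesselness.max()
```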
Project description: The automated measurement of crop phenotypic parameters is of great significance to the quantitative study of crop growth. Segmentation and classification of crop point clouds help to automate the measurement of these parameters. At present, segmentation of spike-shaped crop point clouds suffers from problems such as scarce samples, uneven point-cloud distribution, occlusion between stem and spike, unordered point arrangement, and a lack of targeted network models. Traditional clustering methods can segment plant-organ point clouds that occupy relatively independent spatial locations, but their accuracy is not acceptable. This paper first builds a desktop-level point cloud scanning apparatus based on a structured-light projection module to facilitate point cloud acquisition. Then, rice ear point clouds were collected and assembled into a dataset. In addition, data augmentation is used to improve sample utilization efficiency and training accuracy. Finally, a 3D point cloud convolutional neural network called Panicle-3D was designed to achieve better segmentation accuracy. Specifically, the design of Panicle-3D targets the multiscale characteristics of plant organs, combining the PointConv structure with long and short skip connections, which accelerates the convergence of the network and reduces the loss of features during point cloud downsampling. In comparison experiments, the segmentation accuracy of Panicle-3D reaches 93.4%, which is higher than that of PointNet. Panicle-3D is also suitable for other, similar crop point cloud segmentation tasks.
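As an illustration of the augmentation step, the sketch below shows two transforms commonly applied to point-cloud training data (the paper does not enumerate its exact augmentations): a random rotation about the vertical axis and Gaussian coordinate jitter:

```python
import numpy as np

def augment_point_cloud(points, jitter_sd=0.005, rng=None):
    """points: (N, 3) array. Returns a copy randomly rotated about
    the z-axis and perturbed with Gaussian jitter."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return points @ rot_z.T + rng.normal(0.0, jitter_sd, size=points.shape)
```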
Project description: Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models that account for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and it is therefore robust against global intensity changes. Discriminative models, on the other hand, combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses deformable templates with informative features, taking advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over state-of-the-art systems.
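One simple way to realize such a fusion, voxel-wise, is a log-linear mix of the atlas prior and the classifier posterior; the sketch below illustrates that general idea under this assumption, and is not the paper's exact fusion rule:

```python
import numpy as np

def fuse_and_label(atlas_prior, classifier_prob, w=0.5, eps=1e-8):
    """atlas_prior, classifier_prob: arrays of shape (..., n_labels),
    each summing to 1 over the last axis. Returns a per-voxel label
    map from a weighted geometric mean of the two distributions."""
    log_mix = w * np.log(atlas_prior + eps) + (1.0 - w) * np.log(classifier_prob + eps)
    fused = np.exp(log_mix)
    fused /= fused.sum(axis=-1, keepdims=True)  # renormalize
    return fused.argmax(axis=-1)
```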
Project description: Fetal MRI is widely used for quantitative brain volumetry studies. However, there is currently no universally accepted protocol for fetal brain parcellation and segmentation. Published clinical studies tend to use different segmentation approaches that also reportedly require significant amounts of time-consuming manual refinement. In this work, we address this challenge by developing a new, robust deep learning-based fetal brain segmentation pipeline for 3D T2w motion-corrected brain images. First, we defined a refined brain tissue parcellation protocol with 19 regions of interest using the new fetal brain MRI atlas from the Developing Human Connectome Project. The protocol design was based on evidence from histological brain atlases, clear visibility of the structures in individual-subject 3D T2w images, and relevance to quantitative clinical studies. It was then used as the basis for an automated deep learning brain tissue parcellation pipeline, trained on 360 fetal MRI datasets with different acquisition parameters using a semi-supervised approach with manually refined labels propagated from the atlas. The pipeline demonstrated robust performance across different acquisition protocols and gestational age (GA) ranges. Analysis of tissue volumetry for 390 normal participants (21-38 weeks GA), scanned with three different acquisition protocols, did not reveal significant differences for major structures in the growth charts. Only minor errors were present in < 15% of cases, significantly reducing the need for manual refinement. In addition, a quantitative comparison between 65 fetuses with ventriculomegaly and 60 normal control cases was in agreement with the findings reported in our earlier work based on manual segmentations. These preliminary results support the feasibility of the proposed atlas-based deep learning approach for large-scale volumetric analysis. The created fetal brain volumetry centiles and a docker image with the proposed pipeline are publicly available online at https://hub.docker.com/r/fetalsvrtk/segmentation (tag brain_bounti_tissue).
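A growth-chart lookup of the kind enabled by such centiles can be sketched simply; the empirical-percentile approach below is illustrative only and far cruder than the published centile modelling:

```python
import numpy as np

def empirical_centile(ga_weeks, volume_cc, ctrl_ga, ctrl_vol, window=1.0):
    """Place a subject's regional volume on a normative chart using
    GA-matched controls within +/- `window` weeks. ctrl_ga, ctrl_vol:
    arrays over the control cohort (illustrative, not the pipeline's
    actual centile model)."""
    matched = ctrl_vol[np.abs(ctrl_ga - ga_weeks) <= window]
    if matched.size == 0:
        raise ValueError("no GA-matched controls in window")
    return 100.0 * (matched < volume_cc).mean()
```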
Project description: Purpose: To develop a dual-radiofrequency (RF), dual-echo, 3D ultrashort echo-time (UTE) pulse sequence and bone-selective image reconstruction for rapid high-resolution craniofacial MRI. Methods: The proposed pulse sequence builds on recently introduced dual-RF UTE imaging. While yielding enhanced bone specificity by exploiting the high sensitivity of short-T2 signals to variable RF pulse widths, the parent technique exacts a 2-fold scan-time penalty relative to standard dual-echo UTE. In the proposed method, the parent sequence's dual-RF scheme was incorporated into dual-echo acquisitions while radial view angles were varied every pulse-to-pulse repetition period. The resulting four echoes (two for each RF) were combined by view-sharing to construct two k-space data sets, corresponding to short and long TEs, respectively, yielding a 2-fold increase in imaging efficiency. Furthermore, by exploiting the sparsity of bone signals in echo-difference images, further acceleration was achieved by solving a bone-sparsity-constrained image reconstruction problem. In vivo studies were performed to evaluate the effectiveness of the proposed acceleration approaches in comparison to the parent method. Results: The proposed technique achieves 1.1-mm isotropic skull imaging in 3 minutes without visual loss of image quality, compared to the parent technique (scan time = 12 minutes). Bone-specific images and corresponding 3D renderings of the skull depicted the expected craniofacial anatomy over the entire head. Conclusion: The proposed method achieves high-resolution volumetric craniofacial imaging in a clinically practical time, and thus may prove useful as a potential alternative to computed tomography.
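The bone contrast at the heart of this approach comes from the echo difference: short-T2 bone signal is present at the short TE but has largely decayed by the long TE. Below is a schematic sketch of that difference image only; the paper goes further and solves a reconstruction with a sparsity penalty on the difference, which is not reproduced here:

```python
import numpy as np

def echo_difference_bone(img_short_te, img_long_te):
    """Magnitude echo-difference image: short-T2 (bone) signal
    survives the short TE but decays by the long TE, so the
    difference highlights bone. Inputs: complex or magnitude
    images of identical shape."""
    return np.abs(img_short_te) - np.abs(img_long_te)

# Schematic of the constrained reconstruction (not implemented here):
#   minimize ||F_u x - y||_2^2 + lambda * ||x_short - x_long||_1
# where F_u is the undersampled radial encoding operator.
```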
Project description: Objective: In previous work we developed a fast sequence that focuses on cerebrospinal fluid (CSF), based on the long T2 of CSF. By processing the data obtained with this CSF MRI sequence, brain parenchymal volume (BPV) and intracranial volume (ICV) can be obtained automatically. The aim of this study was to assess the precision of the BPV and ICV measurements of the CSF MRI sequence and to validate the CSF MRI sequence by comparison with 3D T1-based brain segmentation methods. Materials and methods: Ten healthy volunteers (2 females; median age 28 years) were scanned twice (3T MRI) with repositioning in between. The scan protocol consisted of a low-resolution (LR) CSF sequence (0:57 min), a high-resolution (HR) CSF sequence (3:21 min), and a 3D T1-weighted sequence (6:47 min). The HR 3D T1-weighted images were downsampled to obtain LR T1-weighted images (reconstructed imaging time: 1:59 min). Data from the CSF MRI sequences were automatically segmented using in-house software. The 3D T1-weighted images were segmented using FSL (5.0), SPM12, and FreeSurfer (5.3.0). Results: The mean absolute differences for BPV and ICV between the first and second scan for CSF LR (BPV/ICV: 12±9/7±4 cc) and CSF HR (5±5/4±2 cc) were comparable to FSL HR (9±11/19±23 cc), FSL LR (7±4/6±5 cc), FreeSurfer HR (5±3/14±8 cc), FreeSurfer LR (9±8/12±10 cc), SPM HR (5±3/4±7 cc), and SPM LR (5±4/5±3 cc). The correlation between the volumes measured with the CSF sequences and those measured with FSL, FreeSurfer, and SPM (HR and LR) was very good (all Pearson's correlation coefficients > 0.83, R² = 0.67-0.97). The results from the downsampled data and the high-resolution data were similar. Conclusion: Both CSF MRI sequences have a precision comparable to, and a very good correlation with, established 3D T1-based automated segmentation methods for measuring BPV and ICV. However, the short imaging time of the fast CSF MRI sequence is superior to that of the 3D T1 sequence on which segmentation with established methods is performed.
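The precision and correlation figures reported here are simple to reproduce from paired scan-rescan volumes; a minimal sketch (not the study's analysis code):

```python
import numpy as np

def scan_rescan_stats(vol_scan1, vol_scan2):
    """vol_scan1, vol_scan2: per-subject volumes (cc) from the two
    sessions. Returns mean and SD of the absolute scan-rescan
    differences, plus Pearson's r between sessions."""
    v1 = np.asarray(vol_scan1, dtype=float)
    v2 = np.asarray(vol_scan2, dtype=float)
    abs_diff = np.abs(v1 - v2)
    r = np.corrcoef(v1, v2)[0, 1]
    return abs_diff.mean(), abs_diff.std(), r
```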
Project description: Left atrial (LA) fibrosis plays a vital role as a mediator in the progression of atrial fibrillation. 3D late gadolinium-enhancement (LGE) MRI has proven effective in identifying LA fibrosis. Image analysis of 3D LA LGE involves manual segmentation of the LA wall, which is both lengthy and challenging. Automated segmentation poses challenges owing to the diverse intensities in data from various vendors, the limited contrast between the LA and surrounding tissues, and the intricate anatomical structure of the LA. Approaches relying on 3D networks are computationally intensive, since both 3D LGE MRIs and the networks are large. To mitigate this, most existing methods are two-stage: they first locate the LA center on a scaled-down version of the MRI and then crop the full-resolution MRI around that center for final segmentation. We propose a lightweight transformer-based 3D architecture, Usformer, designed to segment the LA volume precisely in a single stage, eliminating the error propagation associated with suboptimal two-stage training. Transposed attention facilitates capturing the global context in large 3D volumes without significant computational requirements. Usformer outperforms state-of-the-art supervised learning methods in both accuracy and speed. First, with the smallest Hausdorff distance (HD) and average symmetric surface distance (ASSD), it achieved Dice scores of 93.1% and 92.0% on the 2018 Atrial Segmentation Challenge and our local institutional dataset, respectively. Second, the number of parameters and the computational complexity are reduced by 2.8x and 3.8x, respectively. Moreover, Usformer does not require a large dataset: when only 16 labeled MRI scans are used for training, it achieves a 92.1% Dice score on the challenge dataset. The proposed Usformer delineates the boundaries of the LA wall relatively accurately, which may assist the clinical translation of LA LGE for planning catheter ablation of atrial fibrillation.
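Transposed attention computes the attention map across channels rather than spatial tokens, so its cost grows with the channel count instead of the voxel count, which is what makes single-stage processing of large 3D volumes tractable. A minimal numpy sketch of this mechanism, assuming Usformer follows the usual channel-attention formulation:

```python
import numpy as np

def transposed_attention(q, k, v, temperature=1.0):
    """q, k, v: (channels, n_voxels) arrays. The attention matrix is
    (C x C) rather than (N x N), keeping cost linear in voxel count."""
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=1, keepdims=True) + 1e-8)
    logits = (qn @ kn.T) / temperature            # (C, C)
    logits -= logits.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v                               # (C, n_voxels)
```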
Project description: Recent advancements in deep learning have facilitated significant progress in medical image analysis. However, there is a lack of studies specifically addressing surgeons' needs for practicality and precision in surgical planning. From a surgeon's standpoint, an accurate understanding of anatomical structures, such as the liver and its intrahepatic structures, is crucial for preoperative planning. This study proposes a deep learning model for automatic segmentation of liver parenchyma, vascular and biliary structures, and tumor mass in hepatobiliary-phase liver MRI to improve preoperative planning and enhance patient outcomes. A total of 120 adult patients who underwent liver resection for a hepatic mass and had preoperative gadoxetic acid-enhanced MRI were included in the study. A 3D residual U-Net model was developed for automatic segmentation of liver parenchyma, tumor mass, hepatic vein (HV), portal vein (PV), and bile duct (BD). The model's performance was assessed with the Dice similarity coefficient (DSC) by comparing its results with manually delineated structures. The model achieved high accuracy in segmenting liver parenchyma (DSC 0.92 ± 0.03), tumor mass (DSC 0.77 ± 0.21), hepatic vein (DSC 0.70 ± 0.05), portal vein (DSC 0.61 ± 0.03), and bile duct (DSC 0.58 ± 0.15). The study demonstrated the potential of the 3D residual U-Net model to provide a comprehensive understanding of liver anatomy and tumors for preoperative planning, potentially leading to improved surgical outcomes and increased patient safety.
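Per-structure DSC values like those above can be computed directly from integer label maps; a minimal sketch, where the label values in the example dictionary are assumptions rather than the study's actual encoding:

```python
import numpy as np

def dsc_per_structure(pred, truth, labels):
    """pred, truth: integer 3D label maps of identical shape.
    labels: name -> label value, e.g. {'liver': 1, 'tumor': 2,
    'HV': 3, 'PV': 4, 'BD': 5} (values are hypothetical)."""
    scores = {}
    for name, value in labels.items():
        p, t = pred == value, truth == value
        denom = p.sum() + t.sum()
        scores[name] = 2.0 * np.logical_and(p, t).sum() / denom if denom else float('nan')
    return scores
```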