Project description: Lung lesions vary considerably in size, density, and shape, and can attach to surrounding anatomic structures such as the chest wall or mediastinum, which makes their automatic segmentation challenging. This work presents a new three-dimensional algorithm for the segmentation of a wide variety of lesions, ranging from tumors found in patients with advanced lung cancer to small nodules detected in lung cancer screening programs. The algorithm combines three image processing techniques: marker-controlled watershed, geometric active contours, and Markov random fields (MRF). The user manually selects a region of interest encompassing the lesion on a single slice; the watershed method then generates an initial three-dimensional lesion surface, which is refined by the geometric active contours, while the MRF improves the segmentation of ground-glass opacity portions of part-solid lesions. The algorithm was tested on an anthropomorphic thorax phantom dataset and two publicly accessible clinical lung datasets: a same-day repeat CT dataset (prewalk and postwalk scans performed within 15 min) containing 32 lung lesions with one radiologist's delineated contours, and the first release of the Lung Image Database Consortium (LIDC) dataset containing 23 lung nodules with six radiologists' delineated contours. The phantom dataset contained 22 nodules of known volume inserted in a thorax phantom. For the prewalk scans of the same-day repeat CT dataset and for the LIDC dataset, the mean overlap ratios between the lesion volumes generated by the computer algorithm and those delineated by the radiologist(s) were 69% and 65%, respectively. Across the two repeat CT scans, the intra-class correlation coefficient (ICC) was 0.998, indicating high reliability of the algorithm, and the mean relative volume difference on the phantom dataset was -3%. The performance of this new segmentation algorithm in delineating tumor contours and measuring tumor size illustrates its potential clinical value for assisting in the noninvasive diagnosis of pulmonary nodules, therapy response assessment, and radiation treatment planning.
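As a concrete illustration of the watershed-plus-active-contour core of this pipeline, the sketch below chains a marker-controlled watershed with a morphological geodesic active contour in scikit-image. The seed handling, marker placement, and parameter values are illustrative assumptions rather than the authors' published settings, and the MRF refinement of ground-glass regions is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import (watershed, inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_lesion(ct_roi, seed_voxel):
    """ct_roi: 3D HU array cropped to the user-selected ROI; seed_voxel: (z, y, x)."""
    # Foreground marker at the user's seed, background markers on the ROI faces.
    markers = np.zeros(ct_roi.shape, dtype=np.int32)
    markers[seed_voxel] = 1                                # lesion seed
    markers[[0, -1], :, :] = 2                             # background
    markers[:, [0, -1], :] = 2
    markers[:, :, [0, -1]] = 2

    # Marker-controlled watershed on the gradient magnitude gives the
    # initial 3D lesion surface.
    gradient = ndi.gaussian_gradient_magnitude(ct_roi.astype(float), sigma=1.0)
    initial = watershed(gradient, markers) == 1

    # A geodesic active contour then refines that surface toward image edges.
    edge_map = inverse_gaussian_gradient(ct_roi.astype(float))
    refined = morphological_geodesic_active_contour(
        edge_map, num_iter=50, init_level_set=initial, smoothing=1, balloon=0)
    return refined.astype(bool)
```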
Project description: Metastatic breast cancer patients receive lifelong medication and are regularly monitored for disease progression. The aims of this work were to (1) propose networks to segment breast cancer metastatic lesions on longitudinal whole-body PET/CT and (2) extract imaging biomarkers from the segmentations and evaluate their potential for determining treatment response. Baseline and follow-up PET/CT images of 60 patients from the EPICUREseinmeta study were used to train two deep-learning models to segment breast cancer metastatic lesions: one for baseline images and one for follow-up images. From the automatic segmentations, four imaging biomarkers were computed and evaluated: SULpeak, total lesion glycolysis (TLG), PET Bone Index (PBI), and PET Liver Index (PLI). The first network obtained a mean Dice score of 0.66 on baseline acquisitions; the second obtained a mean Dice score of 0.58 on follow-up acquisitions. SULpeak, with a 32% decrease between baseline and follow-up, was the biomarker best able to assess patients' response (sensitivity 87%, specificity 87%), followed by TLG (43% decrease; sensitivity 73%, specificity 81%) and PBI (8% decrease; sensitivity 69%, specificity 69%). Our networks constitute promising tools for the automatic segmentation of lesions in patients with metastatic breast cancer, allowing treatment response assessment with several biomarkers.
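As a small worked example of the biomarker step, the sketch below derives TLG from a SUV volume and a binary lesion mask (TLG is the summed SUV over the segmentation times the voxel volume) and expresses response as the relative change between baseline and follow-up; the array inputs and the responder cutoff are assumptions, not the study's protocol.

```python
import numpy as np

def total_lesion_glycolysis(suv, lesion_mask, voxel_volume_ml):
    """TLG = mean SUV within the lesions * segmented volume, i.e., summed SUV * voxel volume."""
    return float(suv[lesion_mask].sum() * voxel_volume_ml)

def relative_change_pct(baseline, followup):
    """Percent change of a biomarker between baseline and follow-up (negative = decrease)."""
    return 100.0 * (followup - baseline) / baseline

# Hypothetical usage: flag a responder when TLG drops beyond an assumed cutoff.
# responder = relative_change_pct(tlg_baseline, tlg_followup) < -40.0
```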
Project description: Background: Bolus tracking can optimize the time delay between contrast injection and diagnostic scan initiation in contrast-enhanced computed tomography (CT), yet the procedure is time-consuming and subject to inter- and intra-operator variance, which affects the enhancement levels in diagnostic scans. The objective of the current study is to use artificial intelligence algorithms to fully automate the bolus tracking procedure in contrast-enhanced abdominal CT exams for improved standardization and diagnostic accuracy while providing a simplified imaging workflow. Methods: This retrospective study used abdominal CT exams collected under a dedicated Institutional Review Board (IRB) protocol. Input data consisted of CT topograms and images with high heterogeneity in terms of anatomy, sex, cancer pathologies, and imaging artifacts, acquired with four different CT scanner models. Our method consisted of two sequential steps: (I) automatic locator scan positioning on topograms, and (II) automatic region-of-interest (ROI) positioning within the aorta on locator scans. The task of locator scan positioning is formulated as a regression problem, where the limited amount of annotated data is circumvented using transfer learning; the task of ROI positioning is formulated as a segmentation problem. Results: Our locator scan positioning network offered improved positional consistency compared to the high variance observed in manual slice positioning, verifying inter-operator variance as a significant source of error. When trained using expert-user ground-truth labels, the locator scan positioning network achieved a sub-centimeter error (9.76±6.78 mm) on a test dataset, and the ROI segmentation network achieved a sub-millimeter absolute error (0.99±0.66 mm) on a test dataset. Conclusions: Locator scan positioning networks offer improved positional consistency compared to manual slice positioning and verify inter-operator variance as an important source of error. By significantly reducing operator-related decisions, this method opens opportunities to standardize and simplify the workflow of bolus tracking procedures for contrast-enhanced CT.
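A hedged sketch of the first step, locator scan positioning posed as regression with transfer learning, is given below using PyTorch and torchvision; the backbone choice, input format, and loss are illustrative assumptions, not the authors' exact configuration.

```python
import torch.nn as nn
from torchvision import models

class LocatorRegressor(nn.Module):
    """Predicts a single locator-slice z-position (mm) from a topogram."""
    def __init__(self):
        super().__init__()
        # Transfer learning: start from ImageNet weights to offset the
        # limited amount of annotated topograms.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # regression head
        self.net = backbone

    def forward(self, topogram):              # topogram: (B, 3, H, W)
        return self.net(topogram).squeeze(1)  # (B,) predicted positions

model = LocatorRegressor()
loss_fn = nn.MSELoss()  # trained against expert-annotated slice positions
```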
Project description: Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow, but manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary datasets and one public dataset. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the DICE similarity coefficient. Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations was compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (DICE = 0.78 ± 0.12) was achieved by averaging the 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, with both hand-crafted and deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing the models' accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
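As a minimal sketch of the ensembling and evaluation described above, assuming per-voxel probability maps from the 2D and 3D networks are already available, the snippet below averages the two predictions, binarises them, and computes the DICE similarity coefficient; the 0.5 threshold is an assumption, and the customised post-processing is omitted.

```python
import numpy as np

def ensemble_mask(prob_2d, prob_3d, threshold=0.5):
    """Average the 2D and 3D model outputs, then binarise."""
    return (prob_2d + prob_3d) / 2.0 > threshold

def dice(pred, ref):
    """DICE similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())
```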
Project description: With the spread of coronavirus disease 2019 (COVID-19) causing a world pandemic, and with new variants of the virus continuing to appear and making the situation more challenging and threatening, visual assessment and quantification by expert radiologists have become costly and error-prone. Hence, there is a need for a model that predicts COVID-19 cases as early as possible to control the disease spread. To assist medical professionals and reduce the workload and time the COVID-19 diagnosis cycle takes, this paper proposes a novel neural network architecture, termed O-Net, to automatically segment chest computed tomography (CT) scans infected by COVID-19 with optimised computing power and memory occupation. The O-Net consists of two convolutional autoencoders with an upsampling channel and a downsampling channel. Experimental tests show the proposal's effectiveness and potential, with a Dice score of 0.86 and pixel accuracy, precision, and specificity of 0.99, 0.99, and 0.98, respectively. Performance on an external dataset illustrates the generalisation and scalability of the O-Net model to CT scans obtained from different scanners and of different sizes. The second objective of this work is to introduce our virtual reality platform, COVIR, which visualises and manipulates 3D reconstructions of the lungs and of the segmented infected lesions caused by COVID-19. The COVIR platform acts as a reading and visualisation support for medical practitioners diagnosing COVID-19 lung infection, and could be used for medical education, professional practice, and training. It was tested by thirteen participants (medical staff, researchers, and collaborators), who concluded that the 3D VR visualisation of segmented CT scans provides a diagnostic aid for better interpretation.
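One possible reading of the O-Net description (two convolutional autoencoders, one working through a downsampling channel and one through an upsampling channel, fused into a single segmentation map) is sketched below in PyTorch. This is a speculative illustration: the layer sizes, fusion strategy, and channel counts are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class ONetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling autoencoder: encode at half resolution, decode back.
        self.down = nn.Sequential(conv_block(1, 16), nn.MaxPool2d(2),
                                  conv_block(16, 16),
                                  nn.Upsample(scale_factor=2), conv_block(16, 16))
        # Upsampling autoencoder: encode at double resolution, decode back.
        self.up = nn.Sequential(nn.Upsample(scale_factor=2), conv_block(1, 16),
                                conv_block(16, 16),
                                nn.MaxPool2d(2), conv_block(16, 16))
        self.head = nn.Conv2d(32, 1, 1)  # fuse the two channels

    def forward(self, x):                # x: (B, 1, H, W) CT slice
        fused = torch.cat([self.down(x), self.up(x)], dim=1)
        return torch.sigmoid(self.head(fused))  # infection probability map
```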
Project description: Even with over 80% of the population vaccinated against COVID-19, the disease continues to claim victims. It is therefore crucial to have a reliable computer-aided diagnostic system that can assist in identifying COVID-19 and determining the necessary level of care, especially in the intensive care unit, to monitor disease progression or regression in the fight against this epidemic. To accomplish this, we merged public datasets from the literature to train lung and lesion segmentation models with five different distributions. We then trained eight CNN models for COVID-19 and community-acquired pneumonia classification. If an examination was classified as COVID-19, we quantified the lesions and assessed the severity of the full CT scan. To validate the system, we used ResNeXt101 U-Net++ and MobileNet U-Net for lung and lesion segmentation, respectively, achieving accuracy of 98.05%, F1-score of 98.70%, precision of 98.70%, recall of 98.70%, and specificity of 96.05% in just 19.70 s per full CT scan, with external validation on the SPGC dataset. Finally, when classifying these detected lesions, we used DenseNet201 and achieved accuracy of 90.47%, F1-score of 93.85%, precision of 88.42%, recall of 100.0%, and specificity of 65.07%. The results demonstrate that our pipeline can correctly detect and segment lesions due to COVID-19 and community-acquired pneumonia in CT scans and can differentiate these two classes from normal exams, indicating that our system is efficient and effective in identifying the disease and assessing the severity of the condition.
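The severity-assessment step can be illustrated with a short sketch that, given binary lung and lesion masks from the two segmentation models, reports the percentage of lung volume occupied by lesions; the grading bands are illustrative assumptions, not the study's scheme.

```python
import numpy as np

def severity(lung_mask, lesion_mask):
    """Percentage of segmented lung volume occupied by lesions across the CT scan."""
    involvement = 100.0 * np.logical_and(lesion_mask, lung_mask).sum() / lung_mask.sum()
    if involvement < 5.0:
        grade = "minimal"
    elif involvement < 25.0:
        grade = "moderate"
    else:
        grade = "severe"
    return involvement, grade
```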
Project description: Purpose: AI-assisted techniques for lesion registration and segmentation have the potential to make CT-based tumor follow-up assessment faster and less reader-dependent. However, empirical evidence on the advantages of AI-assisted volumetric segmentation for lymph node and soft tissue metastases in follow-up CT scans is lacking. The aim of this study was to assess the efficiency, quality, and inter-reader variability of an AI-assisted workflow for volumetric segmentation of lymph node and soft tissue metastases in follow-up CT scans. Three hypotheses were tested: (H1) assessment time for follow-up lesion segmentation is reduced by an AI-assisted workflow; (H2) the quality of AI-assisted segmentation is non-inferior to that of fully manual segmentation; (H3) the inter-reader variability of the resulting segmentations is reduced with AI assistance. Materials and methods: The study retrospectively analyzed 126 lymph node and 135 soft tissue metastases from 55 patients with stage IV melanoma. Three radiologists from two institutions performed both AI-assisted and manual segmentation, and the results were statistically analyzed and compared to a manual segmentation reference standard. Results: AI-assisted segmentation reduced user interaction time significantly, by 33% (222 s vs. 336 s), achieved similar Dice scores (0.80-0.84 vs. 0.81-0.82), and decreased inter-reader variability (median Dice 0.85-1.0 vs. 0.80-0.82; ICC 0.84 vs. 0.80) compared to manual segmentation. Conclusion: The findings of this study support the use of AI-assisted registration and volumetric segmentation for lymph node and soft tissue metastases in follow-up CT scans: the AI-assisted workflow achieved significant time savings, similar segmentation quality, and reduced inter-reader variability compared to manual segmentation.
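To make the inter-reader variability measure concrete, the sketch below computes the pairwise Dice scores between the three readers' masks for one lesion and summarises them by their median, matching the kind of statistic reported above; the aligned binary volumes are assumed inputs.

```python
from itertools import combinations
import numpy as np

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def inter_reader_dice(reader_masks):
    """Median pairwise Dice across readers (e.g., three masks per lesion)."""
    scores = [dice(a, b) for a, b in combinations(reader_masks, 2)]
    return float(np.median(scores))
```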
Project description: Bone metastasis, emerging oncological therapies, and osteoporosis represent some of the distinct clinical contexts that can result in morphological alterations of bone structure. The visual assessment of these changes through anatomical images is considered suboptimal, emphasizing the importance of precise skeletal segmentation as a valuable aid to their evaluation. In the present study, a neural network model for automatic skeleton segmentation from two-dimensional computed tomography (CT) slices is proposed. A total of 77 CT images and their semi-manual skeleton segmentations, from two acquisition protocols (whole-body and femur-to-head), are used to form a training group and a testing group. Preprocessing of the images includes four main steps: stretcher removal, thresholding, image clipping, and normalization (with two different techniques: inter-patient and intra-patient). Subsequently, five different training sets are created and presented in randomized order during the training phase. A neural network model based on the U-Net architecture is implemented with different values for the number of channels in each feature map and the number of epochs. The model with the best performance obtains a Jaccard index (IoU) of 0.959 and a Dice index of 0.979. The resulting model demonstrates the potential of deep learning applied to medical images and proves its utility in bone segmentation.
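A hedged sketch of the four preprocessing steps listed above is given below; the HU cutoffs and the choice of inter-patient (fixed cohort-wide bounds) versus intra-patient (per-scan bounds) normalization are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def preprocess(ct_slice, hu_min=-200.0, hu_max=1500.0):
    # 1. Stretcher removal: keep only the largest connected body region.
    labels, _ = ndi.label(ct_slice > -500)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                   # ignore background
    ct_slice = np.where(labels == sizes.argmax(), ct_slice, hu_min)
    # 2-3. Thresholding and clipping to a bone-relevant HU window.
    ct_slice = np.clip(ct_slice, hu_min, hu_max)
    # 4. Normalization to [0, 1]; fixed bounds here correspond to the
    #    inter-patient variant, per-scan min/max to the intra-patient one.
    return (ct_slice - hu_min) / (hu_max - hu_min)
```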
Project description: Middle- and inner-ear surgery is a vital treatment option for hearing loss, infections, and tumors of the lateral skull base. Segmentation of otologic structures from computed tomography (CT) has many potential applications for improving surgical planning but can be an arduous and time-consuming task. We propose an end-to-end solution for the automated segmentation of temporal bone CT using convolutional neural networks (CNNs). Using 150 manually segmented CT scans, three CNN models (AH-Net, U-Net, ResNet) were compared on Dice coefficient, Hausdorff distance, and speed of segmentation of the inner ear, ossicles, facial nerve, and sigmoid sinus. Using AH-Net, the Dice coefficient was 0.91 for the inner ear, 0.85 for the ossicles, 0.75 for the facial nerve, and 0.86 for the sigmoid sinus; the average Hausdorff distance was 0.25, 0.21, 0.24, and 0.45 mm, respectively. Blinded experts assessed the accuracy of the automated and manual segmentations, and there was no statistical difference between the ratings for the two methods (p = 0.93). Objective and subjective assessment confirm good correlation between automated segmentation of otologic structures and manual segmentation performed by a specialist. This end-to-end automated segmentation pipeline can help advance the systematic application of augmented reality, simulation, and automation in otologic procedures.
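For reference, the two reported metrics can be computed as in the sketch below, which assumes aligned binary masks and isotropic voxel spacing; a full evaluation would account for anisotropic spacing per axis.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, ref):
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def hausdorff_mm(pred, ref, spacing_mm=1.0):
    """Symmetric Hausdorff distance between the two masks' voxel sets, in mm."""
    p = np.argwhere(pred) * spacing_mm
    r = np.argwhere(ref) * spacing_mm
    return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
```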
Project description: Here, we have developed an automated image processing algorithm for segmenting lungs and individual lung tumors in in vivo micro-computed tomography (micro-CT) scans of mouse models of non-small cell lung cancer and lung fibrosis. Over 3000 scans acquired across multiple studies were used to train and validate a 3D U-Net lung segmentation model and a Support Vector Machine (SVM) classifier for segmenting individual lung tumors. The U-Net lung segmentation algorithm can be used to estimate changes in soft tissue volume within the lungs (primarily tumors and blood vessels), whereas the trained SVM is able to discriminate between tumors and blood vessels and identify individual tumors. The trained segmentation algorithms (1) significantly reduce the time required for lung and tumor segmentation, (2) reduce the bias and error associated with manual image segmentation, and (3) facilitate identification of individual lung tumors and objective assessment of changes in lung and individual tumor volumes under different experimental conditions.
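The tumor-versus-vessel step can be illustrated as below: label candidate objects inside the predicted lung mask, describe each with a few shape features, and classify them with an SVM. The feature set and classifier settings are illustrative assumptions, not the authors' exact descriptors.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.measure import regionprops
from sklearn.svm import SVC

def candidate_features(soft_tissue_mask):
    """One row of simple shape features (volume, elongation, solidity) per 3D object."""
    labels, _ = ndi.label(soft_tissue_mask)
    props = regionprops(labels)
    feats = np.array([[p.area, p.axis_major_length, p.solidity] for p in props])
    return labels, props, feats

# Hypothetical training/use: 1 = tumor, 0 = blood vessel.
# clf = SVC(kernel="rbf").fit(train_feats, train_labels)
# is_tumor = clf.predict(feats)
```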