Project description:
Introduction: Computed tomography (CT) is an essential diagnostic tool in the management of COVID-19. Given the large number of examinations in high case-load scenarios, an automated tool could save critical time in the diagnosis and risk stratification of the disease.
Methods: A novel machine learning (ML) classifier derived from deep learning was developed using a simplified programming approach and an open-source dataset of 6868 chest CT images from 418 patients, which was split into training and validation subsets. Diagnostic performance was then evaluated on an independent testing dataset and compared to experienced radiologists. Performance metrics were calculated using receiver operating characteristic (ROC) analysis. Operating points with a high positive (>10) and a low negative (<0.01) likelihood ratio for stratifying the risk of COVID-19 being present were identified and validated.
Results: The model achieved an area under the ROC curve (AUC) of 0.956 on an independent testing dataset of 90 patients. Both rule-in and rule-out thresholds were identified and tested. At the rule-in operating point, sensitivity and specificity were 84.4% and 93.3% and did not differ from either radiologist (p > 0.05). At the rule-out threshold, sensitivity (100%) and specificity (60%) differed significantly from the radiologists (p < 0.05). Likelihood ratios and a Fagan nomogram provide prevalence-independent estimates of test performance.
Conclusion: Accurate diagnosis of COVID-19 with a basic deep learning approach is feasible using open-source CT image data. In addition, the ML classifier provides validated rule-in and rule-out criteria that can be used to stratify the risk of COVID-19 being present.
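As an illustration of the rule-in/rule-out logic described above, the sketch below shows one way such operating points could be derived from a ROC curve and turned into post-test probabilities (the computation a Fagan nomogram performs graphically). It is a minimal example assuming scikit-learn, binary ground-truth labels `y_true`, and continuous classifier scores `y_score`; it is not the study's actual code.

```python
# Illustrative sketch: likelihood-ratio-based rule-in / rule-out operating
# points on a ROC curve. Assumes binary labels y_true and model scores y_score.
import numpy as np
from sklearn.metrics import roc_curve

def likelihood_ratio_operating_points(y_true, y_score,
                                      lr_pos_min=10.0, lr_neg_max=0.01):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    sens, spec = tpr, 1.0 - fpr
    with np.errstate(divide="ignore", invalid="ignore"):
        lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
        lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
    rule_in = [(t, s, p) for t, s, p, lp in zip(thresholds, sens, spec, lr_pos)
               if np.isfinite(lp) and lp > lr_pos_min]
    rule_out = [(t, s, p) for t, s, p, ln in zip(thresholds, sens, spec, lr_neg)
                if np.isfinite(ln) and ln < lr_neg_max]
    return rule_in, rule_out

def post_test_probability(pre_test_prob: float, lr: float) -> float:
    # Bayes' rule in odds form, i.e. what a Fagan nomogram reads off graphically.
    odds = pre_test_prob / (1.0 - pre_test_prob)
    return odds * lr / (1.0 + odds * lr)
```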
Project description: Close contacts of persons with infectious tuberculosis (TB) are at high risk of developing active disease. We preliminarily introduced submillisievert chest computed tomography (CT) scanning (effective dose, 0.19-0.25 millisievert) in a contact investigation of multidrug-resistant TB (MDR-TB). Baseline CT showed minimal nodules or branching opacities in two of six contacts. A two-month follow-up examination revealed radiologic progression in contact 1, who subsequently received a microbiologic diagnosis of MDR-TB at an asymptomatic early stage, whereas in contact 2 the nodules transiently increased after 3 months and then decreased after one year. Contact 1 was cured after 1.5 years of anti-MDR-TB treatment. In conclusion, early identification of secondary MDR-TB is feasible with submillisievert chest CT in contact investigations of MDR-TB, minimizing MDR-TB transmission and offering a favorable treatment outcome. This was a clinical trial registered at www.ClinicalTrials.gov (identifier: NCT02454738).
Project description:
Purpose: To compare the diagnostic performance of standalone deep learning (DL) algorithms and human experts in lung cancer detection on chest computed tomography (CT) scans.
Materials and methods: We searched PubMed, Embase, and Web of Science from their inception until November 2023. We focused on adult lung cancer patients and compared the efficacy of DL algorithms and expert radiologists in disease diagnosis on CT scans. Quality assessment was performed using QUADAS-2, QUADAS-C, and CLAIM. Bivariate random-effects and subgroup analyses were performed by task (malignancy classification vs invasiveness classification), imaging modality (CT vs low-dose CT [LDCT] vs high-resolution CT), study region, software used, and publication year.
Results: We included 20 studies on various aspects of lung cancer diagnosis on CT scans. Quantitatively, DL algorithms exhibited higher sensitivity (82%) and specificity (75%) than human experts (sensitivity 81%, specificity 69%); the difference in specificity was statistically significant, whereas the difference in sensitivity was not. The performance of DL algorithms varied across imaging modalities and tasks, demonstrating the need for tailored optimization. Notably, DL algorithms matched experts in sensitivity on standard CT while surpassing them in specificity, but showed higher sensitivity with lower specificity on LDCT scans.
Conclusion: DL algorithms demonstrated improved accuracy over human readers in malignancy and invasiveness classification on CT scans. However, their performance varies by imaging modality, underlining the importance of continued research to fully assess their diagnostic effectiveness in lung cancer.
Clinical relevance statement: DL algorithms have the potential to refine lung cancer diagnosis on CT, matching human sensitivity and surpassing it in specificity. These findings call for further DL optimization across imaging modalities to advance clinical diagnostics and patient outcomes.
Key points: Lung cancer diagnosis by CT is challenging and can be improved with AI integration. DL shows higher accuracy in lung cancer detection on CT than human experts. Enhanced DL accuracy could lead to improved lung cancer diagnosis and outcomes.
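For orientation, the sketch below shows how per-study sensitivity and specificity are derived from 2x2 tables and naively pooled. This is a deliberate simplification with hypothetical counts; the meta-analysis above used a bivariate random-effects model, which this sketch does not reproduce.

```python
# Naive pooling of per-study sensitivity/specificity from 2x2 tables
# (illustrative only; not the bivariate random-effects model used in the study).
import numpy as np

# Hypothetical per-study counts: (true_pos, false_neg, true_neg, false_pos)
studies = [
    (80, 20, 150, 50),
    (45, 10, 90, 40),
    (120, 25, 200, 80),
]

tp, fn, tn, fp = (np.array(x) for x in zip(*studies))
sens = tp / (tp + fn)                 # per-study sensitivity
spec = tn / (tn + fp)                 # per-study specificity
pooled_sens = tp.sum() / (tp.sum() + fn.sum())
pooled_spec = tn.sum() / (tn.sum() + fp.sum())
print(f"pooled sensitivity={pooled_sens:.2f}, specificity={pooled_spec:.2f}")
```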
Project description: Triage is essential for the early diagnosis and reporting of neurologic emergencies. Herein, we report the development of an anomaly detection algorithm (ADA) with a deep generative model trained on brain computed tomography (CT) images of healthy individuals that reprioritizes radiology worklists and provides lesion attention maps for brain CT images with critical findings. In the internal and external validation datasets, the ADA achieved area under the curve values (95% confidence interval) of 0.85 (0.81-0.89) and 0.87 (0.85-0.89), respectively, for detecting emergency cases. In a clinical simulation test of an emergency cohort, the median wait time was significantly shorter post-ADA triage than pre-ADA triage by 294 s (422.5 s [interquartile range, IQR 299] to 70.5 s [IQR 168]), and the median radiology report turnaround time was significantly faster post-ADA triage than pre-ADA triage by 297.5 s (445.0 s [IQR 298] to 88.5 s [IQR 179]) (all p < 0.001).
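A minimal sketch of the worklist-reprioritization idea described above, assuming each pending study already carries an anomaly score from a trained generative model (e.g., a reconstruction error); the data structure and scores are hypothetical and this is not the authors' system.

```python
# Reprioritize a radiology worklist so suspected-emergency cases are read first.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    arrival_order: int
    anomaly_score: float  # higher = more likely to contain a critical finding

def triage(worklist: list[Study]) -> list[Study]:
    # Highest anomaly score first; break ties by order of arrival.
    return sorted(worklist, key=lambda s: (-s.anomaly_score, s.arrival_order))

worklist = [Study("A1", 1, 0.12), Study("A2", 2, 0.91), Study("A3", 3, 0.47)]
print([s.accession for s in triage(worklist)])  # ['A2', 'A3', 'A1']
```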
Project description:
Background: Osteoporosis, a disease stemming from irregularities in bone metabolism, affects approximately 200 million people worldwide. Timely detection of osteoporosis is pivotal in addressing this public health challenge. Deep learning (DL), an emerging methodology in medical imaging, holds considerable potential for the assessment of bone mineral density (BMD). This study aimed to propose an automated DL framework for BMD assessment that integrates localization, segmentation, and ternary classification using several dominant convolutional neural networks (CNNs).
Methods: In this retrospective study, a cohort of 2,274 patients who underwent chest computed tomography (CT) between January 2022 and June 2023 was enrolled to develop the integrated DL system. The study unfolded in 2 phases. Initially, 1,025 patients were selected based on specific criteria to develop an automated segmentation model utilizing 2 VB-Net networks. Subsequently, a distinct cohort of 902 patients was used for the development and testing of classification models for BMD assessment. Three DL network architectures, DenseNet, ResNet-18, and ResNet-50, were applied to build the three-class BMD assessment model. The performance of both phases was evaluated using an independent test set of 347 individuals. Segmentation performance was evaluated using the Dice similarity coefficient; classification performance was appraised using the receiver operating characteristic (ROC) curve, with the area under the curve (AUC), accuracy, and precision also calculated.
Results: In the first stage, the automatic segmentation model demonstrated excellent performance, with a mean Dice coefficient surpassing 0.93 in the independent test set. In the second stage, both DenseNet and ResNet-18 demonstrated excellent diagnostic performance in detecting bone status. For osteoporosis and osteopenia, respectively, DenseNet achieved AUCs of 0.94 [95% confidence interval (CI): 0.91-0.97] and 0.91 (95% CI: 0.87-0.94), and ResNet-18 attained 0.96 (95% CI: 0.92-0.98) and 0.91 (95% CI: 0.87-0.94). However, ResNet-50 exhibited suboptimal diagnostic performance for osteopenia, with an AUC of only 0.76 (95% CI: 0.69-0.80). Changes in tube voltage had a more pronounced impact on DenseNet: on 100 kVp images in the independent test set, the accuracy and precision of DenseNet decreased on average by approximately 14.29% and 18.82%, respectively, whereas those of ResNet-18 decreased by about 8.33% and 7.14%.
Conclusions: The state-of-the-art DL framework offers an effective and efficient approach to opportunistic osteoporosis screening using chest CT, without incurring additional costs or radiation exposure.
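The ternary (normal / osteopenia / osteoporosis) classifiers above are evaluated with per-class AUCs. The sketch below shows a common one-vs-rest way to compute such AUCs from predicted class probabilities with scikit-learn; labels and probabilities are synthetic placeholders, and this is not the study's evaluation code.

```python
# One-vs-rest AUC evaluation of a 3-class classifier (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["normal", "osteopenia", "osteoporosis"]
y_true = np.array([0, 2, 1, 0, 2, 1, 0, 1])                       # toy labels
y_prob = np.random.default_rng(0).dirichlet(np.ones(3), size=len(y_true))

for k, name in enumerate(classes):
    auc = roc_auc_score((y_true == k).astype(int), y_prob[:, k])
    print(f"AUC ({name} vs rest): {auc:.2f}")
```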
Project description:
Purpose: The novel coronavirus disease COVID-19, which spread globally in late December 2019, is a global health crisis. Chest computed tomography (CT) has played a pivotal role in providing useful information for clinicians to detect COVID-19. However, segmenting COVID-19-infected regions from chest CT scans is challenging, so an efficient tool for automated segmentation of COVID-19 lesions on chest CT is desirable. We therefore aimed to propose 2D deep learning algorithms that automatically segment COVID-19-infected regions from chest CT slices and to evaluate their performance.
Material and methods: Three established deep learning networks (U-Net, U-Net++, and Res-Unet) were trained from scratch for automated segmentation of COVID-19 lesions on chest CT images. The dataset consists of 20 labelled COVID-19 chest CT volumes, comprising 2112 images in total, and was split into 80% for training and validation and 20% for testing the proposed models. Segmentation performance was assessed using the Dice similarity coefficient, average symmetric surface distance (ASSD), mean absolute error (MAE), sensitivity, specificity, and precision.
Results: All proposed models achieved good performance for COVID-19 lesion segmentation. Compared with Res-Unet, the U-Net and U-Net++ models provided better results, with a mean Dice value of 85.0%. U-Net attained the highest segmentation performance overall, with 86.0% sensitivity and an ASSD of 2.22 mm. U-Net improved on Res-Unet by 1%, 2%, and 0.66 mm in Dice, sensitivity, and ASSD, respectively, and U-Net++ improved on Res-Unet by 1%, 2%, 0.1 mm, and 0.23 mm in Dice, sensitivity, ASSD, and MAE, respectively.
Conclusions: Our data indicate that the proposed models achieve an average Dice value greater than 84.0%. Two-dimensional deep learning models were able to accurately segment COVID-19 lesions from chest CT images, assisting radiologists in faster screening and quantification of lesion regions for further treatment. Nevertheless, further studies will be required to evaluate the clinical performance and robustness of the proposed models for COVID-19 semantic segmentation.
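For reference, the overlap metrics named above can be computed directly from binary segmentation masks. The sketch below shows the standard Dice coefficient and sensitivity formulas on toy masks; it is illustrative only, not the study's evaluation code.

```python
# Dice similarity coefficient and sensitivity from binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def sensitivity(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    return (tp + eps) / (gt.sum() + eps)

# Toy 2D masks standing in for one CT slice's lesion prediction and label.
gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1
print(f"Dice={dice_coefficient(pred, gt):.2f}, sensitivity={sensitivity(pred, gt):.2f}")
```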
Project description: The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm that aggregates the chest CT volume into a 2D representation that can be easily integrated with clinical metadata to distinguish chest CT volumes of COVID-19 pneumonia from those of healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of the different classes of pulmonary lesions present in COVID-19-infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that combining features derived from the chest CT volumes with clinical data improves the AUC to 0.80 from the 0.52 obtained using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.
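To make the data flow concrete, the sketch below collapses a 3D chest CT volume into a 2D image and pairs it with a clinical-covariate vector. The paper's patient-level aggregation algorithm is its own contribution; the axial mean/max projections and the covariate names used here are generic stand-ins, not the authors' method.

```python
# Generic illustration of pairing a 2D CT aggregate with clinical metadata.
import numpy as np

def volume_to_2d(volume: np.ndarray) -> np.ndarray:
    """volume: (slices, height, width) -> (height, width, 2) projection image."""
    mean_proj = volume.mean(axis=0)
    max_proj = volume.max(axis=0)
    return np.stack([mean_proj, max_proj], axis=-1)

def build_inputs(volume: np.ndarray, metadata: dict) -> tuple[np.ndarray, np.ndarray]:
    image_2d = volume_to_2d(volume)
    # Hypothetical covariates; a real model would normalize/encode these properly.
    clinical = np.array([metadata["age"], metadata["sex"], metadata["o2_saturation"]],
                        dtype=np.float32)
    return image_2d, clinical

ct = np.random.rand(120, 256, 256).astype(np.float32)     # toy CT volume
img, meta = build_inputs(ct, {"age": 63, "sex": 1, "o2_saturation": 92})
print(img.shape, meta.shape)  # (256, 256, 2) (3,)
```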
Project description: The use of computed tomography (CT) to triage suspected scaphoid fractures is appealing because it is more readily available and less expensive than magnetic resonance imaging (MRI). Twenty-eight patients with suspected scaphoid fractures (defined as tenderness in the area of the scaphoid and initial scaphoid-specific radiographs interpreted as normal) were enrolled in a prospective protocol evaluating triage with CT. Twenty patients reached an endpoint consisting of either (1) identification of a fracture accounting for the patient's symptoms on CT or (2) normal radiographs 6 weeks or more from the time of injury. Only 2 of 28 patients (7%) were diagnosed with a nondisplaced fracture of the scaphoid waist. CT revealed an avulsion fracture of the distal pole of the scaphoid in two patients, nondisplaced fractures of the distal radius in six patients, and nondisplaced fractures of other carpal bones in four patients. Radiographs of the scaphoid taken 6 weeks or more from the time of injury were interpreted as normal in the six patients with normal CT scans who completed the study. True scaphoid waist fractures are uncommon among patients with suspected scaphoid fractures. CT scans are useful for triage of suspected scaphoid waist fractures because alternative, less-troublesome fractures were identified in 43% of patients and no fractures were missed or undertreated. Immediate triage of suspected scaphoid fractures using CT in the emergency room has the potential to reduce unnecessary immobilization and diminish overall costs associated with treatment.
Project description:
Objectives: The subtype classification of lung adenocarcinoma is important for treatment decisions. This study aimed to investigate deep learning and radiomics networks for predicting histologic subtype classification and survival of lung adenocarcinoma diagnosed on computed tomography (CT) images.
Methods: A dataset of 1222 patients with lung adenocarcinoma was retrospectively enrolled from three medical institutions. The anonymised preoperative CT images and pathological labels (atypical adenomatous hyperplasia, adenocarcinoma in situ, minimally invasive adenocarcinoma, and invasive adenocarcinoma [IAC] with five predominant components) were obtained. These pathological labels were grouped into 2-category (IAC vs non-IAC), 3-category, and 8-category classifications. We modeled the histological subtype classification task with a modified ResNet-34 deep learning network, radiomics strategies, and a combined deep radiomics algorithm, and then established prognostic models in the lung adenocarcinoma patients with survival outcomes. Accuracy (ACC), area under the ROC curve (AUC), and the C-index were used to evaluate the algorithms.
Results: This study included a training set (n = 802) and two validation cohorts (internal, n = 196; external, n = 224). The ACC of the deep radiomics algorithm in internal validation reached 0.8776 and 0.8061 for the 2-category and 3-category classifications, respectively. Even in the 8-category classification, the AUC ranged from 0.739 to 0.940 in the internal set. We further constructed a prognosis model whose C-index was 0.892 (95% CI: 0.846-0.937) in the internal validation set.
Conclusions: The automated deep radiomics based triage system achieved strong performance in subtype classification and survival prediction in patients with CT-detected lung adenocarcinoma nodules, providing clinical guidance for treatment strategies.
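The C-index reported above is Harrell's concordance index, the standard discrimination metric for survival models. The sketch below computes it from risk scores and right-censored outcomes on toy data; it is illustrative only, not the authors' implementation.

```python
# Harrell's concordance index (C-index) for right-censored survival data.
import numpy as np

def harrell_c_index(time, event, risk):
    """time: follow-up times; event: 1 if death observed, 0 if censored;
    risk: model risk scores (higher = worse predicted prognosis)."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # only subjects with an observed event can anchor a pair
        for j in range(len(time)):
            if time[j] > time[i]:          # j outlived i -> comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

print(harrell_c_index(time=[5, 8, 12, 3], event=[1, 0, 1, 1],
                      risk=[0.9, 0.4, 0.2, 0.8]))  # 0.8 on this toy example
```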