Project description: Purpose: Previous literature has reported contradictory results regarding the relationship between tumor volume changes during radiotherapy for non-small cell lung cancer (NSCLC) patients and the locoregional recurrence-free rate or overall survival. The aim of this study is to validate the results from a previous study by using a different volume extraction procedure and evaluating an external validation dataset. Methods: For two datasets of 94 and 141 NSCLC patients, gross tumor volumes were determined manually to investigate the relationship between tumor volume regression and locoregional control using Kaplan-Meier curves. For both datasets, subgroups of patients based on histology and chemotherapy regimen were also investigated. For the first dataset (n = 94), automatically determined tumor volumes were available from a previously published study, allowing a further comparison of their correlation with updated clinical data. Results: A total of 70 out of 94 patients were classified into the same group as in the previous publication when the dataset was split at the median tumor regression calculated by the two volume extraction methods. Non-adenocarcinoma patients receiving concurrent chemotherapy with large tumor regression showed reduced locoregional recurrence-free rates in both datasets (p < 0.05 in dataset 2). For dataset 2, the opposite behavior was observed for patients not receiving chemotherapy; this was significant for overall survival (p = 0.01) but non-significant for the locoregional recurrence-free rate (p = 0.13). Conclusion: The tumor regression pattern observed during radiotherapy is not only influenced by irradiation but depends largely on the delivered chemotherapy schedule; it follows that the relationship between patient outcome and the degree of tumor regression is also largely determined by the chemotherapy schedule.
This analysis shows that the relationship between tumor regression and outcome is complex, and identifies factors that could explain previously reported contradictory findings. This, in turn, should help guide future studies toward a full understanding of the relationship between tumor regression and outcome.
Project description: To improve the image quality and CT number accuracy of fast-scan low-dose cone-beam computed tomography (CBCT) through a deep-learning convolutional neural network (CNN) methodology for head-and-neck (HN) radiotherapy. Fifty-five paired CBCT and CT images from HN patients were retrospectively analysed. Among them, 15 patients underwent adaptive replanning during treatment and thus had same-day CT/CBCT pairs. The remaining 40 patients (post-operative) had paired planning CT and first-fraction CBCT images with minimal anatomic changes. A 2D U-Net architecture with 27 layers in 5 depths was built for the CNN. CNN training was performed using data from the 40 post-operative HN patients with 2080 paired CT/CBCT slices. The validation and test datasets comprised 5 same-day datasets with 260 slice pairs and 10 same-day datasets with 520 slice pairs, respectively. To examine the impact of training dataset selection and of network performance as a function of training data size, additional networks were trained using 30, 40 and 50 datasets. Image quality of the enhanced CBCT images was quantitatively compared against the CT images using the mean absolute error (MAE) of Hounsfield units (HU), signal-to-noise ratio (SNR) and structural similarity (SSIM). Enhanced CBCT images showed reduced artifact distortion and improved soft tissue contrast. Networks trained with 40 datasets had imaging performance comparable to those trained with 50 datasets and outperformed those trained with 30 datasets. Comparison of CBCT and enhanced CBCT images demonstrated improvement in average MAE from 172.73 to 49.28 HU, SNR from 8.27 to 14.25 dB, and SSIM from 0.42 to 0.85. The image processing time is 2 s per patient using an NVIDIA GeForce GTX 1080 Ti GPU. The proposed deep-learning methodology was fast and effective for image quality enhancement of fast-scan low-dose CBCT. This method has potential to support fast online adaptive replanning for HN cancer patients.
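As a concrete illustration of two of the metrics this abstract reports, the following is a minimal numpy sketch of slice-wise MAE (in HU) and SNR (in dB) between a paired CT and an enhanced CBCT slice. The function names and the exact SNR definition (signal power over residual noise power) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def mae_hu(ct, cbct):
    """Mean absolute error in Hounsfield units between two paired slices."""
    diff = ct.astype(np.float64) - cbct.astype(np.float64)
    return float(np.mean(np.abs(diff)))

def snr_db(ct, cbct):
    """SNR in dB, treating the CT as signal and (CBCT - CT) as noise.
    This is one common convention; the paper's definition may differ."""
    signal_power = np.mean(ct.astype(np.float64) ** 2)
    noise_power = np.mean((cbct.astype(np.float64) - ct.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(signal_power / noise_power))
```

SSIM, the third metric, involves local luminance/contrast/structure windows and is typically taken from an existing implementation rather than re-derived.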
Project description: Background and purpose: Adaptive radiotherapy based on cone-beam computed tomography (CBCT) requires high CT number accuracy to ensure accurate dose calculations. Recently, deep learning has been proposed for fast CBCT artefact corrections on single anatomical sites. This study investigated the feasibility of applying a single convolutional network to facilitate CBCT-based dose calculation for head-and-neck, lung and breast cancer patients. Materials and methods: Ninety-nine patients diagnosed with head-and-neck, lung or breast cancer undergoing radiotherapy with CBCT-based position verification were included in this study. The CBCTs were registered to the planning CT according to clinical procedures. Three cycle-consistent generative adversarial networks (cycle-GANs) were trained in an unpaired manner, on 15 patients per anatomical site, to generate synthetic CTs (sCTs). Another network was trained with all the anatomical sites together. The performance of all four networks was compared and evaluated for image similarity against rescan CT (rCT). Clinical plans were recalculated on rCT and sCT and analysed through voxel-based dose differences and γ-analysis. Results: An sCT was generated in 10 s. Image similarity was comparable between the models trained on individual anatomical sites and the single model for all sites. Mean dose differences < 0.5% were obtained in high-dose regions. Mean gamma (3%, 3 mm) pass-rates > 95% were achieved for all sites. Conclusion: Cycle-GAN reduced CBCT artefacts and increased similarity to CT, enabling sCT-based dose calculations. A single network generating synthetic CT achieved CBCT-based dose calculation for head-and-neck, lung, and breast cancer patients with performance similar to networks trained specifically for each anatomical site.
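The gamma (3%, 3 mm) pass-rate reported above combines a dose-difference criterion with a distance-to-agreement criterion. A brute-force 1D sketch of global gamma analysis is shown below; real evaluations run in 3D on dose grids, and the function name and interface here are illustrative assumptions.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Brute-force 1D global gamma analysis.

    dose_ref, dose_eval : 1D dose arrays sampled on the same grid.
    spacing_mm          : grid spacing in mm.
    dose_tol            : dose-difference criterion, fraction of the global max dose.
    dta_mm              : distance-to-agreement criterion in mm.
    Returns the fraction of reference points with gamma <= 1.
    """
    dd_norm = dose_tol * np.max(dose_ref)            # global dose normalisation
    x = np.arange(len(dose_ref)) * spacing_mm        # spatial positions
    gammas = []
    for i, d_ref in enumerate(dose_ref):
        dist2 = ((x - x[i]) / dta_mm) ** 2           # spatial term vs. every eval point
        dose2 = ((dose_eval - d_ref) / dd_norm) ** 2  # dose term vs. every eval point
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return float(np.mean(np.asarray(gammas) <= 1.0))
```

Identical distributions give a gamma of zero everywhere (pass-rate 1.0), while a gross dose error fails every point.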
Project description: Background and purpose: Adaptive radiotherapy (ART) in locally advanced cervical cancer (LACC) has shown promising outcomes. This study investigated the feasibility of cone-beam computed tomography (CBCT)-guided online ART (oART) for the treatment of LACC. Material and methods: The quality of the automated radiotherapy treatment plans and artificial intelligence (AI)-driven contour delineation for LACC on a novel CBCT-guided oART system was assessed. Dosimetric analysis of 200 simulated oART sessions was compared with standard treatment. The feasibility of oART was assessed from the delivery of 132 oART fractions for the first five clinical LACC patients. The simulated and live oART sessions compared a fixed planning target volume (PTV) margin of 1.5 cm around the uterus-cervix clinical target volume (CTV) with an internal target volume-based approach. Workflow timing measurements were recorded. Results: The automatically-generated 12-field intensity-modulated radiotherapy plans were comparable to manually generated plans. The AI-driven organ-at-risk (OAR) contouring was acceptable, requiring on average 12.3 min to edit, with the bowel performing least well and rated as unacceptable in 16% of cases. The treated patients demonstrated a mean PTV D98% (±SD) of 96.7 (±0.2)% for the adapted plans and 94.9 (±3.7)% for the non-adapted scheduled plans (p < 10⁻⁵). The D2cc (±SD) for the bowel, bladder and rectum were reduced by 0.07 (±0.03) Gy, 0.04 (±0.05) Gy and 0.04 (±0.03) Gy per fraction respectively with the adapted plan (p < 10⁻⁵). In the live setting, the mean oART session time (±SD) from CBCT acquisition to beam-on was 29 ± 5 (range 21-44) minutes. Conclusion: CBCT-guided oART was shown to be feasible, with dosimetric benefits for patients with LACC. Further work to analyse potential reductions in PTV margins is ongoing.
Project description: Background and purpose: Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation face challenges, and the resulting segmentations often fail to reach clinical acceptability. Current approaches also overlook the wealth of information available from the initial planning and prior adaptive fractions that could enhance segmentation precision. Materials and methods: We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to refine the segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data followed by fine-tuning on a clinical dataset. Results: Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared to 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory. Conclusions: Our proposed model surpasses baseline segmentation frameworks by effectively utilizing information from prior fractions, reducing the effort required of clinicians to revise auto-segmentation results. Moreover, it works together with registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation on synthetic CT images.
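The Dice similarity coefficient used to score these segmentations is twice the overlap of two binary masks divided by their total volume. A minimal sketch (the function name and the empty-mask convention are assumptions, not taken from the paper):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # convention assumed here: two empty masks agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / total)
```

A score of 1.0 means perfect overlap; the 79% reported above is the average of such per-structure scores.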
Project description: Low dose and accessibility have increased the application of cone beam computed tomography (CBCT). Serial images are often captured to diagnose patients and plan treatment in the craniofacial region. However, CBCT images are highly variable and lack harmonious reproduction, especially in head orientation. Though user-defined orientation methods have been suggested, their reproducibility remains controversial. Here, we propose a landmark-free reorientation methodology based on principal component analysis (PCA) for harmonious orientation of serially captured CBCTs. We analyzed three serial CBCT scans collected for each of 29 individuals who underwent orthognathic surgery. We first defined a region of interest with the proposed protocol by combining 2D rendering and a 3D convex hull method, and identified an intermediary arrangement point. PCA identified the y-axis (anteroposterior) followed by the secondary x-axis (transverse). Finally, by defining the perpendicular z-axis, a new global orientation was assigned. The goodness of alignment (Hausdorff distance) showed a marked improvement (> 50%). Furthermore, we clustered cases based on clinical asymmetry and validated that the protocol was unaffected by the severity of the skeletal deformity. Integrating the proposed algorithm as a preliminary step in CBCT evaluation would therefore be a fundamental step towards harmonizing craniofacial imaging records.
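The core of a PCA-based reorientation is extracting the principal axes of a 3D point cloud (here, the region-of-interest surface) from the eigenvectors of its covariance matrix. The sketch below shows only that axis-extraction step, with assumed names; the paper's full protocol additionally resolves axis order and sign to the anatomical y/x/z convention.

```python
import numpy as np

def principal_axes(points):
    """Return the principal axes of an (N, 3) point cloud as rows,
    ordered from largest to smallest variance."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # reorder to descending variance
    return eigvecs[:, order].T
```

For a skull-like cloud the dominant axis would correspond to the anteroposterior direction identified first by the protocol; note that eigenvector signs are arbitrary, which is why the full method needs an additional orientation convention.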
Project description: Background and purpose: To improve cone-beam computed tomography (CBCT), deep-learning (DL) models are being explored to generate synthetic CTs (sCTs). sCT evaluation has mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy in the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods: Planning CTs (pCTs) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. Image quality was assessed using image metrics such as the Mean Absolute Error (MAE). The anatomical correctness between sCT and CBCT was quantified using organ-at-risk volumes and average surface distances (ASD). Results: MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions: Although Dual-UNet performed best on standard image quality measures such as MAE, the contour-based anatomical comparison with the CBCT showed that Dual-UNet performed worst anatomically. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
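The average surface distance (ASD) used above measures how far two delineated surfaces lie apart on average. A brute-force symmetric sketch over two surface point sets (function name and the symmetrisation as a plain mean of both directions are assumptions; production code would use a KD-tree for large surfaces):

```python
import numpy as np

def average_surface_distance(surf_a, surf_b):
    """Symmetric average surface distance between two (N, 3) point sets,
    via brute-force nearest-neighbour search."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1).mean()   # each point of A to its closest point of B
    b_to_a = d.min(axis=0).mean()   # each point of B to its closest point of A
    return float(0.5 * (a_to_b + b_to_a))
```

An ASD of 0.7 mm (Single-UNet bladder) thus means the sCT contour surface sits, on average, under a millimetre from the CBCT contour surface.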
Project description: PURPOSE: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN). METHODS: A U-net based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consisted of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projections and the corresponding scatter-only distributions in nonanthropomorphic phantoms, taken in full-fan scan mode, were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. End-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets during training. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared to a conventional projection-domain scatter correction method, the fast adaptive scatter kernel superposition (fASKS) method, using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied to the same CNN to evaluate their impact on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scans by using transfer learning with an additional 360 half-fan projection pairs of nonanthropomorphic phantoms. The tuned CNN model for half-fan scans was compared with the fASKS method as well as with the CNN-based method without fine-tuning, using additional lung phantom projections.
RESULTS: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield Units (HUs) than the fASKS-based method. The root mean squared error of the CNN-corrected projections improved to 0.0862, compared to 0.278 for uncorrected projections and 0.117 for the fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near air or bone interfaces. All four image quality measures, mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to remove scatter in half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. The SSIM value of the tuned-CNN-corrected images was 0.9993, compared to 0.9984 for the non-tuned-CNN-corrected images and 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient: correcting the 360 projections took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core-i7 CPU) with a single NVIDIA GTX 1070 GPU. CONCLUSIONS: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.
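Two of the projection/image metrics quoted above, RMSE and PSNR, can be sketched directly (function names and the choice of data range are illustrative assumptions; the paper does not spell out its normalisation):

```python
import numpy as np

def rmse(img, ref):
    """Root mean squared error between a corrected projection and its reference."""
    diff = img.astype(np.float64) - ref.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr_db(img, ref, data_range):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE)."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Lower RMSE (0.0862 vs. 0.117 for fASKS above) and higher PSNR both indicate the corrected projections lie closer to the scatter-free reference.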
Project description: Background: Sex estimation is the first stage in the identification of an individual in the forensic context, and can be carried out from bone structures such as the mandible. The aim of this study was to estimate sex from metric analysis of the mandible in cone beam computed tomography (CBCT) images of adult Chilean individuals. Methods: Six mandibular measurements were analysed, five linear and one angular, in CBCT images of adult Chilean individuals of both sexes. ROC curve analysis was performed, with cut-off points, along with assessment of overall model quality. Univariate discriminant function analysis was used to determine the accuracy of each measurement for sex estimation. Multivariate discriminant function analysis, both direct and stepwise, was used to obtain the predictive value of the mandible including all the measurements. Results: The data comprised 155 CBCT images, 105 of females and 50 of males. The mandible presented great sexual dimorphism, with the mandibular ramus presenting greater predictive power than the mandibular body. When each mandibular measurement was analysed separately, the maximum height of the mandibular ramus presented the greatest predictive power (76.5%), while the mandibular angle (MA) was the least accurate parameter for sex estimation (58.1%). Direct method analysis presented 87.1% accuracy for sex identification of adult Chilean individuals, and joint analysis of maximum mandibular ramus height (MRH), corono-condylar distance and bigonial breadth presented 86.5% accuracy. In ROC curve analysis, MRH was the variable with the greatest discriminating capacity (AUC = 0.833), while MA was the only variable with no discriminating capacity (AUC = 0.386) and also presented low quality in the model quality analysis. Conclusion: Metric analysis of the mandible in CBCT images presents acceptable accuracy for sex estimation in Chilean individuals, and its use for that purpose in forensic practice is recommended.
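The AUC values reported above have a simple probabilistic reading: the chance that a randomly chosen individual from one class scores higher on the measurement than one from the other class. A pure-Python sketch of that rank-based AUC (function and parameter names are assumptions; statistical packages compute the same quantity from the ROC curve):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC as the probability that a positive-class score exceeds a
    negative-class score, counting ties as half. Equivalent to the
    Mann-Whitney U statistic divided by the number of pairs."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.833 (MRH) means the measurement separates the sexes well, while a value near or below 0.5, like MA's 0.386, indicates no useful discriminating capacity in the scored direction.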
Project description: Adaptive radiotherapy (ART) was introduced in the late 1990s to improve the accuracy and efficiency of therapy and minimize radiation-induced toxicities. ART combines multiple tools for imaging, assessing the need for adaptation, treatment planning, and quality assurance, and has been utilized to monitor inter- and intra-fraction anatomical variations of the target and organs-at-risk (OARs). Ethos™ (Varian Medical Systems, Palo Alto, CA), a cone beam computed tomography (CBCT)-based radiotherapy treatment system that uses artificial intelligence (AI) and machine learning to perform ART, was introduced in 2020. Since then, numerous studies have examined the potential benefits of Ethos™ CBCT-guided ART compared to non-adaptive radiotherapy. This review explores current trends with Ethos™, including improved CBCT image quality, a feasible clinical workflow, daily automated contouring and treatment planning, and motion management. Nevertheless, evidence of clinical improvement with the use of Ethos™ is limited and currently under investigation via clinical trials.