Project description:In recent years, generative adversarial networks (GANs) have gained tremendous popularity for various imaging-related tasks such as artificial image generation to support AI training. GANs are especially useful for medical imaging tasks, where training datasets are usually limited in size and heavily imbalanced against the diseased class. We present a systematic review, following the PRISMA guidelines, of recent GAN architectures used for medical image analysis, to help readers make an informed decision before employing GANs in developing medical image classification and segmentation models. We extracted 54 papers, published between January 2015 and August 2020, that highlight the capabilities and applications of GANs in medical imaging and met the inclusion criteria for meta-analysis. Our results identify four main GAN architectures used for segmentation or classification in medical imaging. We provide a comprehensive overview of recent trends in the application of GANs to clinical diagnosis through medical image segmentation and classification and share practical experience with task-based GAN implementations.
Project description:Recently, deep learning with generative adversarial networks (GANs) has been applied to multi-domain image-to-image translation. This study aims to improve the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT (sCT) that maintains the patient's anatomy as seen in CBCT while having the image quality of CT. Because CBCT and CT are acquired at different time points, it is challenging to obtain paired images with aligned anatomy for supervised training. To address this limitation, the study incorporated a registration network (RegNet) into the GAN during training. RegNet can dynamically estimate the correct labels, allowing supervised learning with noisy labels. The approach was developed and evaluated using imaging data from 146 patients with head and neck cancer. The results showed that GANs trained with RegNet outperformed those trained without it. Specifically, for the UNIT model trained with RegNet, the mean absolute error (MAE) was reduced from 40.46 to 37.21, the root mean-square error (RMSE) was reduced from 119.45 to 108.86, the peak signal-to-noise ratio (PSNR) was increased from 28.67 to 29.55, and the structural similarity index (SSIM) was increased from 0.8630 to 0.8791. The sCT generated by the model had fewer artifacts and retained the anatomical information of the CBCT.
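To make the RegNet idea concrete, the following is a minimal PyTorch sketch of one joint training step, assuming toy 2D single-channel networks; the study's actual models (e.g., UNIT) operate at much larger scale, and the displacement-field parameterization here is an illustrative assumption rather than the published architecture.

```python
# Minimal sketch of supervised GAN training with a registration network
# correcting misaligned labels. All architectures are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1))

G = small_cnn(1, 1)   # generator: CBCT slice -> synthetic CT slice
R = small_cnn(2, 2)   # RegNet: (planning CT, CBCT) -> dense 2D displacement field
D = small_cnn(1, 1)   # patch discriminator: real CT vs. sCT

def warp(img, flow):
    """Warp img with a displacement field (normalized units) via bilinear sampling."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    return F.grid_sample(img, base + flow.permute(0, 2, 3, 1), align_corners=True)

cbct = torch.randn(4, 1, 64, 64)   # toy batch standing in for CBCT slices
ct   = torch.randn(4, 1, 64, 64)   # misaligned planning CT of the same patients

sct = G(cbct)                        # synthesize a CT-quality image from CBCT
flow = R(torch.cat([ct, cbct], 1))   # estimate deformation aligning CT to CBCT anatomy
ct_aligned = warp(ct, flow)          # dynamically corrected "label"

d_out = D(sct)
loss_g = F.l1_loss(sct, ct_aligned) \
       + F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
loss_g.backward()   # in a full loop, D and R get their own updates
```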
Project description:Clinical data sharing can facilitate data-driven scientific research, allowing a broader range of questions to be addressed and thereby leading to greater understanding and innovation. However, sharing biomedical data can put sensitive personal information at risk. This is usually addressed by data anonymization, which is a slow and expensive process. An alternative to anonymization is the construction of a synthetic dataset that behaves similarly to the real clinical data but preserves patient privacy. As part of a collaboration between Novartis and the Oxford Big Data Institute, a synthetic dataset was generated based on images from COSENTYX® (secukinumab) ankylosing spondylitis (AS) clinical studies. An auxiliary classifier generative adversarial network (ac-GAN) was trained to generate synthetic magnetic resonance images (MRIs) of vertebral units (VUs), conditioned on the VU location (cervical, thoracic, or lumbar). Here, we present a method for generating a synthetic dataset and conduct an in-depth analysis of its properties along three key metrics: image fidelity, sample diversity, and dataset privacy.
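As a rough illustration of how such class conditioning works, here is a minimal ac-GAN sketch in PyTorch: the generator receives the VU-location label along with noise, and the discriminator has both a real/fake head and an auxiliary classification head. The 64x64 patch size, layer sizes, and noise-modulation conditioning are assumptions for brevity, not the architecture used in the study.

```python
# Minimal ac-GAN sketch: label-conditioned generator + two-headed discriminator.
import torch
import torch.nn as nn

N_CLASSES, Z_DIM = 3, 100   # cervical / thoracic / lumbar

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, Z_DIM)   # label -> embedding fused with noise
        self.net = nn.Sequential(
            nn.Linear(Z_DIM, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, z, y):
        return self.net(z * self.embed(y))            # condition by modulating the noise

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten())
        self.adv = nn.Linear(64 * 16 * 16, 1)          # real/fake head
        self.cls = nn.Linear(64 * 16 * 16, N_CLASSES)  # auxiliary class head
    def forward(self, x):
        h = self.features(x)
        return self.adv(h), self.cls(h)

G, D = Generator(), Discriminator()
z = torch.randn(8, Z_DIM)
y = torch.randint(0, N_CLASSES, (8,))
fake = G(z, y)                      # (8, 1, 64, 64) synthetic VU patches
adv_logit, cls_logit = D(fake)      # trained with BCE (adv) + cross-entropy (cls)
```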
Project description:The rapid development of deep learning in medical imaging has significantly enhanced the capabilities of artificial intelligence while simultaneously introducing challenges, including the need for vast amounts of training data and the labor-intensive tasks of labeling and segmentation. Generative adversarial networks (GANs) have emerged as a solution, offering synthetic image generation for data augmentation and streamlining medical image processing tasks through models such as cGAN, CycleGAN, and StyleGAN. These innovations not only improve the efficiency of image augmentation, reconstruction, and segmentation, but also pave the way for unsupervised anomaly detection, markedly reducing the reliance on labeled datasets. Our investigation into GANs in medical imaging addresses their varied architectures, the considerations for selecting appropriate GAN models, and the nuances of model training and performance evaluation. This paper aims to provide radiologists who are new to GAN technology with a thorough understanding, guiding them through the practical application and evaluation of GANs in brain imaging with two illustrative examples using CycleGAN and pixel2style2pixel (pSp)-combined StyleGAN. It offers a comprehensive exploration of the transformative potential of GANs in medical imaging research. Ultimately, this paper strives to equip radiologists with the knowledge to effectively utilize GANs, encouraging further research and application within the field.
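For readers new to the models named above, the distinguishing idea of CycleGAN is a cycle-consistency constraint added to the adversarial loss, which enables training without paired images. The sketch below illustrates this with toy networks and an assumed T1-to-T2 translation example; real CycleGANs use ResNet/U-Net-scale generators and normally add the reverse cycle (B to A to B) plus identity losses.

```python
# Minimal cycle-consistency sketch in the spirit of CycleGAN (toy networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_net():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_AB, G_BA = conv_net(), conv_net()   # domain A -> B (e.g., T1 -> T2) and back
D_B = conv_net()                      # patch discriminator for domain B

a = torch.randn(4, 1, 64, 64)         # unpaired images from domain A
b = torch.randn(4, 1, 64, 64)         # unpaired images from domain B

fake_b = G_AB(a)
rec_a = G_BA(fake_b)                   # A -> B -> A round trip
loss_cyc = F.l1_loss(rec_a, a)         # cycle consistency: round trip preserves content
d_out = D_B(fake_b)
loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
loss_g = loss_adv + 10.0 * loss_cyc    # the commonly used lambda = 10 weighting
loss_g.backward()
```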
Project description:Neoantigens are abnormal proteins produced by genetic mutations in somatic cells. Because tumour neoantigens are expressed only in tumour cells and are immunogenic, they may represent specific targets for precision immunotherapy. With the reduction in sequencing cost, continuous advances in artificial intelligence technology, and an increased understanding of tumour immunity, neoantigen vaccines and adoptive cell therapy (ACT) targeting neoantigens have become research hotspots. Approximately 900,000 patients worldwide are diagnosed with head and neck squamous cell carcinoma (HNSCC) each year. Due to its high mutational burden and abundant lymphocyte infiltration, HNSCC naturally generates a variety of potential neoantigen targets that may be used for HNSCC immunotherapies. Currently, the main immunotherapy for HNSCC is the use of immune checkpoint inhibitors (ICIs). Neoantigen vaccines and adoptive cell therapy targeting neoantigens are extensions of immunotherapy for HNSCC, and a large number of early clinical trials are underway in combination with ICIs for the treatment of recurrent or metastatic HNSCC (R/M HNSCC). In this paper, we review recent neoantigen vaccine trials related to the treatment of HNSCC, introduce adoptive cell therapy targeting neoantigens, and propose a potential treatment strategy for HNSCC. The clinical application of ICI therapy and its combination with neoantigen vaccines in the treatment of HNSCC is summarized, and the prospect of using neoantigens to treat HNSCC is discussed.
Project description:Purpose: To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch-based synthetic computed tomography (sCT) generation for magnetic resonance (MR)-only treatment planning in head and neck (HN) cancer patients. Materials and methods: Twenty-three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch-based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size by using per-epoch data augmentation, and (c) evaluated the performance of three different approaches for combining overlapping Hounsfield unit (HU) estimations for varied patch overlap parameters. Twelve of the twenty-three cases corresponded to a curated dataset previously used for atlas-based sCT generation and were used for training with leave-two-out cross-validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast (in-phase vs water) differences were tested with three additional cases. The sCT generation accuracy was evaluated using mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images. Results: The cross-validated MAEs for the whole-HN region using pix2pix and CycleGAN were 66.9 ± 7.3 vs 82.3 ± 6.4 HU, respectively. On the independent testing set with additional artifacts and previously unseen image features, whole-HN region MAEs were 94.0 ± 10.6 and 102.9 ± 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 ± 6.3 and 132.8 ± 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared with single-view non-overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in our independent testing set, including structures with image artifacts. Quantitative DRR comparison between planning CTs and sCTs showed agreement of bony region positions to <1 mm. Conclusions: The dosimetric and MAE-based accuracy, along with the similarity between DRRs from sCTs, indicate that pix2pix and CycleGAN are promising methods for MR-only treatment planning for HN cancer. Our investigation of overlapping patch-based HU estimations also indicates that combining transformation estimates from overlapping patches can reduce generation errors while providing a tool to estimate the aleatoric uncertainty of the MR-to-CT transformation. However, because of the small patient sample sizes, further studies are required.
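One simple way to combine overlapping patch estimates at each voxel is per-voxel averaging; the sketch below shows the accumulation pattern with a stand-in "model" and assumed patch/stride values. Note that the study above compared several combination strategies, of which the mean is only one.

```python
# Sketch of stitching overlapping patch-wise sCT estimates by per-voxel averaging.
import numpy as np

def stitch_patches(predict, mr, patch=64, stride=32):
    """Run `predict` on overlapping 2D patches of `mr` and average the overlaps."""
    h, w = mr.shape
    acc = np.zeros((h, w), dtype=np.float64)     # summed HU estimates
    cnt = np.zeros((h, w), dtype=np.float64)     # how many patches covered each voxel
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            acc[i:i+patch, j:j+patch] += predict(mr[i:i+patch, j:j+patch])
            cnt[i:i+patch, j:j+patch] += 1.0
    return acc / np.maximum(cnt, 1.0)            # mean HU wherever covered

# Toy usage: a fake "model" that maps MR intensities to an HU-like range.
fake_model = lambda p: p * 1000.0 - 200.0
mr_slice = np.random.rand(256, 256)
sct_slice = stitch_patches(fake_model, mr_slice)
```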
Project description:Diabetic retinopathy (DR) is a diabetic complication affecting the eyes and the main cause of blindness in young and middle-aged people. To speed up the diagnosis of DR, many deep learning methods have been used to detect this disease, but they have failed to attain excellent results due to imbalanced training data, i.e., the scarcity of DR fundus images. To address this data imbalance, this paper proposes a method dubbed retinal fundus images generative adversarial networks (RF-GANs), based on generative adversarial networks, to synthesize retinal fundus images. RF-GANs is composed of two generative models, RF-GAN1 and RF-GAN2. Firstly, RF-GAN1 translates retinal fundus images from a source domain (the domain of semantic segmentation datasets) to a target domain (the domain of the EyePACS dataset hosted on Kaggle). Then, we train semantic segmentation models with the translated images and employ the trained models to extract the structural and lesion masks (hereafter, Masks) of EyePACS. Finally, we employ RF-GAN2 to synthesize retinal fundus images from the Masks and DR grading labels. This paper verifies the effectiveness of the method: RF-GAN1 can narrow the domain gap between different datasets and thereby improve the performance of the segmentation models, and RF-GAN2 can synthesize realistic retinal fundus images. Using the synthesized images for data augmentation, the accuracy and quadratic weighted kappa of a state-of-the-art DR grading model on the EyePACS test set increase by 1.53% and 1.70%, respectively.
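To illustrate what mask- and grade-conditioned synthesis of the RF-GAN2 kind can look like, here is a toy PyTorch sketch; the number of mask channels, the grade-embedding scheme, and the network depth are assumptions for illustration, not the published architecture.

```python
# Toy generator conditioned on structural/lesion masks and a DR grade label.
import torch
import torch.nn as nn

N_GRADES, MASK_CH = 5, 4            # DR grades 0-4; assumed 4 mask channels

class MaskConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_GRADES, 16)
        self.net = nn.Sequential(
            nn.Conv2d(MASK_CH + 16, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())   # RGB fundus image
    def forward(self, masks, grade):
        n, _, h, w = masks.shape
        g = self.embed(grade).view(n, 16, 1, 1).expand(n, 16, h, w)
        return self.net(torch.cat([masks, g], dim=1))    # condition on masks + grade

G = MaskConditionedGenerator()
masks = torch.rand(2, MASK_CH, 128, 128)     # stand-ins for the extracted "Masks"
grade = torch.tensor([0, 3])                 # DR grading labels
fundus = G(masks, grade)                     # (2, 3, 128, 128) synthetic images
```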
Project description:Motivation: The application of machine learning approaches in phylogenetics has been impeded by the vast model space associated with inference. Supervised machine learning approaches require data from across this space to train models. Because of this, previous approaches have typically been limited to inferring relationships among unrooted quartets of taxa, where there are only three possible topologies. Here, we explore the potential of generative adversarial networks (GANs) to address this limitation. GANs consist of a generator and a discriminator: at each step, the generator aims to create data that are similar to real data, while the discriminator attempts to distinguish generated from real data. By using an evolutionary model as the generator, we use GANs to make evolutionary inferences. Since a new model can be considered at each iteration, heuristic searches of complex model spaces are possible. Thus, GANs offer a potential solution to the challenges of applying machine learning in phylogenetics. Results: We developed phyloGAN, a GAN that infers phylogenetic relationships among species. phyloGAN takes as input a concatenated alignment, or a set of gene alignments, and infers a phylogenetic tree either considering or ignoring gene tree heterogeneity. We explored the performance of phyloGAN for up to 15 taxa in the concatenation case and 6 taxa when considering gene tree heterogeneity. Error rates are relatively low in these simple cases. However, run times are slow and performance metrics suggest issues during training. Future work should explore novel architectures that may result in more stable and efficient GANs for phylogenetics. Availability and implementation: phyloGAN is available on GitHub: https://github.com/meganlsmith/phyloGAN/.
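Conceptually, this setup replaces the usual neural generator with a simulator whose discrete parameter (the tree topology) is searched heuristically against a learned discriminator. The toy loop below caricatures that idea with made-up "summary statistic" vectors; none of it reflects phyloGAN's actual data encoding, simulator, or proposal moves.

```python
# Toy caricature of a GAN whose generator is a simulator with a discrete
# parameter (a topology id), searched so its output fools the discriminator.
import torch
import torch.nn as nn

def simulate(topology_id, n=256):
    """Stand-in for simulating alignment summary statistics under a topology."""
    g = torch.Generator().manual_seed(topology_id)
    return torch.rand(n, 8, generator=g) + 0.1 * topology_id

real = torch.rand(256, 8) + 0.1 * 2     # "observed data", secretly closest to topology 2

D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

current = 0
for step in range(200):
    candidate = int(torch.randint(0, 5, (1,)))   # heuristic move: propose a topology
    fake = simulate(candidate)
    # Discriminator step: learn to separate real from simulated data.
    loss_d = bce(D(real), torch.ones(256, 1)) + bce(D(fake), torch.zeros(256, 1))
    opt.zero_grad(); loss_d.backward(); opt.step()
    # "Generator" step: keep whichever topology fools D better.
    with torch.no_grad():
        if D(simulate(candidate)).mean() > D(simulate(current)).mean():
            current = candidate
print("selected topology:", current)
```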
Project description:The constant demand for novel functional materials calls for efficient strategies to accelerate materials discovery, and crystal structure prediction is one of the most fundamental tasks in that direction. In addressing this challenge, generative models can offer new opportunities, since they allow for continuous navigation of chemical space via latent spaces. In this work, we employ an inversion-free crystal representation based on the unit cell and fractional atomic coordinates and build a generative adversarial network for crystal structures. The proposed model is applied to generate Mg-Mn-O ternary materials, with theoretical evaluation of their photoanode properties for high-throughput virtual screening (HTVS). The proposed generative HTVS framework predicts 23 new crystal structures with reasonable calculated stability and band gaps. These findings suggest that generative models can be an effective way to explore hidden portions of chemical space that are usually unreachable with conventional substitution-based discovery.
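A sketch of what such an inversion-free output head can look like: the generator directly emits cell lengths, cell angles, and fractional coordinates. The value ranges, fixed atom count, and MLP body below are illustrative assumptions only; real models must also handle element identities and variable atom counts.

```python
# Toy generator head emitting a unit cell and fractional atomic coordinates.
import torch
import torch.nn as nn

MAX_ATOMS, Z_DIM = 8, 64   # assumed fixed atom budget and latent size

class CrystalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU())
        self.cell = nn.Linear(128, 6)               # a, b, c, alpha, beta, gamma
        self.frac = nn.Linear(128, MAX_ATOMS * 3)   # fractional coordinates

    def forward(self, z):
        h = self.body(z)
        c = self.cell(h)
        lengths = 2.0 + 8.0 * torch.sigmoid(c[:, :3])    # ~2-10 Angstrom (assumed range)
        angles = 60.0 + 60.0 * torch.sigmoid(c[:, 3:])   # ~60-120 degrees (assumed range)
        coords = torch.sigmoid(self.frac(h)).view(-1, MAX_ATOMS, 3)  # in [0, 1)
        return lengths, angles, coords

G = CrystalGenerator()
lengths, angles, coords = G(torch.randn(4, Z_DIM))   # a batch of candidate crystals
```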
Project description:Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties with high spatial resolution. However, previous attempts to solve the optical inverse problem with supervised machine learning were hampered by the absence of labeled reference data. While this bottleneck has been tackled by simulating training data, the domain gap between real and simulated images remains an unsolved challenge. We propose a novel approach to PAT image synthesis that subdivides the challenge of generating plausible simulations into two disjoint problems: (1) probabilistic generation of realistic tissue morphology, and (2) pixel-wise assignment of the corresponding optical and acoustic properties. The former is achieved with generative adversarial networks (GANs) trained on semantically annotated medical imaging data. According to a validation study on a downstream task, our approach yields more realistic synthetic images than the traditional model-based approach and could therefore become a fundamental step in deep learning-based quantitative PAT (qPAT).
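The two-stage split can be pictured as follows: a generative model proposes a tissue-label map, and optical properties are then assigned per label. In this sketch, the label-map simulator stands in for a trained GAN sample, and the property values are placeholders, not literature values.

```python
# Sketch of the two-stage synthesis idea: (1) sample a tissue-label map,
# (2) map each label pixel-wise to optical properties.
import numpy as np

TISSUE_MUA = {0: 0.02, 1: 0.10, 2: 2.50}   # background / soft tissue / vessel (made-up mu_a)
TISSUE_MUS = {0: 1.0, 1: 10.0, 2: 50.0}    # made-up scattering coefficients

def fake_gan_label_map(h=128, w=128, seed=0):
    """Stand-in for a GAN sample: a label map with a few circular 'vessels'."""
    rng = np.random.default_rng(seed)
    labels = np.ones((h, w), dtype=int)                        # soft tissue everywhere
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(4):
        cy, cx, r = rng.integers(0, h), rng.integers(0, w), rng.integers(3, 8)
        labels[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 2   # vessel
    return labels

labels = fake_gan_label_map()
mua = np.vectorize(TISSUE_MUA.get)(labels)   # pixel-wise absorption assignment
mus = np.vectorize(TISSUE_MUS.get)(labels)   # pixel-wise scattering assignment
```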