Project description:With the rapid and significant cost reduction of next-generation sequencing, low-coverage whole-genome sequencing (lcWGS), followed by genotype imputation, is becoming a cost-effective alternative to single-nucleotide polymorphism (SNP)-array genotyping. The objectives of this study were 2-fold: (1) construct a haplotype reference panel for genotype imputation from lcWGS data in rainbow trout (Oncorhynchus mykiss); and (2) evaluate the concordance between imputed genotypes and SNP-array genotypes in 2 breeding populations. Medium-coverage (12×) whole-genome sequences were obtained from a total of 410 fish representing 5 breeding populations with various spawning dates. The short-read sequences were mapped to the rainbow trout reference genome, and genetic variants were identified using GATK. After data filtering, 20,434,612 biallelic SNPs were retained. The reference panel was phased with SHAPEIT5 and was used as a reference to impute genotypes from lcWGS data employing GLIMPSE2. A total of 90 fish from the Troutlodge November breeding population were sequenced with an average coverage of 1.3×, and these fish were also genotyped with the Axiom 57K rainbow trout SNP array. The concordance between array-based genotypes and imputed genotypes was 99.1%. After downsampling the coverage to 0.5×, 0.2×, and 0.1×, the concordance between array-based genotypes and imputed genotypes was 98.7, 97.8, and 96.7%, respectively. In the USDA odd-year breeding population, the concordance between array-based genotypes and imputed genotypes was 97.8% for 109 fish downsampled to 0.5× coverage. Therefore, the reference haplotype panel reported in this study can be used to accurately impute genotypes from lcWGS data in rainbow trout breeding populations.
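The concordance metric used throughout this abstract is simply the fraction of non-missing array genotypes that the imputed calls reproduce exactly. A minimal sketch of that computation (the function name and the 0/1/2 allele-dosage coding are illustrative, not taken from the study's pipeline):

```python
def genotype_concordance(array_gts, imputed_gts, missing=None):
    """Fraction of sites where the imputed genotype matches the
    array genotype exactly (0/1/2 allele-dosage coding); sites
    missing on either platform are skipped."""
    matches = total = 0
    for a, b in zip(array_gts, imputed_gts):
        if a == missing or b == missing:
            continue  # no comparison possible at this site
        total += 1
        matches += (a == b)
    return matches / total if total else float("nan")

# toy example: 9 of 10 genotypes agree -> 0.9 concordance
array   = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
imputed = [0, 1, 2, 1, 0, 2, 1, 0, 0, 2]
print(genotype_concordance(array, imputed))  # 0.9
```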
Project description:Structural variations (SVs) have a great impact on various biological processes and influence physical traits in many species. Here, we present a protocol for applying low-coverage next-generation sequencing data of Rhipicephalus microplus to accurately detect highly differentiated SVs. We also outline its use to investigate population/species-specific genetic structures, local adaptation, and transcriptional function. We describe steps for constructing variation maps and SV annotation. We then detail population genetic analysis and differential gene expression analysis. For complete details on the usage and execution of this protocol, please refer to Liu et al. (2023).
Project description:Genotype imputation is the term used to describe the process of inferring unobserved genotypes in a sample of individuals. It is a key step prior to a genome-wide association study (GWAS) or genomic prediction. The imputation accuracy will directly influence the results of subsequent analyses. In this simulation-based study, we investigate the accuracy of genotype imputation in relation to some factors characterizing SNP chip or low-coverage whole-genome sequencing (LCWGS) data. The factors included the imputation reference population size, the proportion of target markers (SNP density), the genetic relationship (distance) between the target population and the reference population, and the imputation method. Simulations of genotypes were based on coalescence theory accounting for the demographic history of pigs. A population of simulated founders diverged to produce four separate but related populations of descendants. The genomic data of 20,000 individuals were simulated for a 10-Mb chromosome fragment. Our results showed that the proportion of target markers or SNP density was the most critical factor affecting imputation accuracy under all imputation situations. Compared with Minimac4, Beagle5.1 produced more accurate imputed data in most cases, most notably when imputing from the LCWGS data. Compared with SNP chip data, LCWGS provided more accurate genotype imputation. Our findings provide a relatively comprehensive insight into the accuracy of genotype imputation in a realistic population of domestic animals.
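Imputation accuracy in studies of this kind is often summarized as the Pearson correlation between true and imputed allele dosages (the abstract does not specify which measure was used; this sketch shows the correlation form, with illustrative names):

```python
from math import sqrt

def dosage_correlation(true_dosages, imputed_dosages):
    """Pearson correlation between true and imputed allele dosages,
    a common per-SNP imputation accuracy measure (1.0 = perfect)."""
    n = len(true_dosages)
    mt = sum(true_dosages) / n
    mi = sum(imputed_dosages) / n
    cov = sum((t - mt) * (i - mi)
              for t, i in zip(true_dosages, imputed_dosages))
    vt = sum((t - mt) ** 2 for t in true_dosages)
    vi = sum((i - mi) ** 2 for i in imputed_dosages)
    return cov / sqrt(vt * vi)
```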
Project description:The cost of whole-genome bisulfite sequencing (WGBS) remains a bottleneck for many studies and it is therefore imperative to extract as much information as possible from a given dataset. This is particularly important because even at the recommended 30X coverage for reference methylomes, up to 50% of high-resolution features such as differentially methylated positions (DMPs) cannot be called with current methods as determined by saturation analysis. To address this limitation, we have developed a tool that dynamically segments WGBS methylomes into blocks of comethylation (COMETs) from which lost information can be recovered in the form of differentially methylated COMETs (DMCs). Using this tool, we demonstrate recovery of ∼30% of the lost DMP information content as DMCs even at very low (5X) coverage. This constitutes twice the amount that can be recovered using an existing method based on differentially methylated regions (DMRs). In addition, we explored the relationship between COMETs and haplotypes in lymphoblastoid cell lines of African and European origin. Using best-fit analysis, we show COMETs to be correlated in a population-specific manner, suggesting that this type of dynamic segmentation may be useful for integrated (epi)genome-wide association studies in the future.
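To illustrate the general idea of segmenting a methylome into co-methylation blocks, here is a deliberately simple greedy sketch: adjacent CpGs join the current block while they are close on the genome and have similar methylation levels. This is not the published COMET algorithm (which uses a more sophisticated dynamic segmentation); the thresholds and names are purely illustrative:

```python
def segment_comets(positions, meth_levels, max_gap=500, max_diff=0.25):
    """Greedy segmentation of CpGs into co-methylation blocks.
    A CpG extends the current block if it lies within max_gap bp of
    the previous CpG and its methylation level differs by at most
    max_diff; otherwise a new block starts. Returns lists of indices."""
    blocks, current = [], [0]
    for i in range(1, len(positions)):
        near = positions[i] - positions[i - 1] <= max_gap
        similar = abs(meth_levels[i] - meth_levels[i - 1]) <= max_diff
        if near and similar:
            current.append(i)
        else:
            blocks.append(current)
            current = [i]
    blocks.append(current)
    return blocks

# three highly methylated CpGs, then a distant hypomethylated pair
# -> two blocks
print(segment_comets([100, 150, 200, 5000, 5050],
                     [0.9, 0.85, 0.8, 0.1, 0.05]))
```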
Project description:To evaluate whether low-coverage whole-genome sequencing is suitable for the detection of malignant pelvic masses and to compare its diagnostic value with traditional tumor markers, we enrolled 63 patients with a pelvic mass suspicious for ovarian malignancy. Each patient underwent low-coverage whole-genome sequencing (LCWGS) and traditional tumor marker tests. The pelvic masses were finally confirmed via pathological examination. Copy number variants (CNVs) across the whole genome were detected, and Stouffer's Z-score for each CNV was extracted. The risk of malignancy (RM) of each suspicious sample was calculated based on the CNV counts and Z-scores, and was subsequently compared with the ovarian cancer markers CA125 and HE4 and the risk of ovarian malignancy algorithm (ROMA). Receiver operating characteristic (ROC) curves were used to assess the diagnostic value of the variables. As confirmed by pathological diagnosis, 44 (70%) patients with malignancy and 19 patients with a benign mass were identified. Our results showed that CA125 and HE4, the CNV count, the mean of the Z-scores (Zmean), the maximum of the Z-scores (Zmax), the RM, and the ROMA were significantly different between patients with malignant and benign masses. The area under the curve (AUC) of CA125, HE4, CNV, Zmax, and Zmean was 0.775, 0.866, 0.786, 0.685, and 0.725, respectively. ROMA and RM showed similar AUCs (0.876 and 0.837) but differed in sensitivity and specificity. In the validation cohort, the AUC of RM was higher than that of the traditional serum markers. In conclusion, we developed an LCWGS-based method for the identification of pelvic masses suspicious for ovarian cancer. LCWGS yields accurate results and could complement existing diagnostic methods.
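The AUC values compared above can be computed without fitting an explicit ROC curve, via the Mann-Whitney U statistic: the AUC equals the probability that a randomly chosen malignant case scores higher than a randomly chosen benign case. A stdlib-only sketch (O(n·m), fine for cohorts of this size):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    case scores higher, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# perfectly separating marker -> AUC of 1.0
print(auc([3.1, 4.2], [1.0, 2.5]))  # 1.0
```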
Project description:Background: Detection of copy number variations (CNVs) from high-throughput next-generation whole-genome sequencing (WGS) data has become a widely used research method during recent years. However, only a little is known about the applicability of the developed algorithms to ultra-low-coverage (0.0005-0.8×) data that is used in various research and clinical applications, such as digital karyotyping and single-cell CNV detection. Results: Here, the performance of six popular read-depth based CNV detection algorithms (BIC-seq2, Canvas, CNVnator, FREEC, HMMcopy, and QDNAseq) was studied using ultra-low-coverage WGS data. Real-world array- and karyotyping kit-based validation was used as a benchmark in the evaluation. Additionally, ultra-low-coverage WGS data was simulated to investigate the ability of the algorithms to identify CNVs in the sex chromosomes and the theoretical minimum coverage at which these tools can accurately function. Our results suggest that while all the methods were able to detect large CNVs, many methods were susceptible to producing false positives when smaller CNVs (< 2 Mbp) were detected. There was also significant variability in their ability to identify CNVs in the sex chromosomes. Overall, BIC-seq2 was found to be the best method in terms of statistical performance. However, its significant drawback was by far the slowest runtime among the methods (> 3 h) compared with FREEC (~ 3 min), which we considered the second-best method. Conclusions: Our comparative analysis demonstrates that CNV detection from ultra-low-coverage WGS data can be a highly accurate method for the detection of large copy number variations when their length is in millions of base pairs. These findings facilitate applications that utilize ultra-low-coverage CNV detection.
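All six tools compared above are read-depth based: the shared core idea is to bin reads along the genome and flag bins whose log2 depth ratio against a baseline deviates beyond a threshold. A bare-bones sketch of that idea (thresholds and names are illustrative; real callers add GC correction, mappability filtering, and segmentation):

```python
from math import log2
from statistics import median

def call_cnv_bins(bin_counts, gain=0.4, loss=-0.6):
    """Classify fixed-size genomic bins as gain/loss/neutral from the
    log2 ratio of each bin's read count to the genome-wide median."""
    m = median(bin_counts)
    calls = []
    for c in bin_counts:
        ratio = log2(c / m) if c > 0 and m > 0 else float("-inf")
        calls.append("gain" if ratio >= gain
                     else "loss" if ratio <= loss
                     else "neutral")
    return calls

# a doubled bin and a halved bin against a median of 100 reads/bin
print(call_cnv_bins([100, 100, 100, 200, 50, 100]))
```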
Project description:Mutational signatures accumulate in somatic cells as an admixture of endogenous and exogenous processes that occur during an individual's lifetime. Since dividing cells release cell-free DNA (cfDNA) fragments into the circulation, we hypothesize that plasma cfDNA might reflect mutational signatures. Point mutations in plasma whole genome sequencing (WGS) are challenging to identify through conventional mutation calling due to low sequencing coverage and low mutant allele fractions. In this proof of concept study of plasma WGS at 0.3-1.5x coverage from 215 patients and 227 healthy individuals, we show that both pathological and physiological mutational signatures may be identified in plasma. By applying machine learning to mutation profiles, patients with stage I-IV cancer can be distinguished from healthy individuals with an Area Under the Curve of 0.96. Interrogating mutational processes in plasma may enable earlier cancer detection, and might enable the assessment of cancer risk and etiology.
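Mutational-signature analysis of the kind described here conventionally encodes each single-base substitution by its pyrimidine-centred trinucleotide context, giving 96 mutation classes. A sketch of that standard encoding step (not the study's full pipeline):

```python
COMP = str.maketrans("ACGT", "TGCA")

def sbs_context(ref_trinuc, alt):
    """Collapse a single-base substitution to its pyrimidine-centred
    96-class key, e.g. reference trinucleotide ACG mutated C>T
    becomes 'A[C>T]G'. Purine-centred contexts are reverse-
    complemented first so the mutated base is always C or T."""
    ref = ref_trinuc[1]
    if ref in "AG":  # strand-collapse onto the pyrimidine
        ref_trinuc = ref_trinuc.translate(COMP)[::-1]
        alt = alt.translate(COMP)
        ref = ref_trinuc[1]
    return f"{ref_trinuc[0]}[{ref}>{alt}]{ref_trinuc[2]}"

print(sbs_context("ACG", "T"))  # A[C>T]G
print(sbs_context("TGA", "C"))  # T[C>G]A  (reverse-complemented)
```

Counting these keys over all plasma fragments yields the 96-dimensional mutation profile that a classifier can be trained on.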
Project description:The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1-10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates.
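For reference, the classical D-statistic contrasts the two discordant allele patterns (ABBA and BABA) across the four populations H1, H2, H3, and the outgroup; under no gene flow their expected counts are equal and D is near zero. A minimal frequency-based sketch (per-site derived-allele frequencies as input; names are illustrative, and this omits the read-sampling, error-correction, and jackknife machinery the abstract describes):

```python
def d_statistic(freqs):
    """D = (ABBA - BABA) / (ABBA + BABA), where each site
    contributes its expected pattern probabilities given
    derived-allele frequencies (p1, p2, p3, p4) in
    H1, H2, H3, and the outgroup."""
    abba = sum((1 - p1) * p2 * p3 * (1 - p4)
               for p1, p2, p3, p4 in freqs)
    baba = sum(p1 * (1 - p2) * p3 * (1 - p4)
               for p1, p2, p3, p4 in freqs)
    return (abba - baba) / (abba + baba)

# a pure ABBA site gives D = 1; a pure BABA site gives D = -1
print(d_statistic([(0, 1, 1, 0)]))  # 1.0
print(d_statistic([(1, 0, 1, 0)]))  # -1.0
```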