Project description: Purpose: Low-rank denoising of MRSI data results in an apparent increase in spectral SNR. However, it is not clear whether this translates to lower uncertainty in metabolite concentrations after spectroscopic fitting. Estimating the true uncertainty after denoising is desirable for downstream analysis in spectroscopy. In this work, the uncertainty reduction achieved by low-rank denoising methods based on spatiotemporal separability and on linear predictability in MRSI is assessed, a new method for estimating metabolite concentration uncertainty after denoising is proposed, and automatic rank-threshold selection methods are assessed in simulated low-SNR regimes. Methods: Denoising methods are assessed using Monte Carlo simulation of proton MRSI data and the reproducibility of repeated in vivo acquisitions in 5 subjects. Results: In simulated and in vivo data, spatiotemporal-separability denoising is shown to reduce concentration uncertainty, whereas linear-prediction denoising increases it. Uncertainty estimates provided by fitting algorithms after denoising consistently underestimate the actual metabolite uncertainty; the proposed uncertainty estimation, based on an analytical expression for the entry-wise variance after denoising, is more accurate. It is also shown that automated rank-threshold selection using the Marchenko-Pastur distribution can bias the data in low-SNR conditions, and an alternative soft-thresholding function is proposed. Conclusion: Low-rank denoising methods based on spatiotemporal separability do reduce uncertainty in MRS(I) data. However, thorough assessment is needed, because SNR measured from residual baseline noise is an insufficient metric in the presence of non-uniform variance. It is also important to select the right rank-thresholding method in low-SNR cases.
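The core operation shared by these methods is truncation of the singular-value spectrum of a Casorati (space × time) matrix. The sketch below is a generic illustration rather than the paper's exact estimator: it shows hard truncation at the Marchenko-Pastur noise edge alongside a soft-thresholding alternative, and it assumes the noise level sigma is known.

```python
# Minimal sketch: low-rank denoising of an MRSI Casorati matrix by SVD
# thresholding. The MP edge used here (largest singular value expected from
# an i.i.d. Gaussian noise matrix) is a generic choice, not the paper's.
import numpy as np

def lowrank_denoise(casorati, sigma, mode="hard"):
    """casorati: (n_voxels, n_timepoints) complex array; sigma: noise std."""
    m, n = casorati.shape
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    mp_edge = sigma * (np.sqrt(m) + np.sqrt(n))   # Marchenko-Pastur noise edge
    if mode == "hard":
        s_new = np.where(s > mp_edge, s, 0.0)     # hard rank truncation
    else:
        s_new = np.maximum(s - mp_edge, 0.0)      # soft thresholding
    return (u * s_new) @ vh

# Usage: denoised = lowrank_denoise(fids.reshape(n_voxels, n_t), sigma_est,
#                                   mode="soft")
```

Soft thresholding shrinks all retained components, trading a small bias for avoiding the abrupt keep/discard decision that can bias hard truncation at low SNR.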
Project description: Background: Many gene expression normalization algorithms exist for Affymetrix GeneChip microarrays. The most popular of these is RMA, primarily due to the precision and low noise produced during the process. A significant strength of this and similar approaches is the use of the entire set of arrays during both normalization and model-based estimation of signal. However, this leads to estimates of expression that depend on the starting set of arrays, and estimates can change when a single additional chip is added to the set. Additionally, outlier chips can impact the signals of other arrays and can themselves be skewed by the majority of the population. Results: We developed an approach, termed IRON, which uses the best-performing techniques from each of several popular processing methods while retaining the ability to incrementally renormalize data without altering previously normalized expression. This combination of approaches results in a method that performs comparably to existing approaches on artificial benchmark (i.e., spike-in) datasets and demonstrates promising improvements in segregating true signals within biologically complex experiments. Conclusions: By combining approaches from existing normalization techniques, the IRON method offers several advantages. First, IRON normalization occurs pair-wise, thereby avoiding the need for all chips to be normalized together, which can be important for large data analyses. Second, the technique does not require similarity in signal distribution across chips for normalization, which can be important for maintaining biologically relevant differences in a heterogeneous background. Lastly, IRON introduces fewer post-processing artifacts, particularly in data whose behavior violates common assumptions. Thus, the IRON method provides a practical solution to common needs of expression analysis. A software implementation of IRON is available at http://gene.moffitt.org/libaffy/.
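The incremental property follows from normalizing each chip only against a fixed reference. The sketch below illustrates that idea with a simple robust log-scale shift; IRON's actual iterative rank-order fit in libaffy is more sophisticated, and the function name here is illustrative, not libaffy's API.

```python
# Minimal sketch: pair-wise normalization against a fixed reference chip.
# Because no chip depends on any other non-reference chip, adding a new
# array later leaves previously normalized arrays unchanged.
import numpy as np

def pairwise_normalize(chip, reference):
    """chip, reference: 1-D arrays of probe intensities (same probe order)."""
    # Match robust location on the log2 scale (a simplification of IRON's
    # iterative rank-order fit).
    shift = np.median(np.log2(reference)) - np.median(np.log2(chip))
    return chip * 2.0 ** shift

# normalized = [pairwise_normalize(c, ref_chip) for c in chips]
# later: normalized.append(pairwise_normalize(new_chip, ref_chip))
```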
Project description: Multimodal single-cell profiling methods that measure protein expression with oligo-conjugated antibodies hold promise for comprehensive dissection of cellular heterogeneity, yet the resulting protein counts have substantial technical noise that can mask biological variation. Here we integrate experiments and computational analyses to reveal two major noise sources and develop a method called "dsb" (denoised and scaled by background) to normalize and denoise droplet-based protein expression data. We discover that protein-specific noise originates from unbound antibodies encapsulated during droplet generation; this noise can thus be accurately estimated and corrected by utilizing protein levels in empty droplets. We also find that isotype control antibodies and the background protein population average in each cell exhibit significant correlations across single cells, so we use their shared variance to correct for cell-to-cell technical noise. We validate these findings by analyzing the performance of dsb in eight independent datasets spanning multiple technologies, including CITE-seq, ASAP-seq, and TEA-seq. Compared to existing normalization methods, our approach improves downstream analyses by better unmasking biologically meaningful cell populations. Our method is available as an open-source R package that interfaces easily with existing single-cell software platforms such as Seurat, Bioconductor, and Scanpy and can be accessed at https://cran.r-project.org/package=dsb.
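A simplified Python rendering of the two steps is sketched below, under the assumption that the isotype-control rows are known; the full dsb method additionally fits a per-cell Gaussian mixture to estimate the background mean, which is omitted here.

```python
# Minimal sketch of dsb-like normalization: (1) z-score each protein against
# its ambient level in empty droplets, (2) regress out a per-cell technical
# factor summarized by the isotype-control mean. Simplified relative to the
# dsb R package.
import numpy as np

def dsb_like_normalize(cells, empty, isotype_idx):
    """cells: (n_proteins, n_cells) counts; empty: (n_proteins, n_empty)."""
    z_cells, z_empty = np.log1p(cells), np.log1p(empty)
    # step 1: rescale by the ambient (empty-droplet) background per protein
    mu = z_empty.mean(axis=1, keepdims=True)
    sd = z_empty.std(axis=1, keepdims=True)
    z = (z_cells - mu) / sd
    # step 2: remove the per-cell technical component (isotype-control mean)
    noise = z[isotype_idx].mean(axis=0)        # one technical value per cell
    noise = noise - noise.mean()
    beta = (z @ noise) / (noise @ noise)       # per-protein regression slope
    return z - np.outer(beta, noise)
```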
Project description: Identifying subspace gene clusters from gene expression data is useful for discovering novel functional gene interactions. In this paper, we propose to use low-rank representation (LRR) to identify subspace gene clusters from microarray data. LRR seeks the lowest-rank representation among all candidates that can represent the genes as linear combinations of the bases in the dataset. Clusters can be extracted from the block-diagonal representation matrix obtained using LRR, and they capture the intrinsic patterns of genes with similar functions. Meanwhile, the parameter of LRR can balance the effect of noise, so the method is capable of extracting useful information from data with a high level of background noise. Compared with traditional methods, our approach can identify genes with similar functions but dissimilar expression profiles, and it can assign a single gene to multiple clusters. Moreover, our method is robust to noise and can identify more biologically relevant gene clusters. When applied to three public datasets, the LRR-based method proves superior to existing methods for identifying subspace gene clusters.
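In the noise-free special case, the LRR program min ||Z||_* subject to X = XZ has the closed-form solution Z = V_r V_r^T from the skinny SVD of X (the shape-interaction matrix), which makes a compact sketch possible. The rank estimate and the spectral-clustering step below are illustrative stand-ins for the paper's full pipeline.

```python
# Minimal sketch: noise-free LRR clustering of genes (columns of X).
import numpy as np
from sklearn.cluster import SpectralClustering

def lrr_clusters(X, n_clusters, rank=None):
    """X: (n_samples, n_genes); returns a cluster label per gene."""
    u, s, vh = np.linalg.svd(X, full_matrices=False)
    r = rank or int((s > 1e-8 * s[0]).sum())   # numerical rank if not given
    Z = vh[:r].T @ vh[:r]                      # lowest-rank representation
    affinity = np.abs(Z) + np.abs(Z).T         # symmetric, nonnegative graph
    return SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed"
    ).fit_predict(affinity)
```

The noisy case (X = XZ + E, with a sparse-error penalty weighted by the parameter mentioned above) is typically solved with ADMM/inexact ALM rather than this closed form.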
Project description: Purpose: To improve liver proton density fat fraction (PDFF) and R2* quantification at 0.55 T by systematically validating the acquisition parameter choices and investigating the performance of locally low-rank denoising methods. Methods: A Monte Carlo simulation was conducted to design a protocol for PDFF and R2* mapping at 0.55 T. Using this proposed protocol, we investigated the performance of robust locally low-rank (RLLR) and random matrix theory (RMT) denoising. In a reference phantom, we assessed quantification accuracy (concordance correlation coefficient ρc vs. reference values) and precision (using SD) across scan repetitions. We performed in vivo liver scans (11 subjects) and used regions of interest to compare means and SDs of PDFF and R2* measurements. Kruskal-Wallis and Wilcoxon signed-rank tests were performed (p < 0.05 considered significant). Results: In the phantom, RLLR and RMT denoising improved accuracy in PDFF and R2* with ρc > 0.992 and improved precision with a >67% decrease in SD across 50 scan repetitions versus conventional reconstruction (i.e., no denoising). For in vivo liver scans, mean PDFF and mean R2* were not significantly different among the three methods (conventional reconstruction, RLLR denoising, and RMT denoising). Without denoising, the SDs of PDFF and R2* were 8.80% and 14.17 s⁻¹. RLLR denoising significantly reduced these values to 1.79% and 5.31 s⁻¹ (p < 0.001); RMT denoising significantly reduced them to 2.00% and 4.81 s⁻¹ (p < 0.001). Conclusion: We validated an acquisition protocol for improved PDFF and R2* quantification at 0.55 T. Both RLLR and RMT denoising improved the accuracy and precision of PDFF and R2* measurements.
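Locally low-rank methods apply the SVD-thresholding idea patch by patch across the multi-echo image rather than to one global matrix. The sketch below is a generic locally low-rank pass with a fixed soft threshold lam; RLLR and RMT differ in how they choose the threshold and handle outliers, which is not reproduced here.

```python
# Minimal sketch: locally low-rank denoising of a multi-echo image. Each
# patch is unfolded into a (pixels x echoes) Casorati matrix and its singular
# values are soft-thresholded; overlapping patches are averaged.
import numpy as np

def llr_denoise(img, patch=8, lam=0.1):
    """img: (nx, ny, n_echoes) array; lam: soft threshold (free parameter)."""
    nx, ny, ne = img.shape
    out = np.zeros_like(img)
    weight = np.zeros((nx, ny, 1))
    for x in range(0, nx - patch + 1, patch // 2):
        for y in range(0, ny - patch + 1, patch // 2):
            block = img[x:x+patch, y:y+patch].reshape(-1, ne)
            u, s, vh = np.linalg.svd(block, full_matrices=False)
            block = (u * np.maximum(s - lam, 0.0)) @ vh
            out[x:x+patch, y:y+patch] += block.reshape(patch, patch, ne)
            weight[x:x+patch, y:y+patch] += 1
    # border voxels never covered by a full patch are left at zero here
    return out / np.maximum(weight, 1)
```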
Project description: Recent advances in large-scale gene expression profiling necessitate the concurrent development of biostatistical approaches to reveal meaningful biological relationships. Most analyses rely on significance thresholds to identify differentially expressed genes; here we compare gene expression datasets using 'threshold-free' comparisons. Significance cut-offs for identifying genes shared between datasets may be too stringent and may miss concordant patterns of gene expression with potential biological relevance. A threshold-free approach gaining popularity in several research areas, including neuroscience, is Rank-Rank Hypergeometric Overlap (RRHO). Genes are ranked by their p-value and effect-size direction, and the ranked lists are compared to identify significantly overlapping genes across a continuous significance gradient rather than at a single arbitrary cut-off. We have updated the previous RRHO analysis to accurately detect overlap of genes changed in the same and in opposite directions between two datasets. Here, we use simulated and real data to show the drawbacks of the previous algorithm as well as the utility of our new algorithm; for example, we show the power of detecting discordant transcriptional patterns in the postmortem brain of subjects with psychiatric disorders. The new R package, RRHO2, offers a new, more intuitive visualization of concordant and discordant gene overlap.
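The sketch below shows the basic RRHO computation in Python, assuming each gene already carries a signed score (sign of effect × -log10 p); the step size, and RRHO2's specific handling of concordant versus discordant quadrants, are simplified here.

```python
# Minimal sketch: rank-rank hypergeometric overlap map. For each pair of
# rank thresholds (i, j), score the overlap between the top-i genes of list 1
# and the top-j genes of list 2 with a hypergeometric tail probability.
import numpy as np
from scipy.stats import hypergeom

def rrho_map(score1, score2, step=100):
    """score1, score2: signed -log10(p) per gene, in the same gene order."""
    n = len(score1)
    order1 = np.argsort(-score1)            # most up-regulated first
    order2 = np.argsort(-score2)
    ranks = range(step, n + 1, step)
    out = np.zeros((len(ranks), len(ranks)))
    for a, i in enumerate(ranks):
        top1 = set(order1[:i])
        for b, j in enumerate(ranks):
            k = len(top1.intersection(order2[:j]))
            # -log10 P(overlap >= k) under hypergeometric sampling
            out[a, b] = -hypergeom.logsf(k - 1, n, i, j) / np.log(10)
    return out
```

Heat-mapping `out` gives the familiar RRHO plot, with hot corners indicating concordant (same-direction) or discordant (opposite-direction) overlap.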
Project description: The increasing number of genome-wide assays of gene expression available from public databases presents opportunities for computational methods that facilitate hypothesis generation and biological interpretation of these data. We present an unsupervised machine learning approach, ADAGE (analysis using denoising autoencoders of gene expression), and apply it to the publicly available gene expression data compendium for Pseudomonas aeruginosa. In this approach, the machine-learned ADAGE model contained 50 nodes, which we predicted would correspond to gene expression patterns across the compendium. While no biological knowledge was used during model construction, co-operonic genes had similar weights across nodes, and genes with similar weights across nodes were significantly more likely to share KEGG pathways. By analyzing newly generated and previously published microarray and transcriptome sequencing data, the ADAGE model identified differences between strains, modeled the cellular response to low oxygen, and predicted the involvement of biological processes based on low-level gene expression differences. ADAGE compared favorably with traditional principal component analysis and independent component analysis approaches in its ability to extract validated patterns, and based on our analyses, we propose that these approaches differ in the types of patterns they preferentially identify. We provide the ADAGE model with analysis of all publicly available P. aeruginosa GeneChip experiments and open-source code for use with other species and settings. Extraction of consistent patterns across large-scale collections of genomic data using methods like ADAGE provides the opportunity to identify general principles and biologically important patterns in microbial biology. This approach will be particularly useful in less-well-studied microbial species. IMPORTANCE: The quantity and breadth of genome-scale data sets that examine RNA expression in diverse bacterial and eukaryotic species are increasing more rapidly than curated knowledge. Our ADAGE method integrates such data without requiring gene function, gene pathway, or experiment labeling, making its application to any large gene expression compendium practical. We built a Pseudomonas aeruginosa ADAGE model from a diverse set of publicly available experiments without any prespecified biological knowledge, and this model was accurate and predictive. We provide ADAGE results for the complete P. aeruginosa GeneChip compendium for use by researchers studying P. aeruginosa and source code that facilitates ADAGE's application to other species and data types.
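Architecturally, a denoising autoencoder of this kind is compact: one hidden layer whose learned weights serve as the gene signatures of each node. The PyTorch sketch below illustrates the architecture only; ADAGE's published training procedure (corruption scheme, loss, and hyperparameters) differs in detail.

```python
# Minimal sketch: an ADAGE-style denoising autoencoder with a 50-node hidden
# layer. The model reconstructs expression profiles from randomly corrupted
# copies; the encoder weights then act as per-node gene signatures.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_genes, n_nodes=50):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_genes, n_nodes), nn.Sigmoid())
        self.decode = nn.Sequential(nn.Linear(n_nodes, n_genes), nn.Sigmoid())

    def forward(self, x):
        return self.decode(self.encode(x))

def train(model, data, corruption=0.1, epochs=100, lr=1e-2):
    """data: (n_samples, n_genes) tensor with expression scaled to [0, 1]."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        mask = torch.rand_like(data) > corruption   # zero out ~10% of entries
        loss = nn.functional.mse_loss(model(data * mask), data)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Per-node gene weights: model.encode[0].weight  -> (n_nodes, n_genes)
```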
Project description: Background: Large-scale biological data sets are often contaminated by noise, which can impede accurate inferences about underlying processes. Such measurement noise can arise from endogenous biological factors like cell cycle and life-history variation, and from exogenous technical factors like sample preparation and instrument variation. Results: We describe a general method for automatically reducing noise in large-scale biological data sets. This method uses an interaction network to identify groups of correlated or anti-correlated measurements that can be combined or "filtered" to better recover an underlying biological signal. Similar to the process of denoising an image, a single network filter may be applied to an entire system, or the system may first be decomposed into distinct modules and a different filter applied to each. Applied to synthetic data with known network structure and signal, network filters accurately reduce noise across a wide range of noise levels and structures. Applied to a machine learning task of predicting changes in human protein expression in healthy and cancerous tissues, network filtering prior to training increases accuracy by up to 43% compared to using unfiltered data. Conclusions: Network filters are a general way to denoise biological data and can account for both correlation and anti-correlation between different measurements. Furthermore, we find that partitioning a network prior to filtering can significantly reduce errors in networks with heterogeneous data and correlation patterns, and this approach outperforms existing diffusion-based methods. Our results on proteomics data indicate the broad potential utility of network filters in systems biology applications.
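A network filter in this spirit can be written as a sign-aware neighborhood smoother. The sketch below is one simple instance under assumed inputs (a 0/1 adjacency matrix and a ±1 sign matrix marking anti-correlated edges); the paper's filters and its module-wise variant are richer than this.

```python
# Minimal sketch: a sign-aware network filter. Each measurement is replaced
# by a blend of itself and the (sign-weighted) average of its neighbors, so
# anti-correlated neighbors contribute with flipped sign.
import numpy as np

def network_filter(values, adjacency, signs, alpha=0.5):
    """values: (n,); adjacency: (n, n) 0/1; signs: (n, n) entries +1/-1."""
    W = adjacency * signs
    degree = np.abs(W).sum(axis=1)
    degree[degree == 0] = 1.0                  # isolated nodes pass through
    neighbor_mean = (W @ values) / degree      # sign-aware neighborhood mean
    return (1 - alpha) * values + alpha * neighbor_mean
```

Module-wise filtering would partition the network first and apply a separately tuned alpha (or filter type) within each module.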
Project description: Background: Gene Co-expression Network Analysis (GCNA) helps identify gene modules with potential biological functions and has become a popular method in bioinformatics and biomedical research. However, most current GCNA algorithms use correlation to build gene co-expression networks and identify modules of highly correlated genes. There is a need to look beyond correlation and identify gene modules using other similarity measures in order to find novel, biologically meaningful modules. Results: We propose a new generalized gene co-expression analysis algorithm via subspace clustering that can identify biologically meaningful gene co-expression modules whose genes are not all highly correlated. We use low-rank representation to construct gene co-expression networks and local maximal quasi-clique merger to identify gene co-expression modules. We applied our method to three large microarray datasets and a single-cell RNA sequencing dataset, and we demonstrate that it can identify gene modules with different biological functions than current GCNA methods find, including gene modules with prognostic value. Conclusions: The presented method takes advantage of subspace clustering to generate gene co-expression networks rather than using correlation as the similarity measure between genes. Our generalized GCNA method can provide new insights from gene expression datasets and serve as a complement to current GCNA algorithms.
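The two-stage structure (low-rank representation to build the network, then dense-module extraction) can be sketched compactly. Below, greedy modularity communities from networkx stand in for the local maximal quasi-clique merger (lmQCM) step, which is not reproduced here, and the edge-thresholding quantile is an illustrative assumption.

```python
# Minimal sketch: build a gene-gene network from a (noise-free) low-rank
# representation, keep only strong edges, and extract dense modules.
import numpy as np
import networkx as nx
from networkx.algorithms import community

def lrr_modules(X, edge_quantile=0.99):
    """X: (n_samples, n_genes); returns lists of gene indices per module."""
    _, s, vh = np.linalg.svd(X, full_matrices=False)
    r = int((s > 1e-8 * s[0]).sum())
    A = np.abs(vh[:r].T @ vh[:r])            # LRR-based gene-gene affinity
    np.fill_diagonal(A, 0.0)
    thresh = np.quantile(A, edge_quantile)   # keep only the strongest edges
    G = nx.from_numpy_array(A * (A > thresh))
    modules = community.greedy_modularity_communities(G, weight="weight")
    return [sorted(m) for m in modules]
```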
Project description: Diffusion weighted imaging (DWI) with multiple, high b-values is critical for extracting tissue microstructure measurements; however, high b-value DWI images contain high noise levels that can overwhelm the signal of interest and bias microstructural measurements. Here, we propose a simple denoising method that can be applied to any dataset, provided a low-noise, single-subject dataset is acquired using the same DWI sequence. The denoising method uses a one-dimensional convolutional neural network (1D-CNN) and deep learning to learn from a low-noise dataset, voxel by voxel; the trained model can then be applied to high-noise datasets from other subjects. We validated the 1D-CNN denoising method by first demonstrating, on simulated DWI data, that it produced DWI images more similar to the noise-free ground truth than comparable denoising methods such as MP-PCA. Using the same DWI acquisition reconstructed with two common methods (SENSE1 and sum-of-squares) to generate a pair of low-noise and high-noise datasets, we then demonstrated that 1D-CNN denoising of high-noise DWI data collected from human subjects showed promising results in three domains: DWI images, diffusion metrics, and tractography. In particular, the denoised images were more similar to a low-noise reference image of the same subject than repeated low-noise images were to each other (i.e., computational reproducibility). Finally, we demonstrated the use of the 1D-CNN method in two practical examples, reducing noise from parallel imaging and from simultaneous multi-slice acquisition. We conclude that 1D-CNN denoising is a simple, effective method for DWI images that overcomes some limitations of current state-of-the-art denoising methods, such as the need for a large number of training subjects and the need to account for the rectified noise floor.
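Because the network operates voxel-wise along the diffusion-volume axis, it can be very small. The PyTorch sketch below illustrates that setup with an assumed three-layer architecture and MSE loss; the paper's exact layer configuration and training schedule are not reproduced here.

```python
# Minimal sketch: voxel-wise 1D-CNN denoising of DWI signals. Each training
# example is one voxel's signal across diffusion volumes, paired between a
# high-noise and a low-noise reconstruction of the same acquisition.
import torch
import torch.nn as nn

class DWIDenoiser1D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):          # x: (n_voxels, 1, n_volumes)
        return self.net(x)

def train_step(model, noisy, clean, opt):
    """noisy, clean: (n_voxels, 1, n_volumes) paired voxel signals."""
    loss = nn.functional.mse_loss(model(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# opt = torch.optim.Adam(model.parameters(), lr=1e-3); the trained model is
# then applied voxel-wise to high-noise data from other subjects.
```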