A visual–omics foundation model to bridge histopathology images with transcriptomics
ABSTRACT: Artificial intelligence has significantly advanced computational biology. Recent developments in omics technologies, such as single-cell RNA sequencing (scRNA-seq) and spatial transcriptomics (ST), provide detailed genomic data alongside tissue histology. However, current computational models typically focus on either omics-based or image-based analysis and do not integrate the two. To address this, we developed OmiCLIP, a visual–omics foundation model that links hematoxylin and eosin (H&E) images and transcriptomics using tissue patches from Visium data. For the transcriptomics modality, we created 'sentences' by concatenating the symbols of the top-expressed genes in each tissue patch. We curated a dataset of 2.2 million paired tissue images and transcriptomic profiles across 32 organs to train OmiCLIP to jointly represent histology and transcriptomics. Building on OmiCLIP, we created the Loki platform, which offers five key functions: tissue alignment, tissue annotation based on bulk RNA-seq or marker genes, cell type decomposition, image–transcriptomics retrieval, and ST gene expression prediction from H&E images. Compared with 22 state-of-the-art models on 5 simulated, 19 public, and 4 in-house experimental datasets, Loki demonstrated consistent accuracy and robustness across all tasks.
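The gene 'sentence' construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `expression_to_sentence` and the choice of `top_k` are hypothetical, and the real pipeline operates on Visium patch-level expression profiles rather than the toy values shown here.

```python
import numpy as np

def expression_to_sentence(gene_symbols, counts, top_k=50):
    # Rank genes by expression (highest first) and join the top-k
    # gene symbols into a whitespace-separated text "sentence",
    # which a CLIP-style text encoder can then tokenize.
    order = np.argsort(counts)[::-1][:top_k]
    return " ".join(gene_symbols[i] for i in order)

# Toy example: four genes measured in one tissue patch.
symbols = ["CD3E", "KRT8", "ACTB", "EPCAM"]
counts = [5, 120, 300, 80]
print(expression_to_sentence(symbols, counts, top_k=3))  # -> ACTB KRT8 EPCAM
```

Representing expression as ranked gene symbols lets a standard text encoder consume transcriptomic data without any change to the contrastive (CLIP-style) training setup.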
ORGANISM(S): Homo sapiens
PROVIDER: GSE293199 | GEO | 2025/04/01
REPOSITORIES: GEO