The HUDSEN Atlas: a three-dimensional (3D) spatial framework for studying gene expression in the developing human brain.
ABSTRACT: We are developing a three-dimensional (3D) atlas of the human embryonic brain, using anatomical landmarks and gene expression data to define major subdivisions through 12 stages of development [Carnegie Stages (CS) 12-23; approximately 26-56 days post conception (dpc)]. Virtual 3D anatomical models are generated from intact specimens using optical projection tomography (OPT). Using MAPaint software, selected gene expression data, gathered by standard methods of in situ hybridization and immunohistochemistry, are mapped to a representative 3D model for each chosen Carnegie stage. In these models, anatomical domains, defined on the basis of morphological landmarks and comparative knowledge of expression patterns in vertebrates, are linked to a developmental neuroanatomic ontology. Human expression patterns of genes with characteristic expression across vertebrates (e.g. PAX6, GAD65 and OLIG2) are being used to confirm and/or refine the human anatomical domain boundaries. We have also developed interpolation software that digitally generates a full 3D domain from partial data. Currently, the 3D models and a preliminary set of anatomical domains and ontology are available on the atlas pages, along with expression data for approximately 100 genes, in the HUDSEN Human Spatial Gene Expression Database (http://www.hudsen.org). Ultimately, full 3D expression data will be used to define a more detailed set of anatomical domains linked to a more advanced anatomy ontology, all available online, contributing to the long-term goal of the atlas: to help maximize the effective use and dissemination of data wherever they are generated.
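The interpolation of a full 3D domain from partial section data can be illustrated in outline. The sketch below is not the HUDSEN software's actual algorithm (which is not specified here); it uses shape-based interpolation via signed distance maps, a standard approach for this task, and all function names are hypothetical:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the domain, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_domain(mask_a, mask_b, t):
    """Shape-based interpolation between two annotated sections: blend the
    signed distance maps and re-threshold to get the intermediate domain."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0

# two partially overlapping square domains on adjacent annotated sections
a = np.zeros((20, 20), bool); a[4:12, 4:12] = True
b = np.zeros((20, 20), bool); b[8:16, 8:16] = True
mid = interpolate_domain(a, b, 0.5)  # domain on the section halfway between
```

Blending distance maps rather than the binary masks themselves yields a smoothly morphing intermediate shape instead of a simple union or intersection.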
Project description: As development proceeds, the human embryo attains an ever more complex three-dimensional (3D) structure. Analyzing the gene expression patterns that underlie these changes, and interpreting their significance, depends on identifying the anatomical structures to which they map and following these patterns in developing 3D structures over time. The difficulty of this task increases greatly as more gene expression patterns are added, particularly in organs with complex 3D structure such as the brain. Optical projection tomography (OPT) is a recently developed technology for rapidly generating digital 3D models of intact specimens. We have assessed the resolution of unstained neuronal structures within a Carnegie Stage (CS) 17 OPT model and tested its use as a framework on which anatomical structures can be defined and gene expression data mapped. Resolution of the OPT models was assessed by comparing digital sections with physical sections stained either with haematoxylin and eosin (H&E) or by immunocytochemistry for GAP43 or PAX6 to identify specific anatomical features. Despite the 3D models being of unstained tissue, peripheral nervous system structures from the trigeminal ganglion (approximately 300 µm by 150 µm) to the rootlets of cranial nerve XII (approximately 20 µm in diameter) were clearly identifiable, as were structures in the developing neural tube such as the zona limitans intrathalamica (core approximately 30 µm thick). Fourteen anatomical domains have been identified and visualised within the CS17 model. Two 3D gene expression domains, known to be defined by Pax6 expression in the mouse, were clearly visible when PAX6 data from 2D sections were mapped to the CS17 model.
The feasibility of applying OPT to all stages from CS12 to CS23, which encompass the major period of organogenesis for the developing human central nervous system, was successfully demonstrated. In the CS17 model, considerable detail is visible within the developing nervous system at a minimum resolution of approximately 20 µm, and 3D anatomical and gene expression domains can be defined and visualised successfully. The OPT models and the accompanying technologies for manipulating them provide a powerful approach to visualising and analysing gene expression and morphology during early human brain development.
Project description: Modern high-throughput, brain-wide profiling techniques for cells and their morphology, connectivity, and other properties make the use of reference atlases with 3D coordinate frameworks essential. However, the anatomical location of observations made in microscopic section images from rodent brains is typically determined by comparison with 2D anatomical reference atlases. A major challenge is that microscopic sections are often cut with orientations deviating from the standard planes used in the reference atlases, resulting in inaccuracies and a need for tedious correction steps. Overall, efficient tools for registering large series of section images to reference atlases are not yet widely available. Here we present QuickNII, a stand-alone software tool for semi-automated affine spatial registration of section image data to a 3D reference atlas coordinate framework. A key feature of the tool is the capability to generate user-defined cut planes through the reference atlas, matching the orientation of the cut plane of the section image data. The reference atlas is transformed to match anatomical landmarks in the corresponding experimental images. In this way, the spatial relationship between experimental image and atlas is defined without introducing distortions in the original experimental images. After anchoring a limited number of sections containing key landmarks, transformations are propagated across the entire series of section images to reduce the number of manual steps required. With coordinates assigned to the experimental images, further analysis of the distribution of features extracted from the images is greatly facilitated.
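The core of the cut-plane idea, resampling the 3D atlas volume along a user-defined oblique plane, can be sketched as follows. This is a minimal illustration, not QuickNII's implementation; the function name and the toy volume are hypothetical:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_oblique_slice(volume, origin, u, v, shape):
    """Sample a 2D slice from `volume` on the plane through `origin`
    spanned by direction vectors u and v (all in voxel coordinates)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    pts = (np.asarray(origin, float)[:, None, None]
           + u[:, None, None] * rows + v[:, None, None] * cols)
    return map_coordinates(volume, pts, order=1)  # trilinear interpolation

# toy "atlas": intensity increases along the z axis
atlas = np.tile(np.arange(32, dtype=float), (32, 32, 1))
# a cut plane tilted slightly out of the standard orientation
slice_ = extract_oblique_slice(atlas, origin=(0, 0, 8),
                               u=(1, 0, 0.1), v=(0, 1, 0), shape=(16, 16))
```

Because the atlas is resampled along the section's own plane, the experimental image never needs to be warped, which matches the distortion-free property described above.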
Project description: The three-dimensional (3D) structure of neural circuits is commonly studied by reconstructing individual or small groups of neurons in separate preparations. Investigating structural organization principles or quantifying dendritic and axonal innervation thus requires integrating many reconstructed morphologies into a common reference frame. Here we present a standardized 3D model of the rat vibrissal cortex and introduce an automated registration tool that allows precise placement of single-neuron reconstructions. We (1) developed an automated image-processing pipeline to reconstruct 3D anatomical landmarks, i.e., the barrels in Layer 4, the pia and white matter surfaces, and the blood vessel pattern, from high-resolution images, (2) quantified these landmarks in 12 different rats, (3) generated an average 3D model of the vibrissal cortex and (4) used rigid transformations and stepwise linear scaling to register 94 neuron morphologies, reconstructed from in vivo stainings, to the standardized cortex model. We find that anatomical landmarks vary substantially across the vibrissal cortex within an individual rat. In contrast, the 3D layout of the entire vibrissal cortex remains remarkably preserved across animals. This allows precise registration of individual neuron reconstructions with approximately 30 µm accuracy. Our approach could be used to reconstruct and standardize other anatomically defined brain areas and may ultimately lead to a precise digital reference atlas of the rat brain.
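The registration step (4), a rigid transformation followed by linear scaling, can be sketched for a single morphology. This is a generic illustration under the assumption of per-axis scale factors, not the authors' actual pipeline:

```python
import numpy as np

def register_morphology(points, R, t, scale):
    """Map an (N, 3) array of morphology points into the reference frame:
    rigid transform (rotation R, translation t), then per-axis scaling."""
    return (points @ R.T + t) * scale

# example: rotate 90 degrees about z, no translation, stretch the y axis
theta = np.deg2rad(90)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
pts = np.array([[1.0, 0.0, 0.0]])
out = register_morphology(pts, R, np.zeros(3), np.array([1.0, 2.0, 1.0]))
```

Applying the rigid part before scaling preserves the neuron's internal geometry up to the stepwise stretch that accounts for inter-animal size differences.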
Project description: One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network training. To address this problem, we present a two-stage, task-oriented deep learning method that detects large-scale anatomical landmarks simultaneously in real time using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. To alleviate the problem of limited training data, in the first stage we propose a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage we develop another CNN model, which includes (a) a fully convolutional network sharing the same architecture and network weights as the CNN used in the first stage and (b) several extra layers to jointly predict the coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale sets of landmarks (e.g., thousands) in real time. We conducted experiments detecting 1200 brain landmarks in the 3D T1-weighted magnetic resonance images of 700 subjects, and 7 prostate landmarks in the 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method in terms of both accuracy and efficiency of anatomical landmark detection.
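The first-stage idea, regressing from each local patch its displacement to a target landmark, can be sketched at the data-preparation level (network training omitted; all names and sizes are hypothetical):

```python
import numpy as np

def make_training_pairs(image, landmark, patch_size, n, rng):
    """Sample n random patches from a 3D image; the regression target for
    each patch is the displacement from the patch centre to the landmark."""
    half = patch_size // 2
    patches, targets = [], []
    for _ in range(n):
        # random patch centre, kept far enough from the borders
        c = np.array([rng.integers(half, s - half) for s in image.shape])
        sl = tuple(slice(ci - half, ci + half + 1) for ci in c)
        patches.append(image[sl])
        targets.append(np.asarray(landmark) - c)
    return np.stack(patches), np.stack(targets)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 32))
patches, targets = make_training_pairs(img, landmark=(16, 16, 16),
                                       patch_size=5, n=8, rng=rng)
```

Because every patch yields a training pair, even a small number of annotated images produces the "millions of image patches" the abstract describes, which is how the method copes with limited data.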
Project description: Effective visualization is central to the exploration and comprehension of brain imaging data. While MRI data are acquired in three-dimensional space, the methods for visualizing such data have rarely taken advantage of three-dimensional stereoscopic technologies. We present here results of stereoscopic visualization of clinical data, as well as an atlas of whole-brain functional connectivity. In comparison with traditional 3D rendering techniques, we demonstrate the utility of stereoscopic visualizations to provide an intuitive description of the exact location and the relative sizes of various brain landmarks, structures and lesions. In the case of resting state fMRI, stereoscopic 3D visualization facilitated comprehension of the anatomical position of complex large-scale functional connectivity patterns. Overall, stereoscopic visualization improves the intuitive visual comprehension of image contents, and brings increased dimensionality to visualization of traditional MRI data, as well as patterns of functional connectivity.
Project description: Congenital malformations of the facial bones significantly impact the overall appearance of the face. Establishing a correlation between gene expression and the morphogenesis of craniofacial structures may lead to new discoveries about the molecular mechanisms of craniofacial development. Thus, in the present investigation, we will generate gene expression profiles of different facial bones at different time intervals over a period of 5 years to establish their roles in regulating craniofacial development. We aim to perform global gene expression profiling of mandible and maxilla development and to integrate these datasets with cell lineage and quantitative 3D dynamic imaging analyses. In collaboration with the ontology group within the FaceBase consortium, we will define anatomical landmarks and morphometric parameters of the developing mandible and maxilla.
Project description: The subesophageal zone (SEZ) of the Drosophila brain houses the circuitry underlying feeding behavior and is involved in many other aspects of sensory processing and locomotor control. Formed by the merging of four neuromeres, the internal architecture of the SEZ is best understood by identifying segmentally reiterated landmarks emerging in the embryo and larva, and following the gradual changes by which these landmarks become integrated into the mature SEZ during metamorphosis. In previous work, the system of longitudinal fibers (connectives) and transverse axons (commissures) has been used as a scaffold providing internal landmarks for the neuromeres of the larval ventral nerve cord. We have extended the analysis of this scaffold to the SEZ and, in addition, reconstructed the tracts formed by lineages and nerves in relation to the connectives and commissures. As a result, we establish reliable criteria defining the boundaries between the four neuromeres of the SEZ (tritocerebrum and the mandibular, maxillary and labial neuromeres) at all stages of development. Fascicles and lineage tracts also demarcate seven columnar neuropil domains (ventromedial, ventrolateral, centromedial, central, centrolateral, dorsomedial, dorsolateral) identifiable throughout development. These anatomical subdivisions, presented in the form of an atlas including confocal sections and 3D digital models for the larval, pupal and adult stages, allowed us to describe the morphogenetic changes shaping the adult SEZ. Finally, we mapped MARCM-labeled clones of all secondary lineages of the SEZ to the newly established neuropil subdivisions. Our work will facilitate future studies of the function and comparative anatomy of the SEZ.
Project description: Chick embryos are good models for vertebrate development owing to their accessibility and manipulability. Recent large increases in available genomic data, from both whole-genome sequencing and EST projects, provide opportunities for identifying many new developmentally important chicken genes. Traditional methods of documenting when and where specific genes are expressed in embryos, using whole-mount and section in-situ hybridisation, do not readily allow appreciation of three-dimensional (3D) patterns of expression, but this can be accomplished with the recently developed microscopy technique of optical projection tomography (OPT). Here we show that OPT data on the developing chick wing from different labs can be reliably integrated into a common database, that OPT is efficient at capturing 3D gene expression domains, and that such domains can be meaningfully compared. Novel protocols are used to compare the 3D expression domains of 7 genes known to be involved in chick wing development. This reveals previously unappreciated relationships and demonstrates the potential, using modern genomic resources, for building a large-scale 3D atlas of gene expression. Such an atlas could be extended to include other types of data, such as fate maps, and the approach is also more generally applicable to embryos, organs and tissues.
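One simple way to compare two mapped 3D expression domains quantitatively (not necessarily the protocol used in this work) is a voxelwise Jaccard overlap of their binary masks:

```python
import numpy as np

def domain_overlap(a, b):
    """Jaccard index between two binary 3D expression domains:
    |intersection| / |union| of their voxel sets."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# two hypothetical cuboid domains that partially overlap
a = np.zeros((10, 10, 10), bool); a[2:6, 2:6, 2:6] = True  # 64 voxels
b = np.zeros((10, 10, 10), bool); b[4:8, 2:6, 2:6] = True  # 64 voxels
```

A score of 1 means identical domains and 0 means no shared voxels, giving a single number for "how similar" two genes' expression territories are once both are mapped into the common framework.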
Project description: Highly differentiated brain structures with distinctly different phenotypes are closely correlated with unique combinations of gene expression patterns. Using the genome-wide in situ hybridization image dataset released by the Allen Mouse Brain Atlas, we present a data-driven method based on dictionary learning and sparse coding. Our results show that sparse coding can elucidate patterns of transcriptome organization in the mouse brain. A collection of components obtained by sparse coding displays robust region-specific molecular signatures corresponding to the canonical neuroanatomical subdivisions, including the fiber tracts and ventricular systems. Other components reveal finer anatomical delineation of domains previously considered homogeneous. We have also built an open-access informatics portal that contains the details of each component, along with its ontology and expressed genes. This portal allows intuitive visualization, interpretation and exploration of the transcriptome architecture of the mouse brain.
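Sparse coding with a fixed dictionary can be sketched with iterative shrinkage-thresholding (ISTA); this is a generic illustration of the technique, not the specific pipeline used for the Allen data:

```python
import numpy as np

def ista_sparse_code(X, D, lam=0.05, n_iter=300):
    """Encode the columns of X as sparse combinations of dictionary atoms
    (columns of D) by ISTA on: min_A 0.5*||X - D A||^2 + lam*||A||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = D.T @ (D @ A - X)                # gradient of the quadratic term
        A = A - G / L                        # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
    return A

# toy dictionary of three atoms; the signal equals atom 0 exactly
D = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
X = np.array([[1.0], [0.0]])
A = ista_sparse_code(X, D)
```

The L1 penalty drives most coefficients to exactly zero, so each voxel's expression profile is explained by only a few components, which is what yields the region-specific signatures described above.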
Project description: The gray short-tailed opossum (Monodelphis domestica) is a small marsupial gaining recognition as a laboratory animal in biomedical research. Despite numerous studies of opossum neuroanatomy, a consistent and comprehensive neuroanatomical reference for this species has been missing. Here we present the first three-dimensional, multimodal atlas of the Monodelphis opossum brain. It is based on four complementary imaging modalities: high-resolution ex vivo magnetic resonance (MR) images, micro-computed tomography scans of the cranium, block-face images taken during sectioning, and series of sections stained with the Nissl method and for myelinated fibers. The individual imaging modalities were reconstructed into three-dimensional form and then registered to the MR image by means of affine and deformable registration routines. Based on a superimposition of the 3D images, 113 anatomical structures were demarcated and the volumes of the individual regions measured. The stereotaxic coordinate system was defined using a set of cranial landmarks (the interaural line, bregma and lambda), which allows any location within the brain to be expressed easily with respect to the skull. The atlas is released under a Creative Commons license and is available through various digital atlasing web services.
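Expressing a brain location relative to the cranial landmarks can be sketched as a projection onto the bregma-lambda (anteroposterior) axis with bregma as origin. This is a simplified illustration of the general idea; the atlas's actual coordinate definition may differ, and the landmark positions below are hypothetical:

```python
import numpy as np

def ap_coordinate(point, bregma, lam):
    """Anteroposterior coordinate of `point`: its projection onto the unit
    vector from lambda to bregma, with bregma as origin (positive = anterior)."""
    ap_axis = np.asarray(bregma, float) - np.asarray(lam, float)
    ap_axis /= np.linalg.norm(ap_axis)
    return float((np.asarray(point, float) - np.asarray(bregma, float)) @ ap_axis)

# hypothetical landmark positions (mm): bregma at origin, lambda 4 mm posterior
bregma = np.array([0.0, 0.0, 0.0])
lam = np.array([-4.0, 0.0, 0.0])
ap = ap_coordinate([2.0, 1.0, 0.0], bregma, lam)  # 2 mm anterior to bregma
```

Analogous projections onto mediolateral and dorsoventral axes complete a skull-referenced frame, so any brain location can be reported relative to the landmarks rather than to a particular image volume.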