Project description:The adoption of low-dose computed tomography (LDCT) as the standard of care for lung cancer screening has decreased mortality rates in high-risk populations while increasing the false-positive rate. Convolutional neural networks (CNNs) provide an ideal opportunity to improve malignant nodule detection; however, owing to the lack of large adjudicated medical datasets, these networks suffer from poor generalizability and overfitting. Using computed tomography images of the thorax from the National Lung Screening Trial (NLST), we compared discrete wavelet transforms (DWTs) against the convolutional layers of a CNN to evaluate their ability to classify suspicious lung nodules as either malignant or benign. We explored the use of the DWT as an alternative to the convolutional operations within CNNs in order to decrease the number of parameters estimated during training and reduce the risk of overfitting. We found that the multi-level DWT performed better than convolutional layers when multiple kernel resolutions were utilized, yielding areas under the receiver operating characteristic curve (AUC) of 94% and 92%, respectively. Furthermore, the multi-level DWT reduced the number of network parameters requiring estimation compared with a CNN and had a substantially faster convergence rate. We conclude that using multi-level DWT decomposition in place of early convolutional layers within a deep neural network may improve image classification performance in data-limited domains.
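As a rough illustration of the approach described above, the sketch below uses a parameter-free multi-level 2D DWT (via PyWavelets) in place of early convolutional layers, so that only a small classifier head is trained; the patch size, wavelet choice, and head architecture are illustrative assumptions rather than the NLST model.

```python
# Minimal sketch: a fixed multi-level 2D DWT front end replacing early convolutional
# layers. The wavelet coefficients carry no trainable weights, so only the small
# classifier head is learned. Shapes and the toy head are illustrative only.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_features(image, wavelet="db2", level=3):
    """Flatten multi-level 2D DWT coefficients of an image patch into one vector."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    parts = [coeffs[0].ravel()]                   # approximation band at the deepest level
    for detail in coeffs[1:]:                     # (horizontal, vertical, diagonal) per level
        parts.extend(band.ravel() for band in detail)
    return np.concatenate(parts).astype(np.float32)

# Hypothetical 64x64 nodule patch; the feature length depends on patch size and wavelet.
patch = np.random.rand(64, 64).astype(np.float32)
features = torch.from_numpy(dwt_features(patch))

# Only this small head is trained; the DWT "layers" above contribute no parameters.
classifier = nn.Sequential(
    nn.Linear(features.numel(), 64),
    nn.ReLU(),
    nn.Linear(64, 2),                             # benign vs. malignant logits
)
print(classifier(features).shape)                 # torch.Size([2])
```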
Project description:Object recognition improves with training. This training effect only partially generalizes to untrained images of the trained objects (new exemplars, orientations, …). The aim of this study was to investigate whether, and to what extent, learning transfer improves when participants are trained with more exemplars of an object. Participants were trained to recognize two sets of stimuli using a backward masking paradigm. During training with the first set, only one exemplar of each object was presented; the second set was trained using four exemplars of each object. After 3 days of training, participants were tested on all the trained exemplars and on a completely new exemplar of the same objects. In addition, recognition performance was compared to that for a set of completely new objects. For the objects for which four exemplars were used during training, participants showed more generalization toward new exemplars than when they were trained with only one exemplar. Part of the generalization effect extended to completely new objects. In conclusion, more variation during training leads to more generalization toward new visual stimuli.
Project description:Prior knowledge structures (or schemas) confer multiple behavioral benefits. First, when we encounter information that fits with prior knowledge structures, this information is generally better learned and remembered. Second, prior knowledge can support prospective planning. In humans, memory enhancements related to prior knowledge have been suggested to be supported, in part, by computations in prefrontal and medial temporal lobe (MTL) cortex. Animal studies further implicate a role for the hippocampus in schema-based facilitation and in the emergence of prospective planning signals following new learning. To date, convergence across the schema-enhanced learning and memory literature may be constrained by the predominant use of hippocampally dependent spatial navigation paradigms in rodents and non-spatial list-based learning paradigms in humans. Here, we targeted this missing link by examining the effects of prior knowledge on human navigational learning in a hippocampally dependent virtual navigation paradigm that closely relates to foundational studies in rodents. Outside the scanner, participants overlearned Old Paired Associates (OPA; item-location associations) in multiple spatial environments; they subsequently learned New Paired Associates (NPA; new item-location associations) in the same environments while undergoing fMRI. We hypothesized that greater precision of OPA knowledge would positively affect NPA learning, and that the hippocampus would be instrumental in translating this new learning into prospective planning of navigational paths to NPA locations. Behavioral results revealed that OPA knowledge predicted one-shot learning of NPA locations, and neural results indicated that one-shot learning was predicted by the rapid emergence of performance-predictive prospective planning signals in the hippocampus. Prospective memory relationships were not significant in parahippocampal cortex and were marginally dissociable from the primary hippocampal effect. Collectively, these results extend understanding of how schemas impact learning and performance, showing that the precision of prior spatial knowledge is important for future learning in humans and that the hippocampus is involved in translating this knowledge into new goal-directed behaviors.
Project description:A benchmark set of bottom-up proteomics data for training deep learning networks. It comprises data from 51 organisms and nearly 1 million peptides.
Project description:Purpose: To create an unsupervised cross-domain segmentation algorithm for segmenting intraretinal fluid and retinal layers on normal and pathologic macular OCT images from different manufacturers and camera devices. Design: We sought to use generative adversarial networks (GANs) to generalize a segmentation model trained on one OCT device to segment B-scans obtained from a different OCT device manufacturer in a fully unsupervised approach, without labeled data from the latter manufacturer. Participants: A total of 732 OCT B-scans from 4 different OCT devices (Heidelberg Spectralis, Topcon 1000, Maestro2, and Zeiss Plex Elite 9000). Methods: We developed an unsupervised GAN model, GANSeg, to segment 7 retinal layers and intraretinal fluid in Topcon 1000 OCT images (domain B) while having access only to labeled data on Heidelberg Spectralis images (domain A). GANSeg was unsupervised because it had access only to 110 labeled Heidelberg OCTs and 556 raw, unlabeled Topcon 1000 OCTs. To validate GANSeg segmentations, 3 masked graders independently segmented 60 OCTs from an external Topcon 1000 test dataset by hand. To test the limits of GANSeg, the graders also manually segmented 3 OCTs from the Zeiss Plex Elite 9000 and Topcon Maestro2. A U-Net trained on the same labeled Heidelberg images served as the baseline. The GANSeg repository with labeled annotations is at https://github.com/uw-biomedical-ml/ganseg. Main outcome measures: Dice scores comparing the segmentation results of GANSeg and the U-Net model with the manually segmented images. Results: Although GANSeg and the U-Net achieved Dice scores comparable to human experts on the labeled Heidelberg test dataset, only GANSeg achieved comparable Dice scores on the Topcon 1000 test dataset, with the best performance for the ganglion cell layer plus inner plexiform layer (90%; 95% confidence interval [CI], 68%-96%) and the worst for intraretinal fluid (58%; 95% CI, 18%-89%), the latter statistically similar to the human graders (79%; 95% CI, 43%-94%). GANSeg significantly outperformed the U-Net model. Moreover, GANSeg generalized to both the Zeiss and Topcon Maestro2 swept-source OCT domains, which it had never encountered before. Conclusions: GANSeg enables the transfer of supervised deep learning algorithms across OCT devices without labeled data, thereby greatly expanding the applicability of deep learning algorithms.
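For readers unfamiliar with this class of methods, the sketch below shows one generic way to couple a supervised segmentation loss on a labeled source domain with an adversarial loss on an unlabeled target domain. It is a toy PyTorch illustration of unsupervised adversarial adaptation, not the GANSeg architecture (see the linked repository for that); the network sizes, class count, and loss weighting are assumptions.

```python
# Toy sketch of unsupervised adversarial domain adaptation for segmentation:
# supervised loss on labeled domain-A scans plus an adversarial loss that pushes
# domain-B predictions to resemble domain-A predictions. Not the GANSeg model;
# layer sizes, class count, and loss weight are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 9  # e.g., 7 retinal layers + intraretinal fluid + background (assumed)

segmenter = nn.Sequential(          # stand-in for a real segmentation network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_CLASSES, 1),
)
discriminator = nn.Sequential(      # judges whether a softmax map comes from domain A
    nn.Conv2d(N_CLASSES, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)
opt_seg = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_dis = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(xa, ya, xb):
    """xa: labeled domain-A B-scans, ya: integer masks, xb: unlabeled domain-B B-scans."""
    # 1) Segmenter: supervised loss on domain A + adversarial term on domain B.
    opt_seg.zero_grad()
    pa, pb = segmenter(xa), segmenter(xb)
    d_on_b = discriminator(F.softmax(pb, dim=1))
    seg_loss = F.cross_entropy(pa, ya)
    adv_loss = F.binary_cross_entropy_with_logits(d_on_b, torch.ones_like(d_on_b))
    (seg_loss + 0.01 * adv_loss).backward()
    opt_seg.step()

    # 2) Discriminator: domain-A predictions labeled real, domain-B predictions fake.
    opt_dis.zero_grad()
    da = discriminator(F.softmax(pa.detach(), dim=1))
    db = discriminator(F.softmax(pb.detach(), dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(da, torch.ones_like(da))
              + F.binary_cross_entropy_with_logits(db, torch.zeros_like(db)))
    d_loss.backward()
    opt_dis.step()
    return seg_loss.item(), d_loss.item()

# Toy tensors standing in for OCT B-scans: (batch, channel, height, width).
xa = torch.randn(2, 1, 64, 64)
ya = torch.randint(0, N_CLASSES, (2, 64, 64))
xb = torch.randn(2, 1, 64, 64)
print(train_step(xa, ya, xb))
```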
Project description:Hematoxylin and eosin (H&E) stained slides are widely used in disease diagnosis. Remarkable advances in deep learning have made it possible to detect complex molecular patterns in these histopathology slides, suggesting that automated approaches could help inform pathologists' decisions. Multiple instance learning (MIL) algorithms have shown promise in this context, outperforming transfer learning (TL) methods for various tasks, but their implementation and use remain complex. We introduce HistoMIL, a Python package designed to streamline the implementation, training, and inference of MIL-based algorithms for computational pathologists and biomedical researchers. It integrates a self-supervised learning module for feature encoding and a full pipeline encompassing TL and three MIL algorithms: ABMIL, DSMIL, and TransMIL. The PyTorch Lightning framework enables effortless customization and algorithm implementation. We illustrate HistoMIL's capabilities by building predictive models for 2,487 cancer hallmark genes on breast cancer histology slides, achieving AUROC performance of up to 85%.
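To give a concrete sense of what an attention-based MIL model does with a bag of slide patches, here is a minimal sketch in the spirit of ABMIL (Ilse et al., 2018); it is not HistoMIL's actual API, and the feature dimension, patch count, and class count are placeholder assumptions.

```python
# Minimal attention-based MIL pooling: a slide is a "bag" of patch feature vectors,
# an attention head weights the patches, and a linear classifier scores the
# attention-pooled slide embedding. Illustrative sketch, not the HistoMIL API.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),          # one attention logit per patch
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                    # bag: (n_patches, feat_dim)
        weights = torch.softmax(self.attention(bag), dim=0)   # (n_patches, 1)
        slide_embedding = (weights * bag).sum(dim=0)          # (feat_dim,)
        return self.classifier(slide_embedding), weights

# A slide represented by 1,000 patch embeddings from any feature encoder
# (e.g., a self-supervised backbone); dimensions here are placeholders.
bag = torch.randn(1000, 512)
model = AttentionMIL()
logits, attn = model(bag)
print(logits.shape, attn.shape)  # torch.Size([2]) torch.Size([1000, 1])
```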
Project description:A theoretical framework for the function of the medial temporal lobe system in memory defines differential contributions of the hippocampal subregions with regard to pattern-recognition retrieval processes and the encoding of new information. To investigate molecular programs of relevance, we designed a spatial learning protocol that engages a pattern separation function to encode new information. After background training, two groups of animals experienced the same new training in a novel environment; however, only one group was provided spatial information and demonstrated spatial memory in a retention test. Global transcriptional analysis of the microdissected subregions of the hippocampus revealed a CA3 expression pattern that was sufficient to clearly segregate spatial learning animals from controls. Individual gene and functional group analyses anchored these results to previous work in neural plasticity. Among a multitude of expression changes, increases in camk2a, rasgrp1, and nlgn1 were confirmed by in situ hybridization. Furthermore, siRNA inhibition of nlgn1 within the CA3 subregion impaired spatial memory performance, pointing to mechanisms of synaptic remodeling as a basis for the rapid encoding of new information into long-term memory. Experiment Overall Design: RNA samples from animals subjected to a spatial learning paradigm were compared to controls using Affymetrix RAE230A chips. An N of 7 was used in each of the two experimental conditions.
Project description:The hippocampal cognitive map, a neuronal representation of the spatial environment, has been widely discussed in the computational neuroscience literature for decades. However, more recent studies point out that the hippocampus plays a major role in producing yet another cognitive framework, the memory space, which incorporates not only spatial but also non-spatial memories. Unlike cognitive maps, memory spaces, broadly understood as "networks of interconnections among the representations of events," have not yet been studied from a theoretical perspective. Here we propose a mathematical approach that models memory spaces constructively, as epiphenomena of neuronal spiking activity, and thus interlinks several important notions of cognitive neurophysiology. First, we suggest that memory spaces have a topological nature, a hypothesis that allows treating the spatial and non-spatial aspects of hippocampal function on an equal footing. We then model hippocampal memory spaces in different environments and demonstrate that the resulting constructions naturally incorporate the corresponding cognitive maps and provide a wider context for interpreting spatial information. Lastly, we propose a formal description of the memory consolidation process that connects memory spaces to Morris' cognitive schemas, heuristic representations of acquired memories used to explain the dynamics of learning and memory consolidation in a given environment. The proposed approach allows these constructs to be evaluated as the most compact representations of the memory space's structure.
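As a purely illustrative sketch of how a topological structure can be read out of spiking activity (not the authors' exact construction), the snippet below builds a simple coactivity complex in which every group of cells firing within the same time window contributes a simplex; the window length, spike trains, and dimension cap are toy assumptions.

```python
# Illustrative sketch: build a "coactivity complex" from spike trains, where every
# group of cells that fires together inside a time window contributes a simplex.
# This is one generic way to give a spiking population a topological structure;
# it is not the specific construction proposed in the project above.
from itertools import combinations

import numpy as np

def coactivity_complex(spike_times, t_max, window=0.25, max_dim=3):
    """spike_times: dict cell_id -> array of spike times (seconds, assumed)."""
    simplices = set()
    for t in np.arange(0.0, t_max, window):
        # Cells active in this time window.
        active = sorted(c for c, s in spike_times.items()
                        if np.any((s >= t) & (s < t + window)))
        # Every subset of coactive cells (up to max_dim + 1 cells) is a simplex.
        for k in range(1, min(len(active), max_dim + 1) + 1):
            simplices.update(map(frozenset, combinations(active, k)))
    return simplices

# Toy spike trains for five hypothetical cells recorded over 10 seconds.
rng = np.random.default_rng(0)
spikes = {c: np.sort(rng.uniform(0, 10, size=30)) for c in range(5)}
complex_ = coactivity_complex(spikes, t_max=10.0)
print(f"{len(complex_)} simplices, max dimension "
      f"{max(len(s) for s in complex_) - 1}")
```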
Project description:Spinal Wistar Hannover rats trained to step bipedally on a treadmill with manual assistance of the hindlimbs have been shown to improve their stepping ability. Given this improvement in motor performance with practice, and the ability of the spinal cord circuitry to learn to step more effectively when the mode of training allows variability, we examined why this intrinsic variability is an important factor. Intramuscular EMG electrodes were implanted to monitor and compare the patterns of activation of flexor (tibialis anterior) and extensor (soleus) muscles associated with fixed-trajectory and assist-as-needed (AAN) step-training paradigms in rats after a complete midthoracic (T8-T9) spinal cord transection. Both methods involved a robotic arm attached to each ankle of the rat to provide guidance during stepping. The fixed trajectory allowed little variance between steps, whereas the AAN paradigm provided guidance only when the ankle deviated a specified distance from the programmed trajectory. We hypothesized that an AAN paradigm would impose fewer disruptions of the control strategies intrinsic to the spinal locomotor circuitry than a fixed trajectory. Intrathecal injections of quipazine were given to each rat to facilitate stepping. Analysis confirmed that there were more corrections within a step cycle under the fixed-trajectory paradigm and, consequently, less coactivation of agonist and antagonist muscles during the AAN paradigm. These data suggest that some critical level of variation in the specific circuitry activated, and in the resulting kinematics, reflects a fundamental feature of the neural control mechanisms even in a highly repetitive motor task.
Project description:Why are spatial metaphors, like the use of "high" to describe a musical pitch, so common? This study tested one hundred and fifty-four 3- to 5-year-old English-learning children on their ability to learn a novel adjective in the domain of space or pitch and to extend this adjective to the untrained dimension. Children were more proficient at learning the word when it described a spatial attribute compared to pitch. However, once children learned the word, they extended it to the untrained dimension without feedback. Thus, children leveraged preexisting associations between space and pitch to spontaneously understand new metaphors. These results suggest that spatial metaphors may be common across languages in part because they scaffold children's acquisition of word meanings that are otherwise difficult to learn.