Project description: Many natural and man-made systems are prone to critical transitions: abrupt and potentially devastating changes in dynamics. Deep learning classifiers can provide an early warning signal for critical transitions by learning generic features of bifurcations from large simulated training data sets. So far, classifiers have only been trained to predict continuous-time bifurcations, ignoring rich dynamics unique to discrete-time bifurcations. Here, we train a deep learning classifier to provide an early warning signal for the five local discrete-time bifurcations of codimension one. We test the classifier on simulation data from discrete-time models used in physiology, economics and ecology, as well as on experimental data from spontaneously beating chick-heart aggregates that undergo a period-doubling bifurcation. The classifier shows higher sensitivity and specificity than commonly used early warning signals across a wide range of noise intensities and rates of approach to the bifurcation. It also predicts the correct bifurcation in most cases, with particularly high accuracy for the period-doubling, Neimark-Sacker and fold bifurcations. Deep learning as a tool for bifurcation prediction is still in its infancy and has the potential to transform the way we monitor systems for critical transitions.
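A minimal sketch of how simulated training data for a discrete-time bifurcation might be generated, using the noisy logistic map ramped toward its period-doubling point at r = 3. This is an illustration of the general idea, not the authors' pipeline; the map, noise level and parameter ramp are assumptions for the example.

```python
import numpy as np

def logistic_series(r_start, r_end, n_steps, noise=0.01, seed=0):
    """Simulate a noisy logistic map x -> r*x*(1-x) while the
    bifurcation parameter r ramps linearly toward the period-doubling
    bifurcation at r = 3.0 (illustrative training-data generator)."""
    rng = np.random.default_rng(seed)
    r_values = np.linspace(r_start, r_end, n_steps)
    x, series = 0.5, []
    for r in r_values:
        x = r * x * (1.0 - x) + noise * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)  # keep the state in the unit interval
        series.append(x)
    return np.array(series)

# one labelled training example approaching a period-doubling bifurcation
ts = logistic_series(2.5, 3.0, 500)
```

Many such series, drawn from different maps and labelled by bifurcation type, would form the classifier's training set.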
Project description: Biological measurements are often contaminated with large amounts of non-stationary noise, which requires effective noise-reduction techniques. We present a new real-time deep learning algorithm that adaptively produces a signal opposing the noise so that destructive interference occurs. As a proof of concept, we demonstrate the algorithm's performance by reducing electromyogram (EMG) noise in electroencephalograms (EEGs) using a custom, flexible, 3D-printed compound electrode. With this setup, an average improvement of 4 dB and a maximum improvement of 10 dB in the signal-to-noise ratio of the EEG was achieved by removing wide-band muscle noise. This concept can not only adaptively improve the signal-to-noise ratio of EEG but can also be applied to a wide range of biological, industrial and consumer applications, such as industrial sensing or noise-cancelling headphones.
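The underlying principle, generating an anti-signal that destructively interferes with the noise, can be illustrated with a classical least-mean-squares (LMS) adaptive canceller; the paper replaces the linear filter with a deep network, so this sketch and its toy signals are only an analogy, not the authors' algorithm.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Classical LMS adaptive noise canceller: adapts an FIR filter on a
    noise reference channel so its output opposes the noise component of
    the primary (signal + noise) channel."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        anti_noise = w @ x                          # the opposing signal
        e = primary[n] - anti_noise                 # destructive interference
        w += 2 * mu * e * x                         # LMS weight update
        cleaned[n] = e
    return cleaned

# Toy demo: a slow sinusoid ("EEG") buried in filtered broadband noise ("EMG").
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 0.01 * np.arange(4000))
noise = rng.standard_normal(4000)
primary = sig + 0.5 * noise + 0.3 * np.roll(noise, 1)
cleaned = lms_cancel(primary, noise)
```

After the filter converges, the residual error approximates the clean signal, which is the same destructive-interference goal the deep network pursues adaptively in real time.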
Project description: The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for the classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds, based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers on continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system on the new task, providing the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
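An energy-based event detection front end of the kind described can be sketched as frame-wise log-energy thresholding followed by run detection; the frame length, hop and threshold below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def energy_segments(x, frame_len=256, hop=128, threshold_db=-30.0):
    """Energy-based event detection front end (illustrative sketch):
    frame the signal, compute per-frame log energy, and return
    (start, end) sample indices of contiguous runs whose energy exceeds
    a threshold relative to the loudest frame."""
    n_frames = 1 + (len(x) - frame_len) // hop
    energy = np.array([np.sum(x[i * hop:i * hop + frame_len] ** 2)
                       for i in range(n_frames)])
    log_e = 10 * np.log10(energy + 1e-12)
    active = log_e > log_e.max() + threshold_db
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                                   # event onset
        elif not a and start is not None:
            segments.append((start * hop, i * hop + frame_len))
            start = None
    if start is not None:                               # event runs to the end
        segments.append((start * hop, (n_frames - 1) * hop + frame_len))
    return segments
```

Each detected segment would then be handed to an isolated-sound classifier, which is how the benchmarked systems are adapted to continuous recordings.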
Project description: Characterizing the human leukocyte antigen (HLA)-bound ligandome by mass spectrometry (MS) holds great promise for developing vaccines and drugs for immuno-oncology. Still, the identification of such non-tryptic peptides presents substantial computational challenges. To address these, we synthesized >300,000 peptides within the ProteomeTools project representing HLA class I and II ligands and products of the proteases AspN and LysN, and analyzed them by multi-modal LC-MS/MS. The resulting data enabled training of a single model using the deep learning framework Prosit that shows outstanding prediction accuracy of fragment ion spectra for tryptic and non-tryptic peptides. Applying Prosit demonstrates that the identification of HLA peptides can be improved by 50-300% on average, that proteasomal HLA peptide splicing may not exist, and that additional neo-epitopes that elicit an immune response can be identified from patient tumors. Together, the provided peptides, spectra and computational tools substantially expand the scope of immunopeptidomics workflows.
Project description: Single-pixel cameras capture images without the requirement for a multi-pixel sensor, enabling the use of state-of-the-art detector technologies and providing a potentially low-cost solution for sensing beyond the visible spectrum. One limitation of single-pixel cameras is the inherent trade-off between image resolution and frame rate, with current compressive (compressed) sensing techniques being unable to support real-time video. In this work we demonstrate the application of deep learning with convolutional auto-encoder networks to recover real-time 128 × 128 pixel video at 30 frames per second from a single-pixel camera sampling at a compression ratio of 2%. In addition, by training the network on a large database of images we are able to optimise the first layer of the convolutional network, equivalent to optimising the basis used for scanning the image intensities. This work develops and implements a novel approach to solving the inverse problem for single-pixel cameras efficiently and represents a significant step towards real-time operation of computational imagers. By learning from examples in a particular context, our approach opens up the possibility of high-resolution, task-specific adaptation, with importance for applications in gas sensing, 3D imaging and metrology.
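The single-pixel measurement model and the inverse problem can be made concrete with a small sketch: each measurement is the inner product of the scene with one illumination pattern, and at a 2% compression ratio only ~2% as many measurements as pixels are taken. The toy scene size, random ±1 patterns, and minimum-norm linear reconstruction below are assumptions for illustration; the paper's auto-encoder replaces the linear inverse with a learned non-linear decoder and learns the pattern basis itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 32 * 32                     # toy scene (the paper uses 128 x 128)
m = int(0.02 * n_pixels)               # 2% compression ratio -> 20 measurements
phi = rng.choice([-1.0, 1.0], size=(m, n_pixels))  # random scanning patterns

x = rng.random(n_pixels)               # unknown scene, flattened to a vector
y = phi @ x                            # the single-pixel detector's measurements

# Minimum-norm linear reconstruction consistent with the measurements;
# the convolutional auto-encoder learns a far better (non-linear) inverse.
x_hat = np.linalg.pinv(phi) @ y
```

Optimising the network's first layer is equivalent to optimising `phi`, which is why training on a large image database improves the scanning basis.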
Project description: During animal development, embryos undergo complex morphological changes over time. Differences in developmental tempo between species are emerging as principal drivers of evolutionary novelty, but accurate description of these processes is very challenging. To address this challenge, we present an automated and unbiased deep learning approach to analyze the similarity between embryos at different timepoints. Calculation of similarities across stages resulted in complex phenotypic fingerprints, which carry characteristic information about developmental time and tempo. Using this approach, we were able to accurately stage embryos, quantitatively determine temperature-dependent developmental tempo, detect naturally occurring and induced changes in the developmental progression of individual embryos, and derive staging atlases for several species de novo in an unsupervised manner. Our approach allows us to quantify developmental time and tempo objectively and provides a standardized way to analyze early embryogenesis.
Project description: Real-time X-ray tomography pipelines, such as the one implemented by RECAST3D, compute and visualize tomographic reconstructions in milliseconds, and enable the observation of dynamic experiments in synchrotron beamlines and laboratory scanners. For extending real-time reconstruction with image processing and analysis components, deep neural networks (DNNs) are a promising technology due to their strong performance and much faster run times compared with conventional algorithms. DNNs may prevent experiment repetition by simplifying real-time steering and optimization of the ongoing experiment. The main challenge of integrating DNNs into real-time tomography pipelines, however, is that they need to learn their task from representative data before the start of the experiment. In scientific environments, such training data may not exist, and other uncertain and variable factors, such as the set-up configuration, reconstruction parameters, or user interaction, cannot easily be anticipated beforehand either. To overcome these problems, we developed just-in-time learning, an online DNN training strategy that takes advantage of the spatio-temporal continuity of consecutive reconstructions in the tomographic pipeline. This allows training and deploying comparatively small DNNs during the experiment. We provide software implementations, and study the feasibility and challenges of the approach by training the self-supervised Noise2Inverse denoising task with X-ray data replayed from real-world dynamic experiments.
Project description: The enormous computational requirements and unsustainable resource consumption associated with the massive parameter counts of large language models and large vision models pose challenging issues. Here, we propose an interpretable 'small model' framework built around only a single core neuron, the one-core-neuron system (OCNS), which significantly reduces the number of parameters while maintaining performance comparable to existing 'large models' in time-series forecasting. With multiple delay feedbacks designed into this single neuron, the OCNS converts one input feature vector/state into a one-dimensional time series/sequence that is theoretically guaranteed to fully represent the states of the observed dynamical system. Leveraging this spatiotemporal information transformation, the OCNS shows excellent and robust forecasting performance, in particular for short-term forecasting of high-dimensional systems. These results collectively demonstrate that the proposed OCNS, with its single core neuron, offers insights into constructing small-model deep learning frameworks and has substantial potential as a new route to efficient deep learning.
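The general idea of turning a state vector into a one-dimensional sequence through a single nonlinear node with delayed feedback (time multiplexing, as in delay-based reservoir computing) can be sketched loosely as follows. This is a hypothetical illustration of that family of techniques, not the OCNS itself: the mask, nonlinearity, feedback gain `alpha` and sequence length are all assumptions.

```python
import numpy as np

def one_neuron_sequence(u, n_virtual=20, alpha=0.5, seed=0):
    """Hypothetical sketch of the single-neuron-with-delay-feedback idea:
    a state vector u is time-multiplexed through one nonlinear node with
    delayed feedback, producing a 1-D sequence whose samples act as
    'virtual' readouts (all constants illustrative)."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=(n_virtual, len(u)))
    seq, prev = np.zeros(n_virtual), 0.0
    for k in range(n_virtual):
        drive = mask[k] @ u                   # masked projection of the input state
        prev = np.tanh(drive + alpha * prev)  # single neuron + delayed feedback
        seq[k] = prev
    return seq
```

The resulting sequence depends on every component of the input state, which is the property the OCNS exploits for forecasting high-dimensional systems with very few parameters.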
Project description: Deep learning has achieved spectacular performance in image and speech recognition and synthesis. It outperforms other machine learning algorithms in problems where large amounts of data are available. In the area of measurement technology, instruments based on the photonic time stretch have established record real-time measurement throughput in spectroscopy, optical coherence tomography, and imaging flow cytometry. These extreme-throughput instruments generate approximately 1 Tbit/s of continuous measurement data and have led to the discovery of rare phenomena in nonlinear and complex systems as well as new types of biomedical instruments. Owing to the abundance of data they generate, time-stretch instruments are a natural fit for deep learning classification. Previously, we showed that high-throughput label-free cell classification with high accuracy can be achieved through a combination of time-stretch microscopy, image processing and feature extraction, followed by deep learning for finding cancer cells in the blood. Such a technology holds promise for early detection of primary cancer or metastasis. Here we describe a new deep learning pipeline, which entirely avoids the slow and computationally costly signal processing and feature extraction steps by using a convolutional neural network that operates directly on the measured signals. The improvement in computational efficiency enables low-latency inference and makes this pipeline suitable for cell sorting via deep learning. Our neural network takes only a few milliseconds to classify the cells, fast enough to provide a decision to a cell sorter for real-time separation of individual target cells. We demonstrate the applicability of our new method in the classification of OT-II white blood cells and SW-480 epithelial cancer cells with more than 95% accuracy in a label-free fashion.
Project description: An immediate, fully automated report of the source focal mechanism after a destructive earthquake is crucial for timely characterization of the faulting geometry, evaluation of the stress perturbation, and assessment of aftershock patterns. Advanced technologies such as artificial intelligence (AI) have been introduced to solve various problems in real-time seismology, but real-time determination of the source focal mechanism remains a challenge. Here we propose a novel deep learning method, the Focal Mechanism Network (FMNet), to address this problem. The FMNet, trained with 787,320 synthetic samples, successfully estimates the focal mechanisms of the four 2019 Ridgecrest earthquakes with magnitudes larger than Mw 5.4. The network learns global waveform characteristics from theoretical data, allowing extensive application of the proposed method to regions of potential seismic hazard with or without historical earthquake data. After receiving data, the network takes less than two hundred milliseconds to predict the source focal mechanism reliably on a single CPU.