Project description: The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, the mission has created one of the richest and largest repositories of solar image data available. With such massive amounts of information, researchers have made great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found in high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the time spent on data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data through thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.
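As one illustration of the acquisition step this resource is meant to spare researchers, the sketch below queries a few minutes of SDO/AIA imagery with the sunpy library; the time range and wavelength are arbitrary example choices, and the curated dataset itself does not require this step.

```python
# Illustrative only: fetching raw SDO/AIA images with sunpy.
# The time range and 171 A wavelength are arbitrary example choices.
import astropy.units as u
from sunpy.net import Fido, attrs as a

result = Fido.search(
    a.Time("2012-03-04 00:00", "2012-03-04 00:10"),
    a.Instrument("AIA"),
    a.Wavelength(171 * u.angstrom),
)
files = Fido.fetch(result)  # downloads the matching FITS files
print(files)
```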
Project description: Zoonotic diseases threaten human health worldwide and are often associated with anthropogenic disturbance. Predicting how disturbance influences spillover risk is critical for effective disease intervention but difficult to achieve at fine spatial scales. Here, we develop a method that learns the spatial distribution of a reservoir species from aerial imagery. Our approach uses neural networks to extract features of known or hypothesized importance from images. The spatial distribution of these features is then summarized and linked to spatially explicit reservoir presence/absence data using boosted regression trees. We demonstrate the utility of our method by applying it to the reservoir of Lassa virus, Mastomys natalensis, within the West African nations of Sierra Leone and Guinea. We show that, when trained using reservoir trapping data and publicly available aerial imagery, our framework learns relationships between environmental features and reservoir occurrence and accurately ranks areas according to the likelihood of reservoir presence.
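A minimal sketch of the two-stage pipeline described above, using synthetic stand-in data: a pretrained CNN backbone summarizes each aerial tile into a feature vector, and boosted trees link those features to reservoir presence/absence. The backbone choice, tile size, and feature summary are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: CNN features from aerial tiles -> boosted trees on presence/absence.
# All shapes and data below are synthetic stand-ins.
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import GradientBoostingClassifier

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use pooled features, drop classifier
backbone.eval()

def tile_features(tiles: torch.Tensor) -> np.ndarray:
    """tiles: (N, 3, 224, 224) normalized aerial image tiles."""
    with torch.no_grad():
        return backbone(tiles).numpy()  # (N, 512) feature vectors

tiles = torch.randn(40, 3, 224, 224)         # hypothetical imagery
presence = np.random.randint(0, 2, size=40)  # hypothetical trapping data

X = tile_features(tiles)
brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
brt.fit(X, presence)
risk = brt.predict_proba(X)[:, 1]  # rank areas by likelihood of presence
```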
Project description: This work presents a machine learning approach to the computer vision-based recognition of materials inside vessels in the chemistry lab and other settings. In addition, we release the data set used to train the model to support further model development. The task is to find the region, boundaries, and category of each material phase and vessel in an image. Handling materials inside mostly transparent containers is the main activity performed by human and robotic chemists in the laboratory, and visual recognition of vessels and their contents is essential for performing it. Modern machine-vision methods learn recognition tasks from data sets containing large numbers of annotated images. This work presents the Vector-LabPics data set, which consists of 2187 images of materials within mostly transparent vessels in a chemistry lab and other general settings. The images are annotated for both the vessels and the individual material phases inside them, and each instance is assigned one or more classes (liquid, solid, foam, suspension, powder, ...). The fill level, labels, corks, and parts of the vessels are also annotated. Several convolutional nets for semantic and instance segmentation were trained on this data set. The trained networks achieved good accuracy in detecting and segmenting vessels and material phases, and in classifying liquids and solids, but relatively low accuracy in segmenting multiphase systems such as phase-separating liquids.
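The paper trains several convolutional nets; as a hedged sketch of one common setup, the snippet below adapts torchvision's Mask R-CNN to a custom class list for instance segmentation. The class count is a placeholder, not the actual Vector-LabPics taxonomy.

```python
# Sketch: adapting torchvision's Mask R-CNN to custom classes.
# num_classes is illustrative, not the data set's real class count.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 3  # e.g. background, vessel, material phase (assumed)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)

in_ch = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_ch, 256, num_classes)
# model can now be fine-tuned on (image, target) pairs carrying
# boxes, labels, and instance masks in the standard torchvision format.
```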
Project description: Multi-scale methods that separate different temporal or spatial scales are among the most powerful techniques in physics, especially in applications that study nonlinear systems with noise. When the time scales of the noise and the perturbation are of the same order, scale separation becomes impossible, so the multi-scale approach has to be modified to characterise a variety of noise-induced phenomena. Here, based on stochastic modelling and analytical study, we demonstrate, in terms of fluctuation-induced phenomena and Hurst R/S analysis metrics, that matching the scales of random birefringence and of the pump-signal state-of-polarisation interaction in a fibre Raman amplifier results in a new random-birefringence-mediated phenomenon similar to stochastic anti-resonance. Beyond its fundamental interest, the observed phenomenon provides a basis for advancing multi-scale methods with application to different coupled nonlinear systems, ranging from lasers (multimode, mode-locked, random, etc.) to nanostructures (light-mediated conformation of molecules and chemical reactions, Brownian motors, etc.).
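Because the analysis leans on Hurst R/S metrics, here is a self-contained, textbook rescaled-range estimator of the Hurst exponent (a generic implementation, not the authors' exact procedure).

```python
# Textbook rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def hurst_rs(x, min_win=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    wins, rs = [], []
    w = min_win
    while w <= n // 2:
        segs = x[: (n // w) * w].reshape(-1, w)   # non-overlapping windows
        dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)     # range of cumulative deviation
        S = segs.std(axis=1, ddof=1)              # window standard deviation
        keep = S > 0
        wins.append(w)
        rs.append((R[keep] / S[keep]).mean())
        w *= 2
    # Hurst exponent = slope of log(R/S) against log(window length)
    return np.polyfit(np.log(wins), np.log(rs), 1)[0]

print(hurst_rs(np.random.randn(4096)))  # white noise gives H close to 0.5
```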
Project description: Synthetic polyolefinic plastics comprise one of the largest shares of global plastic waste, which is being targeted for chemical recycling by depolymerization to monomers and small molecules. One promising method of chemical recycling is solid-state depolymerization under ambient conditions in a ball-mill reactor. In this paper, we elucidate kinetic phenomena in the mechanochemical depolymerization of poly(styrene). Styrene is produced in this process at a constant rate and selectivity alongside minor products, including oxygenates such as benzaldehyde, via mechanisms analogous to those involved in thermal and oxidative pyrolysis. Continuous monomer removal during reactor operation is critical for avoiding repolymerization, and both iron surfaces and molecular oxygen exhibit promoting effects. Depolymerization and molecular weight reduction were observed to be kinetically independent, despite both processes originating from the same driving force of mechanochemical collisions. Phenomena across multiple length scales are shown to be responsible for the differences in reactivity that arise from grinding parameters and reactant composition.
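The constant production rate noted above corresponds to zero-order kinetics, i.e. a styrene yield that grows linearly with milling time. The toy fit below illustrates that relationship with made-up numbers, not the paper's data.

```python
# Toy zero-order kinetics fit: yield linear in milling time.
# All numbers are made up for illustration.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])    # milling time, h
y = np.array([0.0, 2.1, 4.0, 8.3, 12.1, 16.2])  # styrene yield, %

k, b = np.polyfit(t, y, 1)  # slope = apparent zero-order rate
print(f"apparent rate ~ {k:.2f} %/h")
```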
Project description: Purpose: To evaluate the prevalence of computer vision syndrome (CVS)-related symptoms in a presbyopic population that uses the computer as the main work tool, as well as the relationship of CVS with electronic device use habits and ergonomic factors. Methods: A sample of 198 presbyopic participants (aged 45-65 years) who regularly work with a computer completed a customised questionnaire covering: general demographics, optical correction commonly used and used for work, electronic device use habits, ergonomic conditions during working hours, and CVS-related symptoms during work. Participants rated 10 CVS-related symptoms for the severity with which they occurred (0-4), and the median total symptom score (MTSS) was calculated as the sum of the symptom ratings. Results: The MTSS in this presbyopic population is 7 ± 5 symptoms. The most common symptoms reported by participants are dry eyes, tired eyes, and difficulty refocusing. MTSS is higher in women (p < 0.05), in laptop computer users (p < 0.05), and in teleworkers compared with office workers (p < 0.05). Regarding ergonomic conditions, MTSS is higher in participants who do not take breaks while working (p < 0.05), who have inadequate lighting in the workspace (p < 0.05), and in participants reporting neck (p < 0.01) or back pain (p < 0.001). Conclusion: There is a relationship between CVS-related symptoms, the use of electronic devices, and ergonomic factors, which underlines the importance of adapting workplaces, especially for home-based teleworkers, and of following basic visual ergonomics rules.
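A small sketch of the scoring described above, on synthetic responses: each participant's total symptom score is the sum of ten severity ratings (0-4), and group scores can be compared with a nonparametric test. The study's exact statistical tests are not restated here.

```python
# Sketch: total symptom score per participant and a group comparison.
# Responses are synthetic stand-ins, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
office = rng.integers(0, 5, size=(100, 10))  # 10 symptoms rated 0-4
tele = rng.integers(0, 5, size=(98, 10))

tss_office = office.sum(axis=1)  # total symptom score per participant
tss_tele = tele.sum(axis=1)

u, p = mannwhitneyu(tss_tele, tss_office, alternative="two-sided")
print(np.median(tss_office), np.median(tss_tele), p)
```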
Project description: Purpose: To quantify the levels of performance (symptom severity) of the computer-vision symptom scale (CVSS17), confirm the bifactorial structure detected in an exploratory factor analysis, and validate its factors as subscales. Methods: Using a partial credit model (PCM), we estimated CVSS17 measures and the standard error for every possible raw score, and used these data to determine the number of distinct performance levels in the CVSS17. In addition, through discriminant analysis, we checked that the scale's two main factors could classify subjects according to these performance levels. Finally, a separate Rasch analysis was performed for each CVSS17 factor to assess its measurement properties when used as an isolated scale. Results: We identified 5.8 distinct levels of performance. Discriminant functions obtained from the sample data indicated that the scale's main factors correctly classified 98.4% of cases. The two main factors, the internal symptom factor (ISF) and the external symptom factor (ESF), showed good measurement properties and can be considered subscales. Conclusion: CVSS17 scores defined five distinct levels of performance. In addition, two main factors (ESF and ISF) were identified and confirmed by discriminant analysis. These subscales serve to assess either the visual or the ocular symptoms attributable to computer use.
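A minimal sketch of the discriminant-analysis check, on synthetic stand-in scores: given an assigned performance level per subject, can two factor scores (ISF and ESF) recover it? Everything below is illustrative, not the CVSS17 sample.

```python
# Sketch: do two factor scores separate subjects by performance level?
# Scores and levels are synthetic stand-ins for ISF/ESF and CVSS17 levels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
levels = rng.integers(0, 5, size=300)   # assigned performance level
isf = levels + rng.normal(0, 0.5, 300)  # internal symptom factor score
esf = levels + rng.normal(0, 0.5, 300)  # external symptom factor score
X = np.column_stack([isf, esf])

lda = LinearDiscriminantAnalysis().fit(X, levels)
print(f"correctly classified: {lda.score(X, levels):.1%}")
```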
Project description: Extracting meaning from a dynamic and variable flow of incoming information is a major goal of both natural and artificial intelligence. Computer vision (CV) guided by deep learning (DL) has made significant strides in recognizing a specific identity despite highly variable attributes. This is the same challenge faced by the nervous system, partially addressed by concept cells: neurons in the human medial temporal lobe (MTL) that fire selectively in response to specific persons or places. Yet access to neurons representing a particular concept is limited by these neurons' sparse coding. It is conceivable, however, that the information required for such decoding is present in relatively small neuronal populations. To evaluate how well neuronal populations encode identity information in natural settings, we recorded neuronal activity from multiple brain regions of nine neurosurgical epilepsy patients implanted with depth electrodes while the subjects watched an episode of the TV series "24". First, we devised a minimally supervised CV algorithm (with performance comparable to manually labeled data) to detect the most prevalent characters (above 1% overall appearance) in each frame. Next, we implemented DL models that used the time-varying population neural data as inputs and decoded the visual presence of the four main characters throughout the episode. This methodology allowed us to compare "computer vision" with "neuronal vision" (the footprint of each character in the activity of a subset of neurons) and to identify the brain regions that contributed to the decoding. We then tested the DL models during a recognition memory task following movie viewing, in which subjects were asked to recognize clip segments from the presented episode. DL model activations were modulated not only by the presence of the corresponding characters but also by participants' subjective memory of whether they had seen the clip segment, and by the associative strengths of the characters in the narrative plot. The described approach can offer novel ways to probe the representation of concepts in time-evolving dynamic behavioral tasks. Further, the results suggest that the information required to robustly decode concepts is present in the population activity of only tens of neurons, even in brain regions beyond the MTL.
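A minimal sketch of the decoding stage, with assumed shapes and random stand-in data: time-binned population activity is fed to a recurrent network that outputs one presence logit per character. The authors' exact architecture is not specified here.

```python
# Sketch: population activity -> multi-label character presence.
# Neuron count, bin count, and architecture are assumptions.
import torch
import torch.nn as nn

n_neurons, n_chars, n_bins = 50, 4, 20

class PopulationDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_neurons, 64, batch_first=True)
        self.head = nn.Linear(64, n_chars)

    def forward(self, x):              # x: (batch, n_bins, n_neurons)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # one presence logit per character

decoder = PopulationDecoder()
rates = torch.rand(8, n_bins, n_neurons)             # stand-in firing rates
present = torch.randint(0, 2, (8, n_chars)).float()  # stand-in labels
loss = nn.BCEWithLogitsLoss()(decoder(rates), present)
loss.backward()
```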
Project description: Monitoring the drinking behavior of animals can provide important information for livestock farming, including about the health and well-being of the animals. Measuring drinking time is labor-intensive and thus remains a challenge in most livestock production systems. Computer vision technology using a low-cost camera system can help overcome this issue. The aim of this research was to develop a computer vision system for monitoring beef cattle drinking behavior. A data acquisition system, including an RGB camera and an ultrasonic sensor, was developed to record beef cattle drinking actions. We developed an algorithm for tracking the key body parts of beef cattle, such as the head-ear-neck position, using the state-of-the-art deep learning architecture DeepLabCut. The extracted key points were then analyzed with a long short-term memory (LSTM) model to classify drinking and non-drinking periods. A total of 70 videos were used to train and test the model, and 8 videos were used for validation. During testing, the model achieved 97.35% accuracy. The results of this study will help meet immediate needs and expand farmers' capability to monitor animal health and well-being by identifying drinking behavior.
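A minimal sketch of the classification stage described above, with assumed shapes: DeepLabCut-style keypoint tracks (head, ear, and neck as x, y pairs per frame) feed an LSTM that labels a clip as drinking or not. All dimensions are illustrative.

```python
# Sketch: keypoint tracks -> LSTM -> drinking / non-drinking label.
# 3 keypoints x (x, y) per frame; clip length and sizes are assumed.
import torch
import torch.nn as nn

n_frames, n_feats = 90, 6

class DrinkingClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, 32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, kp):             # kp: (batch, n_frames, n_feats)
        out, _ = self.lstm(kp)
        return self.head(out[:, -1]).squeeze(-1)  # drinking logit per clip

clf = DrinkingClassifier()
tracks = torch.rand(4, n_frames, n_feats)  # stand-in keypoint tracks
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = nn.BCEWithLogitsLoss()(clf(tracks), labels)
loss.backward()
```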