Project description: Citizen science approaches have grown in popularity in recent years, partly because they reach a wider audience and produce more generalizable samples. In dogs, however, such studies have exercised limited control over materials and experimental protocols, with guardians typically reporting results without researcher supervision. Across two studies, we explored and validated a synchronous citizen science approach in which dog guardians acted as experimenters while supervised by a researcher over Zoom. In Study 1, we demonstrated that synchronous citizen science produced levels of performance equivalent to in-lab designs in a choice task: consistent with past in-lab research, dogs selected a treat (vs. an empty plate) in a two-alternative forced-choice task. In Study 2, we showed that Zoom methods are also appropriate for studies using looking-time measures. We examined dogs' looking behavior when a bag of treats was placed in an unreachable location and the dogs' guardians were either attentive or inattentive while the dogs attempted to retrieve the treats. Consistent with past work, dogs in the attentive condition looked at their guardian for longer and showed a shorter latency to first look than dogs in the inattentive condition. Overall, we have demonstrated that synchronous citizen science studies with dogs are feasible and produce valid results consistent with those obtained in a typical lab setting.
Project description: It is well established that most of the plastic pollution found in the oceans is transported via rivers. Unfortunately, the main processes contributing to the displacement of plastic and other debris through riparian systems are still poorly understood. The Marine Litter Drifter project on the Arno River aims to use modern consumer software and hardware technologies to track the movements of real anthropogenic marine debris (AMD) from rivers. The innovative "Marine Litter Trackers" (MLT) were used because they are reliable, robust, self-powered, and have almost no maintenance costs. Furthermore, they can be built not only by those trained in the field but also by those with no specific expertise, including high school students, simply by following the instructions. Five dispersion experiments were successfully conducted from April 2021 to December 2021, using different types of trackers across different seasons and weather conditions. The maximum distance tracked was 2845 km over a period of 94 days. The activity at sea was complemented by Lagrangian numerical models, which also assisted in planning the deployment and recovery of the drifters. The observed tracking data were in turn used for calibration and validation, recursively improving model quality. The dynamics of marine litter (ML) dispersion in the Tyrrhenian Sea are also discussed, along with the potential of open-source approaches, including the "citizen science" perspective, both for improving big-data collection and for education and awareness-raising on AMD issues.
Project description: The high-dimensional search space involved in markerless full-body articulated human motion tracking from multi-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem in which the fitness function quantifies the difference between the observed image and a projection of the model configuration; both silhouette and edge likelihoods are used in the fitness function. Experiments on the Brown and HumanEva-II datasets demonstrated that H-MCPSO outperforms two leading alternative approaches: the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims.
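The description casts tracking as minimizing a 34-dimensional fitness function with a swarm optimizer. As a minimal sketch of the classical (gbest) PSO baseline that H-MCPSO extends, the snippet below minimizes a placeholder sphere function standing in for the silhouette-and-edge image likelihood, which is not reproducible here; all parameter values are illustrative assumptions, not the paper's settings.

```python
import random

def pso_minimize(fitness, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Classical global-best PSO: the baseline the abstract says H-MCPSO improves on."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Placeholder fitness: a sphere function in place of the real image likelihood.
sphere = lambda x: sum(v * v for v in x)
best, best_val = pso_minimize(sphere, dim=34)
```

H-MCPSO addresses the premature-convergence weakness of this single-swarm update by partitioning the search across cooperating sub-swarms arranged hierarchically over the body model.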
Project description: Citizen science and automated collection methods increasingly depend on image recognition to provide the volume of observational data that research and management need. Recognition models, meanwhile, also require large amounts of data from these sources, creating a feedback loop between the methods and the tools. Species that are harder to recognize, both for humans and for machine learning algorithms, are likely to be under-reported and thus less prevalent in the training data. As a result, the feedback loop may hamper training most for the species that already pose the greatest challenge. In this study, we trained recognition models for various taxa and found evidence of a 'recognizability bias', whereby species that are more readily identified by humans and recognition models alike are more prevalent in the available image data. This pattern is present across multiple taxa and does not appear to relate to differences in picture quality, biological traits, or data-collection metrics other than recognizability. This has implications for the expected performance of future models trained with more data, including for such challenging species.
Project description: A citation study of a sample of earth-science citizen science projects from the FedCats Catalog was undertaken to assess whether citizen science projects are as productive and impactful as conventional research that does not employ volunteer participation in its data-gathering and analysis protocols. From the 783 peer-reviewed papers produced by the 48 projects identified from project bibliographies, 12,380 citations were identified using the Web of Science archive and its citation search engine through the end of 2018. Various conventional productivity and impact measures were applied, including the Impact Factor, the H- and M-indices, and entry into the Top-1000 most-cited papers. The earth-science projects tend to under-perform in terms of Impact Factor (IF = 14-20) and the M-index (M < 0.5) but perform at the level of a 'tenured professor' with <H> = 23. Compared with non-citizen-science research in general, the earth-science papers have a ten-fold higher probability of reaching the Top-1000 threshold of most-cited papers in natural science research. Some of the reasons for the lower performance on some indicators may have to do with the downturn in published papers after 2010 for the majority of the earth-science projects, which itself could be related to the fact that 52% of these projects only became operational after 2010, in contrast to the more successful 'Top-3' projects, whose impacts resemble those of the general population of non-citizen-science research.
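The H- and M-indices used above have standard definitions: h is the largest number such that at least h papers have at least h citations each, and Hirsch's m divides h by career (or project) length in years. A small sketch of both computations, on invented citation counts rather than the study's data:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):  # rank-th most-cited paper
        if c >= rank:
            h = rank
        else:
            break
    return h

def m_index(citations, active_years):
    """Hirsch's m: h-index divided by the number of years active."""
    return h_index(citations) / active_years

# Invented example: five papers cited 10, 8, 5, 4, and 3 times over 8 years.
h = h_index([10, 8, 5, 4, 3])        # 4 papers have >= 4 citations
m = m_index([10, 8, 5, 4, 3], 8)
```

On these definitions, the study's M < 0.5 for <H> = 23 simply reflects projects needing more than 46 project-years to reach that h, consistent with the noted post-2010 publication downturn.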
Project description: Citizen science involves a range of practices involving public participation in scientific knowledge production, but outcomes evaluation is complicated by the diversity of the goals and forms of citizen science. Publications and citations are not adequate metrics to describe citizen-science productivity. We address this gap by contributing a science products inventory (SPI) tool, iteratively developed through an expert panel and case studies, intended to support general-purpose planning and evaluation of citizen-science projects with respect to science productivity. The SPI includes a collection of items for tracking the production of science outputs and data practices, which are described and illustrated with examples. Several opportunities for further development of the initial inventory are highlighted, as well as potential for using the inventory as a tool to guide project management, funding, and research on citizen science.
Project description: Biomedical literature represents one of the largest and fastest-growing collections of unstructured biomedical knowledge, and finding critical information buried in it can be challenging. To extract information from free-flowing text, researchers need to: (1) identify the entities in the text (named entity recognition), (2) apply a standardized vocabulary to these entities (normalization), and (3) identify how entities in the text are related to one another (relationship extraction). Researchers have primarily approached these information extraction tasks through manual expert curation and computational methods. We have previously demonstrated that named entity recognition (NER) tasks can be crowdsourced to non-experts via the paid microtask platform Amazon Mechanical Turk (AMT), dramatically reducing the cost and increasing the throughput of biocuration efforts. However, given the size of the biomedical literature, even information extraction via paid microtask platforms does not scale. With our web-based application Mark2Cure (http://mark2cure.org), we demonstrate that NER tasks can also be performed by volunteer citizen scientists with high accuracy. We apply metrics from the Zooniverse Matrices of Citizen Science Success and provide the results here to serve as a basis of comparison for other citizen science projects. Further, we discuss design considerations, issues, and the application of analytics for successfully moving a crowdsourcing workflow from a paid microtask platform to a citizen science platform. To our knowledge, this study is the first application of citizen science to a natural language processing task.
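To make step (1) of the pipeline concrete, here is a toy dictionary-matching illustration of named entity recognition. The vocabulary and sentence are invented examples for illustration only; they are not Mark2Cure's data or method, which relies on human annotators rather than lookup.

```python
import re

# Hypothetical mini-vocabulary; real pipelines use curated ontologies.
VOCAB = {
    "BRCA1": "GENE",
    "tamoxifen": "DRUG",
    "breast cancer": "DISEASE",
}

def tag_entities(text):
    """Dictionary-based NER: return (matched_text, label, start, end) tuples."""
    hits = []
    for term, label in VOCAB.items():
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            hits.append((m.group(0), label, m.start(), m.end()))
    return sorted(hits, key=lambda h: h[2])  # order by position in text

sentence = "Tamoxifen is used in breast cancer patients carrying BRCA1 mutations."
entities = tag_entities(sentence)
```

Steps (2) and (3) would then map each matched string to a canonical vocabulary identifier (normalization) and classify the relations between co-occurring entities (relationship extraction).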