Project description: The number and diversity of phenological studies have increased rapidly in recent years. Innovative experiments, field studies, citizen science projects, and analyses of newly available historical data are contributing insights that advance our understanding of ecological and evolutionary responses to the environment, particularly climate change. However, many phenological data sets have peculiarities that are not immediately obvious and can lead to mistakes in analyses and interpretation of results. This paper aims to help researchers, especially those new to the field of phenology, understand challenges and practices that are crucial for effective studies. For example, researchers may fail to account for sampling biases in phenological data, struggle to choose or design a volunteer data collection strategy that adequately fits their project's needs, or combine data sets in inappropriate ways. We describe ten best practices for designing studies of plant and animal phenology, evaluating data quality, and analyzing data. Practices include accounting for common biases in data, using effective citizen or community science methods, and employing appropriate data when investigating phenological mismatches. We present these best practices to help researchers entering the field take full advantage of the wealth of available data and approaches to advance our understanding of phenology and its implications for ecology.
Project description: As phylogenetic data become increasingly available, along with associated data on species' genomes, traits, and geographic distributions, the need to ensure data availability and reuse becomes ever more acute. In this paper, we provide ten "simple rules" that we view as best practices for data sharing in phylogenetic research. These rules will help lead towards a future phylogenetics where data can easily be archived, shared, reused, and repurposed across a wide variety of projects.
Project description: Implementation research (IR) focuses on understanding how and why interventions produce their effects in a given context. This often requires engaging a broad array of stakeholders at multiple levels of the health system. Whereas a variety of tools and approaches exist to facilitate stakeholder engagement at the national or institutional level, there is a substantial gap in the IR literature about how best to do this at the local or community level. Similarly, although there is extensive guidance on community engagement within the context of clinical trials, for HIV/AIDS in particular, the same cannot be said for IR. We identified a total of 59 resources through a combination of online searches of the peer-reviewed and grey literature and crowd-sourcing through the Health Systems Global platform. We then completed two rounds of rating the resources to identify the '10 best'. The resources were rated based on considerations of their relevance to IR, existence of an underlying conceptual framework, comprehensiveness of guidance, ease of application, and evidence of successful application in low- or middle-income countries or relevant contexts. These 10 resources can help implementation researchers think strategically and practically about how best to engage community stakeholders to improve the quality, meaningfulness, and application of their results in order to improve health and health systems outcomes. Building on the substantial work that has already been done in the context of clinical trials, there is a need for clearer and more specific guidance on how to incorporate relevant and effective community engagement approaches into IR project planning and implementation.
Project description: Background: Facilitation is an implementation strategy that supports the uptake of evidence-based practices. Recently, use of virtual facilitation (VF), or the application of facilitation using primarily video-based conferencing technologies, has become more common, especially since the COVID-19 pandemic. Thorough assessment of the literature on VF, however, is lacking. This scoping review aimed to identify and describe conceptual definitions of VF, evaluate the consistency of terminology, and recommend "best" practices for its use as an implementation strategy. Methods: We conducted a scoping review to identify literature on VF following the PRISMA-ScR guidance. A search of PubMed, Embase, Web of Science, and CINAHL databases was conducted in June 2022 for English language articles published from January 2012 through May 2022 and repeated in May 2023 for articles published from January 2012 through April 2023. Identified articles, including studies and conference abstracts describing VF, were uploaded into Covidence and screened independently by two reviewers. Data extraction was done by two reviewers in Microsoft Excel; additionally, studies were evaluated based on the Proctor et al. (2013) reporting guidelines for specifying details of implementation strategies. Results: The search strategy identified 19 articles. After abstract and full-text screening, eight studies described by 10 articles/abstracts were included in analysis. Best practices summarized across studies included (1) stakeholder engagement, (2) understanding the recipient's organization, (3) facilitator training, (4) piloting, (5) evaluating facilitation, (6) use of group facilitation to encourage learning, and (7) integrating novel tools for virtual interaction. Three papers reported all or nearly all components of the Proctor et al. 
reporting guidelines; justification for use of VF was the most frequently omitted. Conclusions: This scoping review evaluated available literature on use of VF as a primary implementation strategy and identified significant variability in how VF is reported, including inconsistent terminology, lack of details about how and why it was conducted, and limited adherence to published reporting guidelines. These inconsistencies limit the generalizability of these methods by preventing replicability and full understanding of this emerging methodology. More work is needed to develop and evaluate best practices for effective VF to promote uptake of evidence-based interventions. Trial registration: N/A.
Project description: Academic Core Facilities are optimally situated to improve the quality of preclinical research by implementing quality control measures and offering these to their users.
Project description: Microbiome data predictive analysis within a machine learning (ML) workflow presents numerous domain-specific challenges involving preprocessing, feature selection, predictive modeling, performance estimation, model interpretation, and the extraction of biological information from the results. To assist decision-making, we offer a set of recommendations on algorithm selection, pipeline creation, and evaluation, stemming from the COST Action ML4Microbiome. We compared the suggested approaches on a multi-cohort shotgun metagenomics dataset of colorectal cancer patients, focusing on their performance in disease diagnosis and biomarker discovery. We demonstrate that the use of compositional transformations and filtering methods as part of data preprocessing does not always improve the predictive performance of a model. In contrast, multivariate feature selection, such as the Statistically Equivalent Signatures algorithm, was effective in reducing the classification error. When validated on a separate test dataset, this algorithm, in combination with random forest modeling, provided the most accurate performance estimates. Lastly, we showed how linear modeling by logistic regression coupled with visualization techniques such as Individual Conditional Expectation (ICE) plots can yield interpretable results and offer biological insights. These findings are significant for clinicians and non-experts alike in translational applications.
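A minimal sketch of two preprocessing steps of the kind discussed above, prevalence filtering and a compositional (centered log-ratio) transformation, using NumPy. The function names, the pseudocount, and the 10% prevalence threshold are illustrative assumptions, not the actual ML4Microbiome pipeline:

```python
import numpy as np

def prevalence_filter(counts, min_prevalence=0.1):
    """Keep taxa (columns) observed in at least min_prevalence of samples."""
    prevalence = (counts > 0).mean(axis=0)
    return counts[:, prevalence >= min_prevalence]

def clr_transform(counts, pseudocount=1.0):
    """Centered log-ratio transform; each row of the result sums to zero."""
    x = counts + pseudocount  # pseudocount avoids log(0) in sparse count data
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Toy abundance matrix: 4 samples x 3 taxa; the last taxon is never observed
counts = np.array([[10.0, 5.0, 0.0],
                   [20.0, 0.0, 0.0],
                   [15.0, 8.0, 0.0],
                   [12.0, 3.0, 0.0]])
filtered = prevalence_filter(counts)  # drops the all-zero taxon
clr = clr_transform(filtered)         # rows now sum to zero by construction
```

The CLR output can then be fed to any standard classifier (e.g. a random forest); as the abstract notes, whether such transformations help predictive performance should be checked empirically rather than assumed.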
Project description: Scientific research relies on computer software, yet software is not always developed following practices that ensure its quality and sustainability. This manuscript does not aim to propose new software development best practices, but rather to provide simple recommendations that encourage the adoption of existing best practices. Software development best practices promote better quality software, and better quality software improves the reproducibility and reusability of research. These recommendations are designed around Open Source values, and provide practical suggestions that contribute to making research software and its source code more discoverable, reusable and transparent. This manuscript is aimed at developers, but also at organisations, projects, journals and funders that can increase the quality and sustainability of research software by encouraging the adoption of these recommendations.
Project description: Specimens and associated data in natural history collections (NHCs) foster substantial scientific progress. In this paper, we explore recent contributions of NHCs to the study of systematics and biogeography, genomics, morphology, stable isotope ecology, and parasites and pathogens of mammals. To begin to assess the magnitude and scope of these contributions, we analyzed publications in the Journal of Mammalogy over the last decade, as well as recent research supported by a single university mammal collection (Museum of Southwestern Biology, Division of Mammals). Using these datasets, we also identify weak links that may be hindering the development of crucial NHC infrastructure. Maintaining the vitality and growth of this foundation of mammalogy depends on broader engagement and support from across the scientific community and is both an ethical and scientific imperative given the rapidly changing environmental conditions on our planet.