Project description:Reproducible science requires transparent reporting. The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour, and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here, we introduce ARRIVE 2.0. The guidelines have been updated and information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise and divide the items of the guidelines into 2 sets, the "ARRIVE Essential 10," which constitutes the minimum requirement, and the "Recommended Set," which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation. This helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration (E&E) document, which serves (1) to explain the rationale behind each item in the guidelines, (2) to clarify key concepts, and (3) to provide illustrative examples. We aim, through these changes, to help ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.
Project description:Research in toxicology relies on in vitro models such as cell lines. These living models are prone to change and may be described in publications with insufficient information or quality control testing. This article sets out recommendations to improve the reliability of cell-based research.
Project description:Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research. We conducted a three-part project: (1) a systematic review of the literature (including "Instructions to Authors" from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.
Project description:The particularly interdisciplinary nature of human microbiome research makes the organization and reporting of results spanning epidemiology, biology, bioinformatics, translational medicine and statistics a challenge. Commonly used reporting guidelines for observational or genetic epidemiology studies lack key features specific to microbiome studies. Therefore, a multidisciplinary group of microbiome epidemiology researchers adapted guidelines for observational and genetic studies to culture-independent human microbiome studies, and also developed new reporting elements for laboratory, bioinformatics and statistical analyses tailored to microbiome studies. The resulting tool, called 'Strengthening The Organization and Reporting of Microbiome Studies' (STORMS), is composed of a 17-item checklist organized into six sections that correspond to the typical sections of a scientific publication, presented as an editable table for inclusion in supplementary materials. The STORMS checklist provides guidance for concise and complete reporting of microbiome studies that will facilitate manuscript preparation, peer review, and reader comprehension of publications and comparative analysis of published results.
Project description:Many reports of health research omit important information needed to assess their methodological robustness and clinical relevance. Without clear and complete reporting, it is not possible to identify flaws or biases, reproduce successful interventions, or use the findings in systematic reviews or meta-analyses. The EQUATOR Network (http://www.equator-network.org/) promotes responsible reporting and the use of reporting guidelines to improve the accuracy, completeness, and transparency of health research. EQUATOR supports researchers by providing online resources and training. EQUATOR Oncology, a project funded by Cancer Research UK, aims to support cancer researchers reporting their research through the provision of online resources. In this article, our objective is to highlight reporting issues related to oncology research publications and to introduce reporting guidelines that are designed to aid high-quality reporting. We describe generic reporting guidelines for the main study types, and explain how these guidelines should and should not be used. We also describe 37 oncology-specific reporting guidelines, covering different clinical areas (e.g., haematology or urology) and sections of the report (e.g., methods or study characteristics); most of these are little-used. We also provide some background information on EQUATOR Oncology, which focuses on addressing the reporting needs of the oncology research community.
Project description:There is increasing research interest in the risk stratification of emergency department (ED) syncope patients. A major barrier to comparing and synthesizing existing research is wide variation in the conduct and reporting of studies. The authors wanted to create standardized reporting guidelines for ED syncope risk-stratification research using an expert consensus process. In that pursuit, a panel of syncope researchers was convened and a literature review was performed to identify candidate reporting guideline elements. Candidate elements were grouped into four sections: eligibility criteria, outcomes, electrocardiogram (ECG) findings, and predictors. A two-round, modified Delphi consensus process was conducted using an Internet-based survey application. In the first round, candidate elements were rated on a five-point Likert scale. In the second round, panelists rerated items after receiving information about group ratings from the first round. Items that were rated by >80% of the panelists at the two highest levels of the Likert scale were included in the final guidelines. There were 24 panelists from eight countries who represented five clinical specialties. The panel identified an initial set of 183 candidate elements. After two survey rounds, the final reporting guidelines included 92 items that achieved >80% consensus. These included 10 items for study eligibility, 23 items for outcomes, nine items for ECG abnormalities, and 50 items for candidate predictors. Adherence to these guidelines should facilitate comparison of future research in this area.
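The description above states a concrete inclusion rule: a candidate element enters the final guideline if more than 80% of panelists rate it at the two highest levels of a five-point Likert scale. As a minimal, hypothetical sketch (not the study's own analysis code), the Python below shows how such a threshold could be applied to second-round ratings; the item names, ratings, and function are illustrative assumptions only.

```python
# Minimal sketch of the consensus rule described above: retain an item if
# more than 80% of panelists rate it 4 or 5 on a five-point Likert scale.
# All names and data here are hypothetical, not taken from the study.

from typing import Dict, List

CONSENSUS_THRESHOLD = 0.80  # >80% of panelists at the top two Likert levels


def items_reaching_consensus(ratings: Dict[str, List[int]]) -> List[str]:
    """Return candidate elements whose share of 4/5 ratings exceeds the threshold."""
    retained = []
    for item, scores in ratings.items():
        top_two = sum(1 for s in scores if s >= 4)
        if scores and top_two / len(scores) > CONSENSUS_THRESHOLD:
            retained.append(item)
    return retained


# Hypothetical second-round ratings from a 24-member panel.
example_ratings = {
    "age at presentation": [5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5,
                            5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    "hair colour":         [2, 1, 3, 2, 1, 2, 3, 1, 2, 2, 1, 3,
                            2, 1, 2, 2, 3, 1, 2, 2, 1, 3, 2, 1],
}

print(items_reaching_consensus(example_ratings))  # -> ['age at presentation']
```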