Preliminary Checklist for Reporting Observational Studies in Sports Areas: Content Validity.
ABSTRACT: Observational studies are based on systematic observation, understood as the organized recording and quantification of behavior in its natural context. Applied to the specific area of sports, observational studies offer advantages over studies based on other designs, such as flexibility in adapting to different contexts, the possibility of using non-standardized instruments, and a high degree of development of specific software and data-analysis techniques. Although the importance and usefulness of sports-related observational studies have been widely demonstrated, there is no checklist for reporting them. Consequently, authors have no guide to follow to ensure that all of the important elements of an observational study in sports areas are included, and reviewers have no reference tool for assessing this type of work. To resolve these issues, this article aims to develop a checklist to measure the quality of sports-related observational studies based on a content validity study. The participants were 22 judges with at least 3 years of experience in observational studies, sports areas, and methodology. They evaluated a list of 60 items systematically selected and classified into 12 dimensions. They were asked to score four aspects of each item on 5-point Likert scales: representativeness, relevance, utility, and feasibility. The judges also had an open-format section for comments. The Osterlind index was calculated for each item on each of the four aspects; an item was considered appropriate when it obtained a score of at least 0.5 on all four. After applying these inclusion criteria and considering all of the open-format comments, the resultant checklist consisted of 54 items grouped into the same initial 12 dimensions. Finally, we highlight the strengths of this work.
We also present its main limitation: the resultant checklist still needs to be applied so that data can be gathered on its psychometric properties. For this reason, as relevant actions for further development, we encourage expert readers to use it and provide feedback, and we plan to apply it in different sports areas.
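The item-selection rule described above (a congruence index of at least 0.5 on all four aspects) can be sketched in a few lines. The sketch below uses one common formulation of the Osterlind index, namely rescaling each judge's 5-point Likert rating to the interval [-1, +1] and averaging across judges; the function names and the example ratings are hypothetical illustrations, not the authors' actual data or computation.

```python
def osterlind_index(ratings, low=1, high=5):
    """Congruence index for one item on one aspect.

    Rescales each judge's Likert rating from [low, high] to [-1, +1]
    and averages across judges, so the index lies in [-1, +1].
    (Common formulation; offered here as an illustrative assumption.)
    """
    rescaled = [(2 * x - high - low) / (high - low) for x in ratings]
    return sum(rescaled) / len(rescaled)


def item_is_retained(aspect_ratings, threshold=0.5):
    """An item is retained only if it reaches the threshold on ALL aspects."""
    return all(osterlind_index(r) >= threshold for r in aspect_ratings.values())


# Hypothetical ratings from 5 judges (the study used 22) on the four aspects:
item = {
    "representativeness": [5, 4, 5, 5, 4],
    "relevance":          [5, 5, 4, 4, 5],
    "utility":            [4, 3, 4, 5, 4],
    "feasibility":        [3, 3, 2, 3, 4],  # mean 3.0 -> index 0.0, below 0.5
}
print(item_is_retained(item))  # -> False (feasibility fails the cutoff)
```

Under this rescaling, a rating of 3 (the scale midpoint) maps to 0, so the 0.5 cutoff roughly corresponds to judges rating the item, on average, clearly above the midpoint.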
Project description:<h4>Background</h4>Public or patient versions of guidelines (PVGs) are derivative documents that "translate" recommendations and their rationale from clinical guidelines for health professionals into a more easily understandable and usable format for patients and the public. PVGs from different groups and organizations vary considerably in the quality of their reporting. To address this issue, we aimed to develop a reporting checklist for developers of PVGs and other potential users.<h4>Methods</h4>First, we collected a list of potential items by reviewing a sample of PVGs; existing guidance for developing and reporting PVGs or similar evidence-based patient tools; and qualitative studies of patients' needs regarding the content and/or reporting of information in PVGs or similar evidence-based patient tools. Second, we conducted a two-round Delphi consultation to determine the level of consensus on the items to be included in the final reporting checklist. Third, we invited two external reviewers to provide comments on the checklist.<h4>Results</h4>We generated the initial list of 45 reporting items based on a review of a sample of 30 PVGs, four PVG guidance documents, and 46 relevant studies. After the two-round Delphi consultation, we formed a checklist of 17 items grouped under 12 topics for reporting PVGs.<h4>Conclusion</h4>The RIGHT-PVG reporting checklist provides an international consensus on the important criteria for reporting PVGs.
Project description:<h4>Background</h4>Not all physical activity (PA) questionnaires (PAQ) gather information regarding PA intensity, duration, and modes, and only a few were developed specifically for children. We assessed children's comprehensibility of items derived from two published PAQs used in children, along with three items designed to ascertain PA intensity, in order to assess comprehensibility of items and identify response errors. We modified items to create a new PAQ for children (ASCeND). We hypothesized that children would have comprehension difficulties with some original PAQ items and that ASCeND would be easier to comprehend and would improve recall and reporting of PA.<h4>Methods</h4>For this qualitative study, we recruited 30 Swedish children with juvenile idiopathic arthritis (JIA) [ages 10-16 years; mean age = 13.0 (SD = 1.8); median disease activity score = 4.5 (IQR 2.2-9.0); median disease duration = 5.0 (IQR 2.6-10.8)] from a children's hospital-based rheumatology clinic. We conducted cognitive interviews to identify children's comprehension of PAQ items. Interviews were audiotaped, transcribed, and independently analyzed. In phase one, 10 children were interviewed and items were modified based on feedback. In phase two, an additional 20 children were interviewed to gather more feedback and further refine the modified items, creating the ASCeND.<h4>Results</h4>The median interview time was 41 min (IQR 36-56). In phase one, 219 comments were generated regarding directions for recording PA duration, and transportation use, walking, dancing, weight-bearing exercise, and cardio fitness. Based on feedback, we modified the survey layout, clarified directions, and collapsed or defined items to reduce redundancy. In phase two, 95 comments were generated. Most comments related to aerobic fitness and strenuous PA. Children had difficulty recalling total walking and other activities per day.
Children used the weather on a particular day, sports practice, or gym schedules to recall time spent performing activities. The greatest number of comprehension-related comments concerned the 3-item PA intensity survey, suggesting children had problems responding to intensity items.<h4>Conclusions</h4>The new layout facilitated recall of directions and efficiency in answering items. The 3-item intensity survey was difficult to answer. Sports-specific items helped children more accurately recall the amount of daily PA. The ASCeND appeared to be easy to answer and to comprehend.
Project description:BACKGROUND:Acupuncture is widely used worldwide, and systematic reviews on acupuncture are increasingly being published. Although acupuncture systematic reviews share several essential elements with other systematic reviews, some essential information for the application of acupuncture is not covered by the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement. Considering this, we aimed to develop an extension of the PRISMA statement for acupuncture systematic reviews. METHODS:We used the PRISMA statement as a starting point and conducted this study following the development strategy recommended by the EQUATOR network. The initial items were collected through a wide survey among evidence users and a review of relevant studies. We conducted a three-round Delphi survey and a one-day face-to-face meeting to select items and formulate the checklist. After the consensus meeting, we drafted the manuscript (including the checklist) and sent it to our advisory experts for comments, following which the checklist was refined and circulated to a group of acupuncture systematic review authors for a pilot test. We also selected a sample of acupuncture systematic reviews published in 2017 to test the checklist. RESULTS:A checklist of five new items (including sub-items) and six modified items was formulated, covering content related to title, rationale, eligibility criteria, literature search, data extraction, and study characteristics. We clarified the rationales of the items and provided examples for each item for additional guidance. CONCLUSION:The PRISMA for Acupuncture checklist is developed to improve the reporting of systematic reviews of acupuncture interventions. TRIAL REGISTRATION:We have registered the study on the EQUATOR network ( http://www.equator-network.org/library/reporting-guidelines-under-development/#91 ).
Project description:Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research goals, are increasingly used for research. The rapid evolution and availability of these data have revealed issues not addressed by existing reporting guidelines, such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). The REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement was created to fill these gaps. RECORD was created as an extension to the STROBE statement to address reporting items specific to observational studies using routinely collected health data. RECORD consists of a checklist of 13 items related to the title, abstract, introduction, methods, results, and discussion section of articles, and other information required for inclusion in such research reports. This document contains the checklist and explanatory and elaboration information to enhance the use of the checklist. Examples of good reporting for each RECORD checklist item are also included herein. This document, as well as the accompanying website and message board (http://www.record-statement.org), will enhance the implementation and understanding of RECORD. Through implementation of RECORD, authors, journals editors, and peer reviewers can encourage transparency of research reporting.
Project description:Objectives Appropriate reporting is central to the application of findings from research to clinical practice. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations consist of a checklist of 22 items that provide guidance on the reporting of cohort, case-control and cross-sectional studies, in order to facilitate critical appraisal and interpretation of results. STROBE was published in October 2007 in several journals including The Lancet, BMJ, Annals of Internal Medicine and PLoS Medicine. Within the framework of the revision of the STROBE recommendations, the authors examined the context and circumstances in which the STROBE statement was used in the past. Design The authors searched the Web of Science database in August 2010 for articles which cited STROBE and examined a random sample of 100 articles using a standardised, piloted data extraction form. The use of STROBE in observational studies and systematic reviews (including meta-analyses) was classified as appropriate or inappropriate. The use of STROBE to guide the reporting of observational studies was considered appropriate. Inappropriate uses included the use of STROBE as a tool to assess the methodological quality of studies or as a guideline on how to design and conduct studies. Results The authors identified 640 articles that cited STROBE. In the random sample of 100 articles, about half were observational studies (32%) or systematic reviews (19%). Comments, editorials and letters accounted for 15%, methodological articles for 8%, and recommendations and narrative reviews for 26% of articles. Of the 32 observational studies, 26 (81%) made appropriate use of STROBE, and three uses (10%) were considered inappropriate. Among 19 systematic reviews, 10 (53%) used STROBE inappropriately as a tool to assess study quality. 
Conclusions The STROBE reporting recommendations are frequently used inappropriately in systematic reviews and meta-analyses as an instrument to assess the methodological quality of observational studies.
Project description:<h4>Background</h4>Establishing a diagnosis is a complex, iterative process involving patient data gathering, integration, and interpretation. Premature closure is a fallacious cognitive tendency to close the diagnostic process before sufficient data have been gathered. A proposed strategy to minimize premature closure is the use of a checklist to trigger metacognition (the process of monitoring one's own thinking). A number of studies have suggested the effectiveness of this strategy in classroom settings. This qualitative study examined the perceived usability of a metacognitive mnemonic checklist called the TWED checklist (where "T" stands for Threat, "W" for "What if I am wrong? What else?", "E" for Evidence, and "D" for Dispositional influence) in a real clinical setting.<h4>Method</h4>Two categories of participants, i.e., medical doctors (n = 11) and final-year medical students (Group 1, n = 5; Group 2, n = 10), took part in four separate focus group discussions. Nielsen's five dimensions of usability (learnability, effectiveness, memorability, errors, and satisfaction) and Pentland's narrative network were adapted as the frameworks for studying, respectively, the usability and the implementation of the checklist in a real clinical setting.<h4>Results</h4>Both categories of participants (medical doctors and medical students) found that the TWED checklist was easy to learn and effective in promoting metacognition. For medical student participants, items "T" and "W" were believed to be the two most useful aspects of the checklist, whereas for the doctor participants, it was item "D". Regarding its implementation, item "T" was applied iteratively, items "W" and "E" were applied when the outcomes did not turn out as expected, and item "D" was applied infrequently.
The one checkpoint where all four items were applied was after the initial history taking and physical examination had been performed to generate the initial clinical impression.<h4>Conclusion</h4>A metacognitive checklist aimed at checking cognitive errors may be a useful tool that can be implemented in the real clinical setting.
Project description:<h4>Background</h4>During the COVID-19 pandemic, the scientific world is in urgent need of new evidence on the treatment of COVID-19 patients. Reporting quality is crucial for transparent scientific publication, and concerns about data integrity, methodology, and transparency have been raised. Here, we assessed the adherence of observational studies comparing treatments of COVID-19, published in 2020, to the STROBE checklist.<h4>Methods</h4>Design: We performed a retrospective, cross-sectional study.<h4>Setting</h4>We conducted a systematic literature search in the Medline database. This study was performed at the RWTH Aachen University Hospital, Department of Anaesthesiology.<h4>Participants</h4>We extracted all observational studies on the treatment of COVID-19 patients from the year 2020.<h4>Main outcome measures</h4>The adherence of each publication to the STROBE checklist items was analysed. The journals' impact factor (IF), the country of origin, the kind of investigated treatment, and the month of publication were assessed.<h4>Results</h4>We analysed 147 observational studies and found a mean adherence of 45.6% to the STROBE checklist items. The percentage adherence per publication correlated significantly with the journal's IF (point estimate for the difference between the 1<sup>st</sup> and 4<sup>th</sup> quartiles 11.07%, 95% CI 5.12 to 17.02, p < 0.001). U.S. authors showed significantly higher adherence to the checklist than Chinese authors (mean difference 9.10%, SD 2.85%, p = 0.023).<h4>Conclusions</h4>We conclude that the reporting quality of observational studies on the treatment of COVID-19 throughout the year 2020 was poor; considerable improvement is needed.
Project description:BACKGROUND:Systematic reviews based on the critical appraisal of observational and analytic studies on HIV prevalence and risk factors for HIV transmission among men who have sex with men are very useful for health care decisions and planning. Such appraisal is particularly difficult, however, because the quality assessment tools available for observational and analytic studies are poorly established. METHODS:We reviewed the existing quality assessment tools for systematic reviews of observational studies and developed a concise quality assessment checklist to help standardise decisions regarding the quality of studies, with careful consideration of issues such as external and internal validity. RESULTS:A pilot version of the checklist was developed based on epidemiological principles, reviews of study designs, and existing checklists for the assessment of observational studies. The Quality Assessment Tool for Systematic Reviews of Observational Studies (QATSO) Score consists of five items: external validity (1 item), reporting (2 items), bias (1 item), and confounding factors (1 item). Expert opinions were sought, and the checklist was tested on manuscripts that fulfilled the inclusion criteria of a systematic review. Like all assessment scales, QATSO may oversimplify and generalise information, yet it is inclusive, simple, and practical to use, and allows comparability between papers. CONCLUSION:A specific tool that allows researchers to appraise and guide the quality of observational studies was developed and can be modified for similar studies in the future.
Project description:<h4>Introduction</h4>Complete reporting assists readers in confirming the methodological rigor and validity of findings and allows replication. The reporting quality of observational functional magnetic resonance imaging (fMRI) studies involving clinical participants is unclear.<h4>Objectives</h4>We sought to determine the quality of reporting in observational fMRI studies involving clinical participants.<h4>Methods</h4>We searched OVID MEDLINE for fMRI studies in six leading journals between January 2010 and December 2011. Three independent reviewers abstracted data from articles using an 83-item checklist adapted from the guidelines proposed by Poldrack et al. (Neuroimage 2008; 40: 409-14). We calculated the percentage of articles reporting each item of the checklist and the percentage of reported items per article.<h4>Results</h4>A random sample of 100 eligible articles was included in the study. Thirty-one items were reported by fewer than 50% of the articles, and 13 items were reported by fewer than 20% of the articles. The median percentage of reported items per article was 51% (ranging from 30% to 78%). Although most articles reported statistical methods for within-subject modeling (92%) and for between-subject group modeling (97%), none of the articles reported observed effect sizes for any negative finding (0%). Few articles reported justifications for fixed-effect inferences used for group modeling (3%) and temporal autocorrelations used to account for within-subject variances and correlations (18%). Other under-reported areas included whether and how the task design was optimized for efficiency (22%) and distributions of inter-trial intervals (23%).<h4>Conclusions</h4>This study indicates that substantial improvement in the reporting of observational clinical fMRI studies is required. Poldrack et al.'s guidelines provide a means of improving overall reporting quality.
Nonetheless, these guidelines are lengthy and may be at odds with strict word limits for publication; creating a shortened version of Poldrack's checklist that contains the most relevant items may be useful in this regard.