Is the information in systematic reviews published in nursing journals up-to-date? A cross-sectional study.
ABSTRACT: An up-to-date systematic review is important for researchers to decide whether to embark on new research or to continue supporting ongoing studies. The aim of this study is to examine the time taken between the last search, submission, acceptance and publication dates of systematic reviews published in nursing journals. Nursing journals indexed in Journal Citation Reports were first identified. Thereafter, systematic reviews published in these journals in 2014 were extracted from three databases. The quality of the systematic reviews was evaluated with AMSTAR. The last search, submission, acceptance, online publication and full publication dates, along with other characteristics of the systematic reviews, were recorded. The time taken between the five dates was then computed. Descriptive statistics were used to summarize the time differences; non-parametric statistics were used to examine the association between the time taken from the last search to full publication and other potential factors, including funding support, submission during holiday periods, the number of records retrieved from the databases, inclusion of a meta-analysis, and the quality of the review. A total of 107 nursing journals were included in this study, from which 1070 articles were identified through the database search. After screening for eligibility, 202 systematic reviews were included in the analysis. The quality of these reviews was low, with a median score of 3 out of 11. A total of 172 (85.1%), 72 (35.6%), 153 (75.7%) and 149 (73.8%) systematic reviews provided their last search, submission, acceptance and online publication dates, respectively. The median numbers of days from the last search to acceptance and to full publication were 393 (IQR: 212-609) and 669 (427-915), respectively, whereas that from submission to full publication was 365 (243-486). 
Moreover, the median numbers of days from the last search to submission and from submission to online publication were 167.5 (53.5-427) and 153 (92-212), respectively. No significant associations were found between the time lag and these potential factors. The median time from the last search to acceptance for systematic reviews published in nursing journals was 393 days. Readers of systematic reviews are advised to check the time elapsed since the last search date of a review in order to ensure that up-to-date evidence is consulted for effective clinical decision-making.
Project description:<h4>Objective</h4>We assessed the extent of lag times in the publication and indexing of network meta-analyses (NMAs).<h4>Study design</h4>This was a survey of published NMAs on drug interventions.<h4>Setting</h4>NMAs indexed in PubMed (searches updated in May 2020).<h4>Primary and secondary outcome measures</h4>Lag times were measured as the time between the last systematic search and the article submission, acceptance, online publication, indexing and Medical Subject Headings (MeSH) allocation dates. Time-to-event analyses were performed considering independent variables (geographical origin, Journal Impact Factor, Scopus CiteScore, open access status) (SPSS V.24, R/RStudio).<h4>Results</h4>We included 1245 NMAs. The median time from the last search to article submission was 6.8 months (204 days; IQR 95-381), and to publication was 11.6 months. Only 5% of authors updated their search after first submission. There was a slight decreasing historical trend in acceptance (rho=-0.087; p=0.010), online publication (rho=-0.080; p=0.008) and indexing (rho=-0.080; p=0.007) lag times. Journal Impact Factor influenced the MeSH allocation process, but not the other lag times. The comparison between open access and subscription journals showed negligible differences in acceptance, online publication and indexing lag times.<h4>Conclusion</h4>Efforts by authors to update their search before submission are needed to reduce evidence production time. Peer reviewers and editors should ensure authors' compliance with NMA standards. The accuracy of these findings depends on the accuracy of the metadata used; as we evaluated only NMAs on drug interventions, the results may not be generalisable to all types of studies.
Project description:<h4>Background</h4>The scholarly publishing system relies on external peer review. However, the duration of the publication process is a major concern for authors and funding bodies.<h4>Objective</h4>To evaluate the duration of the publication process in pharmacy practice journals compared with other biomedical journals indexed in PubMed.<h4>Methods</h4>All the articles published from 2009 to 2018 by the 33 pharmacy practice journals identified in the Mendes et al. study and indexed in PubMed were gathered as the study group. A comparison group was created through a random selection of 3000 PubMed PMIDs for each year of the study period. Articles with publication dates outside the study period were excluded. Metadata of both groups of articles were imported from PubMed. The duration of the editorial process was calculated over three periods: acceptance lag (days between 'submission date' and 'acceptance date'), lead lag (days between 'acceptance date' and 'online publication date'), and indexing lag (days between 'online publication date' and 'Entry date'). Null hypothesis significance tests and effect size measures were used to compare these periods between the two groups.<h4>Results</h4>The 33 pharmacy practice journals published 26,256 articles between 2009 and 2018. The comparison group random selection process resulted in a pool of 23,803 articles published in 5,622 different journals. Acceptance lag was 105 days (IQR 57-173) for pharmacy practice journals and 97 days (IQR 56-155) for the comparison group, with a negligible effect size (Cohen's d 0.081). Lead lag was 13 (IQR 6-35) and 23 days (IQR 9-45) for pharmacy practice and comparison journals, respectively, which resulted in a small effect size. Indexing lag was 5 days (IQR 2-46) and 4 days (IQR 2-12) for pharmacy practice and comparison journals, which also resulted in a small effect size. 
A slight positive time trend was found in the pharmacy practice acceptance lag, while slight negative trends were found for the lead and indexing lags in both groups.<h4>Conclusions</h4>The duration of the publication process in pharmacy practice journals is similar to that of a general random sample of articles from all disciplines.
Project description:Publication bias compromises the validity of systematic reviews. This problem can be addressed in part through searching clinical trials registries to identify unpublished studies. This study aims to determine how often systematic reviews published in emergency medicine journals include clinical trials registry searches. We identified all systematic reviews published in the 6 highest-impact emergency medicine journals between January 1 and December 31, 2013. Systematic reviews that assessed the effects of an intervention were further examined to determine whether the authors described searching a clinical trials registry and whether this search identified relevant unpublished studies. Of 191 articles identified through a PubMed search, 80 were confirmed to be systematic reviews. Our sample consisted of 41 systematic reviews that assessed a specific intervention. Eight of these 41 (20%) searched a clinical trials registry. For 4 of these 8 reviews, the registry search identified at least 1 relevant unpublished study. Systematic reviews published in emergency medicine journals do not routinely include searches of clinical trials registries. By helping authors identify unpublished trial data, the addition of registry searches may improve the validity of systematic reviews.
Project description:<h4>Objectives</h4>We audited a selection of systematic reviews published in 2013 and reported the proportion of reviews that searched for unpublished data, included unpublished data in analysis and assessed for publication bias.<h4>Design</h4>Audit of systematic reviews.<h4>Data sources</h4>We searched PubMed and Ovid MEDLINE In-Process & Other Non-Indexed Citations between 1 January 2013 and 31 December 2013 for the following journals: <i>Journal of the American Medical Association</i>, <i>The British Medical Journal</i>, <i>Lancet</i>, <i>Annals of Internal Medicine</i> and the <i>Cochrane Database of Systematic Reviews</i>. We also searched the Cochrane Library and included 100 randomly selected Cochrane reviews.<h4>Eligibility criteria</h4>Systematic reviews published in 2013 in the selected journals were included. Methodological reviews were excluded.<h4>Data extraction and synthesis</h4>Two reviewers independently reviewed each included systematic review. The following data were extracted: whether the review searched for grey literature or unpublished data, the sources searched, whether unpublished data were included in analysis, whether publication bias was assessed and whether there was evidence of publication bias.<h4>Main findings</h4>203 reviews were included for analysis. 36% (73/203) of studies did not describe any attempt to obtain unpublished studies or to search grey literature. 89% (116/130) of studies that sought unpublished data found them. 33% (68/203) of studies included an assessment of publication bias, and 40% (27/68) of these found evidence of publication bias.<h4>Conclusion</h4>A substantial proportion of the systematic reviews included in our study did not search for unpublished data. Evidence of publication bias was found in 40% of the published systematic reviews that assessed for it. Exclusion of unpublished data may lead to biased estimates of efficacy or safety in systematic reviews.
Project description:<h4>Background</h4>Publication bias is a major threat to the validity of systematic reviews. Searches of clinical trials registries can help to identify unpublished trials, though little is known about how often these resources are utilized. We assessed the usage and results of registry searches reported in systematic reviews published in major general medical journals.<h4>Methods</h4>This cross-sectional analysis includes data from systematic reviews assessing medical interventions which were published in one of six major general medical journals between July 2012 and June 2013. Two authors independently examined each published systematic review and all available supplementary materials to determine whether at least one clinical trials registry was searched.<h4>Results</h4>Of the 117 included systematic reviews, 41 (35%) reported searching a trials registry. Of the 29 reviews which also provided detailed registry search results, 15 (52%) identified at least one completed trial and 18 (62%) identified at least one ongoing trial.<h4>Conclusions</h4>Clinical trials registry searches are not routinely included in systematic reviews published in major medical journals. Routine examination of registry databases may allow a more accurate characterization of publication and outcome reporting biases and improve the validity of estimated effects of medical treatments.
Project description:Background:Titles and abstracts are the most read sections of biomedical papers. It is therefore important that abstracts transparently report both the beneficial and adverse effects of health care interventions and do not mislead the reader. Misleading reporting, interpretation, or extrapolation of study results is called "spin". In this study, we will assess whether adverse effects of orthodontic interventions were reported or considered in the abstracts of both Cochrane and non-Cochrane reviews, and whether spin was present and, if so, of what type. Methods:Eligibility criteria were defined for the type of study designs, participants, interventions, outcomes, and settings. We will include systematic reviews of clinical orthodontic interventions published in the five leading orthodontic journals and in the Cochrane Database. Empty reviews will be excluded. We will manually search eligible reviews published between 1 August 2009 and 31 July 2019. Data collection forms were developed a priori. All study selection and data extraction procedures will be conducted by two reviewers independently. Our main outcomes will be the prevalence of reported or considered adverse effects of orthodontic interventions in the abstracts of systematic reviews and the prevalence of "spin" related to these adverse effects. We will also record the prevalence of three subtypes of spin, i.e., misleading reporting, misleading interpretation, and misleading extrapolation. All statistics will be calculated for the following groups: (1) all journals individually, (2) all journals together, and (3) the five leading orthodontic journals and the Cochrane Database of Systematic Reviews separately. Generalized linear models will be developed to compare the various groups. 
Discussion:We expect that our results will raise awareness of the importance of reporting and considering adverse effects, and of the presence of spin related to these effects, in abstracts of systematic reviews of orthodontic interventions. This is important because incomplete and inadequate reporting, interpretation, or extrapolation of findings on adverse effects in abstracts of systematic reviews can mislead readers and could lead to inadequate clinical practice. Our findings could have policy implications for judging the acceptability for publication of systematic reviews of orthodontic interventions.
Project description:<h4>Background</h4>An a priori design is essential to reduce the risk of bias in systematic reviews (SRs). To this end, authors can register their SR with PROSPERO, and/or publish a SR protocol in an academic journal. The latter has the advantage that the manuscript for the SR protocol is usually peer-reviewed. However, since authors ought not to begin/continue the SR before their protocol has been accepted for publication, it is crucial that SR protocols are processed in a timely manner. Our main aim was to descriptively analyse the peer review process of SR protocols published in 'BMC Systematic Reviews' from 2012 to 2017.<h4>Methods</h4>We systematically searched MEDLINE via PubMed for all SR protocols published in 'BMC Systematic Reviews' between 2012 and 2017, except for protocols for overviews, scoping reviews or realist reviews. Data were extracted from the SR protocols and Open Peer Review reports. For each round of peer review, two researchers judged the extent of revision (minor/major) based on the reviewer reports. Their content was further investigated by two researchers in a random 10%-sample using PRISMA-P as a guideline. All data were analysed descriptively.<h4>Results</h4>We identified 544 eligible protocols published in 'BMC Systematic Reviews' between 2012 and 2017. Of those, 485 (89.2%) also registered the SR in PROSPERO, the majority (87.4%) before first submission of the manuscript for the SR protocol (median 49 days). The absolute number of published SR protocols increased from 2012 to 2017 (21 vs 145 protocols), as did the median processing time (61 vs 142 days from submission to acceptance) and the proportion of protocols requiring a major revision after first peer review (19.1% vs 52.4%). Reviewer comments most frequently addressed the PRISMA-P item 'Eligibility criteria'. 
Overall, 76.0% of the reviewer comments suggested more transparency.<h4>Conclusions</h4>The number of published SR protocols increased over the years, but so did the processing time. In 2017, it took several months from submission to acceptance, which is problematic from an author's perspective. New models of peer review, such as post-publication peer review for SR protocols, should be investigated. This could probably be realized through PROSPERO.
Project description:Systematic reviews and meta-analyses that do not include unpublished data in their analyses may be prone to publication bias, which in some cases has been shown to have deleterious consequences on determining the efficacy of interventions. We retrieved systematic reviews and meta-analyses published in the past 8 years (January 1, 2007-December 31, 2015) from the top 20 journals in the Pregnancy and Childbirth literature, as rated by Google Scholar's h5-index. A meta-epidemiologic analysis was performed to determine the frequency with which authors searched clinical trials registries for unpublished data. A PubMed search retrieved 372 citations, 297 of which were deemed to be either a systematic review or a meta-analysis and were included for analysis. Twelve (4%) of these searched at least one WHO-approved clinical trials registry or clinicaltrials.gov. Systematic reviews and meta-analyses published in pregnancy and childbirth journals do not routinely report searches of clinical trials registries. Including these registries in systematic reviews may be a promising avenue to limit publication bias if registry searches locate unpublished trial data that could be used in the systematic review.
Project description:<h4>Background</h4>Wikipedia, the multilingual encyclopedia, was founded in 2001 and is the world's largest and most visited online general reference website. It is widely used by health care professionals and students. The inclusion of journal articles in Wikipedia is of scholarly interest, but the time taken for a journal article to be included in Wikipedia, from the moment of its publication to its incorporation into Wikipedia, is unclear.<h4>Objective</h4>We aimed to determine the ranking of the most cited journals by their representation in the English-language medical pages of Wikipedia. In addition, we evaluated the number of days between publication of journal articles and their citation in Wikipedia medical pages, treating this measure as a proxy for the information-diffusion rate.<h4>Methods</h4>We retrieved the dates when articles were included in Wikipedia and the dates of journal publication from Crossref by using an application programming interface.<h4>Results</h4>From 11,325 Wikipedia medical articles, we identified citations to 137,889 journal articles from over 15,000 journals. There was a large spike in the number of journal articles published in or after 2002 that were cited by Wikipedia. The higher the importance of a Wikipedia article, the higher the mean number of journal citations it contained (top article, 48.13 [SD 33.67]; lowest article, 6.44 [SD 9.33]). However, the importance of the Wikipedia article did not affect the speed of reference addition. The Cochrane Database of Systematic Reviews was the journal most cited by Wikipedia, followed by The New England Journal of Medicine and The Lancet. The multidisciplinary journals Nature, Science, and the Proceedings of the National Academy of Sciences were among the top 10 journals with the highest Wikipedia medical article citations. 
For the top biomedical journal papers cited in Wikipedia's medical pages in 2016-2017, it took about 90 days (3 months) for a citation to appear in Wikipedia.<h4>Conclusions</h4>We found evidence of "recentism," which refers to the preferential citation of recently published journal articles in Wikipedia. Traditional high-impact medical and multidisciplinary journals were extensively cited by Wikipedia, suggesting that Wikipedia medical articles have robust underpinnings. In keeping with the Wikipedia policy of citing reviews/secondary sources in preference to primary sources, the Cochrane Database of Systematic Reviews was the most referenced journal.
Project description:<h4>Rationale, aims, and objectives</h4>COVID-19 has caused an ongoing public health crisis. Many systematic reviews and meta-analyses have been performed to synthesize evidence for better understanding this new disease. However, some concerns have been raised about rapid COVID-19 research. This meta-epidemiological study aims to methodologically assess the current systematic reviews and meta-analyses on COVID-19.<h4>Methods</h4>We searched various databases for systematic reviews with meta-analyses published between 1 January 2020 and 31 October 2020. We extracted their basic characteristics, data analyses, evidence appraisal, and assessment of publication bias and heterogeneity.<h4>Results</h4>We identified 295 systematic reviews on COVID-19. The median time from submission to acceptance was 33 days. Among these systematic reviews, 73.9% evaluated clinical manifestations or comorbidities of COVID-19. Stata was the most used software programme (43.39%). The odds ratio was the most used effect measure (34.24%). Moreover, 28.14% of the systematic reviews did not present evidence appraisal. Among those reporting risk-of-bias results, 14.64% of studies had a high risk of bias. Egger's test was the most used method for assessing publication bias (38.31%), while 38.66% of the systematic reviews did not assess publication bias. The I<sup>2</sup> statistic was widely used for assessing heterogeneity (92.20%); many meta-analyses had high I<sup>2</sup> values. Among the meta-analyses using the random-effects model, 75.82% did not report the methods for model implementation; among those that did, the DerSimonian-Laird method was the most commonly used.<h4>Conclusions</h4>The current systematic reviews and meta-analyses on COVID-19 might suffer from low transparency, high heterogeneity, and suboptimal statistical methods. 
It is recommended that future systematic reviews on COVID-19 strictly follow well-developed guidelines. Sensitivity analyses may be performed to examine how the synthesized evidence might depend on different methods for appraising evidence, assessing publication bias, and implementing meta-analysis models.