A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine.
ABSTRACT: BACKGROUND: Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary science in the field of head and face disorders with an open access and open peer review publication platform. The objective of this study is to evaluate the characteristics of submissions, the effectiveness of open peer reviewing, and factors biasing the acceptance or rejection of submitted manuscripts. METHODS: A 1-year period of submissions and all concomitant journal operations were retrospectively analyzed. The analysis included submission rate, reviewer rate, acceptance rate, article type, and differences in duration for peer reviewing, final decision, publishing, and PubMed inclusion. Statistical analysis included the Mann-Whitney U test, Chi-square test, regression analysis, and binary logistic regression. RESULTS: HFM received 126 articles (10.5 articles/month) for consideration in the first year. Submissions increased over time, but not significantly. Peer reviewing was completed for 82 articles and resulted in an acceptance rate of 48.8%. In total, 431 peer reviewers were invited (5.3/manuscript), of whom 40.4% agreed to review. The mean peer review time was 37.8 days. The mean time between submission and acceptance (including time for revision) was 95.9 days. Accepted papers were published on average 99.3 days after submission. The mean time between manuscript submission and PubMed inclusion was 101.3 days. The main article types submitted to HFM were original research, reviews, and case reports. Article type had no influence on acceptance or rejection. The variable 'number of invited reviewers' was the only significant (p < 0.05) predictor of manuscript rejection. CONCLUSION: The positive trend in submissions confirms the need for publication platforms for multidisciplinary science. HFM's mean peer review time is below the six-week maximum turnaround the Editors set for themselves. Rejection of manuscripts was associated with the number of invited reviewers; none of the other parameters tested had any effect on the final decision. Thus, HFM's ethical policy, which is based on open access, open peer review, and transparency of journal operations, is free of 'editorial bias' in accepting manuscripts. ORIGINAL DATA: Provided as a downloadable tab-delimited text file (URL and variable code available under section 'additional files').
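Because the journal releases its raw data as a tab-delimited text file, the reported binary logistic regression is straightforward to reproduce in outline. The sketch below is illustrative only: the file name and column names are hypothetical placeholders, and the actual variable codes are documented under 'additional files'.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the real variable codes ship
# with the journal's tab-delimited data file ('additional files').
df = pd.read_csv("hfm_submissions.txt", sep="\t")

# Binary logistic regression: rejection (1 = rejected, 0 = accepted)
# as a function of the number of invited reviewers.
model = smf.logit("rejected ~ invited_reviewers", data=df).fit()
print(model.summary())
print(np.exp(model.params))  # coefficients as odds ratios
```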
Project description:Peer review is the gold standard for scientific communication, but its ability to guarantee the quality of published research remains difficult to verify. Recent modeling studies suggest that peer review is sensitive to reviewer misbehavior, and it has been claimed that referees who sabotage work they perceive as competition may severely undermine the quality of publications. Here we examine which aspects of suboptimal reviewing practices most strongly impact quality, and test different mitigating strategies that editors may employ to counter them. We find that the biggest hazard to the quality of published literature is not selfish rejection of high-quality manuscripts but indifferent acceptance of low-quality ones. Bypassing or blacklisting bad reviewers and consulting additional reviewers to settle disagreements can reduce but not eliminate the impact. The other editorial strategies we tested do not significantly improve quality, but pairing manuscripts to reviewers unlikely to selfishly reject them and allowing revision of rejected manuscripts minimize rejection of above-average manuscripts. In its current form, peer review offers few incentives for impartial reviewing efforts. Editors can help, but structural changes are more likely to have a stronger impact.
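The paper's own model is more elaborate, but a toy simulation conveys why indifferent acceptance can hurt more than selfish rejection: indifferent reviewers let low-quality work into the published pool, while selfish reviewers merely truncate its top. A minimal sketch, with all thresholds and reviewer proportions invented for illustration:

```python
import random

def mean_published_quality(n=100_000, p_indifferent=0.0, p_selfish=0.0, seed=1):
    """Toy model (not the authors' published model): manuscript quality
    is uniform on [0, 1]; two reviewers each vote accept/reject, and a
    manuscript is published only if both accept. Impartial reviewers
    accept quality > 0.5; indifferent reviewers accept anything;
    selfish reviewers reject strong work (quality > 0.7) they perceive
    as competition but otherwise apply the impartial threshold."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n):
        q = rng.random()
        votes = []
        for _ in range(2):
            u = rng.random()
            if u < p_indifferent:
                votes.append(True)            # accepts regardless of quality
            elif u < p_indifferent + p_selfish:
                votes.append(0.5 < q <= 0.7)  # rejects competing strong work
            else:
                votes.append(q > 0.5)         # impartial threshold
        if all(votes):
            total += q
            count += 1
    return total / count

print(mean_published_quality())                    # impartial baseline
print(mean_published_quality(p_indifferent=0.4))   # indifferent acceptance
print(mean_published_quality(p_selfish=0.4))       # selfish rejection
```

Under these invented parameters, indifferent acceptance lowers the mean quality of the published set slightly more than selfish rejection does, echoing the paper's qualitative finding; the magnitudes depend entirely on the assumed thresholds and proportions.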
Project description:OBJECTIVE:To determine whether researchers are submitting manuscripts and peer reviews to BMJ journals out of hours and whether this has changed over time. DESIGN:Observational study of research manuscripts and peer reviews submitted between 2012 and 2019 for which an author's address could be geocoded. SETTING:Online BMJ submission systems for two large general medical journals. MAIN OUTCOME MEASURES:Manuscript and peer review submissions on weekends, on national holidays, and by hour of day (to determine early mornings and late nights). Logistic regression was used to estimate the probability of manuscript and peer review submissions on weekends or holidays. RESULTS:The analyses included more than 49 000 manuscript submissions and 76 000 peer reviews. Little change over time was seen in the average probability of manuscript or peer review submissions occurring on weekends or holidays. The levels of out of hours work were high, with average probabilities of 0.14 to 0.18 for work on the weekends and 0.08 to 0.13 for work on holidays compared with days in the same week. Clear and consistent differences were seen between countries. Chinese researchers most often worked at weekends and at midnight, whereas researchers in Scandinavian countries were among the most likely to submit during the week and the middle of the day. CONCLUSION:The differences between countries that are persistent over time show that a "culture of overwork" is a literal thing, not just a figure of speech.
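As a rough sketch of the outcome measure, a weekend flag can be computed directly from submission timestamps. The file and column names below are hypothetical, and the BMJ analysis itself fitted a logistic regression comparing weekends with weekdays of the same week rather than using this raw share.

```python
import pandas as pd

# Hypothetical schema: one row per submission with a timestamp column.
df = pd.read_csv("submissions.csv", parse_dates=["submitted_at"])

# Saturday = 5, Sunday = 6 in pandas' day-of-week numbering.
df["weekend"] = df["submitted_at"].dt.dayofweek >= 5

# Raw weekend share per year; the published analysis instead modeled
# the probability with logistic regression, adjusting within weeks.
print(df.groupby(df["submitted_at"].dt.year)["weekend"].mean())
```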
Project description:The International Conference on Bioinformatics (InCoB), the annual conference of the Asia-Pacific Bioinformatics Network (APBioNet), is hosted in one of the countries of the Asia-Pacific region. The 2010 conference was awarded to Japan and attracted more than one hundred high-quality research paper submissions. Thorough peer reviewing resulted in 47 (43.5%) accepted papers out of 108 submissions. Submissions from Japan, R.O. Korea, P.R. China, Australia, Singapore, and the U.S.A. accounted for 43.8% of submissions and contributed 57.4% of accepted papers. Manuscripts originating from Taiwan and India added up to 42.8% of submissions and 28.3% of acceptances. The fifteen articles published in this BMC Bioinformatics supplement cover disease informatics, structural bioinformatics and drug design, biological databases and software tools, signaling pathways, gene regulatory and biochemical networks, evolution and sequence analysis.
Project description:Background:Developing a comprehensive, reproducible literature search is the basis for a high-quality systematic review (SR). Librarians and information professionals, as expert searchers, can improve the quality of systematic review searches, methodology, and reporting. Likewise, journal editors and authors often seek to improve the quality of published SRs and other evidence syntheses through peer review. Health sciences librarians contribute to systematic review production, but little is known about their involvement in peer reviewing SR manuscripts. Methods:This survey aimed to assess how frequently librarians are asked to peer review systematic review manuscripts and to determine characteristics associated with those invited to review. The survey was distributed to a purposive sample through three health sciences information professional listservs. Results:There were 291 complete survey responses. Results indicated that 22% (n = 63) of respondents had been asked by journal editors to peer review systematic review or meta-analysis manuscripts. Of the 78% (n = 228) of respondents who had not already been asked, 54% (n = 122) would peer review, and 41% (n = 93) might peer review. Only 4% (n = 9) would not review a manuscript. Respondents had peer reviewed manuscripts for 38 unique journals and believed they were asked because of their professional expertise. Of respondents who had declined to peer review (32%, n = 20), the most common explanation was "not enough time" (60%, n = 12), followed by "lack of expertise" (50%, n = 10). The vast majority of respondents (95%, n = 40) had "rejected or recommended a revision of a manuscript" after peer review. They based their decision on the "search methodology" (57%, n = 36), "search write-up" (46%, n = 29), or "entire article" (54%, n = 34). Those who selected "other" (37%, n = 23) listed a variety of reasons for rejection, including problems or errors in the PRISMA flow diagram; tables of included, excluded, and ongoing studies; data extraction; reporting; and pooling methods. Conclusions:Despite being experts in conducting literature searches and supporting SR teams through the review process, few librarians have been asked to review SR manuscripts, or even just search strategies; yet many are willing to provide this service. Editors should involve experienced librarians in peer review, and we suggest some strategies to consider.
Project description:The timely publication of findings in peer-reviewed journals is a primary goal of clinical research. In clinical trials, the processes leading to publication can be complex, ranging from the choice and prioritization of analytic topics through journal submission and revision. As little literature exists on the publication process for multicenter trials, we describe the development, implementation, and effectiveness of such a process in a multicenter trial. The Hepatitis C Antiviral Long-Term Treatment against Cirrhosis (HALT-C) trial included a data coordinating center (DCC) and clinical centers that recruited and followed more than 1,000 patients. Publication guidelines were approved by the steering committee, and the publications committee monitored the publication process from selection of topics to publication. A total of 73 manuscripts were published in 23 peer-reviewed journals. When manuscripts were closely tracked, the median time for analyses and drafting of manuscripts was 8 months. The median time for data analyses was 5 months and the median time for manuscript drafting was 3 months. The median time for publications committee review, submission, and journal acceptance was 7 months, and the median time from analytic start to journal acceptance was 18 months. Effective publication guidelines must be comprehensive, implemented early in a trial, and actively managed by study investigators. Successful collaboration, such as in the HALT-C trial, can serve as a model for others involved in multidisciplinary and multicenter research programs. The HALT-C Trial was registered with clinicaltrials.gov (NCT00006164).
Project description:OBJECTIVES:During a public health crisis, it is important for medical journals to share information in a timely manner while maintaining a robust peer-review process. This review reports and analyzes The Laryngoscope's publication trends and practices during the COVID-19 pandemic, before the COVID-19 pandemic, and during previous pandemics. METHODS:Comprehensive review of two databases (PubMed and The Laryngoscope) was performed. COVID-19 manuscripts (published in The Laryngoscope during the first 4 months of the pandemic) were identified and compared to manuscripts pertaining to historic pandemics (published in The Laryngoscope during the first 2 years of each outbreak). Keywords included "The Laryngoscope," "flu," "pandemic," "influenza," "SARS," "severe acute respiratory syndrome," "coronavirus," "COVID-19," and "SARS-CoV-2." Data were obtained from The Laryngoscope to characterize publication trends during and before the COVID-19 pandemic. RESULTS:From March 1, 2020 to June 30, 2020, The Laryngoscope had 203 COVID-19 submissions. As of July 8, 2020, 20 (9.9%) were accepted, 117 (57.6%) under review, and 66 (32.5%) rejected. During the first 4 months of the pandemic, 18 COVID-19 manuscripts were published. Mean number of days from submission to online publication was 45, compared to 170 in 2018 and 196 in 2019. A total of 4 manuscripts concerning previous pandemics were published during the initial 2 years of each outbreak. CONCLUSIONS:The Laryngoscope rapidly disseminated quality publications during the COVID-19 pandemic by upholding a robust peer-review process while expediting editorial steps, highlighting relevant articles online, and providing open access to make COVID-19-related publications available as quickly as possible.
Project description:Peer review is important to the scientific process. However, the present system has been criticised and accused of bias, lack of transparency, and failure to detect significant breakthroughs and errors. At the British Journal of Surgery (BJS), after surveying authors' and reviewers' opinions on peer review, we piloted an open online forum with the aim of improving the peer review process. In December 2014, a web-based survey assessing attitudes towards open online review was sent to reviewers with a BJS account in Scholar One. From April to June 2015, authors were invited to allow their manuscripts to undergo online peer review in addition to the standard peer review process. The quality of each review was evaluated by editors and editorial assistants using a validated instrument based on a Likert scale. The survey was sent to 6635 reviewers. In all, 1454 (21.9%) responded. Support for online peer review was strong, with only 10% stating that they would not subject their manuscripts to online peer review. The most prevalent concern was about intellectual property, being highlighted in 118 of 284 comments (41.5%). Out of 265 eligible manuscripts, 110 were included in the online peer review trial. Around 7000 potential reviewers were invited to review each manuscript. In all, 44 of 110 manuscripts (40%) received 100 reviews from 59 reviewers, alongside 115 conventional reviews. The quality of the open forum reviews was lower than that of conventional reviews (2.13 ± 0.75 versus 2.84 ± 0.71, P < 0.001). Open online peer review is feasible in this setting, but it attracts few reviews, of lower quality than conventional peer reviews.
Project description:Peer review provides the foundation for the scholarly publishing system. The conventional system relies on the authors of articles serving as reviewers of their colleagues' manuscripts on a collaborative basis. However, authors complain of receiving an overwhelming number of invitations to peer review: they feel they are asked to review many more manuscripts than their own participation in the scholarly publishing system would warrant. The high number of scientific journals and the existence of predatory journals have been reported as potential causes of this excessive review load. In this editorial, we demonstrate that the number of reviewers required to publish a given number of articles depends exclusively on the journals' rejection rate and the number of reviewers intended per manuscript. Several initiatives to overcome the peer review crisis are suggested.
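The relationship the editorial describes can be written out directly. Under the simplifying assumptions that every submission, accepted or rejected, receives a full set of reviews and that resubmissions elsewhere count as new submissions, publishing P articles at rejection rate r requires P/(1 − r) submissions and therefore k·P/(1 − r) reviews for k reviewers per manuscript:

```python
def reviews_required(published, rejection_rate, reviewers_per_manuscript):
    """Total reviews needed to publish `published` articles, assuming
    every submission is fully reviewed: k * P / (1 - r)."""
    submissions = published / (1.0 - rejection_rate)
    return reviewers_per_manuscript * submissions

# Example: 100 published articles at a 70% rejection rate with
# 2 reviewers per manuscript -> ~333 submissions -> ~667 reviews.
print(round(reviews_required(100, 0.70, 2)))
```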
Project description:Peer review may be "single-blind," in which reviewers are aware of the names and affiliations of paper authors, or "double-blind," in which this information is hidden. Noting that computer science research often appears first or exclusively in peer-reviewed conferences rather than journals, we study these two reviewing models in the context of the 10th Association for Computing Machinery International Conference on Web Search and Data Mining, a highly selective venue (15.6% acceptance rate) in which expert committee members review full-length submissions for acceptance. We present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers and preferentially bid for papers from top universities and companies. Once papers are allocated to reviewers, single-blind reviewers are significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors, top universities, and top companies. The estimated odds multipliers are tangible, at 1.63, 1.58, and 2.10, respectively.
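To make the reported odds multipliers concrete: a multiplier m scales the odds p/(1 − p), not the probability itself. The helper below is illustrative arithmetic only; it treats the venue-wide acceptance rate as the baseline, which the paper does not claim.

```python
def apply_odds_multiplier(p_base, m):
    """Probability implied by multiplying baseline odds by m."""
    odds = p_base / (1.0 - p_base)
    return odds * m / (1.0 + odds * m)

# At a 15.6% baseline acceptance probability, the 2.10 multiplier for
# top companies corresponds to roughly a 28% acceptance probability.
print(apply_odds_multiplier(0.156, 2.10))  # ~0.28
```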
Project description:Importance:The use of machine learning applications related to health is rapidly increasing and may have the potential to profoundly affect the field of health care. Objective:To analyze submissions to a popular machine learning for health venue to assess the current state of research, including areas of methodologic and clinical focus, limitations, and underexplored areas. Design, Setting, and Participants:In this data-driven qualitative analysis, 166 accepted manuscript submissions to the Third Annual Machine Learning for Health workshop at the 32nd Conference on Neural Information Processing Systems on December 8, 2018, were analyzed to understand research focus, progress, and trends. Experts reviewed each submission against a rubric to identify key data points, statistical modeling and analysis of submitting authors were performed, and research topics were quantitatively modeled. Finally, an iterative discussion of topics common in submissions and invited speakers at the workshop was held to identify key trends. Main Outcomes and Measures:Frequency and statistical measures of methods, topics, goals, and author attributes were derived from an expert review of submissions guided by a rubric. Results:Of the 166 accepted submissions, 58 (34.9%) had clinician involvement, and 83 submissions (50.0%) that focused on clinical practice included clinical collaborators. A total of 97 data sets (58.4%) used in submissions were publicly available or required a standard registration process. Clinical practice was the most common application area (70 manuscripts [42.2%]), with brain and mental health (25 [15.1%]), oncology (21 [12.7%]), and cardiovascular (19 [11.4%]) being the most common specialties. Conclusions and Relevance:Trends in machine learning for health research indicate the importance of well-annotated, easily accessed data and the benefit from greater clinician involvement in the development of translational applications.
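The abstract does not say which quantitative topic model was used; latent Dirichlet allocation is a common choice for this kind of analysis, and a minimal scikit-learn sketch on a stand-in corpus looks like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in corpus; in the study each document would be a submission.
abstracts = [
    "convolutional networks for tumor segmentation in oncology imaging",
    "recurrent models predicting cardiovascular risk from ehr records",
    "mobile sensing features for mental health prediction",
    "deep learning classification of tumors from pathology slides",
    "time series models of cardiac outcomes in electronic health records",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms per inferred topic.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```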