Project description:Delays in peer-reviewed publication may have consequences both for the assessment of scientific prowess in academia and for the communication of important information to the knowledge-receptor community. We present an analysis of the perspectives of authors publishing in conservation biology journals regarding the importance of speed in peer review and how to improve review times. Authors were invited to take part in an online questionnaire, the data from which were subjected to both qualitative (open coding, categorizing) and quantitative (generalized linear models) analyses. We received 637 responses to a total of 6,547 e-mail invitations sent. Peer-review speed was generally perceived as slow, with authors experiencing a typical turnaround time of 14 weeks while their perceived optimal review time was six weeks. Male and younger respondents seemed to have higher expectations of review speed than female and older respondents. The majority of participants attributed lengthy review times to 'stress' on the peer-review system (i.e., reviewer and editor fatigue), while editor persistence and journal prestige were believed to speed up the review process. Negative consequences of lengthy review times appear to be greater for early-career researchers and can also affect author morale (e.g., motivation or frustration). Competition among colleagues was also of concern to respondents. Incentivizing peer review was among the top suggested changes to the system, along with training graduate students in peer review, increased editorial persistence, and changes to the norms of peer review such as opening the review process to the public. It is clear that the authors surveyed in this study view the peer-review system as under stress, and we encourage scientists and publishers to push the envelope with new peer-review models.
Project description:Background: To inform training program development and curricular initiatives, quantitative descriptions of the disciplinary training of research teams publishing in top-tier clinical and epidemiological journals are needed. Our objective was to assess whether the interdisciplinary academic training and teamwork of authors publishing original research in 15 top-tier journals varied by year of publication (2000/2010/2020), type of journal (epidemiological/general clinical/specialty clinical), corresponding author gender, and time since the corresponding author completed formal training relative to the article publication date (<5/≥5 years). Methods and findings: We invited corresponding authors of original research articles to participate in an online survey (n = 103; response rate = 8.3% of 1240 invited authors). In bivariate analyses, year of publication, type of journal, gender, and recency of training were not significantly associated with interdisciplinary team composition, with whether a co-author with epidemiological or biostatistical training was involved in any research stage (design/analysis/interpretation/reporting), or with participants' confidence in their own or their co-authors' epidemiological or biostatistical expertise (p > 0.05 for each comparison). Two exceptions emerged: all participants with more recent epidemiological training had co-author(s) with epidemiological training contribute to study design and interpretation, and participants who published in 2020 were more likely to report being extremely confident in their epidemiological abilities. Conclusions: This study was the first to quantify interdisciplinary training among research teams publishing in epidemiological and clinical journals. Our quantitative results show that research published in top-tier journals generally represents interdisciplinary teamwork and that interdisciplinary training may broaden publication options. Our qualitative results show that researchers view interdisciplinary training favorably.
Project description:Background: In biomedical research, there have been numerous scandals highlighting conflicts of interest (COIs) leading to significant bias in judgment and questionable practices. Academic institutions, journals, and funding agencies have developed and enforced policies to mitigate issues related to COI, especially surrounding financial interests. After a case of editorial COI in a prominent bioethics journal, there is concern that the same level of oversight applied to COIs in the biomedical sciences may not extend to the field of bioethics. In this study, we examined the availability and comprehensiveness of COI policies for authors, peer reviewers, and editors of bioethics journals. Methods: After developing a codebook, we analyzed the content of the online COI policies of 63 bioethics journals, along with policy information provided by journal editors that was not publicly available. Results: Just over half of the bioethics journals had COI policies for authors (57%), and only 25% had policies for peer reviewers and 19% for editors. There was significant variation among policies regarding definitions, the types of COIs described, the management mechanisms, and the consequences for noncompliance. Definitions and descriptions centered on financial COIs, followed by personal and professional relationships. Almost all COI policies required disclosure of interests by authors as the primary management mechanism. Very few journals outlined consequences for noncompliance with COI policies or provided additional resources. Conclusion: Compared with findings from studies of biomedical journals, a much lower percentage of bioethics journals have COI policies, and these vary substantially in content. The bioethics publishing community needs to develop robust policies for authors, peer reviewers, and editors, and these should be made publicly available to enhance academic and public trust in bioethics scholarship.
Project description:To increase transparency in science, some scholarly journals are publishing peer review reports. But it is unclear how this practice affects the peer review process. Here, we examine the effect of publishing peer review reports on referee behavior in five scholarly journals involved in a pilot study at Elsevier. By considering 9,220 submissions and 18,525 reviews from 2010 to 2017, we measured changes both before and during the pilot and found that publishing reports did not significantly compromise referees' willingness to review, their recommendations, or their turnaround times. Younger and non-academic scholars were more willing to accept review invitations and provided more positive and objective recommendations. Male referees tended to write more constructive reports during the pilot. Only 8.1% of referees agreed to reveal their identity in the published report. These findings suggest that open peer review does not compromise the process, at least when referees are able to protect their anonymity.
Project description:Objectives: To determine the extent and content of academic publishers' and scientific journals' guidance for authors on the use of generative artificial intelligence (GAI). Design: Cross-sectional, bibliometric study. Setting: Websites of academic publishers and scientific journals, screened on 19-20 May 2023, with the search updated on 8-9 October 2023. Participants: Top 100 largest academic publishers and top 100 highly ranked scientific journals, regardless of subject, language, or country of origin. Publishers were identified by the total number of journals in their portfolio, and journals were identified through the Scimago journal rank using the Hirsch index (H index) as an indicator of journal productivity and impact. Main outcome measures: The primary outcomes were the content of GAI guidelines listed on the websites of the top 100 academic publishers and scientific journals, and the consistency of guidance between the publishers and their affiliated journals. Results: Among the top 100 largest publishers, 24% provided guidance on the use of GAI, of which 15 (63%) were among the top 25 publishers. Among the top 100 highly ranked journals, 87% provided guidance on GAI. Of the publishers and journals with guidelines, the inclusion of GAI as an author was prohibited in 96% and 98%, respectively. Only one journal (1%) explicitly prohibited the use of GAI in the generation of a manuscript, and two (8%) publishers and 19 (22%) journals indicated that their guidelines exclusively applied to the writing process. When disclosing the use of GAI, 75% of publishers and 43% of journals included specific disclosure criteria. Where to disclose the use of GAI varied, including in the methods or acknowledgments, in the cover letter, or in a new section. Variability was also found in how to access GAI guidelines shared between journals and publishers. GAI guidelines in 12 journals directly conflicted with those developed by the publishers. The guidelines developed by top medical journals were broadly similar to those of academic journals. Conclusions: Guidelines on the use of GAI by authors are lacking at some top publishers and journals. Among those that provided guidelines, the allowable uses of GAI and how it should be disclosed varied substantially, with this heterogeneity persisting in some instances among affiliated publishers and journals. Lack of standardization places a burden on authors and could limit the effectiveness of the regulations. As GAI continues to grow in popularity, standardized guidelines to protect the integrity of scientific output are needed.
Project description:Objective: To assess the accuracy of self-reported financial conflict-of-interest (COI) disclosures in the New England Journal of Medicine (NEJM) and the Journal of the American Medical Association (JAMA) within the requisite disclosure period prior to article submission. Design: Cross-sectional investigation. Data sources: Original clinical-trial research articles published in NEJM (n=206) or JAMA (n=188) from 1 January 2017 to 31 December 2017; self-reported COI disclosure forms submitted to NEJM or JAMA with the authors' published articles; Open Payments website (from database inception; latest search: August 2019). Main outcome measures: Financial data reported to Open Payments from 2014 to 2016 (a time period that included all subjects' requisite disclosure windows) were compared with the self-reported disclosure forms submitted to the journals. Payments selected for analysis were defined by Open Payments as 'general payments.' Payment types were categorised as 'disclosed,' 'undisclosed,' 'indeterminate,' or 'unrelated.' Results: Thirty-one articles from NEJM and 31 articles from JAMA met inclusion criteria. The physician-authors (n=118) received a combined total of US$7.48 million. Of the 106 authors (89.8%) who received payments, 86 (81.1%) received undisclosed payments. The top 23 most highly compensated authors received US$6.32 million, of which US$3.00 million (47.6%) was undisclosed. Conclusions: High payment amounts, as well as high proportions of undisclosed financial compensation regardless of amount received, constituted potential COIs in two influential US medical journals. Further research is needed to explain why such high proportions of general payments were undisclosed and whether journals that rely on self-reported COI disclosure need to reconsider their policies.
Project description:COVID-19-related (vs. non-related) articles appear to be more expeditiously processed and published in peer-reviewed journals. We aimed to evaluate: (i) whether COVID-19-related preprints were favored for publication, (ii) preprinting trends and public discussion of the preprints, and (iii) the relationship between the publication topic (COVID-19-related or not) and quality issues. Manuscripts deposited at bioRxiv and medRxiv between January 1 and September 27, 2020 were assessed for the probability of publication in peer-reviewed journals, and those published were evaluated for submission-to-acceptance time. The extent of public discussion was assessed based on Altmetric and Disqus data. The Retraction Watch Database and PubMed were used to explore the retraction of COVID-19 and non-COVID-19 articles and preprints. With adjustment for the preprinting server and number of deposited versions, COVID-19-related preprints were more likely to be published within 120 days of deposition of the first version (OR = 1.96, 95% CI: 1.80-2.14) as well as over the entire observed period (OR = 1.39, 95% CI: 1.31-1.48). Submission-to-acceptance time was 35.85 days (95% CI: 32.25-39.45) shorter for COVID-19 articles. Public discussion of preprints was modest, and COVID-19 articles were overrepresented in the pool of retracted articles in 2020. Current data suggest a preference for the publication of COVID-19-related preprints over the observed period. Supplementary information: The online version contains supplementary material available at 10.1007/s11192-021-04249-7.
Project description:Background: Predatory journals fail to fulfill the tenets of biomedical publication: peer review, circulation, and access in perpetuity. Despite increasing attention in the lay and scientific press, no studies have directly assessed the perceptions of the authors or editors involved. Objective: Our objective was to understand the motivation of authors in sending their work to potentially predatory journals. Moreover, we aimed to understand the perspective of editors at journals cited as potentially predatory. Methods: Potentially predatory online journals were randomly selected from among 350 publishers and their 2204 biomedical journals. Author and editor email information was valid for 2227 total potential participants. A survey for authors and editors was created in an iterative fashion and distributed. Surveys assessed attitudes and knowledge about predatory publishing. Narrative comments were invited. Results: A total of 249 complete survey responses were analyzed. A total of 40% of editors (17/43) surveyed were not aware that they were listed as an editor for the particular journal in question. A total of 21.8% of authors (45/206) confirmed a lack of peer review. Whereas 77% (33/43) of all surveyed editors were at least somewhat familiar with predatory journals, only 33.0% of authors (68/206) were somewhat familiar with them (P<.001). Only 26.2% of authors (54/206) were aware of Beall's list of predatory journals, versus 49% (21/43) of editors (P<.001). A total of 30.1% of authors (62/206) believed their publication was published in a predatory journal. After being given a definition of predatory publishing, 87.9% of authors (181/206) surveyed said they would not publish in the same journal in the future. Conclusions: Authors publishing in suspected predatory journals are alarmingly uninformed about predatory journal quality and practices. Editors' greater familiarity with predatory publishing did little to prevent their unwitting listing as editors. Some suspected predatory journals did provide services akin to open access publication. Education, research mentorship, and a realignment of research incentives may decrease the impact of predatory publishing.
Project description:Publishing peer review materials alongside research articles promises to make the peer review process more transparent, as well as making it easier to recognise these contributions and give credit to peer reviewers. Traditionally, the peer review reports, editors' letters, and author responses are shared only among the small number of people in those roles prior to publication, but there is growing interest in making some or all of these materials available. A small number of journals have been publishing peer review materials for some time, others have begun this practice more recently, and significantly more are now considering how they might begin. This article outlines the outcomes of a recent workshop among journals with experience in publishing peer review materials, in which the specific operation of these workflows, and the associated challenges, were discussed. Here, we provide draft recommendations on how to represent these materials in the JATS and Crossref data models to facilitate the coordination and discoverability of peer review materials, and we seek feedback on these initial recommendations.
Project description:Because they do not rank highly in the hierarchy of evidence and are not frequently cited, case reports describing the clinical circumstances of single patients are seldom published by medical journals. However, many clinicians argue that case reports have significant educational value, advance medical knowledge, and complement evidence-based medicine. Over the last several years, a vast number (∼160) of new peer-reviewed journals have emerged that focus on publishing case reports. These journals are typically open access and have relatively high acceptance rates. However, approximately half of the publishers of case reports journals engage in questionable or "predatory" publishing practices. Authors of case reports may benefit from greater awareness of these new publication venues as well as an ability to discriminate between reputable and non-reputable journal publishers.