Stress testing journals: a quasi-experimental study of rejection rates of a previously published paper.
ABSTRACT: BACKGROUND:When a journal receives a duplicate publication, its ability to identify the submitted work as previously published, and to reject it, is an assay of publication ethics best practices. The aim of this study was to evaluate how three different types of journals, namely open access (OA) journals, subscription-based journals, and presumed predatory journals, responded to receiving a previously published manuscript for review. METHODS:We performed a quasi-experimental study in which we submitted a previously published article to a random sample of 602 biomedical journals, roughly 200 journals from each journal type sampled: OA journals, subscription-based journals, and presumed predatory journals. Three hundred and three journals received a Word version in manuscript format, while 299 journals received the formatted publisher's PDF version of the published article. We then recorded responses to the submission received after approximately 1 month. Responses were reviewed, extracted, and coded in duplicate. Our primary outcome was the rate of rejection of the two types of submitted articles (PDF vs Word) within our three journal types. RESULTS:We received correspondence back from 308 (51.1%) journals within our study timeline (32 days): 46 predatory journals, 127 OA journals, and 135 subscription-based journals. Of the journals that responded, 153 had received the Word version of the paper, while 155 had received the PDF version. Four journals (1.3%) accepted our paper, 291 (94.5%) journals rejected the paper, and 13 (4.2%) requested a revision. A chi-square test looking at journal type and submission type was significant (χ²(4) = 23.50).
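For readers unfamiliar with the test, a chi-square statistic of this kind can be computed by hand from a contingency table. The sketch below uses a hypothetical 3×3 split of journal type by editorial outcome, which matches the 4 degrees of freedom reported above; only the marginal totals (46/127/135 responses per journal type; 4 accepts, 291 rejects, 13 revision requests) come from the abstract, while the individual cell counts are made-up placeholders, not the study's raw data:

```python
# Chi-square test of independence on a hypothetical 3x3 table
# (journal type x editorial outcome). Row and column totals match
# the abstract; the cell-level splits are illustrative only.
table = [
    [1, 40, 5],    # predatory:          accept, reject, revise
    [2, 120, 5],   # open access
    [1, 131, 3],   # subscription-based
]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Expected count under independence: row_total * col_total / grand_total
chi_sq = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (observed - expected) ** 2 / expected

df = (len(table) - 1) * (len(table[0]) - 1)  # (3-1) * (3-1) = 4
print(f"chi-square = {chi_sq:.2f} on {df} degrees of freedom")
```

With the real cell counts in place of the placeholders, the same computation would reproduce the reported statistic of 23.50.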
Project description:OBJECTIVES:To examine whether regional biomedical journals in Africa had policies on plagiarism and procedures to detect it; and to measure the extent of plagiarism in their original research articles and reviews. DESIGN:Cross sectional survey. SETTING AND PARTICIPANTS:We selected journals with an editor-in-chief in Africa, a publisher based in a low or middle income country and with author guidelines in English, and systematically searched the African Journals Online database. From each of the 100 journals identified, we randomly selected five original research articles or reviews published in 2016. OUTCOMES:For included journals, we examined the presence of plagiarism policies and whether they referred to text matching software. We submitted articles to Turnitin and measured the extent of plagiarism (copying of someone else's work) or redundancy (copying of one's own work) against a set of criteria we had developed and piloted. RESULTS:Of the 100 journals, 26 had a policy on plagiarism and 16 referred to text matching software. Of 495 articles, 313 (63%; 95% CI 58 to 68) had evidence of plagiarism: 17% (83) had at least four linked copied or more than six individual copied sentences; 19% (96) had three to six copied sentences; and the remainder had one or two copied sentences. Plagiarism was more common in the introduction and discussion, and uncommon in the results. CONCLUSION:Plagiarism is common in biomedical research articles and reviews published in Africa. While wholesale plagiarism was uncommon, moderate text plagiarism was extensive. This could rapidly be eliminated if journal editors implemented screening strategies, including text matching software.
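The interval reported for the headline result (63%; 95% CI 58 to 68) can be roughly reproduced from the counts in the abstract using the standard normal-approximation formula for a proportion; the published interval may have been computed with a different method (e.g. Wilson), so small differences are expected:

```python
import math

# Normal-approximation 95% CI for the headline proportion:
# 313 of 495 articles showed evidence of plagiarism.
plagiarised, n = 313, 495
p_hat = plagiarised / n                      # ~0.632

se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error of the proportion
z = 1.96                                     # two-sided 95% critical value
lower, upper = p_hat - z * se, p_hat + z * se

print(f"{100 * p_hat:.0f}% (95% CI {100 * lower:.0f} to {100 * upper:.0f})")
```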
Project description:Open-access mega-journals (OAMJs) are characterized by their large scale, wide scope, open-access (OA) business model, and "soundness-only" peer review. The last of these controversially discounts the novelty, significance, and relevance of submitted articles and assesses only their "soundness." This article reports the results of an international survey of authors (n = 11,883), comparing the responses of OAMJ authors with those of other OA and subscription journals, and drawing comparisons between different OAMJs. Strikingly, OAMJ authors showed a low understanding of soundness-only peer review: two-thirds believed OAMJs took into account novelty, significance, and relevance, although there were marked geographical variations. Author satisfaction with OAMJs, however, was high, with more than 80% of OAMJ authors saying they would publish again in the same journal, although there were variations by title, and levels were slightly lower than subscription journals (over 90%). Their reasons for choosing to publish in OAMJs included a wide variety of factors, not significantly different from reasons given by authors of other journals, with the most important including the quality of the journal and quality of peer review. About half of OAMJ articles had been submitted elsewhere before submission to the OAMJ with some evidence of a "cascade" of articles between journals from the same publisher.
Project description:BACKGROUND:Open access (OA) journals are becoming a publication standard for health research, but it is not clear how they differ from traditional subscription journals in the quality of research reporting. We assessed the completeness of results reporting in abstracts of randomized controlled trials (RCTs) published in these journals. METHODS:We used the Consolidated Standards of Reporting Trials Checklist for Abstracts (CONSORT-A) to assess the completeness of reporting in abstracts of parallel-design RCTs published in subscription journals (n = 149; New England Journal of Medicine, Journal of the American Medical Association, Annals of Internal Medicine, and Lancet) and OA journals (n = 119; BioMedCentral series, PLoS journals) in 2016 and 2017. RESULTS:Abstracts in subscription journals completely reported 79% (95% confidence interval [CI], 77-81%) of 16 CONSORT-A items, compared with 65% (95% CI, 63-67%) of these items in abstracts from OA journals (P < 0.001, chi-square test). The median number of completely reported CONSORT-A items was 13 (95% CI, 12-13) in subscription journal articles and 11 (95% CI, 10-11) in OA journal articles. Subscription journal articles had significantly more complete reporting than OA journal articles for nine CONSORT-A items and did not differ in reporting for the items trial design, outcome, randomization, blinding (masking), recruitment, and conclusions. OA journals were better than subscription journals in reporting randomized study design in the title. CONCLUSION:Abstracts of randomized controlled trials published in subscription medical journals have greater completeness of reporting than abstracts published in OA journals. OA journals should take appropriate measures to ensure that published articles contain adequate detail to facilitate understanding and quality appraisal of research reports about RCTs.
Project description:This data aimed to audit the articles retracted from PubMed indexed dental journals from India. The PubMed indexed journals considered were the Indian Journal of Dental Research, Journal of Indian Society of Periodontology, Contemporary Clinical Dentistry, Journal of Indian Society of Pedodontics and Preventive Dentistry, Journal of Oral and Maxillofacial Pathology and Journal of Indian Prosthodontic Society, examined for type of article, name of dental specialty, topic within individual dental specialties, causes for retraction and authorship trend of retracted articles using a web-based search. Among PubMed indexed Indian dental journals, the numbers of retracted articles were as follows: Indian Journal of Dental Research (4), followed by Journal of Indian Society of Periodontology (3), Contemporary Clinical Dentistry (3), Journal of Indian Society of Pedodontics and Preventive Dentistry (2), Journal of Oral and Maxillofacial Pathology (2) and Journal of Indian Prosthodontic Society (1). Of the 15 retracted articles from PubMed indexed Indian dental journals, case reports (7) formed the largest share, followed by original articles (6) and review articles (2). Among the dental specialties of the retracted articles, oral pathology and microbiology (5) constituted the largest share, followed by periodontics (4), pedodontics (4), oral medicine and radiology (1) and prosthodontics (1). Duplicate publication (7), plagiarism (5) and authorship dispute (3) were the causes for retraction.
Project description:The companies publishing predatory journals are an emerging problem in the area of scientific literature, as they seek only to drain money from authors without providing any service to the authors or their readership. These predatory journals try to attract new submissions through aggressive email advertising and high acceptance rates. In turn, they do not provide proper peer review, and therefore the scientific quality of submitted articles is questionable. This is important because more and more people, including patients, are reading such journals and rely on the information they provide. Consequently, predatory journals are a serious threat to the integrity of medical science, and it is crucial for scientists, physicians and even patients to be aware of this problem. In this review, we briefly summarize the history of the open access movement, as well as the rise of and roles played by predatory journals. In conclusion, young and inexperienced authors publishing in a predatory journal must be aware of the potential damage to their reputation, of inadequate peer review processes, and of the risk that unprofitable journals may be shut down and all articles published in them lost.
Project description:OBJECTIVES:To develop effective interventions to prevent publishing in presumed predatory journals (ie, journals that display deceptive characteristics, markers or data that cannot be verified), it is helpful to understand the motivations and experiences of those who have published in these journals. DESIGN:An online survey delivered to two sets of corresponding authors containing demographic information, and questions about researchers' perceptions of publishing in the presumed predatory journal, type of article processing fees paid and the quality of peer review received. The survey also asked six open-ended items about researchers' motivations and experiences. PARTICIPANTS:Using Beall's lists, we identified two groups of individuals who had published empirical articles in biomedical journals that were presumed to be predatory. RESULTS:Eighty-two authors partially responded (~14% response rate: 11.4% [44/386] from the initial sample, 19.3% [38/197] from the second sample) to our survey. The top three countries represented were India (n=21, 25.9%), USA (n=17, 21.0%) and Ethiopia (n=5, 6.2%). Three participants (3.9%) thought the journal they published in was predatory at the time of article submission. The majority of participants first encountered the journal via an email invitation to submit an article (n=32, 41.0%), or through an online search to find a journal with relevant scope (n=22, 28.2%). Most participants indicated their study received peer review (n=65, 83.3%) and that this was helpful and substantive (n=51, 79.7%). More than a third (n=32, 45.1%) indicated they did not pay fees to publish. CONCLUSIONS:This work provides some evidence to inform policy to prevent future research from being published in predatory journals. Our research suggests that common views about predatory journals (eg, no peer review) may not always be true, and that a grey zone between legitimate and presumed predatory journals exists.
These results are based on self-reports and may be biased, thus limiting their interpretation.
Project description:A team of stakeholders in biomedical publishing recently proposed a set of core competencies for journal editors, as a resource that can inform training programs for editors and ultimately improve the quality of the biomedical research literature. This initiative, still in its early stages, would benefit from additional sources of expert information. Based on our experiences as authors' editors, we offer two suggestions on how to strengthen these competencies so that they better respond to the needs of readers and authors - the main users of and contributors to research journals. First, journal editors should be able to ensure that authors are given useful feedback on the language and writing in submitted manuscripts, beyond a (possibly incorrect) blanket judgement of whether the English is "acceptable" or not. Second, journal editors should be able to deal effectively with inappropriate text re-use and plagiarism. These additional competencies would, we believe, be valued by other stakeholders in biomedical research publication as markers of editorial quality.
Project description:BACKGROUND:Predatory journals fail to fulfill the tenets of biomedical publication: peer review, circulation, and access in perpetuity. Despite increasing attention in the lay and scientific press, no studies have directly assessed the perceptions of the authors or editors involved. OBJECTIVE:Our objective was to understand the motivation of authors in sending their work to potentially predatory journals. Moreover, we aimed to understand the perspective of journal editors at journals cited as potentially predatory. METHODS:Potential online predatory journals were randomly selected among 350 publishers and their 2204 biomedical journals. Author and editor email information was valid for 2227 total potential participants. A survey for authors and editors was created in an iterative fashion and distributed. Surveys assessed attitudes and knowledge about predatory publishing. Narrative comments were invited. RESULTS:A total of 249 complete survey responses were analyzed. A total of 40% of editors (17/43) surveyed were not aware that they were listed as an editor for the particular journal in question. A total of 21.8% of authors (45/206) confirmed a lack of peer review. Whereas 77% (33/43) of all surveyed editors were at least somewhat familiar with predatory journals, only 33.0% of authors (68/206) were somewhat familiar with them (P<.001). Only 26.2% of authors (54/206) were aware of Beall's list of predatory journals versus 49% (21/43) of editors (P<.001). A total of 30.1% of authors (62/206) believed their publication was published in a predatory journal. After defining predatory publishing, 87.9% of authors (181/206) surveyed would not publish in the same journal in the future. CONCLUSIONS:Authors publishing in suspected predatory journals are alarmingly uninformed in terms of predatory journal quality and practices. Editors' increased familiarity with predatory publishing did little to prevent their unwitting listing as editors. 
Some suspected predatory journals did provide services akin to open access publication. Education, research mentorship, and a realignment of research incentives may decrease the impact of predatory publishing.
Project description:OBJECTIVE:To conduct a Delphi survey informing a consensus definition of predatory journals and publishers. DESIGN:This is a modified three-round Delphi survey delivered online for the first two rounds and in-person for the third round. Questions encompassed three themes: (1) predatory journal definition; (2) educational outreach and policy initiatives on predatory publishing; and (3) developing technological solutions to stop submissions to predatory journals and other low-quality journals. PARTICIPANTS:Through snowball and purposive sampling of targeted experts, we identified 45 noted experts in predatory journals and journalology. The international group included funders, academics and representatives of academic institutions, librarians and information scientists, policy makers, journal editors, publishers, researchers involved in studying predatory journals and legitimate journals, and patient partners. In addition, 198 authors of articles discussing predatory journals were invited to participate in round 1. RESULTS:A total of 115 individuals (107 in round 1 and 45 in rounds 2 and 3) completed the survey on predatory journals and publishers. We reached consensus on 18 items out of a total of 33 to be included in a consensus definition of predatory journals and publishers. We came to consensus on educational outreach and policy initiatives on which to focus, including the development of a single checklist to detect predatory journals and publishers, and public funding to support research in this general area. We identified technological solutions to address the problem: a 'one-stop-shop' website to consolidate information on the topic and a 'predatory journal research observatory' to identify ongoing research and analysis about predatory journals/publishers. 
CONCLUSIONS:In bringing together an international group of diverse stakeholders, we were able to use a modified Delphi process to inform the development of a definition of predatory journals and publishers. This definition will help institutions, funders and other stakeholders generate practical guidance on avoiding predatory journals and publishers.
Project description:Objective:The purpose of predatory open access (OA) journals is primarily to make a profit rather than to disseminate quality, peer-reviewed research. Publishing in these journals could negatively impact faculty reputation, promotion, and tenure, yet many still choose to do so. Therefore, the authors investigated faculty knowledge and attitudes regarding predatory OA journals. Methods:A twenty-item questionnaire containing both quantitative and qualitative items was developed and piloted. All university and medical school faculty were invited to participate. The survey included knowledge questions that assessed respondents' ability to identify predatory OA journals and attitudinal questions about such journals. Chi-square tests were used to detect differences between university and medical faculty. Results:A total of 183 faculty completed the survey: 63% were university and 37% were medical faculty. Nearly one-quarter (23%) had not previously heard of the term "predatory OA journal." Most (87%) reported feeling very confident or confident in their ability to assess journal quality, but only 60% correctly identified a journal as predatory when given a journal in their field to assess. Chi-square tests revealed that university faculty were more likely to correctly identify a predatory OA journal (p=0.0006) and had higher self-reported confidence in assessing journal quality, compared with medical faculty (p=0.0391). Conclusions:Survey results show that faculty recognize predatory OA journals as a problem. These attitudes, plus the knowledge gaps identified in this study, will be used to develop targeted educational interventions for faculty in all disciplines at our university.