Effects of print publication lag in dual-format journals on scientometric indicators.
ABSTRACT: BACKGROUND: The publication lag between manuscript submission and final publication is considered an important factor affecting the decision to submit, the timeliness of the presented data, and the scientometric measures of a particular journal. Dual-format peer-reviewed journals (those publishing both print and online editions of their content) have adopted a widely accepted strategy to shorten the publication lag: publishing accepted manuscripts online ahead of their print editions, which may follow days, or even years, later. The effects of this widespread practice on the calculation of the immediacy index (the average number of times an article is cited in the year it is published) have never been analyzed. METHODOLOGY/PRINCIPAL FINDINGS: The Scopus database (which contains nearly up-to-date in-press documents, but does not reveal citations by these documents until they are finalized) was searched for the journals with the highest total counts of articles in press, or the highest counts of articles in press appearing online in 2010-2011. The number of citations received by articles in press available online was found to be nearly equal to the number of citations received within the year when the document was assigned to a journal issue. Thus, online publication of in-press articles severely affects the calculation of the immediacy index of their source titles, and disadvantages online-only and print-only journals when they are evaluated according to the immediacy index, and probably also according to the impact factor and similar measures. CONCLUSIONS/SIGNIFICANCE: Caution should be taken when evaluating dual-format journals with long publication lags. Further research should determine whether the immediacy index should be replaced by an indicator based on the date of first publication (online or in print, whichever comes first) to eliminate the problems analyzed in this report.
The information value of the immediacy index is further called into question by the very high proportion of author self-citations within the citation window used for its calculation.
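The distortion described above can be made concrete with a small calculation. The sketch below is illustrative Python, not code from the study: the article records, dates, and citation counts are invented. It computes the immediacy index twice for the same hypothetical journal, once dating each article by its print/issue year (as citation databases traditionally do) and once by its year of first online appearance, the alternative the report raises.

```python
from dataclasses import dataclass

@dataclass
class Article:
    online_year: int     # year the in-press version appeared online
    print_year: int      # year the article was assigned to a journal issue
    cites_by_year: dict  # citation counts keyed by calendar year

def immediacy_index(articles, year, date_of_record):
    """Immediacy index: citations received in `year` by articles
    'published' in `year`, divided by the number of such articles.
    `date_of_record` selects which date counts as publication."""
    published = [a for a in articles if date_of_record(a) == year]
    if not published:
        return 0.0
    cites = sum(a.cites_by_year.get(year, 0) for a in published)
    return cites / len(published)

# Hypothetical journal: two articles went online in press in 2010 but were
# assigned to a 2011 issue; citations already accrue to the online versions.
articles = [
    Article(online_year=2010, print_year=2011, cites_by_year={2010: 4, 2011: 6}),
    Article(online_year=2010, print_year=2011, cites_by_year={2010: 2, 2011: 8}),
]

print_idx = immediacy_index(articles, 2011, lambda a: a.print_year)    # → 7.0
online_idx = immediacy_index(articles, 2010, lambda a: a.online_year)  # → 3.0
```

Under the print-based convention, the 2011 index counts citations to articles that had already been publicly visible for a year, so a dual-format journal with a long lag scores higher than an otherwise identical online-only or print-only journal.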
Project description: Journal impact factors have become an important criterion to judge the quality of scientific publications over the years, influencing the evaluation of institutions and individual researchers worldwide. However, they are also subject to a number of criticisms. Here we point out that the calculation of a journal's impact factor is mainly based on the date of publication of its articles in print form, despite the fact that most journals now make their articles available online before that date. We analyze 61 neuroscience journals and show that delays between online and print publication of articles increased steadily over the last decade. Importantly, such a practice varies widely among journals, as some of them have no delays, while for others this period is longer than a year. Using a modified impact factor based on online rather than print publication dates, we demonstrate that online-to-print delays can artificially raise a journal's impact factor, and that this inflation is greater for longer publication lags. We also show that correcting the effect of publication delay on impact factors changes journal rankings based on this metric. We thus suggest that indexing of articles in citation databases and calculation of citation metrics should be based on the date of an article's online appearance, rather than on that of its publication in print.
Project description:<h4>Background</h4>The scholarly publishing system relies on external peer review. However, the duration of the publication process is a major concern for authors and funding bodies.<h4>Objective</h4>To evaluate the duration of the publication process in pharmacy practice journals compared with other biomedical journals indexed in PubMed.<h4>Methods</h4>All the articles published from 2009 to 2018 by the 33 pharmacy practice journals identified in the Mendes et al. study and indexed in PubMed were gathered as the study group. A comparison group was created through a random selection of 3,000 PubMed PMIDs for each year of the study period. Articles with publication dates outside the study period were excluded. Metadata for both groups of articles were imported from PubMed. The duration of the editorial process was calculated as three periods: acceptance lag (days between 'submission date' and 'acceptance date'), lead lag (days between 'acceptance date' and 'online publication date'), and indexing lag (days between 'online publication date' and 'Entry date'). Null hypothesis significance tests and effect size measures were used to compare these periods between the two groups.<h4>Results</h4>The 33 pharmacy practice journals published 26,256 articles between 2009 and 2018. The comparison group random selection process resulted in a pool of 23,803 articles published in 5,622 different journals. The acceptance lag was 105 days (IQR 57-173) for pharmacy practice journals and 97 days (IQR 56-155) for the comparison group, a null-effect difference (Cohen's d 0.081). The lead lag was 13 days (IQR 6-35) and 23 days (IQR 9-45) for pharmacy practice and comparison journals, respectively, a small effect. The indexing lag was 5 days (IQR 2-46) and 4 days (IQR 2-12) for pharmacy practice and comparison journals, respectively, also a small effect.
A slight positive time trend was found in the pharmacy practice acceptance lag, while slight negative trends were found for the lead and indexing lags in both groups.<h4>Conclusions</h4>The duration of the publication process in pharmacy practice journals is similar to that of a general random sample of articles from all disciplines.
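The three lag periods defined in the Methods are simple date differences. A minimal sketch in Python (the record layout and dates are hypothetical, not the study's actual PubMed metadata extraction):

```python
from datetime import date

def publication_lags(received, accepted, online, entry):
    """Split the editorial timeline into the three periods defined above:
    acceptance lag (submission -> acceptance), lead lag (acceptance ->
    online publication), and indexing lag (online publication -> entry)."""
    return {
        "acceptance_lag": (accepted - received).days,
        "lead_lag": (online - accepted).days,
        "indexing_lag": (entry - online).days,
    }

# Hypothetical article: submitted in January, accepted in late April,
# published online in May, and indexed (entered into PubMed) days later.
lags = publication_lags(
    received=date(2018, 1, 10),
    accepted=date(2018, 4, 25),
    online=date(2018, 5, 8),
    entry=date(2018, 5, 12),
)
# → {'acceptance_lag': 105, 'lead_lag': 13, 'indexing_lag': 4}
```

This invented article happens to match the reported medians: roughly three and a half months in review, two weeks from acceptance to online publication, and a few days to indexing.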
Project description: The rapid growth of online tools to communicate scientific research raises the important question of whether online attention is associated with citations in the scholarly literature. The Altmetric Attention Score (AAS) quantifies the attention received by a scientific publication on various online platforms including news, blogs and social media. It has been advanced as a rapid way of gauging the impact of a piece of research, both in terms of potential future scholarly citations and wider online engagement. Here, we explore variation in the AAS of 2677 research articles published in 10 ornithological journals between 2012 and 2016. On average, AAS increased sevenfold in just five years, primarily due to increased activity on Twitter, which contributed 75% of the total score. For a subset of 878 articles published in 2014, including an additional 323 ornithology articles from non-specialist journals, an increase in AAS from 1 to 20 resulted in a predicted 112% increase in citation count from 2.6 to 5.5 citations per article. This effect interacted with journal impact factor, with weaker effects of AAS in higher impact factor journals. Our results suggest that altmetrics (or the online activity they measure), as well as complementing traditional measures of scholarly impact in ornithology such as citations, may also anticipate or even drive them.
Project description: PURPOSE: The research assesses the impact of online journals on citation patterns by examining whether researchers were more likely to limit the resources they cited to journals available online rather than those available only in print. SETTING: Publications from a large urban university with a medical college at an urban location and at a smaller regional location were examined. The number of online journals available to authors was the same on either campus, but the number of print journals available on the large campus was much greater than at the small campus. METHODOLOGY: Searches by author affiliation from 1996 to 2005 were performed in the Web of Science to find all articles written by affiliated members of the college of medicine at the selected institution. Cited references from randomly selected articles were recorded, and the cited journals were coded into five categories based on their availability at the study institution: print only, print and online, online only, not owned, and dropped. Results were analyzed using SPSS. The age of articles cited in selected years, as well as in 2006 and 2007, was also examined. RESULTS: The number of journals cited each year continued to increase. On the large urban campus, researchers were not more likely to cite journals available online, nor less likely to cite journals available only in print. At the regional location, where the number of print-only journals was minimal, use of print-only journals significantly decreased. CONCLUSION/DISCUSSION: Researchers with access to a library holding a large print and electronic collection appeared to continue citing print-only journals, despite the availability of potential alternatives in the online collection.
Journals available in electronic format were cited more frequently in publications from the campus whose library had a small print collection, and the citation of journals available in both print and electronic formats generally increased over the years studied.
Project description: Bibliometric indicators increasingly affect the careers, funding, and reputation of individuals, their institutions, and the journals themselves. In contrast to author self-citations, little is known about the kinetics of journal self-citations. Here we hypothesized that they may show a generalizable pattern within particular research fields or across multiple fields. We thus analyzed self-cites to 60 journals from three research fields (multidisciplinary sciences, parasitology, and information science). We also hypothesized that the kinetics of journal self-citations, and of citations received from other journals of the same publisher, may differ from those of foreign citations. We analyzed the journals published by the American Association for the Advancement of Science, the Nature Publishing Group, and Editura Academiei Române. We found that although the kinetics of journal self-cites is generally faster than that of foreign cites, it shows some field-specific characteristics. In information science journals in particular, the initial increase in the share of journal self-citations during post-publication year 0 was completely absent. Self-promoting journal self-citations in top-tier journals have rather indirect, but negligible direct, effects on bibliometric indicators, affecting just the immediacy index and marginally increasing the impact factor itself, as long as the affected journals are well established in their fields. In contrast, other forms of journal self-citation and citation stacking may severely affect the impact factor or other citation-based indices.
We identified here a network of three Romanian physics journals, Proceedings of the Romanian Academy, Series A; Romanian Journal of Physics; and Romanian Reports in Physics, which displayed low to moderate ratios of journal self-citations but recently multiplied their impact factors, and which were mutually responsible for 55.9%, 64.7%, and 63.3%, respectively, of citations to the three journals within the impact factor calculation window. They received almost no network self-cites prior to the impact factor calculation window, and their network self-cites decreased sharply after it. Journal self-citations and citation stacking require increased attention and elimination from citation indices.
Project description:<h4>Background</h4>Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.<h4>Methods and findings</h4>We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). 
The differences in median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0-140.1) and 63.5 d (95% CI 18.0-94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0-52.0) and 8 (IQR 3.2-21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5-16.5, p<0.05, and 7, IQR 3.0-12.0, p<0.01, respectively).<h4>Conclusions</h4>A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Project description: OBJECTIVES: The research identified the publication types and ages most frequently cited in the infectious diseases literature and the most commonly cited journals. METHODS: From 2008-2010, 5,056 articles in 5 infectious diseases journals cited 166,650 items. Two random samples were drawn: one (n = 1,060) from the total set of citations and one (n = 1,060) from the citations to journal articles. For each sample citation, publication type and date, age of cited item, and inclusion of uniform resource locator (URL) were collected. For each item in the cited journal articles sample, journal title, publication date, and age of the cited article were collected. Bradford zones were used for further analysis. RESULTS: Journal articles (91%, n = 963) made up the bulk of cited items, followed by miscellaneous items (4.6%, n = 49). Dates of publication for cited items ranged from 1933-2010 (mean = 2001, mode = 2007). Over half (50.2%, n = 483) of cited journal articles were published within the previous 5 years. The journal article citations included 358 unique journal titles. DISCUSSION: The citations to current and older publications in a range of disciplines, heavy citation of journals, and citation of miscellaneous and government documents revealed the depth and breadth of resources needed for the study of infectious diseases.
Project description: Importance: Citation analysis is a bibliometric method that uses citation rates to evaluate research performance. This type of analysis can identify the articles that have shaped the modern history of obstetrics and gynecology (OBGYN). Objectives: To identify and characterize top-cited OBGYN articles in the Institute for Scientific Information Web of Science's Science Citation Index Expanded and to compare top-cited OBGYN articles published in specialty OBGYN journals with those published in nonspecialty journals. Design, Setting, and Participants: Cross-sectional bibliometric analysis of top-cited articles that were indexed in the Science Citation Index Expanded from 1980 to 2018. The Science Citation Index Expanded was queried using search terms from the American Board of Obstetrics and Gynecology's 2018 certifying examination topics list. The top 100 articles from all journals and the top 100 articles from OBGYN journals were evaluated for specific characteristics. Data were analyzed in March 2019. Main Outcomes and Measures: The articles were characterized by citation number, publication year, topic, study design, and authorship. After excluding articles that featured on both lists, top-cited articles were compared. Results: The query identified 3,767,874 articles, of which 278,846 (7.4%) were published in OBGYN journals. The top-cited article was published by Rossouw and colleagues in JAMA (2002). Top-cited articles published in nonspecialty journals were more frequently cited than those in OBGYN journals (median [interquartile range], 1738 [1490-2077] citations vs 666 [580-843] citations, respectively; P < .001) and were more likely to be randomized trials (25.0% vs 2.2%, respectively; difference, 22.8%; 95% CI, 13.5%-32.2%; P < .001). Whereas articles from nonspecialty journals focused on broad topics like osteoporosis, articles from OBGYN journals focused on topics like preeclampsia and endometriosis.
Conclusions and Relevance: This study found substantial differences between top-cited OBGYN articles published in nonspecialty vs OBGYN journals. These differences may reflect the different goals of the journals, which work together to ensure optimal dissemination of impactful articles.
Project description: The relationship between traditional metrics of research impact (e.g., number of citations) and alternative metrics (altmetrics) such as Twitter activity are of great interest, but remain imprecisely quantified. We used generalized linear mixed modeling to estimate the relative effects of Twitter activity, journal impact factor, and time since publication on Web of Science citation rates of 1,599 primary research articles from 20 ecology journals published from 2012-2014. We found a strong positive relationship between Twitter activity (i.e., the number of unique tweets about an article) and number of citations. Twitter activity was a more important predictor of citation rates than 5-year journal impact factor. Moreover, Twitter activity was not driven by journal impact factor; the 'highest-impact' journals were not necessarily the most discussed online. The effect of Twitter activity was only about a fifth as strong as time since publication; accounting for this confounding factor was critical for estimating the true effects of Twitter use. Articles in impactful journals can become heavily cited, but articles in journals with lower impact factors can generate considerable Twitter activity and also become heavily cited. Authors may benefit from establishing a strong social media presence, but should not expect research to become highly cited solely through social media promotion. Our research demonstrates that altmetrics and traditional metrics can be closely related, but not identical. We suggest that both altmetrics and traditional citation rates can be useful metrics of research impact.
Project description:<h4>Background</h4>Wikipedia, the multilingual encyclopedia, was founded in 2001 and is the world's largest and most visited online general reference website. It is widely used by health care professionals and students. The inclusion of journal articles in Wikipedia is of scholarly interest, but the time taken for a journal article to be included in Wikipedia, from the moment of its publication to its incorporation into Wikipedia, is unclear.<h4>Objective</h4>We aimed to determine the ranking of the most cited journals by their representation in the English-language medical pages of Wikipedia. In addition, we evaluated the number of days between publication of journal articles and their citation in Wikipedia medical pages, treating this measure as a proxy for the information-diffusion rate.<h4>Methods</h4>We retrieved the dates when articles were included in Wikipedia and the date of journal publication from Crossref by using an application programming interface.<h4>Results</h4>From 11,325 Wikipedia medical articles, we identified citations to 137,889 journal articles from over 15,000 journals. There was a large spike in the number of journal articles published in or after 2002 that were cited by Wikipedia. The higher the importance of a Wikipedia article, the higher was the mean number of journal citations it contained (top article, 48.13 [SD 33.67]; lowest article, 6.44 [SD 9.33]). However, the importance of the Wikipedia article did not affect the speed of reference addition. The Cochrane Database of Systematic Reviews was the most cited journal by Wikipedia, followed by The New England Journal of Medicine and The Lancet. The multidisciplinary journals Nature, Science, and the Proceedings of the National Academy of Sciences were among the top 10 journals with the highest Wikipedia medical article citations. 
For the top biomedical journal papers cited in Wikipedia's medical pages in 2016-2017, it took about 90 days (3 months) for a citation to appear in Wikipedia.<h4>Conclusions</h4>We found evidence of "recentism," which refers to preferential citation of recently published journal articles in Wikipedia. Traditional high-impact medical and multidisciplinary journals were extensively cited by Wikipedia, suggesting that Wikipedia medical articles have robust underpinnings. In keeping with the Wikipedia policy of citing reviews/secondary sources in preference to primary sources, the Cochrane Database of Systematic Reviews was the most referenced journal.