Health research funding in Mexico: the need for a long-term agenda.
ABSTRACT: BACKGROUND: The legal framework and funding mechanisms of the national health research system were recently reformed in Mexico. However, a study of resource allocation for health research is still missing. We identified the health research areas funded by the National Council on Science and Technology (CONACYT) and examined whether research funding has been aligned with national health problems. METHODS AND FINDINGS: We collected information to create a database of research grant projects supported through the three main Sectoral Funds managed by CONACYT between 2003 and 2010. The health-related projects were identified and classified according to their methodological approach and research objective. A correlation analysis was carried out to evaluate the association between disease-specific funding and two indicators of disease burden. From 2003 to 2010, research grant funding increased by 32% at a compound annual growth rate of 3.5%. By research objective, the budget fluctuated annually, resulting in modest increments or even decrements during the period under analysis. The basic science category received the largest share of funding (29%), while the least-funded category was violence and accidents (1.4%). The number of deaths (ρ = 0.51; P < 0.001) and disability-adjusted life years (DALYs; ρ = 0.33; P = 0.004) were weakly correlated with funding for health research. Considering the two indicators, poisonings and infectious and parasitic diseases were among the most overfunded conditions. In contrast, congenital anomalies, road traffic accidents, cerebrovascular disease, and chronic obstructive pulmonary disease were the most underfunded conditions. CONCLUSIONS: Although health research funding has grown since the creation of the CONACYT sectoral funds, the financial effort is still low in comparison with other Latin American countries at similar levels of development. Furthermore, the great diversity of funded topics compromises the efficacy of the investment. Better mechanisms of research priority-setting are required to adjust the research portfolio to the new health panorama of the Mexican population.
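The abstract's two headline statistics lend themselves to a quick arithmetic check. Below is a minimal sketch (illustrative data only) of the compound annual growth rate and the Spearman rank correlation used to relate funding to disease burden; the eight annual compounding periods for 2003-2010 are an assumption consistent with the reported 32% total growth yielding ~3.5% per year.

```python
# Hedged sketch: reproducing the kind of summary statistics in the abstract.
# All numbers below are illustrative, not the study's underlying data.

def cagr(start, end, periods):
    """Compound annual growth rate over `periods` compounding steps."""
    return (end / start) ** (1.0 / periods) - 1.0

def ranks(xs):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A 32% total increase over eight annual periods is about 3.5% per year.
annual_rate = cagr(100.0, 132.0, 8)
```

With the study's figures, `cagr(100.0, 132.0, 8)` evaluates to about 0.0353, matching the reported 3.5% rate.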
Project description:Previous reports from the National Institutes of Health and the National Science Foundation have suggested that peer-review scores of funded grants bear no association with grant citation impact and productivity. This lack of association, if true, may be particularly concerning during times of increasing competition for increasingly limited funds. We analyzed the citation impact and productivity of 1755 de novo investigator-initiated R01 grants funded for at least 2 years by the National Institute of Mental Health between 2000 and 2009. Consistent with previous reports, we found no association between grant percentile ranking and subsequent productivity and citation impact, even after accounting for subject categories, years of publication, duration and amounts of funding, and a number of investigator-specific measures. Prior investigator funding and academic productivity were moderately strong predictors of grant citation impact.
Project description:We perform promoter Capture Hi-C in human adipocytes to investigate interactions between gene promoters and distal elements as a transcription-regulating mechanism contributing to these phenotypes. We find that promoter-interacting elements in human adipocytes are enriched for adipose-related transcription factor motifs, such as PPARG and CEBPB, and contribute to the heritability of cis-regulated gene expression. David Pan was funded through grant T32LM012424 (Biomedical Big Data Training Grant) from the National Institutes of Health-National Cancer Institute. Marcus Alvarez was funded through grant T32HG002536 (NIH training grant in Genomic Analysis and Interpretation) from the National Institutes of Health. Overall design: the promoter Capture Hi-C library was produced from human white adipocytes differentiated from primary human white preadipocytes.
Project description:OBJECTIVE: To quantify randomness and cost when choosing health and medical research projects for funding. DESIGN: Retrospective analysis. SETTING: Grant review panels of the National Health and Medical Research Council of Australia. PARTICIPANTS: Panel members' scores for grant proposals submitted in 2009. MAIN OUTCOME MEASURES: The proportion of grant proposals that were always, sometimes, and never funded after accounting for random variability arising from differences in panel members' scores, and the cost effectiveness of different size assessment panels. RESULTS: 59% of 620 funded grants were sometimes not funded when random variability was taken into account. Only 9% (n = 255) of grant proposals were always funded, 61% (n = 1662) were never funded, and 29% (n = 788) were sometimes funded. The extra cost per grant effectively funded under the most effective system was $A18,541 (£11,848; €13,482; $19,343). CONCLUSIONS: Allocating funding for scientific research in health and medicine is costly and somewhat random. There are many useful research questions to be addressed that could improve current processes.
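The "always/sometimes/never funded" classification rests on resimulating panel scores under random reviewer variability. The following is a minimal Monte Carlo sketch of that idea, with made-up scores, noise level, and funding line; it does not reproduce the paper's actual statistical model.

```python
import random

# Minimal Monte Carlo sketch: re-draw each proposal's panel score with
# random reviewer noise many times, fund the top n_funded each draw, and
# count how often each proposal clears the funding line.
# Scores, noise level, and cutoff are illustrative assumptions, not the
# paper's statistical model.

def funding_stability(mean_scores, n_funded, noise_sd=0.5, n_sims=2000, seed=1):
    rng = random.Random(seed)
    counts = [0] * len(mean_scores)
    for _ in range(n_sims):
        # Perturb every score, then fund the n_funded highest.
        noisy = sorted(
            ((s + rng.gauss(0.0, noise_sd), i) for i, s in enumerate(mean_scores)),
            reverse=True,
        )
        for _, i in noisy[:n_funded]:
            counts[i] += 1
    return [
        "always" if c == n_sims else "never" if c == 0 else "sometimes"
        for c in counts
    ]
```

Proposals far above or far below the funding line come out "always" or "never"; proposals near the line, where reviewer noise decides the outcome, come out "sometimes".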
Project description:BACKGROUND:The widespread adoption of smartphones provides researchers with expanded opportunities for developing, testing and implementing interventions. National Institutes of Health (NIH) funds competitive, investigator-initiated grant applications. Funded grants represent the state of the science and therefore are expected to anticipate the progression of research in the near future. OBJECTIVE:The objective of this paper is to provide an analysis of the kinds of smartphone-based intervention apps funded in NIH research grants during the five-year period between 2014 and 2018. METHODS:We queried NIH Reporter to identify candidate funded grants that addressed mHealth and the use of smartphones. From 1524 potential grants, we identified 397 that met the requisites of including an intervention app. Each grant's abstract was analyzed to understand the focus of intervention. The year of funding, type of activity (eg, R01, R34, and so on) and funding were noted. RESULTS:We identified 13 categories of strategies employed in funded smartphone intervention apps. Most grants included either one (35.0%) or two (39.0%) intervention approaches. These included artificial intelligence (57 apps), bionic adaptation (33 apps), cognitive and behavioral therapies (68 apps), contingency management (24 apps), education and information (85 apps), enhanced motivation (50 apps), facilitating, reminding and referring (60 apps), gaming and gamification (52 apps), mindfulness training (18 apps), monitoring and feedback (192 apps), norm setting (7 apps), skills training (85 apps) and social support and social networking (59 apps). The most frequently observed grant types included Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants (40.8%) and Research Project Grants (R01s) (26.2%). The number of grants funded increased through the five-year period from 60 in 2014 to 112 in 2018. 
CONCLUSIONS:Smartphone intervention apps are increasingly competitive for NIH funding. They reflect a wide diversity of approaches that have significant potential for use in applied settings.
Project description:Background:The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding, for its Explorer Grant scheme. This is a somewhat controversial approach because, despite the documented problems of peer review, many researchers believe that funding should be allocated solely using peer review, which is used almost ubiquitously by funding agencies around the world. Given the rarity of alternative funding schemes, there is interest in hearing from the first cohort of researchers ever to experience a lottery. Additionally, the Health Research Council of New Zealand wanted to hear from applicants about the acceptability of the randomisation process and the anonymity of applicants. Methods:This paper presents the results of a survey of Health Research Council applicants from 2013 to 2019. The survey asked about the acceptability of using a lottery and whether the lottery led researchers to take a different approach to their application. Results:The overall response rate was 39% (126 of 325 invites), with 30% (76 of 251) from applicants in the years 2013 to 2018, and 68% (50 of 74) from those in 2019 who were not yet aware of the funding result. There was agreement that randomisation is an acceptable method for allocating Explorer Grant funds, with 63% (n = 79) in favour and 25% (n = 32) against. There was less support for allocating funds randomly for other grant types, with only 40% (n = 50) in favour and 37% (n = 46) against. Support for a lottery was higher among those who had won funding. Multiple respondents stated that they supported a lottery once ineligible applications had been excluded and outstanding applications funded, so that the remaining applications were truly equal. Most applicants reported that the lottery did not change the time they spent preparing their application.
Conclusions:The Health Research Council's experience through the Explorer Grant scheme supports further uptake of a modified lottery.
Project description:Mini-grants are an increasingly common tool for engaging communities in evidence-based interventions for promoting public health. This article describes efforts by 4 Centers for Disease Control and Prevention/National Cancer Institute-funded Cancer Prevention and Control Research Network centers to design and implement mini-grant programs to disseminate evidence-based interventions for cancer prevention and control. This article also describes the source of evidence-based interventions, funding levels, selection criteria, time frame, number and size of grants, types of organizations funded, selected accomplishments, training and technical assistance, and evaluation topics/methods. Grant size ranged from $1,000 to $10,000 (median = $6,250). This mini-grant opportunity was characterized by its emphasis on training and technical assistance for evidence-based programming and on dissemination of interventions from the National Cancer Institute's Research-Tested Intervention Programs and the Centers for Disease Control and Prevention's Guide to Community Preventive Services. All projects had an evaluation component, although they varied in scope. The mini-grant processes described can serve as a model for organizations such as state health departments working to bridge the gap between research and practice.
Project description:Health services and policy research is the innovation engine of a health care system. In 2000, the Canadian Institutes of Health Research (CIHR) was formed to foster the growth of all sciences that could improve health care. We evaluated trends in health services and policy research funding, in addition to determinants of funding success. All applications submitted to CIHR strategic and open operating grant competitions between 2001 and 2011 were included in our analysis. Age, sex, size of research team, critical mass, season, year and research discipline were retrieved from application information. A cohort of 4725 applicants successfully funded between 2001 and 2005 was followed for 5 years to evaluate predictors of continuous funding. Multivariate generalized estimating equation logistic regression was used to estimate predictors of funding success and sustained funding. Between 2001 and 2011, 80,163 applications were submitted to open and strategic grant competitions. Over time, grant applications increased from 327 to 1137 per year, and annual funding increased from $12.6 to $48.0 million. Grant applications from male researchers were more likely to be funded than those from female researchers (odds ratio [OR] 1.40, 95% confidence interval [CI] 1.01-1.95), as were applications from larger research teams and from institutions with a large critical mass. Only 24.0% of scientists whose first funded grant was in health services and policy research had sustained 5-year funding, compared with 52.8% of biomedical scientists (OR 0.34, 95% CI 0.24-0.49). The CIHR has successfully increased the amount of health services and policy research in Canada. To enhance conditions for success, researchers should be encouraged to work in teams, request longer-duration grants, resubmit unsuccessful applications and affiliate themselves with institutions with a greater critical mass.
Project description:RATIONALE:The American Recovery and Reinvestment Act (ARRA) allowed the National Heart, Lung, and Blood Institute to fund R01 grants that fared less well in peer review than those funded by meeting a payline threshold. It is not clear whether the sudden availability of additional funding enabled research of similar or lesser citation impact than already-funded work. OBJECTIVE:To compare the citation impact of ARRA-funded de novo National Heart, Lung, and Blood Institute R01 grants with concurrent de novo National Heart, Lung, and Blood Institute R01 grants funded by standard payline mechanisms. METHODS AND RESULTS:We identified de novo (type 1) R01 grants funded by the National Heart, Lung, and Blood Institute in fiscal year 2009: these included 458 funded by meeting the Institute's published payline and 165 funded only because of ARRA funding. Compared with payline grants, ARRA grants received fewer total funds (median values, $1.03 versus $1.87 million; P<0.001) for a shorter duration (median values including no-cost extensions, 3.0 versus 4.9 years; P<0.001). Through May 2014, the payline R01 grants generated 3895 publications, whereas the ARRA R01 grants generated 996. Using the InCites database from Thomson Reuters, we calculated a normalized citation impact for each grant by weighting each article for the number of citations it received, normalizing for subject, article type, and year of publication. The ARRA R01 grants had a normalized citation impact per $1 million spent similar to that of the payline grants (median values [interquartile range], 2.15 [0.73-4.68] versus 2.03 [0.75-4.10]; P=0.61). The similar impact of the ARRA grants persisted even after accounting for potential confounders. CONCLUSIONS:Despite shorter durations and lower budgets, ARRA R01 grants had citation outcomes per $1 million spent comparable to those of contemporaneously funded payline R01 grants.
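The normalized-citation-impact-per-$1-million measure described above can be sketched as a small calculation: divide each article's citations by an expected baseline for its subject, article type, and year; sum over the grant's publications; and scale by funding. The baseline values below are assumed placeholders, not InCites data.

```python
# Sketch of a grant-level citation metric in the spirit of the abstract:
# field-normalized citations summed per grant, per $1 million of funding.
# Expected-citation baselines here are placeholders, not InCites values.

def normalized_impact_per_million(articles, total_funding_usd):
    """articles: iterable of (citations, expected_citations) pairs,
    where expected_citations is the subject/type/year baseline."""
    ncs = sum(cites / expected for cites, expected in articles if expected > 0)
    return ncs / (total_funding_usd / 1_000_000)

# Example: two articles, one at 2x its field baseline and one at 0.5x,
# from a grant that spent $1.25 million.
impact = normalized_impact_per_million([(20, 10), (5, 10)], 1_250_000)
```

The example evaluates to 2.0, on the same scale as the median values quoted in the abstract (roughly 2 per $1 million for both grant groups).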
Project description:Agencies that fund scientific research must choose: is it more effective to give large grants to a few elite researchers, or small grants to many researchers? Large grants would be more effective only if scientific impact increases as an accelerating function of grant size. Here, we examine the scientific impact of individual university-based researchers in three disciplines funded by the Natural Sciences and Engineering Research Council of Canada (NSERC). We considered four indices of scientific impact: numbers of articles published, numbers of citations to those articles, the most cited article, and the number of highly cited articles, each measured over a four-year period. We related these to the amount of NSERC funding received. Impact is positively, but only weakly, related to funding. Researchers who received additional funds from a second federal granting council, the Canadian Institutes of Health Research, were not more productive than those who received only NSERC funding. Impact was generally a decelerating function of funding. Impact per dollar was therefore lower for large grant-holders. This is inconsistent with the hypothesis that larger grants lead to larger discoveries. Further, the impact of researchers who received increases in funding did not predictably increase. We conclude that scientific impact (as reflected by publications) is only weakly limited by funding. We suggest that funding strategies that target diversity, rather than "excellence", are likely to prove more productive.
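One simple way to test for a "decelerating function of funding" is to fit a power law, impact ≈ c · funding^b, via log-log regression: a slope b < 1 means impact per dollar falls as grants grow. A sketch on synthetic data (not the NSERC records, whose analysis the paper describes only in outline):

```python
import math

# Sketch: fit log(impact) = a + b*log(funding) by least squares; a slope
# b < 1 indicates a decelerating relationship (impact per dollar falls as
# grant size grows). Data below are synthetic, purely for illustration.

def loglog_slope(funding, impact):
    xs = [math.log(f) for f in funding]
    ys = [math.log(v) for v in impact]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic decelerating data: impact grows as the square root of funding.
funding = [10_000, 50_000, 100_000, 500_000, 1_000_000]
impact = [f ** 0.5 for f in funding]
```

Here `loglog_slope(funding, impact)` recovers the exponent 0.5; a fitted slope at or above 1 would instead favour concentrating funds in fewer, larger grants.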
Project description:Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers' evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers' ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers "translated" a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.
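Reviewer agreement on the same set of applications is commonly summarized with an intraclass correlation; the abstract does not name its exact statistic, so the one-way ICC below is an illustrative stand-in, computed on made-up ratings.

```python
# Illustrative sketch of quantifying reviewer agreement with a one-way
# intraclass correlation (ICC). This is an assumed stand-in for whatever
# agreement statistic the study actually used; ratings are made up.

def icc_oneway(ratings):
    """ratings: list of per-application lists, each with k reviewer scores."""
    n = len(ratings)            # number of applications
    k = len(ratings[0])         # reviewers per application
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    # Between-application and within-application mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement yields an ICC of 1; ratings that vary only within applications (pure reviewer disagreement) drive the ICC toward its minimum, consistent with the study's finding that the assigned reviewer mattered more than the application.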