Is NIAAA a better steward of NIH grant monies than is NIDA?

Oct 06 2010 Published by drugmonkey under Alcohol, NIH, NIH Budgets and Economics, NIH funding

A Nature News bit [h/t @davidkroll] on the NIAAA/NIDA merger mostly covers the usual ground. It did add two bits of info into the discussion space, however. First, they got a "no comment" response from the beverage industry:

Although the alcohol industry is unlikely to relish its legal product being lumped in for study with street drugs such as cocaine and heroin, it has so far remained silent. US Trade groups including the Beer Institute, the Wine Institute, the American Beverage Institute and the Distilled Spirits Council of the United States all declined to comment for this article.

I'm still betting that their entire strategy (if they actually care about this, which I suspect they do) is going to be to get a pet Congress Critter or two to oppose the plan. Spirited opposition can probably block the whole thing.

The second tidbit is contained in a graph of the recent success rates for applicants to the two Institutes. This underlines something that NIAAA people have been quietly bragging about for the past several years: they managed to soften the blow of the GreatBudgetCrash through 2005-2007, maintaining relatively higher success rates while NIDA's went in the tank. The two ICs' success rates are closer now, although NIAAA applicants still enjoy a slight advantage. What does this reflect? Hard to tell from this type of data.

Is it better stewardship? Did NIAAA Program Staff anticipate the flatlined federal allocation and plan for it better? This is important to know as the POs of the two institutes are integrated. Maybe the NIAAA staff deserve a more-equal ranking based on their performance in past years?

Is it a reflection that smaller ICs can respond more nimbly? This would seem to be an important analysis to make given the push to consolidate not just NIAAA/NIDA but to merge additional small institutes in the future.

Is this only a reflection of a smaller, more insular research community that was more responsive to Program Staff's warnings not to submit so many proposals? Did NIAAA's pronounced bias toward putting a lot of $$$ into BigMech Centers and the like smooth the process somehow?

...just thinking out loud here...

12 responses so far

  • pinus says:

    A senior program officer I spoke with suggested that this increased payline is due to NIAAA trimming back dollar amounts on grants.

  • drugmonkey says:

    That suggests they were doing this to a greater extent than NIDA was. I don't seem to recall the reductions-upon-funding being any more severe at NIAAA at the time, but I'm going by announced policy and a shaky memory there....

  • bsci says:

    Some of the data that is relevant to this might be at:
    http://officeofbudget.od.nih.gov/spending_hist.html
    The relevant pdf is labeled: Mechanism Detail by IC, FY 1983 - FY 2009 and includes the competing & noncompeting grant budgets & number of awards along with a lot of other interesting numbers by IC and year.
    At a quick glance, it looks like NIDA might have given a slightly larger proportion of its budget to Research Grants vs. Research Centers & other research starting in 2007, but the biggest difference in the % awarded is in 2005-6. I also looked at the # of grants vs. the funding for those grants (competing & noncompeting separately).
    NIAAA actually gave more per competing grant in 2005 than in 2004 (2004 was the lowest year from 2003-2009), while NIDA was fairly flat. For each year, the mean NIDA grant was always larger than the mean NIAAA grant. The raw number of grants awarded fluctuates, but doesn't look very different between the institutes (a rough sketch of that per-award arithmetic follows below).

    At first glance, I don't think the data fit what pinus heard. I suspect there was at least some NIAAA decrease in the # of grant applications.
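
    A rough sketch of that per-award arithmetic, assuming the Mechanism Detail table gets exported to a CSV first; the filename and column names here are made-up placeholders, not the actual table layout:

```python
# Rough sketch, not the actual NIH table layout: the filename and column names
# ("IC", "FY", "Total Funding", "Number of Awards") are invented placeholders.
import pandas as pd

df = pd.read_csv("mechanism_detail_by_ic.csv")  # hypothetical export of the PDF table
two_ics = df[df["IC"].isin(["NIDA", "NIAAA"])].copy()

# Mean award = total grant funding divided by number of awards, per IC per year
two_ics["mean_award"] = two_ics["Total Funding"] / two_ics["Number of Awards"]

# One column per IC, one row per fiscal year, for a side-by-side comparison
print(two_ics.pivot(index="FY", columns="IC", values="mean_award"))
```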

  • The only possible way success rates can be better than at other institutes is if the number of applications doesn't increase as much as it does at other institutes--perhaps by declining assignment of stuff that *could* be accepted--and by cutting grant budgets. For example, it is well known that NIGMS has a substantially better payline than some of the other institutes, but at the expense of much more severe administrative budget cuts.

  • drugmonkey says:

    If an IC had seen the plunge coming 2-3 years away, could they not have picked up more shorter-duration awards (assuming they get enough 3-4 yr R01s to matter) and avoided the cliff?

    I pulled the Excel sheet off the RePORTER site and am doing scatter plots of the application numbers versus success rates. Fun to play around with those numbers...
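
    A minimal sketch of the sort of plot I mean, if anyone wants to play along; the filename and column headers below are placeholders, not necessarily what the RePORT spreadsheet actually calls them:

```python
# Sketch only: the spreadsheet name and column headers are assumptions,
# not the real RePORT file's layout.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("success_rates_by_ic.xlsx")  # hypothetical filename

fig, ax = plt.subplots()
for fy, grp in df.groupby("Fiscal Year"):
    # One point per IC: how many applications it reviewed vs. its success rate
    ax.scatter(grp["Number of Applications"], grp["Success Rate"],
               label=str(fy), alpha=0.6)

ax.set_xlabel("Competing applications reviewed")
ax.set_ylabel("Success rate (%)")
ax.legend(title="Fiscal year", fontsize="small")
plt.show()
```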

  • physioprof says:

    Yeah, cutting years is exactly the same as cutting budgets.

  • drugmonkey says:

    Actually, I was talking about using their preferential pickups to select R01s that happened to come in with fewer than 5 years proposed... maybe not enough of these to matter.

    My first round of charts from the success data finds evidence for smaller IC = better success only in FY2002 and FY2007 so far. Lots of very flat-looking years.

    It also seems to be the case that NCCAM and NCRR are frequently poor-success-rate outliers among the low-submission ICs.

  • ginger says:

    "Spirited opposition" - I see what you did there.

  • becca says:

    What would happen to those numbers if you increased the % of grants triaged? Or do those data take into account *all* grant submissions (I was under the impression they only took into account scored submissions)??

  • DrugMonkey says:

    Success rate includes unscored applications, becca.

  • becca says:

    Ahh. My confusion was over what counts as "reviewed": triaged applications are peer-reviewed, just not scored. Is it possible for a grant to just be sent back without even peer review? (The equivalent of a test ungraded because it didn't have a name written on it, I suppose.)

    Those numbers seem high, then: people (eventually) get about 1 in 5 grants funded (mythical average people, obviously)?

  • drugmonkey says:

    I was just discussing this at Odyssey's blog:

    In the NIH system there is a similar confusion, brought about by at least two factors. First, NIH publishes/brags about its Success Rate, defined as:

    Success rates are defined as the percentage of reviewed grant applications that receive funding. They are computed on a fiscal year basis and include applications that are peer reviewed and either scored or unscored by an Initial Review Group. Success rates are determined by dividing the number of competing applications funded by the sum of the total number of competing applications reviewed and the number of funded carryovers. Applications having one or more amendments in the same fiscal year are only counted once.

    That nice little trick of counting revisions submitted in the same FY only once boosts the numbers, but it introduces variability based on when exactly the revision was submitted, etc.
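
    To make that arithmetic concrete, here is a toy sketch of the definition quoted above; all of the numbers are invented for illustration:

```python
# Toy illustration of the quoted Success Rate definition; the inputs are made-up numbers.
def success_rate(funded_competing, reviewed_competing_deduped, funded_carryovers):
    """Funded competing awards divided by (competing applications reviewed,
    with same-FY amendments counted once, plus funded carryovers)."""
    return funded_competing / (reviewed_competing_deduped + funded_carryovers)

# e.g. 300 awards, 1,450 deduplicated applications reviewed, 50 funded carryovers
print(f"{success_rate(300, 1450, 50):.1%}")  # -> 20.0%
```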

    Second, published or rumored "paylines" are often thought to be synonymous with what we might think of as the funding rate, but this is not so. The payline is inevitably conservative. It represents what they *know* they can fund based on their budget, without considering the particulars of how expensive a given grant is (including variable indirect costs across Unis). Once those are set, they take a look at their remaining budget for that round (or the entire FY) and pay out a bunch more awards. So the apparent funding rate is always going to exceed whatever an Institute or Center of the NIH publishes in advance as its payline. *Furthermore*, the payline is based on a percentile rank which is baselined against a rolling 3 cycles of review. So if the current round of apps performs really well or poorly against the prior two, again the percentile rank is not well-aligned with the chances of getting funded (a toy sketch of the rolling-base idea is at the end of this comment).

    In short, it is the PI's poor understanding of these various numbers that produces the thought "gee, why are NIH success rates so much higher than my subjective impression of the funding rate?"
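
    And to illustrate the rolling-three-cycles point, a minimal sketch; the scores and the simplified formula are invented for illustration and are not NIH's actual percentiling procedure:

```python
# Illustrative sketch only: invented scores and a simplified percentile formula,
# not NIH's actual computation.
def percentile_rank(my_score, pooled_scores):
    """Percent of applications in the pooled base (current round plus the two
    prior rounds) whose impact score is at least as good (lower = better)."""
    better_or_equal = sum(1 for s in pooled_scores if s <= my_score)
    return 100.0 * better_or_equal / len(pooled_scores)

pooled = [20, 25, 25, 30, 31, 35, 40, 41, 50, 55]  # made-up scores from 3 rounds
print(percentile_rank(31, pooled))  # -> 50.0
```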
