Archive for the 'NIH funding' category

Berg posts data on NIH Intramural funding

[Chart: Berg 2014 Intramural funding]

Jeremy Berg has a new column up at ASBMB Today which examines the distribution of NIH intramural funding. Among other things, he notes that you can play along at home by searching RePORTER using the ZIA activity code (i.e., in place of R01, R21, etc.). At first blush you might think "WOWZA!". The intramural labs are pretty dang flush. If you think about the direct costs of an extramural R01 grant - the full modular is only $250K per year. So you would need three awards (ok, the third one could be an R21) just to clear the first bin. But there are interesting caveats sprinkled throughout Berg's comments and in the first comment on the piece. Note the "Total Costs"? Well, apparently there is an indirect cost rate within the IRPs, and Berg comments that it is so variable that it is hard to issue anything similar to a negotiated extramural IDC rate for the entire NIH Intramural program. The comment from an ex-IRP investigator points to more issues. There may be some shared costs inserted into a given PI's apparent budget that the PI has no control over. Whether this is part of the overhead, an overhead-like cost, or maybe a cost shared across one IC's IRP...who knows?

We also don't know what a given PI has to pay for out of his or her ZIA allocation. What are animal housing costs like? Are they subsidized for certain ICs' IRPs? For certain labs? Who is a PI and who is a staff scientist of some sort within the IRPs? Do these statuses differ? Are they comparable to extramural lab operations? I know for certain sure that people who are more or less the equivalent of an extramural Assistant/Associate Professor in a soft money job category exist within the NIH IRPs without being considered a PI with their own ZIA allocation. So that means that a "PI" on the chart that Berg presents may in fact be equivalent to 2-3 PIs out here in extramural land. (And yes, I understand that some of the larger extramural labs similarly have several people who would otherwise be heading their own lab all subsumed within the grants awarded to one BigCheez PI.)

With that said, however, the IRP is supposed to be special. As Berg notes:

The IRP mission statement asserts that the IRP should “conduct distinct, high-impact laboratory, clinical, and population-based research” and that it should support research that “cannot be readily funded or accomplished in traditional academia.”

So by one way of looking at it, we shouldn't be comparing the IRP scientists to ourselves. They should be different.

Even if we think of IRP investigators as not much different from ourselves, I'm having difficulty making any sense of these numbers. It is nice to see them, but it is really hard to compare to what is going on with extramural grant funding.

Perhaps of greater value is the analysis Berg presents of whether NIH's intramural research program is feeling its fair share of the budgetary pain.

In 2003, when I became an NIH institute director, the overall NIH appropriation was $26.74 billion, while the overall intramural program consumed $2.56 billion, or 9.6 percent. In fiscal 2013, the overall NIH appropriation was $29.15 billion, and the intramural share had grown to $3.26 billion, or 11.2 percent.
Some of this growth is because of ongoing intramural activities, such as those involving the NIH Clinical Center, where, like at other hospitals, costs are very hard to contain below rates of inflation, or because of new activities, such as the NIH Chemical Genomics Center. The IRP is particularly expensive in terms of taxpayer dollars, because it is difficult to leverage the federal support to the IRP with funds from other sources as occurs in the extramural community.

So I guess that would be "no". No, the IRP, in aggregate, is not sharing the pain of the flatlined budget. There is no doubt that some of the individual components of the various IRPs are. It is inevitable. Previously flush budgets are no doubt being reduced. Senior folk are being pushed out. Mid- and lower-level employees are being cashiered. I'm sure there are counterexamples. But as a whole, it is clear that the IRP is being protected, inevitably at the expense of R-mech extramural awards.



34 responses so far

New Grant Snooping

Feb 04 2014 Published by under NIH, NIH Budgets and Economics, NIH funding

As usual, I like to keep an eye on RePORTER and SILK to see what the various ICs of my own dearest interest are up to with regard to grants that were supposed to fund Dec 1, 2013. Per usual, there was no budget and the more conservative ICs wait around to do anything. Some of the less-conservative ones do tend to start funding new grant awards in December and January so there is always something to see on SILK.

I noticed something interesting. NIAID has 44 new R01s listed that were funded on the A1 revision and 19 that were funded on the "first" submission. RePORTER notes that 30 funded in Dec, 12 in Jan and 17 on or after 2/1/2014 (not sure if I miscounted totals on SILK or RePORTER hasn't caught up or what).

My ICs of dearest concern are still waiting, only a bare handful of new R01s are listed.

NCI has 36 new R01 apps funded on A1, 21 on the A0. DK is running 15/13.

Scanning down the rest of the list of ICs, it looks like DK is about as close to even as it gets and that a 2:1 ratio of A1 to A0 being funded is not too far off the mean.


I still think we'd be a lot better off if something like two-thirds of grants were awarded on first submission and A1s made up only about a third.

11 responses so far

What about diversity in the R37/MERIT deals, NIH?

Jan 17 2014 Published by under NIH, NIH Careerism, NIH funding

The R37/MERIT award is an interesting beast in NIH-land. It is typically (exclusively?) awarded upon a successful competing continuation (now called renewal) R01 application. Program then decides in some cases to extend the interval of non-competition for another 5 years*. This, my friends, is person-not-project based funding.

The R37 is a really good gig....if you can get it.

So, given that I'm blogging about award disparity this week....I took a look at the R37s currently on the books for one of my favorite ICs.

There are 25 of them.

The PIs include

1 transgender PI.
4 female PIs
0 East Asian / East Asian-American PIs (that I could detect)
3 South Asian / South Asian-American PIs (that I could detect)
0 SubSaharan African / African-American PIs (that I could detect)
0 Latino PIs (that I could detect)

hmmm, not that strong of a job. How about another of my favorite ICs?

23 awards (Interesting because this IC is half the size of the above-mentioned one)

12 female PIs.
0 East Asian / East Asian-American PIs (that I could detect)
1-2 South Asian / South Asian-American PIs (that I could detect)
0 SubSaharan African / African-American PIs (that I could detect)
3-4 Latino PIs (that I could detect)

way better on the sex distribution. Whether this number of R37s reflects more than average good-old-folks clubbery or the above represents less than average I don't know. 25 at another large IC close to my interests. 95ish (I didn't parse for supplements) at another. Only 45ish at NCI. Clearly a big range relative to IC size.

Both of these are doing really poorly on East Asian/ Asian-American and African-American PIs. The first is pretty pathetic on Latino PIs as well.

On the other hand, good old white guys with grey hair or receding hairlines are doing quite well in the R37 stakes.

How are your favorite ICs doing, Dear Reader?

*The way I hear it. I have heard rumor that these can go beyond a total of 10 years of R37 but I'm not sure on that.

6 responses so far

What would the NIH do if it wanted to make things really hard for Asian and Black PIs to get funded?

Jan 17 2014 Published by under Anger, Fixing the NIH, NIH, NIH Careerism, NIH funding

The takeaway message from the report of Ginther and colleagues (2011) on Race, Ethnicity and NIH Research Awards can be summed up by this passage from the end of the article:

Applications from black and Asian investigators were significantly less likely to receive R01 funding compared with whites for grants submitted once or twice. For grants submitted three or more times, we found no significant difference in award probability between blacks and whites; however, Asians remained almost 4 percentage points less likely to receive an R01 award (P < .05). Together, these data indicate that black and Asian investigators are less likely to be awarded an R01 on the first or second attempt, blacks and Hispanics are less likely to resubmit a revised application, and black investigators that do resubmit have to do so more often to receive an award.

Recall that these data reflect applications received for Fiscal Years 2000 to 2006.

Interestingly, we were just discussing the most recent funding data from the NIH with a particular focus on the triaged applications. A comment on the Rock Talk blog of the OER at NIH was key.

I received a table of data covering A0 R01s received between FY 2010 and FY2012 (ARRA funds and solicited applications were excluded). Overall at NIH, 2.3% of new R01s that were “not scored” as A0s were funded as A1s (range at different ICs was 0.0% to 8.4%), and 8.7% of renewals that were unscored as A0s were funded as A1s (range 0.0% to 25.7%).

I noted the following key distinction between new and competing-continuation applications.

The mean and selected ICs I checked tell the same tale, i.e., that Type 2 apps have a much better shot at getting funded after triage on the A0. NIDA is actually pretty extreme from what I can tell- 2.8% versus 15.2%. So if there is a difference in the A1 resubmission rate for Type 1 and Type 2 (and I bet Type 2 apps that get triaged on A0 are much more likely to be amended and resubmitted) apps, the above analysis doesn't move the relative disadvantage around all that much. However for NIAAA the Type 1 and Type 2 numbers are closer- 4.7% versus 9.8%. So for NIAAA supplicants, a halving of the resubmission rate for Type 1 might bring the odds for Type 1 and Type 2 much closer.

So look. If you were going to try to really screw over some category of investigators you would make sure they were more likely to be triaged and then make it really unlikely that a triaged application could be revised into the fundable range. You could stoke this by giving an extra boost to triaged applications that had already been funded for a prior interval....because your process has already screened your target population to decrease representation in the first place. It's a feed-forward acceleration.

What else could you do? Oh yes. About those revisions, poorer chances on the first 1-2 attempts and the need for Asian and black PIs to submit more often to get funded. Hey I know, you could prevent everybody from submitting too many revised versions of the grant! That would provide another amplification of the screening procedure.

So yeah. The NIH halved the number of permitted revisions to previously unfunded applications for those submitted after January 25, 2009.

Think we're ever going to see an extension of the Ginther analysis to applications submitted from FY2007 onward? I mean, we're seeing evidence in this time of pronounced budgetary grimness that the NIH is slipping on its rather overt efforts to keep early stage investigator success rates similar to experienced investigators' and to keep women's success rates similar to men's.

The odds are good that the plight of African-American and possibly even Asian/Asian-American applicants to the NIH has gotten even worse than it was for Fiscal Years 2000-2006.

26 responses so far

More thoughts on the dismal NIH response to Ginther

Jeremy Berg made a comment

If you look at the data in the Ginther report, the biggest difference for African-American applicants is the percentage of "not discussed" applications. For African-Americans, 691/1149 =60.0% of the applications were not discussed whereas for Whites, 23,437/58,124 =40% were not discussed (see supplementary material to the paper). The actual funding curves (funding probability as a function of priority score) are quite similar (Supplementary Figure S1). If applications are not discussed, program has very little ability to make a case for funding, even if this were to be deemed good policy.

that irritated me because it sounds like yet another version of the feigned-helpless response of the NIH on this topic. It also made me take a look at some numbers and bench race my proposal that the NIH should, right away, simply pick up enough applications from African American PIs to equalize success rates. Just as they have so clearly done, historically, for Early Stage Investigators and very likely done for woman PIs.

Here's the S1 figure from Ginther et al, 2011:

[In the below analysis I am eyeballing the probabilities for illustration's sake. If I'm off by a point or two this is immaterial to the overall thrust of the argument.]

My knee jerk response to Berg's comment is that there are plenty of African-American PI's applications available for pickup. As in, far more than would be required to make up the aggregate success rate discrepancy (which was about 10% in award probability). So talking about the triage rate is a distraction (but see below for more on that).

There is a risk here of falling into Privilege-Thinking, i.e., that we cannot possibly countenance any redress of discrimination that, gasp, puts the previously underrepresented group above the well represented groups even by the smallest smidge. But looking at Supplementary Figure S1 from Ginther, and keeping in mind that the African-American PI application number is only 2% of the White applications, we can figure out that a substantial effect on African-American PIs' award probability would cause only an imperceptible change in that for White PI applications. And there's an amazing sweetener....merit.

Looking at the award probability graph from S1 of Ginther, we note that there are some 15% of the African-American PI's grants scoring in the 175 bin (old scoring method, youngsters) that were not funded. About 55-56% of all ethnic/racial category grants in the next higher (worse) scoring bin were funded. So if Program picks up more of the better scoring applications from African American PIs (175 bin) at the expense of the worse scoring applications of White PIs (200 bin), we have actually ENHANCED MERIT of the total population of funded grants. Right? Win/Win.

So if we were to follow my suggestion, what would be the relative impact? Well thanks to the 2% ratio of African-American to White PI apps, it works like this:

Take the 175 scoring bin in which about 88% of White PI and 85% of African-American PI apps were successful. Take a round number of 1,000 apps in that scoring bin (for didactic purposes, also ignoring the other ethnicities) and you get a 980/20 White/African-American PI ratio of apps. In that 175 bin we'd need 3 more African-American PI apps funded to get to 100%. In the next higher (worse) scoring bin (200 score), about 56% of White PI apps were funded. Taking three from this bin and awarding three more African-American PI awards in the next better scoring bin would plunge the White PI award probability from 56% to 55.7%. Whoa, belt up cowboy.

Moving down the curve with the same logic, we find in the 200 score bin that there are about 9 African-American PI applications needed to put the 200 score bin to 100%. Looking down to the next worse scoring bin (225) and pulling these 9 apps from White PIs, we end up changing the award probability for those apps from 22% to ...wait for it... about 21.1%.

And so on.

(And actually, the percentage changes would be smaller in reality because there is typically not a flat distribution across these bins and there are very likely more applications in each worse-scoring bin compared to the next better-scoring bin. I assumed 1,000 in each bin for my example.)
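For the spreadsheet-inclined, the bin-swap arithmetic above is easy to check. Here's a minimal sketch in Python, assuming the same round numbers I used for illustration (1,000 apps per bin, a 980/20 White/African-American split) and my eyeballed funding rates; `swap_effect` is just my name for the calculation, not anything official:

```python
# Back-of-the-envelope check of the bin-swap arithmetic above.
# Assumed round numbers from the text: 1,000 apps per bin, 98%/2% White/AA split.

def swap_effect(aa_rate, white_rate_next, n_per_bin=1000, aa_frac=0.02):
    """Fund every AA app in the better bin by pulling awards from White apps
    in the next-worse bin; return (awards moved, new White rate there)."""
    n_aa = round(n_per_bin * aa_frac)        # 20 AA apps in the bin
    n_white = n_per_bin - n_aa               # 980 White apps
    needed = round(n_aa * (1 - aa_rate))     # AA apps still unfunded in the bin
    new_white_rate = (n_white * white_rate_next - needed) / n_white
    return needed, new_white_rate

# 175 bin: ~85% of AA apps funded; next-worse (200) bin funds ~56% of White apps
needed, rate = swap_effect(0.85, 0.56)
print(needed, round(rate * 100, 1))   # 3 extra awards; White rate 56% -> 55.7%

# 200 bin: ~56% of AA apps funded; next-worse (225) bin funds ~22% of White apps
needed, rate = swap_effect(0.56, 0.22)
print(needed, round(rate * 100, 1))   # 9 extra awards; White rate drops ~1 point
```

Swap in the real bin counts from S1 and the White-side hit only gets smaller, as noted above.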

Another way to look at this issue is to take Berg's triage numbers from above. To move to a 40% triage rate for the African-American PI applications, we need to shift 20% (230 applications) into the discussed pile. This represents a whopping 0.4% of the White PI apps being shifted onto the triage pile to keep the numbers discussed the same.
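A quick sketch of that arithmetic, using the triage counts from Berg's comment (691/1,149 African-American and 23,437/58,124 White apps not discussed); the 40% target is simply the White triage rate:

```python
# Triage-shift arithmetic from Berg's numbers: 691/1149 AA apps (60.1%) and
# 23437/58124 White apps (40.3%) were "not discussed".
aa_apps, aa_triaged = 1149, 691
white_apps = 58124

target_triage = 0.40                 # bring the AA triage rate down to the White rate
to_discuss = aa_triaged - round(aa_apps * target_triage)
print(to_discuss)                    # 231, i.e. roughly the ~230 apps cited above

# Hold the total number discussed constant by triaging that many White apps instead
white_hit = to_discuss / white_apps
print(round(white_hit * 100, 2))     # ~0.4% of White apps shifted to triage
```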

These are entirely trivial numbers in terms of the "hit" to the chances of White PIs and yet you could easily equalize the success rate or award probability for African-American PIs.

It is even more astounding that this could be done by picking up African-American PI applications that scored better than the White PI applications that would go unfunded to make up the difference.

Tell me how this is not a no-brainer for the NIH?

34 responses so far

In hard times in NIH Grantlandia, guess who pays the steepest price?

A post over at the Rock Talk blog describes some recent funding data from the NIH. The takeaway message is that everything is down. Fewer grants awarded, a lower percentage of applications funded. Not exactly news to my audience. However, head over to the NIH data book for some interesting tidbits.

[Chart: 2013 Funding By Career Stage]

First up, my oldest soapbox, the new investigator. As you can see, up to FY2006 the PI who had not previously had any NIH funding faced a steeper hurdle to get a new grant (Type 1) funded compared to established investigators. This was despite the "New Investigator" checkbox at the top of the application and the fact that reviewers were instructed to give such applications a break. And they did in my experience....just not enough to actually get them funded. Study section discussions that ended with "...but this investigator is new and highly promising so that's why I'm giving it such a good score...[insert clearly unfundable post-discussion score]" were not uncommon during my term of appointed service. So round about FY2007 the prior NIH Director, Zerhouni, put in place an affirmative action system to fund newly-transitioned independent investigators. There's a great description in this Science news bit [PDF]. You can see the result below.

Interestingly, this will to maintain success rates of the inexperienced PIs at levels similar to the experienced PIs has evaporated for FY2011 and FY2013. See title.

[Chart: 2013 Funding By Sex of PI]

Next, the slightly more subtle case of women PIs. This will be a two-grapher. First, the overall Research Project Grant success rate broken down by PI sex. As you can see, up through FY2002 there was a disparity which disappeared in the subsequent years. Miracle? Hell no. I guarantee you there has been some placing of the affirmative action fingers on the scale for the sex disparity as well. Fortunately, the elastic hasn't snapped back in the past two FYs as it has for inexperienced investigators. But I'm keeping a suspicious eye on it, as should you. Notice how women trickle along juuuuust a little bit behind men? Interesting, isn't it, how the disparity is never actually reversed? You know, because if whoever was previously advantaged even slipped back to disadvantaged (instead of merely equal) the whole world would end.

[Chart: 2013 Funding By Sex and Type, R01]

Moving along, we downshift to R01-equivalent grants so as to perform the analysis of new proposals versus competing continuation (aka, "renewal") applications. There are mechanisms included in the "RPG" grouping that cannot be continued so this is necessary. What we find is that the disparity for woman PIs in continuing their R01/equivalent grants has been maintained all along. New grants have been level in recent years. There is a halfway decent bet that this may be down to the graybeard factor. This hypothesis depends on the idea that the longer a given R01 has been continued, the higher the success rate for each subsequent renewal. These data also show that a goodly amount of the sex disparity up through FY2002 was addressed at the renewal stage. Not all of it. But clearly gains were made. This kind of selectivity suggests the heavy hand of affirmative action quota filling to me.

This is why I am pro-quota and totally in support of the heavy hand of Program in redressing study section biases, btw. Over time, it is the one thing that helps. Awareness, upgrading women's representation on study section (see the early 1970s)...New Investigator checkboxes and ESI initiatives* all fail. Quota-making works.

*In that Science bit I link it says:

Told about the quotas, study sections began "punishing the young investigators with bad scores," says Zerhouni. That is, a previous slight gap in review scores for new grant applications from first-time and seasoned investigators widened in 2007 and 2008, Berg says. It revealed a bias against new investigators, Zerhouni says.

26 responses so far

CHE digs out some excuses from NIH as to why they are doing so little about Ginther

Jan 13 2014 Published by under Fixing the NIH, NIH Careerism, NIH funding

As you know I am distinctly unimpressed with the NIH's response to the Ginther report which identified a disparity in the success rate of African-American PIs when submitting grant applications to the NIH.

The NIH response (i.e., where they have placed their hard money investment in change) has been to blame pipeline issues. The efforts are directed at getting more African-American trainees into the pipeline and, somehow, training them better. The subtext here is twofold.

First, it argues that the problem is that the existing African-American PIs submitting to the NIH just kinda suck. They are deserving of lower success rates! Clearly. Otherwise, the NIH would not be looking in the direction of getting new ones. Right? Right.

Second, it argues that there is no actual bias in the review of applications. Nothing to see here. No reason to ask about review bias or anything. No reason to ask whether the system needs to be revamped, right now, to lead to better outcome.

A journalist has been poking around a bit. The most interesting bits involve Collins' and Tabak's initial response to Ginther and the current feigned-helplessness tack that is being followed.

From Paul Basken in the Chronicle of Higher Education:

Regarding the possibility of bias in its own handling of grant applications, the NIH has taken some initial steps, including giving its top leaders bias-awareness training. But a project promised by the NIH's director, Francis S. Collins, to directly test for bias in the agency's grant-evaluation systems has stalled, with officials stymied by the legal and scientific challenges of crafting such an experiment.

"The design of the studies has proven to be difficult," said Richard K. Nakamura, director of the Center for Scientific Review, the NIH division that handles incoming grant applications.

Hmmm. "difficult", eh? Unlike making scientific advances, hey, that stuff is easy. This, however, just stumps us.

Dr. Collins, in his immediate response to the Ginther study, promised to conduct pilot experiments in which NIH grant-review panels were given identical applications, one using existing protocols and another in which any possible clue to the applicant's race—such as name or academic institution—had been removed.

"The well-described and insidious possibility of unconscious bias must be assessed," Dr. Collins and his deputy, Lawrence A. Tabak, wrote at the time.

Oh yes, I remember this editorial distinctly. It seemed very well-intentioned. Good optics. Did we forget that the head of the NIH is a political appointment with all that that entails? I didn't.

The NIH, however, is still working on the problem, Mr. Nakamura said. It hopes to soon begin taking applications from researchers willing to carry out such a study of possible biases in NIH grant approvals, and the NIH also recently gave Molly Carnes, a professor of medicine, psychiatry, and industrial and systems engineering at the University of Wisconsin at Madison, a grant to conduct her own investigation of the matter, Mr. Nakamura said.

The legal challenges include a requirement that applicants get a full airing of their submission, he said. The scientific challenges include figuring out ways to get an unvarnished assessment from a review panel whose members traditionally expect to know anyone qualified in the field, he said.

What a freaking joke. Applicants have to get a full airing and will have to opt-in, eh? Funny, I don't recall ever being asked to opt-in to any of the non-traditional review mechanisms that the CSR uses. These include phone-only reviews, video-conference reviews and online chat-room reviews. Heck, they don't even so much as disclose that this is what happened to your application! So the idea that it is a "legal" hurdle that is solved by applicants volunteering for their little test is clearly bogus.

Second, the notion that a pilot study would prevent "full airing" is nonsense. I see very few alternatives other than taking the same pool of applications and putting them through regular review as the control condition and then trying to do a bias-decreasing review as the experimental condition. The NIH is perfectly free to use the normal, control review as the official review. See? No difference in the "full airing".

I totally agree it will be scientifically difficult to try to set up PI blind review but hey, since we already have so many geniuses calling for blinded review anyway...this is well worth the effort.

But "blind" review is not the only way to go here. How's about simply mixing up the review panels a bit? Bring in a panel that is heavy in precisely those individuals who have struggled with lower success rates- based on PI characteristics, University characteristics, training characteristics, etc. See if that changes anything. Take a "normal" panel and provide them with extensive instruction on the Ginther data. Etc. Use your imagination people, this is not hard.

Disappointingly, the CHE piece contains not one single bit of investigation into the real question of interest. Why is this any different from any other area of perceived disparity between interests and study section outcome at the NIH? From topic domain to PI characteristics (sex and relative age) to University characteristics (like aggregate NIH funding, geography, Congressional district, University type/rank, etc) the NIH is full willing to use Program prerogative to redress the imbalance. They do so by funding grants out of order and, sometimes, by setting up funding mechanisms that limit who can compete for the grants.

[Chart: 2013 Funding By Career Stage]

In the recent case of young/recently transitioned investigators they have trumpeted the disparity loudly, hamfistedly and brazenly "corrected" the study section disparity with special paylines and out-of-order pickups that amount to an affirmative action quota system [PDF].
All with exceptionally poor descriptions of exactly why they need to do so, save "we're eating our seed corn" and similar platitudes. All without any attempt to address the root problem of why study sections return poorer scores for early stage investigators. All without proving bias, describing the nature of the bias or clearly demonstrating the feared outcome of any such bias.

"Eating our seed corn" is a nice catchphrase but it is essentially meaningless. Especially when there are always more freshly trained PhD scientists eager and ready to step up. Why would we care if a generation is "lost" to science? The existing greybeards can always be replaced by whatever fresh faces are immediately available, after all. And there was very little crying about the "lost" Generation X scientists, remember. Actually, none, outside of Generation X itself.

The point being, the NIH did not wait for overwhelming proof of nefarious bias. They just acted very directly to put a quota system in place. Although, as we've seen in recent data this has slipped a bit in the past two Fiscal Years, the point remains.

Why, you might ask yourself, are they not doing the same in response to Ginther?

17 responses so far

A Bold Proposal to Fix the NIH

Our longtime blog commenter dsks is always insightful. This time, the proposal is such a doozy that it is worth dragging up as a new post.

... just make it official and block all triaged applications from subsequent resubmission. Maybe then use the extra reviewer time and money to bring back the A2, perhaps restricting it to A1 proposals that come in under ~30%ile or something.

Hell, I think any proposal that consistently scores better than 20%ile should be allowed to be resubmitted ad infinitum until it gets funded. Having to completely restructure a proposal because it couldn't quite make the last yard over what is accepted to be a rather arbitrary pay-line is insane.

On first blush that first one sounds pretty good. Not so sure about the endless queuing of an above payline, below 20%ile grant, personally. (I mean, isn't this where Program steps in and just picks it up already?)

This reminds me of something, though. Unlike in times past, the applicant now has some information on just how strong the rejection really was because of the criterion scores. This gives some specific quantification in contrast to only being able to parse the language of the review.

One would hope that there would be some correlation between the criterion scores and the choice of the PI to resubmit. As in, if you get 4s and 5s on Approach or Significance, maybe it is worth it. 7s and 8s mean you really better not bother.

36 responses so far

Should you revise and resubmit a triaged NIH Grant application?

Jan 06 2014 Published by under NIH, NIH Budgets and Economics, NIH funding

A December 18 post on the Rock Talk blog issued an update on the funding situation for grant applications submitted to the NIH. The data provide an early snapshot of success rates for 2013 competing research project grant (RPG) applications and awards.

We received 49,581 competing RPG applications at NIH in fiscal year 2013, slightly declining compared to last year (51,313 applications in FY2012).

... In FY2013 we made 8,310 competing RPG awards, 722 fewer than in FY2012. This puts the overall research project grant (RPG) success rate at 16.8%, a decline from the 17.6% reported in FY2012. One might have expected a bigger drop in the success rates since we made about 8% fewer competing awards this year, but the reduction in the number of applications explains part of it.

emphasis added, as if I need to do so.

See this graph for a recent historical trend in success rates and application submission numbers. With respect to the latter, you can see that the small decrease to 49,581 is not hugely significant. We'll have to wait a few more years to be convinced of any trend. Success rates are at an all-time low. This is rather unsurprising to any of you who have been paying attention to goings-on at the NIH, and is a result of the long trend toward Defunding the NIH.

Of greater interest in the Rock Talk post was a comment made in response to a query about the fate of initially-triaged applications. A Deborah Frank wrote:

A few months ago, I emailed Rock Talk to ask the same question as Mr. Doherty’s question #3. My query was routed to the Freedom of Information Act Office, and a few months later I received a table of data covering A0 R01s received between FY 2010 and FY2012 (ARRA funds and solicited applications were excluded). Overall at NIH, 2.3% of new R01s that were “not scored” as A0s were funded as A1s (range at different ICs was 0.0% to 8.4%), and 8.7% of renewals that were unscored as A0s were funded as A1s (range 0.0% to 25.7%). These data have at least two limitations. First, funding decisions made in 2013 were not included, so the actual success rates are likely a bit higher. Second, the table does not indicate how many of the unscored A0s were resubmitted.

The NIH data miner / blog team then posted a link to an excel spreadsheet with the relevant numbers for ICs, divided by Type 1 (new) and Type 2 (renewal, aka competing continuation) applications. The spreadsheet notes that this analysis is for unsolicited (i.e., non-RFA) applications and that since the FY2013 funding data were not complete when these were generated (7/15/2013), it is possible that some A0s submitted in this interval may still be funded.

Now, this is not precisely the same as the usual success rate numbers because of

  • the aforementioned exclusions
  • the way an A0 and an A1 submitted in the same FY are counted as one application in the success rate calculation
  • the fact that if an A1 is never submitted, it isn't (and cannot be) counted in the success rate

Nevertheless, keeping these details in mind, it is hard to escape the conclusion that one faces steep odds in getting a triaged A0 Type 1 proposal funded. On the face of it, anyway. And I have to tell you, Dear Reader, that this is consistent with my personal experience. I can't recall ever getting a triaged application to the funded level on the next submission. In fact I'm hard pressed to recall getting a triaged A0 funded as an A2 when that was still possible.

Yet I continue to revise them. Not entirely sure why, looking at these data.

Moving along, it is really disappointing that the NIH didn't go ahead and put all the relevant numbers in their spreadsheet. The thing that PIs really want to know is still obscured by this selective analysis. NIDA, for example, lists 394 unscored Type 1 applications, of which 11 (2.8%) were eventually funded. But unlike the now-disappeared CSR FY2004 databook analysis (see here, here for reference to it), they have failed to provide the number of initially triaged applications that the PIs actually resubmitted as A1s! If only half of the triaged applications were amended and resubmitted, then the odds for a resubmitted application rise to 5.6%.
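To make that arithmetic concrete, here is a minimal sketch (my own illustration, not an NIH calculation; the 50% resubmission rate is a pure assumption, since NIH did not release that number):

```python
def effective_odds(funded, triaged_a0, resubmission_rate):
    """Estimate the funding rate among triaged A0s that were
    actually amended and resubmitted as A1s.

    funded: initially triaged A0s eventually funded as A1s
    triaged_a0: A0 applications that were not scored
    resubmission_rate: ASSUMED fraction of triaged A0s resubmitted
    """
    resubmitted = triaged_a0 * resubmission_rate
    return funded / resubmitted

# NIDA Type 1 numbers from the NIH spreadsheet: 11 of 394 funded.
naive = 11 / 394                          # ~2.8% of all triaged A0s
adjusted = effective_odds(11, 394, 0.5)   # ~5.6% if only half resubmit
print(f"{naive:.1%} of triaged A0s vs {adjusted:.1%} of resubmissions")
```

The point is simply that the denominator NIH withheld (how many triaged A0s came back as A1s) can roughly double the odds that matter to a PI deciding whether to resubmit.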

Is this difference relevant to PI decision making? I don't know for sure, but I suspect it would be. It is also relevant to understanding the different success rates for initially-triaged Type 1 and Type 2 applications. The mean and the selected ICs I checked tell the same tale, i.e., that Type 2 apps have a much better shot at getting funded after triage on the A0. NIDA is actually pretty extreme from what I can tell: 2.8% versus 15.2%. So if there is a difference in the A1 resubmission rate between Type 1 and Type 2 apps (and I bet Type 2 apps that get triaged on the A0 are much more likely to be amended and resubmitted), the above analysis doesn't move the relative disadvantage around all that much. For NIAAA, however, the Type 1 and Type 2 numbers are closer: 4.7% versus 9.8%. So for NIAAA supplicants, a halving of the resubmission rate for Type 1 might bring the odds for Type 1 and Type 2 much closer.

Do these data change my approach? They probably should. However, there is a factor of submission dates here. For any given round, new applications are submitted one month and amended applications are due the next month. So if you are a few weeks away from the second deadline and considering whether to resubmit an application...there is no "new" application that you could submit right now. You have to wait for the next round. So if you are feeling grant pressure...what else are you going to do? Take the low odds, or take the guarantee of zero odds?

Final note. I continue to believe, until NIH demonstrates my error very clearly, that considerable numbers of "A0" submissions are really reworkings of ideas that have been previously reviewed. I also believe that these "A0" submissions are disproportionately likely to be funded due to the prior submission/review rounds. Whether this is due to improved grant crafting, additional preliminary data, better approaches, or gradual convincing of a study section or Program is not critical here; I'd say all of these contribute. If I am correct, then there is value in continuing to work the steps by resubmitting a triaged A0.
Additional Reading:

NIH Historical Success Rates Explain Current Attitudes

More data on historical success rates for NIH grants

Old Boys’ Network Favors Men’s Continuing Grants?

35 responses so far

NIH still doesn't get anywhere close to a response to the Ginther finding.

In case my comment never makes it out of moderation at RockTalk....

Interesting to contrast your Big Data and BRAINI approaches with your one for diversity. Try switching those around…”establish a forum..blah, blah…in partnership…blah, engage” in Big Data. Can’t you hear the outraged howling about what a joke of an effort that would be? It is embarrassing that the NIH has chosen to kick the can down the road and hide behind fake-helplessness when it comes to enhancing diversity. In the case of BRAINI, BigData and yes, discrimination against a particular class of PI applicants (the young) the NIH fixes things with hard money- awards for research projects. Why does it draw back when it comes to fixing the inequality of grant awards identified in Ginther?

When you face up to the reasons why you are in full cry, issuing real R01 NGA solutions for the dismal plight of ESIs while doing nothing similar for underrepresented PIs, then you will understand why the Ginther report found what it did.

ESIs continue, at least six years on, to benefit from payline breaks and pickups. You trumpet this behavior as a wonderful thing. Why are you not doing the same to redress the discrimination against underrepresented PIs? How is it different?

The Ginther bombshell dropped in August of 2011. There has been plenty of time to put in real, effective fixes. The numbers are such that the NIH would have had to fund mere handfuls of new grants to ensure success rate parity. And they could still do all the can-kicking, ineffectual hand waving stuff as well.

And what about you, o transitioning scientists complaining about an "unfair" NIH system stacked against the young? Is your complaint really about fairness? Or is it really about your own personal success?

If it is a principled stand, you should be name dropping Ginther as often as you do the fabled "42 years before first R01" stat.

13 responses so far
