Archive for the 'Grantsmanship' category

Your Grant in Review: When they aren't talking to you.

Aug 22 2014 Published by under Grant Review, Grantsmanship, NIH Careerism

It is always good to remember that sometimes comments in the written critique are not directed at the applicant.

Technically, of course, these comments are directed at Program Staff in an advisory capacity. Not to help the applicant in any way whatsoever; assistance in revising is a side effect.

Still, a comment that opposes a Stock Criticism is particularly likely to be there for the consumption of either Program or the other reviewers.

It is meant to preempt the Stock Criticism when the person making the comment likes the grant.

12 responses so far

Your Grant in Review Reminder: Research Study Sections First

Aug 22 2014 Published by under Grant Review, Grantsmanship, NIH Careerism, NIH funding

One key to determining the right study section to request is to look on RePORTER for funded grants reviewed in your study sections of interest.

Sometimes this is much more informative than the boilerplate description of the study section listed at CSR.

8 responses so far

Peer Review: Advocates and Detractors Redux

A comment on a recent post from Grumble is a bit of key advice for those seeking funding from the NIH.

It's probably impossible to eliminate all Stock Critique bait from an application. But you need to come close, because if you don't, even a reviewer who likes everything else about your application is going to say to herself, "there's no way I can defend this in front of the committee because the other reviewers are going to bring up all these annoying flaws." So she won't even bother trying. She'll hold her fire and go all out to promote/defend the one application that hits on most cylinders and proposes something she's really excited about.

This is something that I present as an "advocates and detractors" heuristic for improving your grant writing, sure, but it applies to paper writing/revising and general career management as well. I first posted comments on Peer Review: Friends and Enemies in 2007 and reposted in 2009.

The heuristic is this. In situations of scientific evaluation, whether this be manuscript peer-review, grant application review, job application or the tenure decision, one is going to have a set of advocates in favor of one's case and detractors who are against. The usual caveats apply to such a strict polarization. Sometimes you will have no advocates, in which case you are sunk anyway so that case isn't worth discussing. The same reviewer can simultaneously express pro and con views but as we'll discuss this is just a special case.

The next bit in my original phrasing is what Grumble is getting at in the referenced comment.

Give your advocates what they need to go to bat for you.

This is the biggie. In all things you have to give the advocate something to work with. It does not have to be overwhelming evidence, just something. Let's face it, how many times are you really in a position in science to overwhelm objections with the stupendous power of your argument and data to the point where the most confirmed critic cries "Uncle"? Right. Never happens.

The point here is that you need not put together a perfect grant, nor need you "wait" until you have X, Y or Z bit of Preliminary Data lined up. You just have to come up with something that your advocates can work with. As Grumble was pointing out, if you give your advocate a grant filled with StockCritique bait then this advocate realizes it is a sunk cause and abandons it. Why fight with both hands and legs trussed up like a Thanksgiving turkey?

Let's take some stock critiques as examples.

"Productivity". The goal here is not to somehow rush 8 first author papers into press. Not at all. Just give them one or two more papers, that's enough. Sometimes reiterating the difficulty of the model or the longitudinal nature of the study might be enough.

"Independence of untried PI with NonTenureTrackSoundin' title". Yes, you are still in the BigPI's lab, nothing to be done about that. But emphasize your role in supervising whole projects, running aspects of the program, etc. It doesn't have to be meticulously documented, just state it and show some sort of evidence. Like your string of first and second authorships on the papers from that part of the program.

"Not hypothesis driven". Sure, well, sometimes we propose methodological experiments, sometimes the outcome is truly a matter of empirical description and sometimes the results will be useful no matter how it comes out, so why bother with some bogus bet on a hypothesis? Because if you state one, this stock critique is de-fanged; it is much easier to argue the merits of a given hypothesis than it is the merits of the lack of one.

Instead of railing against the dark of StockCriticism, light a tiny candle. I know. As a struggling newb it is really hard to trust the more-senior colleagues who insist that their experiences on various study sections have shown that reviewers often do go to bat for untried investigators. But....they do. Trust me.

There's a closely related reason to brush up your application to avoid as many obvious pitfalls as possible: it takes ammunition away from your detractors, which makes the advocates' job easier.

Deny your detractors grist for their mill.

Should be simple, but isn't. Particularly when the critique is basically a reviewer trying to tell you to conduct the science the way s/he would if they were the PI. (An all too common and inappropriate approach, in my view.) If someone wants you to cut something minor out, for no apparent reason (like, say, the marginal cost of doing that particular experiment is low), just do it. Add that extra control condition. Respond to all of their critiques with something, even if it is not exactly what the reviewer is suggesting; again, your ultimate audience is the advocate, not the detractor. Don't ignore anything major. This way, they can't say you "didn't respond to critique". They may not like the quality of the response you provide, but arguing about this is tougher in the face of your advocating reviewer.

This may actually be closest to the core of what Grumble was commenting on.

I made some other comments about the fact that a detractor can be converted to an advocate in the original post. The broader point is that an entire study section can be gradually converted. No joke that with enough applications from you, you can often turn the tide. Either because you have argued enough of them (different reviewers might be assigned over time to your many applications) into seeing science your way or because they just think you should be funded for something already. It happens. There is a "getting to know you" factor that comes into play. Guess what? The more credible apps you send to a study section, the more they get to know you.

Ok, there is a final bit for those of you who aren't even faculty yet. Yes, you. Things you do as a graduate student or as a postdoc will come in handy, or hurt you, when it comes time to apply for grants as faculty. This is why I say everyone needs to start thinking about the grant process early. This is why I say you need to start talking with NIH Program staff as a grad student or postdoc.

Plan ahead

Although the examples I use are from the grant review process, the application to paper review and job hunts is obvious with a little thought. This brings me to the use of this heuristic in advance to shape your choices.

Postdocs, for example, often feel they don't have to think about grant writing because they aren't allowed to at present, may never get that job and if they do they can deal with it later. This is an error. The advocate/detractor heuristic suggests that postdocs make choices to expend some effort in a broad range of areas. It suggests that it is a bad idea to gamble on the BIG PAPER approach if this means that you are not going to publish anything else. An advocate on a job search committee can work much more easily with a dearth of Science papers than s/he can with a dearth of any pubs whatsoever!

The heuristic suggests that going to the effort of teaching just one or two courses can pay off- you never know if you'll be seeking a primarily-teaching job after all. Nor when "some evidence of teaching ability" will be the difference between you and the next applicant for a job. Take on that series of time-depleting undergraduate interns in the lab so that you can later describe your supervisory roles in the laboratory.

This latter bit falls under the general category of managing your CV and what it will look like for future purposes.

Despite what we would like to be the case, despite what should be the case, despite what is still the case in some cozy corners of a biomedical science career....let us face some facts.

  • The essential currency for determining your worth and status as a scientist is your list of published, peer reviewed contributions to the scientific literature.
  • The argument over your qualities between advocates and detractors in your job search, promotions, grant review, etc. is going to boil down to pseudo-quantification of your CV at some point.
  • Quantification means analyzing your first author / senior author /contributing author pub numbers. Determining the impact factor of the journals in which you publish. Examining the consistency of your output and looking for (bad) trends. Viewing the citation numbers for your papers.
  • You can argue to some extent for extenuating circumstances, the difficulty of the model, the bad PI, etc but it comes down to this: Nobody Cares.
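To make the "pseudo-quantification" above concrete: one metric that reduces a citation list to a single number is the h-index (the "h-factor" that comes up later in this archive). A minimal sketch of the standard definition, purely for illustration:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper (and all ranked above it) has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for six papers:
print(h_index([42, 17, 9, 6, 3, 1]))  # prints 4: four papers have >= 4 citations
```

Note that this is exactly why "consistency of output" matters in the list above: one blockbuster paper moves the h-index no further than a modestly cited one.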

My suggestion is, if you expect to have a career you had better have a good idea of what the standards are. So do the research. Do compare your CV with those of other scientists. What are the minimum criteria for getting a job / grant / promotion / tenure in your area? What are you going to do about it? What can you do about it?

This echoes something Odyssey said on the Twitts today:


are true for your subfield stage as well as your University stage of performance.

6 responses so far

On coming up with multiple ideas for R01 proposals

A question to the blog raised the perennial concern that comes up every time I preach on about submitting a lot of proposals. How does one have enough ideas for that? My usual answer is a somewhat perplexed inability to understand how other scientists do not have more ideas in a given six month interval than they can possibly complete in the next 20 years.

I reflected slightly more than usual today and thought of something.

There is one tendency of new grant writers that can be addressed here.

My experience is that early grant writers have a tendency to write a 10 year program of research into their initial R01s. It is perfectly understandable and I've done it myself. Probably still fall into this now and again. A Stock Critique of "too ambitious" is one clue that you may need to think about whether you are writing a 10 year research program rather than a 5 year, limited dollar figure, research project that fits within a broader programmatic plan.

One of the key developments as a grant writer, IMNSHO, is to figure out how to write a streamlined, minimalist grant that really is focused on a coherent project.

When you are doing this properly, it leaves an amazing amount of additional room to write additional, highly-focused proposals on, roughly speaking, the same topic.

33 responses so far

Women in the R00 phase don't apply for R01s as frequently as men

Sally Rockey:

A specific issue that has recently created interesting conversations in the blogosphere is whether female K99/R00 awardees were less likely to receive a subsequent R01 award compared to male K99/R00 awardees. We at NIH have also found this particular outcome among K99/R00 PIs and have noted that those differences again stem from differential rates of application. Of the 2007 cohort of K99 PIs, 86 percent of the men had applied for R01s by 2013, but only 69 percent of the women had applied.

She's referring here to a post over at DataHound ("K99-R00 Evaluation: A Striking Gender Disparity") which observed:

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).
To investigate this further, I looked at the two cohorts separately. For the FY2007 cohort, 70 of the 108 men (65%) with R00 awards have received R01 grants whereas only 31 of the 62 women (50%) have (P value = 0.07). For the FY2008 cohort, 44 of the 93 men (47%) with R00 awards have received R01s whereas only 22 of the 65 women (34%) have (P value = 0.10). The lack of statistical significance is due to the smaller sample sizes for the cohorts separately rather than any difference in the trends for the separate cohorts, which are quite similar.
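The headline P value can be reproduced from the quoted counts. A minimal sketch, assuming (since the posts don't say which test was used) a standard chi-square test of independence without continuity correction; Fisher's exact test gives a similar value:

```python
import math

def chi2_1df(a, b, c, d):
    """Chi-square test of independence (no continuity correction)
    for a 2x2 table [[a, b], [c, d]]; returns (chi2, p) for df=1."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df=1 the chi-square survival function reduces to erfc:
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Men: 114 of 201 R00 awardees received an R01; women: 53 of 127.
chi2, p = chi2_1df(114, 201 - 114, 53, 127 - 53)
print(round(chi2, 2), round(p, 3))  # prints: 6.99 0.008
```

The same arithmetic on the smaller per-cohort tables yields the larger (non-significant) P values quoted, which is the sample-size point DataHound makes.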

And Rockey isn't even giving us the data on the vigor with which an R00 holder is seeking R01 funding. That may or may not make the explanation even stronger.

Seems to me that any mid or senior level investigators who have new R00-holding female assistant professors in their department might want to make a special effort to encourage them to submit R01 apps early and often.

13 responses so far

Simple, can't-miss strategy to get a grant from the NIH

Aug 08 2014 Published by under Grantsmanship, NIH, NIH Careerism

Work at it.
Continue Reading »

79 responses so far

Sex differences in K99/R00 awardees from my favorite ICs

Jul 21 2014 Published by under Grantsmanship, NIH, NIH Careerism, NIH funding, Peer Review

Datahound has some very interesting analyses up regarding NIH-wide sex differences in the success of the K99/R00 program.

Of the 218 men with K99 awards, 201 (or 92%) went on to activate the R00 portion. Of the 142 women, 127 (or 89%) went on to the R00 phase. The differences in these percentages are not statistically significant.

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).


So per my usual, I'm very interested in what the ICs that are closest to my lab's heart have been up to with this program. Looking at K99 awardees from 07 to 09 I find women PIs to constitute 3/3, 1/3 and 2/4 in one Institute and 1/7, 2/6 and 5/14 in the other Institute. One of these is doing better than the other and I will just note that was before the arrival of a Director who has been very vocal about sex discrimination in science and academia.

In terms of the conversion to R01 funding that is the subject of Datahound's post, the smaller Institute has decent outcomes* for K99 awardees from 07 (R01, R21, nothing), 08 (R01, R01, R01) and 09 (P20 component, U component, nothing, nothing).

In the other Institute, the single woman from 07 did not appear to convert to the R00 phase but Google suggests made Assistant Professor rank anyway. No additional NIH funding. The rest of the 07 class contains 4 with R01 and two with nothing. In 08, the women PIs are split (one R01, one nothing) similar to the men (2 R01, 2 with nothing). In 09 the women PIs have two with R01s, one R03 and two with nothing.

So from this qualitative look, nothing is out of step with Datahound's NIH-wide stats. There are 14/37 women PIs; this 38% is similar to the NIH-wide 39% Datahound quoted, although there may be a difference between these two ICs (30% vs 60%) that could stand some inquiry. One of 37 K99 awardees failed to convert to R00 from the K99 (but seems to be faculty anyway). Grant conversion past the R00 is looking to be roughly half or a bit better.

I didn't do the men for the 2009 cohort in the larger Institute but otherwise the sex differences in terms of getting/not getting additional funding beyond the R00 seems pretty similar.

I do hope Datahound's stats open some eyes at the NIH, however. Sure, there are reasons to potentially excuse away a sex difference in the rates of landing additional research funding past the R00. But I am reminded of a graph Sally Rockey posted regarding the success rate on R01-equivalent awards. It showed that men and women PIs had nearly identical success rates on new (Type 1) proposals but slightly lower success on Renewal (Type 2) applications. This pastes over the rates of conversion to R00 and the acquisition of additional funding, if you squint a bit.

Are women overall less productive once they've landed some initial funding? Are they viewed negatively on the continuation of a project but not on the initiation of it? Are women too humble about what they have accomplished?

*I'm counting components of P or U mechanisms but not pilot awards.

15 responses so far

Your Grant in Review: The Ideal Personnel Lineup

Excellent comment from eeke:

My last NIH grant application was criticized for not including a post-doc at 100% effort. I had listed two techs, instead. Are reviewers under pressure to ding PI's for not having post-docs or some sort of trainee? WTF?

My answer:

I think it is mostly because reviewers think that a postdoc will provide more intellectual horsepower than a tech and especially when you have two techs, you could have one of each.

I fully embrace this bias, I have to admit. I think a one-tech, one-PD full modular R01 is about the standardest of standard lineups. Anchors the team. Best impact for the money and all that.

A divergence from this expectation would require special circumstances to be explained (and of course there are many projects imaginable where two-tech, no-postdoc *would* be best, you just have to explain it).

What do you think, folks? What arrangement of personnel do you expect to see on a grant proposal in your field, for your agencies of closest interest? Are you a blank slate until you see what the applicant has proposed or do you have....expectations?

22 responses so far

Your Grant in Review: The F32 Postdoctoral Fellowship Application

We've previously discussed the NIH F32 Fellowship designed to support postdoctoral trainees. Some of the structural limitations to a system designed on its face to provide necessary support for necessary (additional) training overlap considerably with the problems of the F31 program designed to support graduate students.

Nevertheless, winning an individual NRSA training fellowship (graduate or postdoctoral) has all kinds of career benefits to the trainee and primary mentor so they remain an attractive option.

A question arose on the Twitts today about whether it was worth it for a postdoc in a new lab to submit an application.

In my limited experience reviewing NRSA proposals in a fellowship-dedicated panel for the NIH, there is one issue that looms large in these situations.

Reviewer #1, #2 and #3: "There is no evidence in the application that sufficient research funds will be available to complete the work described during the proposed interval of funding."

NRSA fellowships, as you are aware, do not come with money to pay for the actual research. The fellowship applications require a good deal of discussion of the research the trainee plans to complete for the proposed interval of training. In most cases that research plan involves a fair amount of work that requires a decent amount of research funding to complete.

The reviewers, nearly all of them in my experience, will be looking for signs of feasibility. That the PI is actually funded, funded to do something vaguely related* to the topic of the fellowship proposal and funded for the duration over which the fellowship will be active.

When the PI is not obviously funded through that interval, eyebrows are raised. Criticism is leveled.

So, what is a postdoc in a newer lab to do? What is the PI of a newish lab, without substantial funding to do?

One popular option is to find a co-mentor for the award. A co-mentor that is involved. Meaning the research plan needs to be written as a collaborative project between laboratories. Obviously, this co-mentor should have the grant support that the primary PI is lacking. It needs to be made clear that there will be some sort of research funds to draw upon to support the fellow doing some actual research.

The inclusion of "mentoring committees" and "letters of support from the Chair" are not sufficient. Those are needed, don't get me wrong, but they address other concerns** that people have about untried PIs supervising a postdoctoral fellow.

It is essential that you anticipate the above referenced Stock Critique and do your best*** to head it off.

*I have seen several highly regarded NRSA apps for which the research plan looks to me to be of R01-quality writing and design.

**We're in stock-critique land here. Stop raging about how you are more qualified than Professor AirMiles to actually mentor a postdoc.

***Obviously the application needs to present the primary mentor's funding in as positive a light as possible. Talk about startup funds, refer to local pilot grants, drop promising R01 scores if need be. You don't want to blow smoke, or draw too much attention to deficits, but a credible plan for acquiring funding goes a lot farther than ignoring the issue.

28 responses so far

Not just a list, but a story

May 23 2014 Published by under Grantsmanship, NIH

A guest post from @iGrrrl

Like winter, changes to the biosketch are coming

Dr. Rockey spoke about the changes to the biographical sketch at the NORDP meeting this week, and I think I can at least offer a bit more depth about the thinking behind this, both from her comments and from what I've seen over the last few years. Certainly my knee-jerk negative opinion about this change has evolved upon reflection and listening to her presentation. This may not be as bad as it sounds. Maybe.

In her talk, of which this question of biosketches was one very small part, her short-hand way of referring to the reasoning behind this was to "reduce the reliance on publishing in one-word-named journals" as a shorthand for judging the quality of an investigator's productivity. When the biosketch was changed in 2010, shortening the publication list to 15 seemed to me to be designed to reduce a senior investigator's advantage of sheer numbers of publications. The rise of metrics and h-factor means that the impact factor of the journal in which the work was published now substitutes, in many a reviewer's mind, as the quick heuristic for assessing the Investigator criterion.

The move to the shorter publication list was also borrowed from NSF's limit of 10 products for the biosketch. This sounds good on paper, but didn't account for the differences in culture. Researchers in NSF-type fields are just as conscious of h-index, but you don't find the same reliance on "glamour magazines" cutting across all NSF research. The result seems to be that many young investigators in biomedicine feel they have to wait to publish until they have a story worthy of C/N/S. I hear sometimes about young researchers failing to make tenure in part because they did not publish enough, not because they didn't have data, but because they were trying for the high-level journal and didn't simply get the work out in field- or sub-field-specific journals.

And work that appears in those so-called lower-tier journals shouldn't be dismissed, but it often is effectively ignored when a reviewer's eyes are looking for the titles of the high-impact journals. If a young faculty member's list maxes out at 15 and they are all solid papers in reasonable journals, that's usually fine. But sometimes they have fewer than 15, so the reviewer relies more on the impact factor of the journals in which the work appears, and that in turn leads to reliance on C/N/S (JAMA, NEJM, etc.). But for the applicant, sometimes the work reflected in the papers is based on a study that simply takes a long time to run, so that one paper in that year might represent a great deal of effort and time with results highly relevant within the context of the subfield. Or a series of papers may have methods published in one journal, the study in two more, and none of them are top-tier, but the entire story is important. This new narrative gives the opportunity to give that context.

This appears to be the point of the change to the biosketch: the impact factor of the journal(s) in which the work appeared may not reflect the impact of the results. Some applicants were including a sentence after every paper on the biosketch to try to give the context and impact--the contribution to the field--but in my experience, reviewers did not like and did not read these sentences. Yet, when reviewers come from a diversity of backgrounds, they may not be able to appreciate the impact of a result on the sub-field. Many of these concerns have been vociferously expressed to Dr. Rockey through various social media, primarily comments here at Our Host's blog, but also on the RockTalking blog.

The idea behind this new approach to discussing an applicant's contributions has some reasonable foundations, but I don't expect it will work. In the short term, applicants will likely struggle to assemble a response to this new requirement. I can't imagine reviewers will enjoy reading the resulting narratives. It may be that a common rubric approach to writing these sections as a clear story will make them uniform enough for reviewers to quickly judge, but I fully expect they will still be looking for Cell, Nature, and Science.

15 responses so far
