Archive for the 'Grantsmanship' category

Peer Review: Advocates and Detractors Redux

A comment from Grumble on a recent post is a key bit of advice for those seeking funding from the NIH.

It's probably impossible to eliminate all Stock Critique bait from an application. But you need to come close, because if you don't, even a reviewer who likes everything else about your application is going to say to herself, "there's no way I can defend this in front of the committee because the other reviewers are going to bring up all these annoying flaws." So she won't even bother trying. She'll hold her fire and go all out to promote/defend the one application that hits on most cylinders and proposes something she's really excited about.

This is something I present as an "advocates and detractors" heuristic for improving your grant writing, sure, but it applies to paper writing/revising and general career management as well. I first posted these comments as Peer Review: Friends and Enemies in 2007 and reposted them in 2009.


The heuristic is this. In situations of scientific evaluation, whether manuscript peer review, grant application review, a job application or the tenure decision, one is going to have a set of advocates in favor of one's case and detractors who are against it. The usual caveats apply to such a strict polarization. Sometimes you will have no advocates, in which case you are sunk anyway, so that case isn't worth discussing. The same reviewer can simultaneously express pro and con views, but as we'll discuss, this is just a special case.

The next bit in my original phrasing is what Grumble is getting at in the referenced comment.


Give your advocates what they need to go to bat for you.

This is the biggie. In all things you have to give the advocate something to work with. It does not have to be overwhelming evidence, just something. Let's face it, how many times in science are you really in a position to overwhelm objections with the stupendous power of your argument and data, to the point where the most confirmed critic cries "Uncle"? Right. Never happens.

The point here is that you need not put together a perfect grant, nor need you "wait" until you have X, Y or Z bit of Preliminary Data lined up. You just have to come up with something that your advocates can work with. As Grumble was pointing out, if you give your advocate a grant filled with Stock Critique bait, that advocate realizes it is a lost cause and abandons it. Why fight with both hands and legs trussed up like a Thanksgiving turkey?

Let's take some stock critiques as examples.

"Productivity". The goal here is not to somehow rush 8 first author papers into press. Not at all. Just give them one or two more papers, that's enough. Sometimes reiterating the difficulty of the model or the longitudinal nature of the study might be enough.

"Independence of untried PI with NonTenureTrackSoundin' title". Yes, you are still in the BigPIs lab, nothing to be done about that. But emphasize your role in supervising whole projects, running aspects of the program, etc. It doesn't have to be meticulously documented, just state it and show some sort of evidence. Like your string of first and second authorships on the papers from that part of the program.

"Not hypothesis driven". Sure, well sometimes we propose methodological experiments, sometimes the outcome is truly a matter of empirical description and sometimes the results will be useful no matter how it comes out so why bother with some bogus bet on a hypothesis? Because if you state one, this stock critique is de-fanged, it is much easier to argue the merits of a given hypothesis than it is the merits of the lack of a hypothesis.

Instead of railing against the dark of Stock Criticism, light a tiny candle. I know. As a struggling newb it is really hard to trust the more-senior colleagues who insist that their experiences on various study sections have shown that reviewers often do go to bat for untried investigators. But....they do. Trust me.

There's a closely related reason to brush up your application and avoid as many obvious pitfalls as possible: it takes ammunition away from your detractors, which makes the advocates' job easier.


Deny your detractors grist for their mill.

Should be simple, but isn't. Particularly when the critique is basically a reviewer trying to tell you to conduct the science the way s/he would if s/he were the PI (an all too common and inappropriate approach, in my view). If someone wants you to cut something minor out for no apparent reason, and the marginal cost of complying is low, just do it. Add that extra control condition. Respond to all of their critiques with something, even if it is not exactly what the reviewer is suggesting; again, your ultimate audience is the advocate, not the detractor. Don't ignore anything major. This way, they can't say you "didn't respond to critique". They may not like the quality of the response you provide, but arguing about that is tougher in the face of your advocating reviewer.

This may actually be closest to the core of what Grumble was commenting on.

I made some other comments in the original post about the fact that a detractor can be converted into an advocate. The broader point is that an entire study section can be gradually converted. No joke: with enough applications from you, you can often turn the tide. Either because you have argued enough of them into seeing science your way (different reviewers might be assigned over time to your many applications) or because they just think you should be funded for something already. It happens. There is a "getting to know you" factor that comes into play. Guess what? The more credible apps you send to a study section, the more they get to know you.

Ok, there is a final bit for those of you who aren't even faculty yet. Yes, you. Things you do as a graduate student or as a postdoc will come in handy, or hurt you, when it comes time to apply for grants as faculty. This is why I say everyone needs to start thinking about the grant process early. This is why I say you need to start talking with NIH Program staff as a grad student or postdoc.


Plan ahead

Although the examples I use are from the grant review process, the application to paper review and job hunts is obvious with a little thought. This brings me to using the heuristic in advance to shape your choices.

Postdocs, for example, often feel they don't have to think about grant writing because they aren't allowed to submit at present, may never get that job anyway, and if they do, they can deal with it later. This is an error. The advocate/detractor heuristic suggests that postdocs should choose to expend some effort across a broad range of areas. It suggests that it is a bad idea to gamble on the BIG PAPER approach if this means you are not going to publish anything else. An advocate on a job search committee can work much more easily with a dearth of Science papers than s/he can with a dearth of any pubs whatsoever!

The heuristic suggests that going to the effort of teaching just one or two courses can pay off: you never know if you'll be seeking a primarily-teaching job after all, nor when "some evidence of teaching ability" will be the difference between you and the next applicant for a job. Take on that series of time-depleting undergraduate interns in the lab so that you can later describe your supervisory roles in the laboratory.

This latter bit falls under the general category of managing your CV and what it will look like for future purposes.

Despite what we would like to be the case, despite what should be the case, despite what is still the case in some cozy corners of a biomedical science career....let us face some facts.

  • The essential currency for determining your worth and status as a scientist is your list of published, peer reviewed contributions to the scientific literature.
  • The argument over your qualities between advocates and detractors in your job search, promotions, grant review, etc. is going to boil down to pseudo-quantification of your CV at some point.
  • Quantification means analyzing your first author / senior author / contributing author pub numbers. Determining the impact factor of the journals in which you publish. Examining the consistency of your output and looking for (bad) trends. Viewing the citation numbers for your papers (a minimal sketch of one such citation metric, the h-index, follows this list).
  • You can argue to some extent for extenuating circumstances, the difficulty of the model, the bad PI, etc but it comes down to this: Nobody Cares.
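Since "pseudo-quantification" can sound abstract, here is a minimal sketch (in Python, purely illustrative) of the kind of arithmetic a reviewer's quick scan of your CV approximates, using the h-index as the example citation metric; the citation counts below are hypothetical.

    # Minimal h-index sketch: the largest h such that h of your papers each
    # have at least h citations. The citation counts below are hypothetical.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited at least 4 times each

The point is not that reviewers literally run scripts; it is that your CV reduces, fairly mechanically, to a handful of numbers like this one.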

My suggestion is this: if you expect to have a career, you had better have a good idea of what the standards are. So do the research. Do compare your CV with those of other scientists. What are the minimum criteria for getting a job / grant / promotion / tenure in your area? What are you going to do about it? What can you do about it?

This echoes something Odyssey said on the Twitts today (a pair of embedded tweets that do not reproduce here): the same points are true for your subfield stage as well as your University stage of performance.

3 responses so far

On coming up with multiple ideas for R01 proposals

A question to the blog raised the perennial concern that comes up every time I preach on about submitting a lot of proposals: how does one have enough ideas for that? My usual answer is a somewhat perplexed inability to understand how other scientists do not have more ideas in a given six-month interval than they could possibly complete in the next 20 years.

I reflected slightly more than usual today and thought of something.

There is one tendency of new grant writers that can be addressed here.

My experience is that early grant writers have a tendency to write a 10-year program of research into their initial R01s. It is perfectly understandable and I've done it myself. Probably still fall into this now and again. A Stock Critique of "too ambitious" is one clue that you may need to think about whether you are writing a 10-year research program rather than a 5-year, limited-dollar-figure research project that fits within a broader programmatic plan.

One of the key developments as a grant writer, IMNSHO, is to figure out how to write a streamlined, minimalist grant that really is focused on a coherent project.

When you are doing this properly, it leaves an amazing amount of room to write additional, highly-focused proposals on, roughly speaking, the same topic.

33 responses so far

Women in the R00 phase don't apply for R01s as frequently as men

Sally Rockey:

A specific issue that has recently created interesting conversations in the blogosphere is whether female K99/R00 awardees were less likely to receive a subsequent R01 award compared to male K99/R00 awardees. We at NIH have also found this particular outcome among K99/R00 PIs and have noted that those differences again stem from differential rates of application. Of the 2007 cohort of K99 PIs, 86 percent of the men had applied for R01s by 2013, but only 69 percent of the women had applied.

She's referring here to a post over at DataHound ("K99-R00 Evaluation: A Striking Gender Disparity") which observed:

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).
...
To investigate this further, I looked at the two cohorts separately. For the FY2007 cohort, 70 of the 108 men (65%) with R00 awards have received R01 grants whereas only 31 of the 62 women (50%) have (P value = 0.07). For the FY2008 cohort, 44 of the 93 men (47%) with R00 awards have received R01s whereas only 22 of the 65 women (34%) have (P value = 0.10). The lack of statistical significance is due to the smaller sample sizes for the cohorts separately rather than any difference in the trends for the separate cohorts, which are quite similar.
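If you want to check the arithmetic yourself, here is a minimal sketch (Python with scipy assumed). DataHound does not say which test he used, so Fisher's exact test is my assumption and the P values may come out slightly different from those quoted.

    # Re-checking the male vs. female R01 conversion rates among R00 awardees
    # quoted above. Rows are men and women; columns are (got R01, no R01 yet).
    # The choice of Fisher's exact test is an assumption, not DataHound's stated method.
    from scipy.stats import fisher_exact

    cohorts = {
        "Combined": [[114, 201 - 114], [53, 127 - 53]],
        "FY2007":   [[70, 108 - 70],   [31, 62 - 31]],
        "FY2008":   [[44, 93 - 44],    [22, 65 - 22]],
    }

    for name, table in cohorts.items():
        (men_r01, men_no), (women_r01, women_no) = table
        _, p = fisher_exact(table)
        print(f"{name}: men {men_r01 / (men_r01 + men_no):.0%}, "
              f"women {women_r01 / (women_r01 + women_no):.0%}, p = {p:.3f}")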

And Rockey isn't even giving us the data on the vigor with which an R00 holder seeks R01 funding. That may or may not make the differential-application explanation even stronger.

Seems to me that any mid- or senior-level investigators who have new R00-holding female assistant professors in their department might want to make a special effort to encourage them to submit R01 apps early and often.

13 responses so far

Simple, can't-miss strategy to get a grant from the NIH

Aug 08 2014 Published under Grantsmanship, NIH, NIH Careerism

Work at it.
Continue Reading »

79 responses so far

Sex differences in K99/R00 awardees from my favorite ICs

Jul 21 2014 Published under Grantsmanship, NIH, NIH Careerism, NIH funding, Peer Review

Datahound has some very interesting analyses up regarding NIH-wide sex differences in the success of the K99/R00 program.

Of the 218 men with K99 awards, 201 (or 92%) went on to activate the R00 portion. Of the 142 women, 127 (or 89%) went on to the R00 phase. The difference in these percentages is not statistically significant.

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).

Yowza.

So per my usual, I'm very interested in what the ICs that are closest to my lab's heart have been up to with this program. Looking at K99 awardees from 07 to 09 I find women PIs to constitute 3/3, 1/3 and 2/4 in one Institute and 1/7, 2/6 and 5/14 in the other Institute. One of these is doing better than the other and I will just note that was before the arrival of a Director who has been very vocal about sex discrimination in science and academia.

In terms of the conversion to R01 funding that is the subject of Datahound's post, the smaller Institute has decent outcomes* for K99 awardees from 07 (R01, R21, nothing), 08 (R01, R01, R01) and 09 (P20 component, U component, nothing, nothing).

In the other Institute, the single woman from 07 did not appear to convert to the R00 phase, but Google suggests she made Assistant Professor rank anyway. No additional NIH funding. The rest of the 07 class contains four with R01s and two with nothing. In 08, the women PIs are split (one R01, one nothing), similar to the men (two R01s, two with nothing). In 09 the women PIs have two with R01s, one with an R03 and two with nothing.

So from this qualitative look, nothing is out of step with Datahound's NIH-wide stats. There are 14/37 women PIs; this 38% is similar to the NIH-wide 39% Datahound quoted, although there may be a difference between these two ICs (30% vs 60%) that could stand some inquiry. One of the 37 K99 awardees failed to convert from the K99 to the R00 (but seems to be faculty anyway). Grant conversion past the R00 is looking to be roughly half or a bit better.
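Whether that 30% vs 60% split is anything more than small-numbers noise is easy enough to check. Here is a minimal sketch using the counts I tallied above (6 of 10 K99 awardees were women in the smaller IC, 8 of 27 in the other); the choice of Fisher's exact test is mine and purely illustrative.

    # Do the two ICs differ in the fraction of K99 awards going to women
    # (FY07-09)? With counts this small, eyeballing percentages can mislead.
    from scipy.stats import fisher_exact

    ic_one = [6, 10 - 6]     # women, men K99 awardees in the smaller IC
    ic_two = [8, 27 - 8]     # women, men K99 awardees in the larger IC
    _, p = fisher_exact([ic_one, ic_two])
    print(f"60% vs 30% women: p = {p:.2f}")

With cohorts this small, even an apparent twofold difference in proportions may not clear conventional significance thresholds, hence "could stand some inquiry" across more award years.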

I didn't do the men for the 2009 cohort in the larger Institute, but otherwise the sex differences in terms of getting/not getting additional funding beyond the R00 seem pretty similar.

I do hope Datahound's stats open some eyes at the NIH, however. Sure, there are reasons to potentially excuse away a sex difference in the rates of landing additional research funding past the R00. But I am reminded of a graph Sally Rockey posted regarding the success rate on R01-equivalent awards. It showed that men and women PIs had nearly identical success rates on new (Type 1) proposals, but that women had slightly lower success on Renewal (Type 2) applications. That pattern maps onto the rates of conversion to R00 and the acquisition of additional funding, if you squint a bit.

Are women overall less productive once they've landed some initial funding? Are they viewed negatively on the continuation of a project but not on the initiation of it? Are women too humble about what they have accomplished?
__
*I'm counting components of P or U mechanisms but not pilot awards.

15 responses so far

Your Grant in Review: The Ideal Personnel Lineup

Excellent comment from eeke:

My last NIH grant application was criticized for not including a post-doc at 100% effort. I had listed two techs, instead. Are reviewers under pressure to ding PI's for not having post-docs or some sort of trainee? WTF?

My answer:

I think it is mostly because reviewers think that a postdoc will provide more intellectual horsepower than a tech, especially when you have two techs and could have one of each instead.

I fully embrace this bias, I have to admit. I think a one-tech, one-PD full modular R01 is about the standardest of standard lineups. Anchors the team. Best impact for the money and all that.

A divergence from this expectation would require special circumstances to be explained (and of course there are many imaginable projects where two techs and no postdoc *would* be best; you just have to explain it).
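For a sense of why that lineup is the default, here is a back-of-the-envelope sketch of one year of a full modular budget ($250K in direct costs). Every salary, fringe and supply figure below is a hypothetical placeholder, not an NIH or institutional number; substitute your own.

    # Rough, hypothetical arithmetic for why one postdoc + one tech roughly
    # fills a full modular R01 year ($250K direct costs). All figures below
    # are illustrative placeholders.
    MODULAR_CAP = 250_000

    budget = {
        "PI (1.5 mo effort + fringe)":   25_000,
        "Postdoc (salary + fringe)":     65_000,
        "Technician (salary + fringe)":  60_000,
        "Supplies / animals / assays":   80_000,
        "Travel, publication, misc":     10_000,
    }

    total = sum(budget.values())
    for item, cost in budget.items():
        print(f"{item:<32} ${cost:>8,}")
    print(f"{'Total direct costs':<32} ${total:>8,}  (cap ${MODULAR_CAP:,})")

Swap the postdoc for a second tech and the dollars still work; the reviewer expectation is about the intellectual horsepower, not the arithmetic.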

What do you think, folks? What arrangement of personnel do you expect to see on a grant proposal in your field, for your agencies of closest interest? Are you a blank slate until you see what the applicant has proposed or do you have....expectations?

22 responses so far

Your Grant in Review: The F32 Postdoctoral Fellowship Application

We've previously discussed the NIH F32 Fellowship designed to support postdoctoral trainees. Some of the structural limitations of a system designed on its face to provide necessary support for necessary (additional) training overlap considerably with the problems of the F31 program designed to support graduate students.

Nevertheless, winning an individual NRSA training fellowship (graduate or postdoctoral) has all kinds of career benefits for the trainee and primary mentor, so they remain an attractive option.

A question arose on the Twitts today about whether it was worth it for a postdoc in a new lab to submit an application.

In my limited experience reviewing NRSA proposals in a fellowship-dedicated panel for the NIH, there is one issue that looms large in these situations.

Reviewer #1, #2 and #3: "There is no evidence in the application that sufficient research funds will be available to complete the work described during the proposed interval of funding."

NRSA fellowships, as you are aware, do not come with money to pay for the actual research. The fellowship applications require a good deal of discussion of the research the trainee plans to complete during the proposed interval of training. In most cases that research plan involves a fair amount of work that requires a decent amount of research funding to complete.

The reviewers, nearly all of them in my experience, will be looking for signs of feasibility: that the PI is actually funded, funded to do something vaguely related* to the topic of the fellowship proposal, and funded for the duration over which the fellowship will be active.

When the PI is not obviously funded through that interval, eyebrows are raised. Criticism is leveled.

So, what is a postdoc in a newer lab to do? What is the PI of a newish lab, without substantial funding, to do?

One popular option is to find a co-mentor for the award, a co-mentor who is actually involved, meaning the research plan needs to be written as a collaborative project between laboratories. Obviously, this co-mentor should have the grant support that the primary PI is lacking. It needs to be made clear that there will be some sort of research funds to draw upon to support the fellow doing some actual research.

The inclusion of "mentoring committees" and "letters of support from the Chair" are not sufficient. Those are needed, don't get me wrong, but they address other concerns** that people have about untried PIs supervising a postdoctoral fellow.

It is essential that you anticipate the above referenced Stock Critique and do your best*** to head it off.

__
*I have seen several highly regarded NRSA apps for which the research plan looks to me to be of R01-quality writing and design.

**We're in stock-critique land here. Stop raging about how you are more qualified than Professor AirMiles to actually mentor a postdoc.

***Obviously the application needs to present the primary mentor's funding in as positive a light as possible. Talk about startup funds, refer to local pilot grants, drop promising R01 scores if need be. You don't want to blow smoke, or draw too much attention to deficits, but a credible plan for acquiring funding goes a lot farther than ignoring the issue.

28 responses so far

Not just a list, but a story

May 23 2014 Published under Grantsmanship, NIH

A guest post from @iGrrrl


Like winter, changes to the biosketch are coming

Dr. Rockey spoke about the changes to the biographical sketch at the NORDP meeting this week, and I think I can at least offer a bit more depth about the thinking behind them, both from her comments and from what I've seen over the last few years. Certainly my knee-jerk negative opinion about this change has evolved upon reflection and listening to her presentation. This may not be as bad as it sounds. Maybe.

In her talk, in which the question of biosketches was one very small part, she summarized the reasoning behind the change as a desire to "reduce the reliance on publishing in one-word-named journals" as a shorthand for judging the quality of an investigator's productivity. When the biosketch was changed in 2010, shortening the publication list to 15 seemed to me to be designed to reduce a senior investigator's advantage of sheer numbers of publications. The rise of metrics and the h-index means that the impact factor of the journal in which the work was published now substitutes, in many a reviewer's mind, as the quick heuristic for assessing the Investigator criterion.

The move to the shorter publication list was also borrowed from NSF's limit of 10 products for the biosketch. This sounds good on paper, but it didn't account for the differences in culture. Researchers in NSF-type fields are just as conscious of the h-index, but you don't find the same reliance on "glamour magazines" cutting across all NSF research areas. The result seems to be that many young investigators in biomedicine feel they have to wait to publish until they have a story worthy of C/N/S. I sometimes hear about young researchers failing to make tenure in part because they did not publish enough: not because they didn't have data, but because they were holding out for the high-level journal rather than simply getting the work out in field- or sub-field-specific journals.

And work that appears in those so-called lower-tier journals shouldn't be dismissed, but it often is effectively ignored when a reviewer's eyes are scanning for the titles of the high-impact journals. If a young faculty member's list maxes out at 15 and they are all solid papers in reasonable journals, that's usually fine. But sometimes they have fewer than 15, so the reviewer relies more on the impact factor of the journals in which the work appears, and that in turn leads to reliance on C/N/S (JAMA, NEJM, etc). Yet for the applicant, sometimes the work reflected in the papers is based on a study that simply takes a long time to run, so that one paper in a given year might represent a great deal of effort and time, with results highly relevant within the context of the subfield. Or a series of papers may have the methods published in one journal, the study in two more, and none of them top-tier, but the entire story is important. The new narrative section gives the applicant the opportunity to provide that context.

This appears to be the point of the change to the biosketch: the impact factor of the journal(s) in which the work appeared may not reflect the impact of the results. Some applicants were already including a sentence after every paper on the biosketch to try to give the context and impact--the contribution to the field--but in my experience, reviewers did not like and did not read these sentences. Yet when reviewers come from a diversity of backgrounds, they may not be able to appreciate the impact of a result on the sub-field. Many of these concerns have been vociferously expressed to Dr. Rockey through various social media, primarily in comments here at Our Host's blog, but also on the RockTalking blog.

The idea behind this new approach to discussing an applicant's contributions has some reasonable foundations, but I don't expect it will work. In the short term, applicants will likely struggle to assemble a response to this new requirement. I can't imagine reviewers will enjoy reading the resulting narratives. It may be that a common rubric approach to writing these sections as a clear story will make them uniform enough for reviewers to quickly judge, but I fully expect they will still be looking for Cell, Nature, and Science.

15 responses so far

Revision strategy, with an eye to the new A2 as A0 policy at NIH

Occasionally, Dear Reader, one or another of you solicits my specific advice on some NIH grant situation you are experiencing. Sometimes the issues are too specific to be of much general good, but this one is at least grist for a discussion of how to proceed.

Today's discussion starts with the criterion scores for an R01/equivalent proposal. As a reminder, the five criteria are ordered as Significance, Investigator, Innovation, Approach and Environment. The first round for this proposal ended up with:

Reviewer #1: 1, 1, 1, 3, 1
Reviewer #2: 3, 1, 1, 3, 1
Reviewer #3: 6, 2, 1, 8, 1
Reviewer #4: 2, 1, 3, 2, 1

From this, the overall outcome was.... Not Discussed. Aka, triaged.

As you might imagine, the PI was fuming, to put it mildly. Three pretty decent looking reviews and one really, really unfavorable one. This should, in my opinion, have been pulled up for discussion to resolve the differences of opinion. It was not. That indicates that the three favorable reviewers were either somehow convinced by what Reviewer #3 wrote that they had been too lenient...or they were simply not convinced discussion would make a material difference (i.e., push it over the "fund" line). The two 3s on Approach from the first two reviewers are basically an "I'd like to see this come back, fixed" type of position. So they might have decided, screw it, let this one come back and we'll fight over it then.

This right here points to my problem with the endless queue of the revision traffic pattern and with the new A2 as A0 policy that will restore it to its former glory. It should be almost obligatory to discuss significantly divergent scores, particularly when they make a categorical difference. The difference between triaged and discussed, and between a maybe-fundable and a clearly-not-fundable score, is known to the Chair and the SRO of the study section. The Chair could insist on resolving these types of situations. I think they should be obliged to do so, personally. It would save some hassle and extra rounds of re-review. It seems particularly called for when the majority of the scores lean in the better direction, because that is at least some indication that the revised version would have a good chance to improve in the minds of the reviewers.

There is one interesting instructive point that reinforces one of my usual soapboxes. This PI had actually asked me before the review, when the study section roster was posted, what to do about reviewer conflicts. This person was absolutely incensed (and depressed) that a scientist in highly direct competition with the proposal had been brought on board as a reviewer. There is very little you can do, btw, 30 days out from review. That ship has sailed.

After seeing the summary statement, the PI had to admit that, going by the actual criticism comments, the only person with the directly-competing expertise was not Reviewer #3. Since the other three scores were actually pretty good, we can see that I am right that you cannot assume what a reviewer will think of your application based on perceptions of competition or personal dis/like. You will often be surprised: the reviewer that you assume is out to screw your application over will be pulling for it, or at least will be giving it a score that is in line with the majority of the other reviewers. This appears to be what happened in this case.

Okay. So, as I may have mentioned, I have been reluctantly persuading myself that revising triaged applications is a waste of time. Too few of them make it over the line to fund. And in the recently departed era of A1-and-out....well, perhaps time was better spent on a new app. In this case, however, I think there is a strong case for revision. Three of the four criterion score sets (and we need to wonder why there even were four reviews instead of three) look to me like scores that would get an app discussed. The ND seems to be a bit of an unfair result, based on the one hater. The PI agreed, apparently, and resubmitted a revised application. In this case the criterion scores were:

Reviewer #1: 1, 2, 2, 5, 1
Reviewer #2: 2, 2, 2, 2, 1
Reviewer #3: 1, 1, 2, 2, 1
Reviewer #4: 2, 1, 1, 2, 1
Reviewer #5: 1, 1, 4, 7, 1

I remind you that we cannot assume any overlap in reviewers, nor any identity of reviewer number in the case of re-assigned reviewers. In this case the grant was discussed at study section and ended up with a voted impact score of 26. The PI noted that a second direct scientific competitor had been included on the review panel this time, in addition to the aforementioned first competitor.

Oh Brother.

I assure you, Dear Reader, that I understand the pain of getting reviews like this. Three reviewers throwing 1s and 2s is not only a "surely discussed" outcome but a "probably funded" zone, especially for a revised application. Even the one "5" from Reviewer #1 on Approach is something the other reviewers might perhaps talk him/her down from. But to have two clearly unfavorable numbers thrown on Approach? A maddening split decision, leading to a score that is most decidedly on the bubble for funding.

My seat-of-the-pants estimation is that this may require Program intervention to fund. I don't know for sure; I'm not familiar with the relevant paylines and likely success rates for this IC for this fiscal year.

Now, if this doesn't end up winning funding, I think the PI most certainly has to take advantage of the new A2 as A0 policy and put this sucker right back in. To the same study section. Addressing whatever complaints were associated with Reviewer #1's and #5's criticisms of course. But you have to throw yourself on the mercy of the three "good" reviewers and anyone they happened to convince during discussion. I bet a handful of them will be sufficient to bring the next "A0" of this application to a fundable score even if the two less-favorable reviewers refuse to budge. I also bet there is a decent chance the SRO will see that last reviewer as a significant outlier and not assign the grant to that person again.

I wish this PI my best of luck in getting the award.

29 responses so far

Your Grant in Review: The Biosketch Research Support Section

Apr 21 2014 Published under Grant Review, Grantsmanship, NIH, NIH funding

A question came up on the twitts about the Research Support section of the NIH Biosketch (the embedded tweet does not reproduce here).

The answer is that no, you do not. I will note that I am not entirely sure whether this changed over the years or whether my understanding of this rule was incomplete at the start. However, the instructions on the Sample Biosketch [PDF] provided by the NIH are clear.

D. Research Support
List both selected ongoing and completed research projects for the past three years (Federal or non-Federally-supported). Begin with the projects that are most relevant to the research proposed in the application. Briefly indicate the overall goals of the projects and responsibilities of the key person identified on the Biographical Sketch. Do not include number of person months or direct costs.

The last bit is the key bit for dr24Hours' question, but I include the full description for a reason.

dr24Hours also asked a further question, and there was a followup to my initial negative response (the embedded tweets do not reproduce here).

Together, these questions seem to indicate a misunderstanding of what this section is for, and what it is trying to communicate.

Note the use of the terms "selected" and "most relevant" in the above passage.

The Biosketch is, in total, devoted to convincing the reviewers that the PI and other Investigative staff have the chops to pull off the project under review. It is about bragging on how accomplished they all are. Technically, it is not even a full recitation of all the support one has secured in the past three years. This is similar to how the Peer-reviewed Publications section is limited to 15 items, regardless of how many total publications you have.

Inclusion of items in the Research Support section is meant to show that the Investigators have run projects of similar scope with acceptable success. Yes, the definition of acceptable success is variable, but the concept is clear. The goal is to show off the Investigator's accomplishments to the best possible advantage.

The Research Support section is not about demonstrating that the PI is successful at winning grants. It is not about demonstrating how big and well-funded the lab has been (hence no direct costs). It is not even about the reviewers trying to decide if the PI is spread too thinly (hence no listing of effort). This is not the point*.

In theory, one would just put forward a subset of the best elements of one's CV: the most relevant and most successful grant awards. If space is an issue (the Biosketch is limited to 4 pages), then the PI might have to make some selections. Obviously you'd want to start with NIH R01s (or equivalents) if the application is an R01. Presumably you would want to supply the reviewer with what you think are your most successful projects- in terms of papers, scientific advance, pizzazz of findings or whatever floats your boat.

You might also want to "selectively" omit any of your less-successful awards or even ones that seem like they have too much overlap with the present proposal.

Don't do this.

If it is an NIH award, you can be assured at least one of the reviewers will have looked you up on RePORTER and will notice the omission. If it is a non-NIH award, perhaps the odds are lower but you just never know. If the reviewer thinks you are hiding something...this is not good. If your award has been active in the past three years and is listed somewhere Internet-accessible, particularly on your University or lab website, then list it on the Biosketch.

Obviously this latter advice applies mostly to people who have a lot of different sources of support; the more of them you have, the tighter the space and the more choices you will have to make. Prioritize by what makes sense to you, sure, but pay attention to the communication you are trying to accomplish when making your selections. And beware of being viewed with the suspicion that you are trying to conceal something.
__
*Yes, in theory. I grasp that there will be reviewers using this information to argue that the PI is spread too thinly or has "too much" grant support.

10 responses so far

Older posts »