Archive for the 'Grant Review' category

Your Grant in Review: When they aren't talking to you.

Aug 22 2014 Published by under Grant Review, Grantsmanship, NIH Careerism

It is always good to remember that sometimes comments in the written critique are not directed at the applicant.

Technically, of course, these comments are directed at Program Staff in an advisory capacity, not to help the applicant in any way whatsoever; assistance in revising is a side effect.

Still, a comment that opposes a Stock Criticism is particularly likely to be there for the consumption of either Program or the other reviewers.

It is meant to preempt the Stock Criticism when the person making the comment likes the grant.

12 responses so far

Your Grant in Review Reminder: Research Study Sections First

Aug 22 2014 Published by under Grant Review, Grantsmanship, NIH Careerism, NIH funding

One key to determining the right study section to request is to look on RePORTER for funded grants reviewed in your study sections of interest.

Sometimes this is much more informative than the boilerplate description of the study section listed at CSR.

8 responses so far

Peer Review: Advocates and Detractors Redux

A comment from Grumble on a recent post offers a bit of key advice for those seeking funding from the NIH.

It's probably impossible to eliminate all Stock Critique bait from an application. But you need to come close, because if you don't, even a reviewer who likes everything else about your application is going to say to herself, "there's no way I can defend this in front of the committee because the other reviewers are going to bring up all these annoying flaws." So she won't even bother trying. She'll hold her fire and go all out to promote/defend the one application that hits on most cylinders and proposes something she's really excited about.

This is something that I present as an "advocates and detractors" heuristic for improving your grant writing, surely, but it applies to paper writing/revising and general career management as well. I first posted comments on Peer Review: Friends and Enemies in 2007 and reposted in 2009.


The heuristic is this. In situations of scientific evaluation, whether this be manuscript peer-review, grant application review, job application or the tenure decision, one is going to have a set of advocates in favor of one's case and detractors who are against. The usual caveats apply to such a strict polarization. Sometimes you will have no advocates, in which case you are sunk anyway so that case isn't worth discussing. The same reviewer can simultaneously express pro and con views but as we'll discuss this is just a special case.

The next bit in my original phrasing is what Grumble is getting at in the referenced comment.


Give your advocates what they need to go to bat for you.

This is the biggie. In all things you have to give the advocate something to work with. It does not have to be overwhelming evidence, just something. Let's face it, how many times are you really in a position in science to overwhelm objections with the stupendous power of your argument and data, to the point where the most confirmed critic cries "Uncle"? Right. Never happens.

The point here is that you need not put together a perfect grant, nor need you "wait" until you have X, Y or Z bit of Preliminary Data lined up. You just have to come up with something that your advocates can work with. As Grumble was pointing out, if you give your advocate a grant filled with StockCritique bait then this advocate realizes it is a sunk cause and abandons it. Why fight with both hands and legs trussed up like a Thanksgiving turkey?

Let's take some stock critiques as examples.

"Productivity". The goal here is not to somehow rush 8 first author papers into press. Not at all. Just give them one or two more papers, that's enough. Sometimes reiterating the difficulty of the model or the longitudinal nature of the study might be enough.

"Independence of untried PI with NonTenureTrackSoundin' title". Yes, you are still in the BigPI's lab, nothing to be done about that. But emphasize your role in supervising whole projects, running aspects of the program, etc. It doesn't have to be meticulously documented; just state it and show some sort of evidence. Like your string of first and second authorships on the papers from that part of the program.

"Not hypothesis driven". Sure, well, sometimes we propose methodological experiments, sometimes the outcome is truly a matter of empirical description and sometimes the results will be useful no matter how it comes out, so why bother with some bogus bet on a hypothesis? Because if you state one, this stock critique is de-fanged; it is much easier to argue the merits of a given hypothesis than it is the merits of the lack of a hypothesis.

Instead of railing against the dark of StockCriticism, light a tiny candle. I know. As a struggling newb it is really hard to trust the more-senior colleagues who insist that their experiences on various study sections have shown that reviewers often do go to bat for untried investigators. But....they do. Trust me.

There's a closely related reason to brush up your application to avoid as many obvious pitfalls as possible: it takes ammunition away from your detractors, which makes the advocate's job easier.


Deny your detractors grist for their mill.

Should be simple, but isn't. Particularly when the critique is basically a reviewer trying to tell you to conduct the science the way s/he would if s/he were the PI. (An all too common and inappropriate approach, in my view.) If someone wants you to cut something minor out for no apparent reason, or to add something whose marginal cost is low, just do it. Add that extra control condition. Respond to all of their critiques with something, even if it is not exactly what the reviewer is suggesting; again, your ultimate audience is the advocate, not the detractor. Don't ignore anything major. This way, they can't say you "didn't respond to critique". They may not like the quality of the response you provide, but arguing about this is tougher in the face of your advocating reviewer.

This may actually be closest to the core of what Grumble was commenting on.

I made some other comments about the fact that a detractor can be converted to an advocate in the original post. The broader point is that an entire study section can be gradually converted. No joke: with enough applications from you, you can often turn the tide. Either because you have argued enough of them (different reviewers might be assigned over time to your many applications) into seeing science your way, or because they just think you should be funded for something already. It happens. There is a "getting to know you" factor that comes into play. Guess what? The more credible apps you send to a study section, the more they get to know you.

Ok, there is a final bit for those of you who aren't even faculty yet. Yes, you. Things you do as a graduate student or as a postdoc will come in handy, or hurt you, when it comes time to apply for grants as faculty. This is why I say everyone needs to start thinking about the grant process early. This is why I say you need to start talking with NIH Program staff as a grad student or postdoc.


Plan ahead

Although the examples I use are from the grant review process, the application to paper review and job hunts is obvious with a little thought. This brings me to the use of this heuristic in advance, to shape your choices.

Postdocs, for example, often feel they don't have to think about grant writing because they aren't allowed to at present, may never get that job and, if they do, can deal with it later. This is an error. The advocate/detractor heuristic suggests that postdocs make choices to expend some effort in a broad range of areas. It suggests that it is a bad idea to gamble on the BIG PAPER approach if this means that you are not going to publish anything else. An advocate on a job search committee can work much more easily with a dearth of Science papers than s/he can with a dearth of any pubs whatsoever!

The heuristic suggests that going to the effort of teaching just one or two courses can pay off; you never know if you'll be seeking a primarily-teaching job, after all. Nor when "some evidence of teaching ability" will be the difference between you and the next applicant for a job. Take on that series of time-depleting undergraduate interns in the lab so that you can later describe your supervisory roles in the laboratory.

This latter bit falls under the general category of managing your CV and what it will look like for future purposes.

Despite what we would like to be the case, despite what should be the case, despite what is still the case in some cozy corners of a biomedical science career....let us face some facts.

  • The essential currency for determining your worth and status as a scientist is your list of published, peer reviewed contributions to the scientific literature.
  • The argument over your qualities between advocates and detractors in your job search, promotions, grant review, etc. is going to boil down to pseudo-quantification of your CV at some point.
  • Quantification means analyzing your first author / senior author / contributing author pub numbers. Determining the impact factor of the journals in which you publish. Examining the consistency of your output and looking for (bad) trends. Viewing the citation numbers for your papers.
  • You can argue to some extent for extenuating circumstances, the difficulty of the model, the bad PI, etc but it comes down to this: Nobody Cares.

My suggestion is, if you expect to have a career you had better have a good idea of what the standards are. So do the research. Do compare your CV with those of other scientists. What are the minimum criteria for getting a job / grant / promotion / tenure in your area? What are you going to do about it? What can you do about it?

This echoes something Odyssey said on the Twitts today, and his points are true for your subfield stage as well as your University stage of performance.

6 responses so far

On coming up with multiple ideas for R01 proposals

A question to the blog raised the perennial concern that comes up every time I preach on about submitting a lot of proposals: how does one have enough ideas for that? My usual answer is a somewhat perplexed inability to understand how other scientists do not have more ideas in a given six month interval than they can possibly complete in the next 20 years.

I reflected slightly more than usual today and thought of something.

There is one tendency of new grant writers that can be addressed here.

My experience is that early grant writers have a tendency to write a 10 year program of research into their initial R01s. It is perfectly understandable and I've done it myself. I probably still fall into this now and again. A Stock Critique of "too ambitious" is one clue that you may need to think about whether you are writing a 10 year research program rather than a 5 year, limited dollar figure, research project that fits within a broader programmatic plan.

One of the key developments as a grant writer, IMNSHO, is to figure out how to write a streamlined, minimalist grant that really is focused on a coherent project.

When you are doing this properly, it leaves an amazing amount of additional room to write additional, highly-focused proposals on, roughly speaking, the same topic.

34 responses so far

Quality of grant review

Jun 13 2014 Published by under Grant Review, NIH funding, Peer Review

Where are all the outraged complaints about the quality of grant peer review and Errors Of Fact for grants that were scored within the payline?

I mean, if the problem is with bad review it should plague the top scoring applications as much as the rest of the distribution. Right?

47 responses so far

I’m Your Huckleberry

Jun 06 2014 Published by under Fixing the NIH, Grant Review, NIH, Peer Review

This is a guest appearance of the bluebird of Twitter happiness known as My T Chondria. I am almost positive the bird does some sort of science at some sort of US institution of scientific research.

I’m your biased reviewer. I’ve sat on study sections for most of the years I’ve been a faculty member and I’m biased. I’m exactly who Sally Rockey and Richard Nakamura are targeting in their call for proposals to lessen bias and increase impartial reviewing of NIH applications.

Webster’s defines bias as “mental tendency or inclination” listing synonyms including “predisposition, preconception, predilection, partiality, and proclivity”. When I review a grant from an African American applicant, I have a preconception of who they are. I refine that judgment based on their training, publications and productivity.

I should share that I'm also biased in my review of applicants who have health issues, are women, are older than 30 and have children. I've had every one of these types of trainees in my lab and my experiences with them led me to develop partiality and preconceptions that impact my opinions and judgments. Part of my preconceptions arises from my experiences with these trainees in my lab, as well as those I interacted with while serving on my University's admission committee. I was biased when I performed those duties as well.

Anyone who pretends to be utterly impartial is dangerous and hurtful to those we say we value as a scientific community. I am frankly stunned to see so many tone-deaf and thoughtless comments claiming to be deeply offended at this 'mindless drivel'.
Dr Marconi is just one of many scientists who claim, "I've never seen this, so it must not be true". Scientists' careers are based on things that cannot be seen: we collect and interpret data and develop an understanding based on that which we cannot see. Data has been collected and the results are alarming and open for active debate.

Bias is far more insidious than racism. Racists reveal themselves and their ignorance and are often dismissed by 'educated' society for their extremist views. Bias is far subtler. Even if it results in an imperceptible change in scoring, we are in a climate where these things matter, where razor-fine decisions are being made on funding.

It's the people who are sure they have no bias that I fear. I know I have bias. We are simply incapable of being utterly impartial and anyone who says they are impartial is dangerously obtuse to these problems at best and a liar at worst.

22 responses so far

Your Grant in Review: The F32 Postdoctoral Fellowship Application

We've previously discussed the NIH F32 Fellowship designed to support postdoctoral trainees. Some of the structural limitations of a system designed on its face to provide necessary support for necessary (additional) training overlap considerably with the problems of the F31 program designed to support graduate students.

Nevertheless, winning an individual NRSA training fellowship (graduate or postdoctoral) has all kinds of career benefits for the trainee and primary mentor, so it remains an attractive option.

A question arose on the Twitts today about whether it was worth it for a postdoc in a new lab to submit an application.

In my limited experience reviewing NRSA proposals in a fellowship-dedicated panel for the NIH, there is one issue that looms large in these situations.

Reviewer #1, #2 and #3: "There is no evidence in the application that sufficient research funds will be available to complete the work described during the proposed interval of funding."

NRSA fellowships, as you are aware, do not come with money to pay for the actual research. The fellowship applications require a good deal of discussion of the research the trainee plans to complete during the proposed interval of training. In most cases that research plan involves a fair amount of work that requires a decent amount of research funding to complete.

The reviewers, nearly all of them in my experience, will be looking for signs of feasibility. That the PI is actually funded, funded to do something vaguely related* to the topic of the fellowship proposal and funded for the duration over which the fellowship will be active.

When the PI is not obviously funded through that interval, eyebrows are raised. Criticism is leveled.

So, what is a postdoc in a newer lab to do? What is the PI of a newish lab, without substantial funding to do?

One popular option is to find a co-mentor for the award. A co-mentor that is involved. Meaning the research plan needs to be written as a collaborative project between laboratories. Obviously, this co-mentor should have the grant support that the primary PI is lacking. It needs to be made clear that there will be some sort of research funds to draw upon to support the fellow doing some actual research.

The inclusion of "mentoring committees" and "letters of support from the Chair" are not sufficient. Those are needed, don't get me wrong, but they address other concerns** that people have about untried PIs supervising a postdoctoral fellow.

It is essential that you anticipate the above referenced Stock Critique and do your best*** to head it off.

__
*I have seen several highly regarded NRSA apps for which the research plan looks to me to be of R01-quality writing and design.

**We're in stock-critique land here. Stop raging about how you are more qualified than Professor AirMiles to actually mentor a postdoc.

***Obviously the application needs to present the primary mentor's funding in as positive a light as possible. Talk about startup funds, refer to local pilot grants, drop promising R01 scores if need be. You don't want to blow smoke, or draw too much attention to deficits, but a credible plan for acquiring funding goes a lot farther than ignoring the issue.

28 responses so far

Revision strategy, with an eye to the new A2 as A0 policy at NIH

Occasionally, Dear Reader, one or another of you solicits my specific advice on some NIH grant situation you are experiencing. Sometimes the issues are too specific to be of much general good but this one is at least grist for discussion of how to proceed.

Today's discussion starts with the criterion scores for an R01/equivalent proposal. As a reminder, the five criteria are ordered as Significance, Investigator, Innovation, Approach and Environment. The first round for this proposal ended up with

Reviewer #1: 1, 1, 1, 3, 1
Reviewer #2: 3, 1, 1, 3, 1
Reviewer #3: 6, 2, 1, 8, 1
Reviewer #4: 2, 1, 3, 2, 1

From this, the overall outcome was.... Not Discussed. Aka, triaged.

As you might imagine, the PI was fuming. To put it mildly. Three pretty decent looking reviews and one really, really unfavorable one. This should, in my opinion, have been pulled up for discussion to resolve the differences of opinion. It was not. That indicates that the three favorable reviewers were either somehow convinced by what Reviewer #3 wrote that they had been too lenient...or they were simply not convinced discussion would make a material difference (i.e. push it over the "fund" line). The two 3s on Approach from the first two reviewers are basically a "I'd like to see this come back, fixed" type of position. So they might have decided, screw it, let this one come back and we'll fight over it then.

This right here points to my problem with the endless queue of the revision traffic pattern and the new A2 as A0 policy that will restore it to its former glory. It should be almost obligatory to discuss significantly divergent scores, particularly when they make a categorical difference. The difference between triaged and discussed, and the difference between a maybe-fundable and a clearly-not-fundable score, is known to the Chair and the SRO of the study section. The Chair could insist on resolving these types of situations. I think they should be obliged to do so, personally. It would save some hassle and extra rounds of re-review. It seems particularly called-for when the majority of the scores are in the better direction, because that should be some minor indication that the revised version would have a good chance to improve in the minds of the reviewers.

There is one interesting instructive point that reinforces one of my usual soapboxes. This PI had actually asked me before the review, when the study section roster was posted, what to do about reviewer conflicts. This person was absolutely incensed (and depressed) about the fact that a scientific competitor in highly direct competition with the proposal had been brought on board. There is very little you can do, btw, 30 days out from review. That ship has sailed.

After seeing the summary statement, the PI had to admit that, going by the actual criticism comments, the only person with the directly-competing expertise was not Reviewer #3. Since the other three scores were actually pretty good, we can see that I am right to caution against assuming what a reviewer will think of your application based on perceptions of competition or personal dis/like. You will often be surprised that the reviewer you assume is out to screw your application over will be pulling for it. Or at least, will be giving it a score that is in line with the majority of the other reviewers. This appears to be what happened in this case.

Okay. So, as I may have mentioned I have been reluctantly persuading myself that revising triaged applications is a waste of time. Too few of them make it over the line to fund. And in the recently past era of A1 and out....well perhaps time was better spent on a new app. In this case, however, I think there is a strong case for revision. Three of four (and we need to wonder about why there even were four reviews instead of three) of these criterion score sets look to me like scores that would get an app discussed. The ND seems to be a bit of an unfair result, based on the one hater. The PI agreed, apparently, and resubmitted a revised application. In this case the criterion scores were:

Reviewer #1: 1, 2, 2, 5, 1
Reviewer #2: 2, 2, 2, 2, 1
Reviewer #3: 1, 1, 2, 2, 1
Reviewer #4: 2, 1, 1, 2, 1
Reviewer #5: 1, 1, 4, 7, 1

I remind you that we cannot assume any overlap in reviewers nor any identity of reviewer number in the case of re-assigned reviewers. In this case the grant was discussed at study section and ended up with a 26 voted impact score. The PI noted that a second direct competitor on the science had been included on the review panel this time in addition to the aforementioned first person in direct competition.

Oh Brother.

I assure you, Dear Reader, that I understand the pain of getting reviews like this. Three reviewers throwing 1s and 2s is not only a "surely discussed" outcome but a "probably funded" zone, especially for a revised application. Even the one "5" from Reviewer #1 on Approach is something that perhaps the other reviewers might talk him/her down from. But to have two obvious triage numbers thrown on Approach? A maddening split decision, leading to a score that is most decidedly on the bubble for funding.

My seat of the pants estimation is that this may require Program intervention to fund. I don't know for sure, I'm not familiar with the relevant paylines and likely success rates for this IC for this fiscal year.

Now, if this doesn't end up winning funding, I think the PI most certainly has to take advantage of the new A2 as A0 policy and put this sucker right back in. To the same study section. Addressing whatever complaints were associated with Reviewer #1's and #5's criticisms of course. But you have to throw yourself on the mercy of the three "good" reviewers and anyone they happened to convince during discussion. I bet a handful of them will be sufficient to bring the next "A0" of this application to a fundable score even if the two less-favorable reviewers refuse to budge. I also bet there is a decent chance the SRO will see that last reviewer as a significant outlier and not assign the grant to that person again.

I wish this PI my best of luck in getting the award.

29 responses so far

Your Grant in Review: The Biosketch Research Support Section

Apr 21 2014 Published by under Grant Review, Grantsmanship, NIH, NIH funding

A question came up on the twitts about the Research Support section of the NIH Biosketch.

The answer is that no, you do not. I will note that I am not entirely sure if this changed over the years or if my understanding of this rule was incomplete at the start. However, the instructions on the Sample Biosketch [PDF] provided by the NIH are clear.

D. Research Support
List both selected ongoing and completed research projects for the past three years (Federal or non-Federally-supported). Begin with the projects that are most relevant to the research proposed in the application. Briefly indicate the overall goals of the projects and responsibilities of the key person identified on the Biographical Sketch. Do not include number of person months or direct costs.

The last bit is the key bit for Dr24Hour's question but I include the full description for a reason.

dr24Hours also asked:

and there was a followup to my initial negative response

Together, these questions seem to indicate a misunderstanding of what this section is for, and what it is trying to communicate.

Note the use of the term "selected" and "most relevant" in the above passage.

The Biosketch is, in total, devoted to convincing the reviewers that the PI and other Investigative staff have the chops to pull off the project under review. It is about bragging on how accomplished they all are. Technically, it is not even a full recitation of all the support one has secured in the past three years. This is similar to how the Peer-reviewed Publications section is limited to 15 items, regardless of how many total publications that you have.

Inclusion of items in the Research Support section is to show that the Investigators have run projects of similar scope with acceptable success. Yes, the definition of acceptable success is variable, but this concept is clear. The goal is to show off the Investigator's accomplishments to the best possible advantage.

The Research Support section is not about demonstrating that the PI is successful at winning grants. It is not about demonstrating how big and well-funded the lab has been (no direct costs). It is not even about the reviewers trying to decide if the PI is spread too thinly (no listing of effort). This is not the point*.

In theory, one would just put forward a subset of the best elements on one's CV. The most relevant and most successful grant awards. If space is an issue (the Biosketch is limited to 4 pages) then the PI might have to make some selections. Obviously you'd want to start with NIH R01s (or equivalent) if the application is an R01. Presumably you would want to supply the reviewer with what you think are your most successful projects- in terms of papers, scientific advance, pizzaz of findings or whatever floats your boat.

You might also want to "selectively" omit any of your less-successful awards or even ones that seem like they have too much overlap with the present proposal.

Don't do this.

If it is an NIH award, you can be assured at least one of the reviewers will have looked you up on RePORTER and will notice the omission. If it is a non-NIH award, perhaps the odds are lower but you just never know. If the reviewer thinks you are hiding something...this is not good. If your award has been active in the past three years and is listed somewhere Internet-accessible, particularly on your University or lab website, then list it on the Biosketch.

Obviously this latter advice only applies to people who have a lot of different sources of support. The more of them you have, the tighter the space. Clearly, you are going to have to make some choices if your lab operates on a lot of different sources of support. Prioritize by what makes sense to you, sure, but pay attention to the communication you are trying to accomplish when making your selections. And beware of being viewed with the suspicion that you are trying to conceal something.
__
*Yes, in theory. I grasp that there will be reviewers using this information to argue that the PI is spread too thinly or has "too much" grant support.

10 responses so far

NIH backs down on resubmitting unfunded A1 grant applications

Apr 17 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism, NIH funding

The rumors were true. NOT-OD-14-074 says:

Effective immediately, for application due dates after April 16, 2014, following an unsuccessful resubmission (A1) application, applicants may submit the same idea as a new (A0) application for the next appropriate due date. The NIH and AHRQ will not assess the similarity of the science in the new (A0) application to any previously reviewed submission when accepting an application for review. Although a new (A0) application does not allow an introduction or responses to the previous reviews, the NIH and AHRQ encourage applicants to refine and strengthen all application submissions.

So, for all intents and purposes you can revise and resubmit your failed application endlessly. Maybe they will pick you up on the A6 or A7 attempt!

Sally Rockey has a blog entry up which gives a bit more background and rationale.

While the change in policy had the intended result of a greater number of applications being funded earlier,

I really wonder if she believes this or has to continue to parrot the company line for face saving reasons. There is no evidence this is true. Not until and unless she can show definitively that the supposed A0s being funded were not in fact re-workings of proposals that had been previously submitted. I continue to assert that a significant number of PIs were submitting "A0" applications that were directly and substantially benefited by having been previously reviewed in different guise.


As a result, we heard increasing concerns from the community about the impact of the policy on new investigators because finding new research directions can be quite difficult during this phase of their career.

If the true concern here was the ESI or NI, then they could have simply allowed them to pass the filter as a category.

The resubmission of an idea as new means the application will be considered without an association to a previous submission; the applicant will not provide an introduction to spell out how the application has changed or respond to previous reviews; and reviewers will be instructed to review it as a new idea even if they have seen it in prior cycles.

The only way this is remotely possible is to put it in a different study section and make sure there are no overlapping ad hocs. If they don't do this, then this idea is nonsense. Surely Dr. Rockey is aware you cannot expect "instruction" to stick and force reviewers to behave themselves. Not with perfect fidelity.

However, we will monitor this new policy closely.

HA! If they'd decided to allow endless amendments (and required related apps to be submitted as such) then they would have been able to monitor the policy. The way they did this, there is no way to assess the impact. They will never know how many supposed "A0" apps are really A2, A4, A6, nor how many "A1" apps are really A3, A5, A7...etc. So what on earth could they possibly monitor? The number of established PIs who call up complaining about the unfundable score they just received on their A1?

71 responses so far

Older posts »