Archive for the 'Grant Review' category

Quality of grant review

Jun 13 2014 Published by under Grant Review, NIH funding, Peer Review

Where are all the outraged complaints about the quality of grant peer review and Errors Of Fact for grants that were scored within the payline?

I mean, if the problem is with bad review it should plague the top scoring applications as much as the rest of the distribution. Right?

47 responses so far

I’m Your Huckleberry

Jun 06 2014 Published by under Fixing the NIH, Grant Review, NIH, Peer Review

This is a guest appearance of the bluebird of Twitter happiness known as My T Chondria. I am almost positive the bird does some sort of science at some sort of US institution of scientific research.


 

I’m your biased reviewer. I’ve sat on study sections for most of the years I’ve been a faculty member and I’m biased. I’m exactly who Sally Rockey and Richard Nakamura are targeting in their call for proposals to lessen bias and increase impartial reviewing of NIH applications.

Webster’s defines bias as “mental tendency or inclination”, listing synonyms including “predisposition, preconception, predilection, partiality, and proclivity”. When I review a grant from an African American applicant, I have a preconception of who they are. I refine that judgment based on their training, publications and productivity.

I should share that I’m also biased in my review of applicants who have health issues, are women, are older than 30 and have children. I’ve had every one of these types of trainees in my lab, and my experiences with them have led me to develop partiality and preconceptions that impact my opinions and judgments. Parts of my preconceptions arise from my experiences with these trainees in my lab, as well as with those I interacted with while serving on my University’s admissions committee. I was biased when I performed those duties as well.

Anyone who pretends to be utterly impartial is dangerous and hurtful to those we say we value as a scientific community. I am frankly stunned to see so many tone-deaf and thoughtless comments from people claiming they are deeply offended at this ‘mindless drivel’.
[Screenshot of comments from the Rock Talk blog]
Dr Marconi is just one of many scientists who claim, “I’ve never seen this, so it must not be true”. Scientists’ careers are built on things that cannot be seen: we collect and interpret data and develop an understanding based on that which we cannot directly observe. Data have been collected, and the results are alarming and open for active debate.

Bias is far more insidious than racism. Racists reveal themselves and their ignorance and are often dismissed by ‘educated’ society for their extremist views. Bias is far subtler. Even if it results in an imperceptible change in scoring, we are in a climate where these things matter, where razor-fine funding decisions are being made.

It's the people who are sure they have no bias that I fear. I know I have bias. We are simply incapable of being utterly impartial and anyone who says they are impartial is dangerously obtuse to these problems at best and a liar at worst.

22 responses so far

Your Grant in Review: The F32 Postdoctoral Fellowship Application

We've previously discussed the NIH F32 Fellowship designed to support postdoctoral trainees. Some of the structural limitations of a system designed, on its face, to provide necessary support for necessary (additional) training overlap considerably with the problems of the F31 program designed to support graduate students.

Nevertheless, winning an individual NRSA training fellowship (graduate or postdoctoral) has all kinds of career benefits for the trainee and the primary mentor, so these awards remain an attractive option.

A question arose on the Twitts today about whether it was worth it for a postdoc in a new lab to submit an application.

In my limited experience reviewing NRSA proposals in a fellowship-dedicated panel for the NIH, there is one issue that looms large in these situations.

Reviewer #1, #2 and #3: "There is no evidence in the application that sufficient research funds will be available to complete the work described during the proposed interval of funding."

NRSA fellowships, as you are aware, do not come with money to pay for the actual research. The fellowship applications require a good deal of discussion of the research the trainee plans to complete during the proposed interval of training. In most cases that research plan involves a fair amount of work that requires a decent amount of research funding to complete.

The reviewers, nearly all of them in my experience, will be looking for signs of feasibility. That the PI is actually funded, funded to do something vaguely related* to the topic of the fellowship proposal and funded for the duration over which the fellowship will be active.

When the PI is not obviously funded through that interval, eyebrows are raised. Criticism is leveled.

So, what is a postdoc in a newer lab to do? What is the PI of a newish lab, without substantial funding, to do?

One popular option is to find a co-mentor for the award. A co-mentor that is involved. Meaning the research plan needs to be written as a collaborative project between laboratories. Obviously, this co-mentor should have the grant support that the primary PI is lacking. It needs to be made clear that there will be some sort of research funds to draw upon to support the fellow doing some actual research.

The inclusion of "mentoring committees" and "letters of support from the Chair" are not sufficient. Those are needed, don't get me wrong, but they address other concerns** that people have about untried PIs supervising a postdoctoral fellow.

It is essential that you anticipate the above referenced Stock Critique and do your best*** to head it off.

__
*I have seen several highly regarded NRSA apps for which the research plan looks to me to be of R01-quality writing and design.

**We're in stock-critique land here. Stop raging about how you are more qualified than Professor AirMiles to actually mentor a postdoc.

***Obviously the application needs to present the primary mentor's funding in as positive a light as possible. Talk about startup funds, refer to local pilot grants, drop promising R01 scores if need be. You don't want to blow smoke, or draw too much attention to deficits, but a credible plan for acquiring funding goes a lot farther than ignoring the issue.

26 responses so far

Revision strategy, with an eye to the new A2 as A0 policy at NIH

Occasionally, Dear Reader, one or another of you solicits my specific advice on some NIH grant situation you are experiencing. Sometimes the issues are too specific to be of much general good but this one is at least grist for discussion of how to proceed.

Today's discussion starts with the criterion scores for an R01/equivalent proposal. As a reminder, the five criteria are ordered as Significance, Investigator, Innovation, Approach and Environment. The first round for this proposal ended up with

Reviewer #1: 1, 1, 1, 3, 1
Reviewer #2: 3, 1, 1, 3, 1
Reviewer #3: 6, 2, 1, 8, 1
Reviewer #4: 2, 1, 3, 2, 1

From this, the overall outcome was.... Not Discussed. Aka, triaged.
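Just to make the split concrete, here's a quick back-of-the-envelope sketch in Python using the numbers above. To be clear, this is purely illustrative: NIH reviewers assign the preliminary overall impact score holistically, it is not calculated from the criterion scores.

```python
# Illustrative only: NIH overall impact scores are assigned holistically by
# reviewers, not computed from criterion scores. This just shows the spread.
criteria = ["Significance", "Investigator", "Innovation", "Approach", "Environment"]
scores = {
    "Reviewer #1": [1, 1, 1, 3, 1],
    "Reviewer #2": [3, 1, 1, 3, 1],
    "Reviewer #3": [6, 2, 1, 8, 1],
    "Reviewer #4": [2, 1, 3, 2, 1],
}

# Per-reviewer average across the five criteria.
for reviewer, vals in scores.items():
    print(f"{reviewer}: mean criterion score {sum(vals) / len(vals):.1f}")

# Per-criterion spread across the four reviewers.
for i, criterion in enumerate(criteria):
    col = [vals[i] for vals in scores.values()]
    print(f"{criterion}: scores range from {min(col)} to {max(col)}")
```

Reviewer #3 averages 3.6 across the criteria while the other three sit at 1.4 to 1.8, and the spreads on Significance (1 to 6) and Approach (2 to 8) are exactly the kind of categorical disagreement I am talking about below.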

As you might imagine, the PI was fuming. To put it mildly. Three pretty decent looking reviews and one really, really unfavorable one. This should, in my opinion, have been pulled up for discussion to resolve the differences of opinion. It was not. That indicates that the three favorable reviewers were either somehow convinced by what Reviewer #3 wrote that they had been too lenient...or they were simply not convinced discussion would make a material difference (i.e. push it over the "fund" line). The two 3s on Approach from the first two reviewers are basically an "I'd like to see this come back, fixed" type of position. So they might have decided, screw it, let this one come back and we'll fight over it then.

This right here points to my problem with the endless queue of the revision traffic pattern and the new A2 as A0 policy that will restore it to its former glory. It should be almost obligatory to discuss significantly divergent scores, particularly when they make a categorical difference. The difference between triaged and discussed, and the difference between a maybe-fundable and a clearly-not-fundable score, are known to the Chair and the SRO of the study section. The Chair could insist on resolving these types of situations. I think they should be obliged to do so, personally. It would save some hassle and extra rounds of re-review. It seems particularly called-for when the majority of the scores are in the better direction, because that should be at least some indication that the revised version would have a good chance to improve in the minds of the reviewers.

There is one interesting, instructive point that reinforces one of my usual soapboxes. This PI had actually asked me before the review, when the study section roster was posted, what to do about reviewer conflicts. This person was absolutely incensed (and depressed) about the fact that a scientist in direct competition with the proposal had been brought on board. There is very little you can do, btw, 30 days out from review. That ship has sailed.

After seeing the summary statement, the PI had to admit that, going by the actual criticism comments, the person with the directly-competing expertise was not Reviewer #3. Since the other three scores were actually pretty good, this bears out my usual warning about assuming what a reviewer will think of your application based on perceptions of competition or personal dis/like. You will often be surprised: the reviewer you assume is out to screw your application over may well be pulling for it. Or at least giving it a score that is in line with the majority of the other reviewers. This appears to be what happened in this case.

Okay. So, as I may have mentioned, I have been reluctantly persuading myself that revising triaged applications is a waste of time. Too few of them make it over the line to fund. And in the just-ended era of A1-and-out...well, perhaps the time was better spent on a new app. In this case, however, I think there is a strong case for revision. Three of the four criterion score sets (and we have to wonder why there were even four reviews instead of three) look to me like scores that would get an app discussed. The ND seems to be a bit of an unfair result, based on the one hater. The PI agreed, apparently, and resubmitted a revised application. In this case the criterion scores were:

Reviewer #1: 1, 2, 2, 5, 1
Reviewer #2: 2, 2, 2, 2, 1
Reviewer #3: 1, 1, 2, 2, 1
Reviewer #4: 2, 1, 1, 2, 1
Reviewer #5: 1, 1, 4, 7, 1

I remind you that we cannot assume any overlap in reviewers, nor that the reviewer numbers correspond to the same individuals if assignments changed. In this case the grant was discussed at study section and ended up with a voted impact score of 26. The PI noted that a second direct competitor on the science had been included on the review panel this time, in addition to the aforementioned first person in direct competition.

Oh Brother.

I assure you, Dear Reader, that I understand the pain of getting reviews like this. Three reviewers throwing 1s and 2s is not only a "surely discussed" outcome but a "probably funded" zone, especially for a revised application. Even the one "5" from Reviewer #1 on Approach is something that perhaps the other reviewers might talk him/her down from. But to have two numbers in obvious triage territory thrown on Approach? A maddening split decision, leading to a score that is most decidedly on the bubble for funding.

My seat of the pants estimation is that this may require Program intervention to fund. I don't know for sure, I'm not familiar with the relevant paylines and likely success rates for this IC for this fiscal year.

Now, if this doesn't end up winning funding, I think the PI most certainly has to take advantage of the new A2 as A0 policy and put this sucker right back in. To the same study section. Addressing whatever complaints were associated with Reviewer #1's and #5's criticisms of course. But you have to throw yourself on the mercy of the three "good" reviewers and anyone they happened to convince during discussion. I bet a handful of them will be sufficient to bring the next "A0" of this application to a fundable score even if the two less-favorable reviewers refuse to budge. I also bet there is a decent chance the SRO will see that last reviewer as a significant outlier and not assign the grant to that person again.

I wish this PI my best of luck in getting the award.

29 responses so far

Your Grant in Review: The Biosketch Research Support Section

Apr 21 2014 Published by under Grant Review, Grantsmanship, NIH, NIH funding

A question came up on the twitts about the Research Support section of the NIH Biosketch: do you need to list the effort (person months) or direct costs for the awards included there?

The answer is that no, you do not. I will note that I am not entirely sure if this changed over the years or if my understanding of this rule was incomplete at the start. However, the instructions on the Sample Biosketch [PDF] provided by the NIH are clear.

D. Research Support
List both selected ongoing and completed research projects for the past three years (Federal or non-Federally-supported). Begin with the projects that are most relevant to the research proposed in the application. Briefly indicate the overall goals of the projects and responsibilities of the key person identified on the Biographical Sketch. Do not include number of person months or direct costs.

The last bit is the key bit for dr24Hours's question, but I include the full description for a reason.

dr24Hours also asked:

and there was a followup to my initial negative response

Together, these questions seem to indicate a misunderstanding of what this section is for, and what it is trying to communicate.

Note the use of the terms "selected" and "most relevant" in the above passage.

The Biosketch is, in total, devoted to convincing the reviewers that the PI and other Investigative staff have the chops to pull off the project under review. It is about bragging on how accomplished they all are. Technically, it is not even a full recitation of all the support one has secured in the past three years. This is similar to how the Peer-reviewed Publications section is limited to 15 items, regardless of how many total publications you have.

Inclusion of items in the Research Support section is to show that the Investigators have run projects of similar scope with acceptable success. Yes, the definition of acceptable success is variable, but this concept is clear. The goal is to show off the Investigator's accomplishments to the best possible advantage.

The Research Support is not about demonstrating that the PI is successful at winning grants. It is not about demonstrating how big and well-funded the lab has been (no direct costs). It is not even about the reviewers trying to decide if the PI is spread too thinly (no listing of effort). This is not the point*.

In theory, one would just put forward a subset of the best elements on one's CV. The most relevant and most successful grant awards. If space is an issue (the Biosketch is limited to 4 pages) then the PI might have to make some selections. Obviously you'd want to start with NIH R01s (or equivalent) if the application is an R01. Presumably you would want to supply the reviewer with what you think are your most successful projects- in terms of papers, scientific advance, pizzaz of findings or whatever floats your boat.

You might also want to "selectively" omit any of your less-successful awards or even ones that seem like they have too much overlap with the present proposal.

Don't do this.

If it is an NIH award, you can be assured at least one of the reviewers will have looked you up on RePORTER and will notice the omission. If it is a non-NIH award, perhaps the odds are lower but you just never know. If the reviewer thinks you are hiding something...this is not good. If your award has been active in the past three years and is listed somewhere Internet-accessible, particularly on your University or lab website, then list it on the Biosketch.

Obviously this latter advice only applies to people who have a lot of different sources of support. The more of them you have, the tighter the space. Clearly, you are going to have to make some choices if your lab operates on a lot of different sources of support. Prioritize by what makes sense to you, sure, but make sure you pay attention to the communication you are trying to accomplish when making your selections. And beware of being viewed with a suspicion that you are trying to conceal something.
__
*Yes, in theory. I grasp that there will be reviewers using this information to argue that the PI is spread too thinly or has "too much" grant support.

10 responses so far

NIH backs down on resubmitting unfunded A1 grant applications

Apr 17 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism, NIH funding

The rumors were true. NOT-OD-14-074 says:

Effective immediately, for application due dates after April 16, 2014, following an unsuccessful resubmission (A1) application, applicants may submit the same idea as a new (A0) application for the next appropriate due date. The NIH and AHRQ will not assess the similarity of the science in the new (A0) application to any previously reviewed submission when accepting an application for review. Although a new (A0) application does not allow an introduction or responses to the previous reviews, the NIH and AHRQ encourage applicants to refine and strengthen all application submissions.

So, for all intents and purposes you can revise and resubmit your failed application endlessly. Maybe they will pick you up on the A6 or A7 attempt!

Sally Rockey has a blog entry up which gives a bit more background and rationale.

While the change in policy had the intended result of a greater number of applications being funded earlier,

I really wonder if she believes this or has to continue to parrot the company line for face-saving reasons. There is no evidence this is true. Not until and unless she can show definitively that the supposed A0s being funded were not in fact re-workings of proposals that had been previously submitted. I continue to assert that a significant number of PIs were submitting "A0" applications that benefited directly and substantially from having been previously reviewed in a different guise.


As a result, we heard increasing concerns from the community about the impact of the policy on new investigators because finding new research directions can be quite difficult during this phase of their career.

If the true concern here was really the ESI or NI applicants, then they could have simply allowed that category to pass the filter.

The resubmission of an idea as new means the application will be considered without an association to a previous submission; the applicant will not provide an introduction to spell out how the application has changed or respond to previous reviews; and reviewers will be instructed to review it as a new idea even if they have seen it in prior cycles.

The only way this is remotely possible is to put it in a different study section and make sure there are no overlapping ad hocs. If they don't do this, then this idea is nonsense. Surely Dr. Rockey is aware you cannot expect "instruction" to stick and force reviewers to behave themselves. Not with perfect fidelity.

However, we will monitor this new policy closely.

HA! If they'd decided to allow endless amendments (and required related apps to be submitted as such) then they would have been able to monitor the policy. The way they did this, there is no way to assess the impact. They will never know how many supposed "A0" apps are really A2, A4, A6, nor how many "A1" apps are really A3, A5, A7...etc. So what on earth could they possibly monitor? The number of established PIs who call up complaining about the unfundable score they just received on their A1?

71 responses so far

On resubmitting unfunded A1 NIH grant applications

Apr 08 2014 Published by under Grant Review, Grantsmanship, NIH, NIH funding

Well, well, well.

The NIH limited applicants to a single revision ("amendment", hence -01A1 version) of an unfunded "new" grant submission (the -01 version, sometimes called "A0") in 2009.

This followed the action in 1997 to limit revisions to two (see RockTalk chart), which hurt PIs like Croce and Perrin. (Six revisions? Wow, that is some serious persistence, guys; my hat is off.)

I wasn't really paying attention to such matters in 1997 but there was some screaming in 2009, let me tell you.
Delusional Biomedical Researchers Seek Repeal Of Arithmetic
More on the new NIH policy on grant application revisions
Initial outcome of limiting NIH apps to a single revision?
NIH re-evaluating ‘two strikes’ rule – Updated
Crocodile tears from experienced NIH investigators over the discontinued A2 revision

I don't know how many people actually got stuck in the filter for submitting an A0 that was too similar to their prior, unfunded A1. I heard of a few, so it did happen. On the flip side, I've sure as heck been putting in more than two versions of proposals designed to fund the same area of interest in my laboratory, and I have not yet been flagged for it. My initial reaction, that any PI with an ounce of creativity ought to be able to come up with a credible alternative take on their project, is still my current position.

Nevertheless, rumor has it that changes are in the wind.

Pinko Punko made an interesting comment on the blog:

DM, I heard the craziest thing today- the possibility of removing the "substantial revision" criterion for new A0 related to previous A1. Supposedly announcement soon- I was kind of surprised.

This was news to me but I have heard things from about five independent sources in the past few days that tend to confirm that changes are being considered.

The most consistent rumor is that new grants will no longer be checked for similarity to prior unfunded proposals. They will still be called new grants, but there is no apparent justification for that label. In every way I can see, this is going to be a return to the days prior to 1997, when you could just endlessly revise until the study section relented.

The supposed benefit of reduced "time to award from original proposal" is now going totally by the wayside. I mean, the NIH will still be able to lie and say "look, it was an A0!" if they want to, but this is even less credible.

More dangerously, the will of study sections to endlessly queue applications will be released from whatever tepid effect the A1 limit has produced.

This is a very BadThing.

__
whoa. I found three A7 projects. All three are competing continuations. I can't EVEN....five- and six-year apparent funding gaps for two of them. For the third, I can't work out why there is no apparent gap in funding.

31 responses so far

Sharing (in science) with people you don't particularly like

Mar 24 2014 Published by under Academics, Grant Review, Grantsmanship

The Twitt @tehbride raised an interesting mentoring question:

 

As you are likely aware, Dear Reader, due to the accident and intent of where I tend to sit on the scientific spectrum, the scooping type of competition is not a huge part of my professional life. That is, I have managed to get by to this point by not being terribly afraid of people knowing what I am working on or what I plan to work on. Part of this has to do with playing at a level of publication that is not obsessed with who was the very first person to demonstrate something. Part of it is selecting research questions that are not densely populated with dozens or scores of other laboratories trying to scratch the same flea. Part of it is my overweening and misplaced self-confidence that we did it better, dammit, so who cares who published first.

 

Part of it is pure wrongheadedness on my part, no doubt.

When it comes to grants, specifically, I was always around people who were reflexively generous about sharing their applications when I was a late-stage postdoc and an early-career faculty member. As time has gone on, more people ask me for my proposals than I feel the need to ask, and I have given mine out to anyone who requests them. (Usually with a little lecture about how my "successful" apps are no more informative than my triaged ones, of course.)

So take that into account.

On a purely tactical level, it is possible for the postdoc in this situation to simply refuse. We can extend this to PIs who are asked for their successful grant applications. You can just say no.

It seems to me to be unwise to do so, particularly when it comes to an application that has been successful. Even if you cannot stand the person who is asking. It just seems churlish when the cost to you is so low.

Is it going to give this person, your ScienceEnemy, a little boost ahead? Sure. But remember, the odds of funding are still very steep. So it isn't like you are handing them an award. They still have to write a credible application. And get lucky. So why not*? It costs you essentially nothing to email over your application.

On a strategic level, this person could be your colleague in science for a long time. They could very well be in a position to review you and your work, particularly if they are in a related area of science. And even if they annoy you, it isn't necessarily the case that they have so much as noticed. Lots of annoying people are kind of unaware... So why make an enemy?

And there is one more thing to consider. If you act within a professional capacity on personal whim and dislike, what does this say about your behavior as an objective peer reviewer? Shouldn't you be able to set aside personal dislike to effectively review the scientific content of a paper or grant proposal? Yes, yes you should.

 

__

*Now, if you think the person is a data fraud or something...well that is entirely different.

13 responses so far

Ask DrugMonkey: How do we focus the reviewer on 'Innovation'?

Mar 18 2014 Published by under Fixing the NIH, Grant Review, NIH, NIH funding

As you are aware, Dear Reader, despite attempts by the NIH to focus the grant reviewer on the "Innovation" criterion, the available data show that the overall Impact score for a NIH Grant application correlates best with Significance and Approach.

Jeremy Berg first posted data from NIGMS showing that Innovation was a distant third behind Significance and Approach. See Berg's blogposts for the correlations with NIGMS grants alone and a followup post on NIH-wide data broken out for each IC. The latter emphasized how Approach is much more of a driver than any of the other criterion scores.
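For those who haven't looked at Berg's posts, the analysis is conceptually simple: across a pile of scored applications, correlate each criterion score with the voted overall impact score. Here is a minimal sketch of that kind of calculation. The data below are invented for illustration only (they are not Berg's or NIGMS numbers), and the fake overall impact is deliberately built to be driven mostly by Significance and Approach, so you can see what the reported pattern looks like.

```python
# Sketch of a Berg-style analysis: correlate criterion scores with overall
# impact across applications. All data here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_apps = 200

# Hypothetical criterion scores (1 = best, 9 = worst) for n_apps applications.
significance = rng.integers(1, 10, n_apps)
approach = rng.integers(1, 10, n_apps)
innovation = rng.integers(1, 10, n_apps)

# Fake overall impact (roughly 10-90 scale), driven mostly by Significance
# and Approach, with only a small contribution from Innovation.
impact = 10 * (0.45 * significance + 0.45 * approach + 0.10 * innovation
               + rng.normal(0, 0.5, n_apps))

for name, crit in [("Significance", significance),
                   ("Approach", approach),
                   ("Innovation", innovation)]:
    r = np.corrcoef(crit, impact)[0, 1]
    print(f"{name} vs overall impact: r = {r:.2f}")
```

On data constructed this way, Significance and Approach come out with correlations several times larger than Innovation, which is qualitatively the pattern Berg reported for the real scores.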

This brings me to a query recently directed to the blog which wanted to know if the commentariat here had any brilliant ideas on how to effectively focus reviewer attention on the Innovation criterion.

There is a discussion to be had about novel approaches supporting innovative research. I can see that the Overall Impact score is correlated better with the Approach and not very well with the Innovation criterion score. This is the case even for funding mechanisms which are supposed to be targeting innovative research, including specific RFAs (i.e., not only the R21).

From one side, it is understandable, because of reviewers' concerns over the high risk associated with innovative research and the lack of solid preliminary data. But on the other side, risk is the very nature of innovative research and the application should not be criticized heavily for this supposed weakness. From my view, for innovative research, the overall score should correlate well with the Innovation score.

So, I am wondering whether the language for these existing review criteria should be revised, whether an additional review criterion instructing reviewers to appropriately evaluate innovation should be added, and how this might be accomplished. (N.b. heavily edited for anonymity and other reasons. Apologies to the original questioner for any inaccuracies this introduced -DM)

My take on NIH grant reviewer instruction is that the NIH should do a lot more of it, instead of issuing ill-considered platitudes and then wringing their hands about a lack of result. My experience suggests that reviewers are actually really good (on average) about trying to do a fair job of the task set in front of them. The variability and frustration that we see applicants express about significantly divergent reviews of their proposals reflects, I believe, differential reviewer interpretation about what the job is supposed to be. This is a direct reflection of the uncertainty of instruction, and the degree to which the instruction cannot possibly fit the task.

With respect to the first point, Significance is an excellent example. What is "Significant" to a given reviewer? Well, there is wide latitude.

Does the project address an important problem or a critical barrier to progress in the field? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

Well? What is the reviewer to do with this? Is the ultimate pizza combo of "all of the above" the best? Is the reviewer's pet "important problem" far more important than any sort of attempt to look at the field as a whole? For that matter, why should the field as a whole trump the Small Town Grocer interest...after all, the very diversity of research interests is what protects us from group-think harms. Is technical capability sufficient? Is health advance sufficient? Does the one trump the other? How the hell does anyone know what will prove to be a "critical" barrier and what will be a false summit?

To come back to my correspondent's question, I don't particularly want the NIH to get more focused on this criterion. I think any and all of the above CAN represent a highly significant aspect of a grant proposal. Reviewers (and applicants) should be allowed to wrangle over this. Perhaps even more important for today's topic, the Significance recommendations from NIH seem to me to capture almost everything that a peer scientist might be looking for as "Significance". It captures the natural distribution of what the extramural scientists feel is important in a grant proposal.

You may have noticed over the years that for me, "Significance" is the most important criterion. In particular, I would like to see Approach de-emphasized because I think this is the most kabuki-theatre-like aspect of review. (The short version is that I think nitpicking well-experienced* investigators' description of what they plan to do is useless in affecting the eventual conduct of the science.)

Where I might improve reviewer instruction in this area is in getting reviewers to be clear about which of these suggested aspects of Significance are being addressed, and then encouraging them to state more clearly why these sub-criteria should, or should not, be viewed as strengths.

With respect to the second point raised by the correspondent, the Innovation criterion is a clear problem. One NIH site says this about the judgment of Innovation:

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

The trouble is not a lack of reviewer instruction, however. The fact is that many of us extramural scientists simply do not buy into the idea that every valuable NIH Grant application has to be innovative. Nor do we think that mere Innovation (as reflected in the above questions) is the most important thing. This makes for a different kind of problem when Innovation is held co-equal with criteria whose very status as major criteria is not in debate.

I think a recognition of this disconnect would go a long way toward addressing the NIH's apparent goal of increasing innovation. The most effective thing that they could do, in my view, is to remove Innovation as one of the five general review criteria. This move could then be coupled to increased emphasis on FOA criteria and the issuance of Program Announcements and RFAs highly targeted to Innovation.

For an SEP convened in response to an RFA or PAR that emphasizes innovation....well, this should be relatively easy. The SRO simply needs to hammer relentlessly on the idea that the panel should prioritize Innovation as defined by...whatever. Use the existing verbiage quoted above, change it around a little....doesn't really matter.

As I said above, I believe that reviewers are indeed capable of setting aside their own derived criteria** and using the criteria they are given. NIH just has to be willing to give very specific guidance. If the SRO / Chair of a study section make it clear that Innovation is to be prioritized over Approach then it is easy during discussion to hammer down an "Approach" fan. Sure, it will not be perfect. But it would help a lot. I predict.

I'll leave you with the key question though. If you were to try to get reviewers to focus on Innovation, how would you accomplish this goal?

___
*Asst Professor and above. By the time someone lands a professorial job in biomedicine they know how to conduct a dang research project. Furthermore, most of the objections to Approach in grant review are the proper province of manuscript review.

**When it comes to training a reviewer how to behave on study section, the first point of attack is the way that s/he has perceived the treatment of their own grant applications in the past***. The second bit of training is the first round or two of study section service. Every section has a cultural tone. It can even be explicit during discussion such as "Well, yes it is Significant and Innovative but we would never give a good score to such a crappy Approach section". A comment like that makes it pretty clear to a new-ish reviewer on the panel that everything takes a back seat to Approach. Another panel might be positively obsessed with Innovation and care very little for the point-by-point detailing of experimental hypotheses and interpretations of various predicted outcomes.

***It is my belief that this is a significant root cause of "All those Assistant Professors on study section don't know how to review! They are too nitpicky! They do not respect my awesome track record! What do you mean they question my productivity because I list three grants on each paper?" complaining.

12 responses so far

Your Grant in Review: I Can't Work With This

Mar 14 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism

Reminder. You are going to have advocates and detractors reviewing your grant proposals. Your goal is to give the advocate what she needs to promote your proposal.

No matter how much the advocate might love the essential ideas in your application, nothing good is going to happen if you violate every rule of basic grantsmithing.

At the least you should be able to put together a proposal that gets it mostly right. Credible. Serious. Without huge gaping holes or obvious piles of StockCritique bait lying around everywhere.

It should not be hard to give the advocating reviewer something they can work with.

17 responses so far
