Archive for the 'NIH funding' category

Revision strategy, with an eye to the new A2 as A0 policy at NIH

Occasionally, Dear Reader, one or another of you solicits my specific advice on some NIH grant situation you are experiencing. Sometimes the issues are too specific to be of much general use, but this one is at least grist for discussion of how to proceed.

Today's discussion starts with the criterion scores for an R01/equivalent proposal. As a reminder, the five criteria are ordered as Significance, Investigator, Innovation, Approach and Environment. The first round for this proposal ended up with

Reviewer #1: 1, 1, 1, 3, 1
Reviewer #2: 3, 1, 1, 3, 1
Reviewer #3: 6, 2, 1, 8, 1
Reviewer #4: 2, 1, 3, 2, 1

From this, the overall outcome was.... Not Discussed. Aka, triaged.
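For a rough sense of how lopsided that third set of scores is, here is some back-of-envelope averaging across the five criteria. Keep in mind that NIH criterion scores run 1 (exceptional) to 9 (poor), and that reviewers assign a separate preliminary overall impact score; the criterion scores are advisory and are not formally averaged, so this is illustration only:

```python
# Criterion scores from the first round (1 = exceptional, 9 = poor), in the
# order Significance, Investigator, Innovation, Approach, Environment.
# NIH does NOT compute the impact score this way; this is just a sketch of
# how stark the split between Reviewer #3 and the rest looks on paper.
scores = {
    "Reviewer 1": [1, 1, 1, 3, 1],
    "Reviewer 2": [3, 1, 1, 3, 1],
    "Reviewer 3": [6, 2, 1, 8, 1],
    "Reviewer 4": [2, 1, 3, 2, 1],
}

for reviewer, s in scores.items():
    print(reviewer, sum(s) / len(s))
# Reviewer 3 averages 3.6; the other three land between 1.4 and 1.8.
```

Three reviewers in clearly-discussable territory, one far outside it.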

As you might imagine, the PI was fuming. To put it mildly. Three pretty decent looking reviews and one really, really unfavorable one. This should, in my opinion, have been pulled up for discussion to resolve the differences of opinion. It was not. That indicates that the three favorable reviewers were either somehow convinced by what Reviewer #3 wrote that they had been too lenient...or they were simply not convinced discussion would make a material difference (i.e. push it over the "fund" line). The two 3s on Approach from the first two reviewers are basically a "I'd like to see this come back, fixed" type of position. So they might have decided, screw it, let this one come back and we'll fight over it then.

This right here points to my problem with the endless queue of the revision traffic pattern and the new A2 as A0 policy that will restore it to its former glory. It should be almost obligatory to discuss significantly divergent scores, particularly when they make a categorical difference. The difference between triaged and discussed, and the difference between a maybe-fundable and a clearly-not-fundable score, is known to the Chair and the SRO of the study section. The Chair could insist on resolving these types of situations. I think they should be obliged to do so, personally. It would save some hassle and extra rounds of re-review. It seems particularly called-for when the majority of the scores are in the better direction, because that should be some minor indication that the revised version would have a good chance to improve in the minds of the reviewers.

There is one interesting instructive point that reinforces one of my usual soapboxes. This PI had actually asked me before the review, when the study section roster was posted, what to do about reviewer conflicts. This person was absolutely incensed (and depressed) about the fact that a scientific competitor in highly direct competition with the proposal had been brought on board. There is very little you can do, btw, 30 days out from review. That ship has sailed.

After seeing the summary statement, the PI had to admit that, going by the actual criticism comments, the reviewer with the directly competing expertise was not Reviewer #3. Since the other three scores were actually pretty good, we can see that I am right about the folly of assuming what a reviewer will think of your application based on perceptions of competition or personal dis/like. You will often be surprised: the reviewer that you assume is out to screw your application over will be pulling for it. Or at least, will be giving it a score that is in line with the majority of the other reviewers. This appears to be what happened in this case.

Okay. So, as I may have mentioned, I have been reluctantly persuading myself that revising triaged applications is a waste of time. Too few of them make it over the line to fund. And in the recently ended era of A1-and-out...well, perhaps time was better spent on a new app. In this case, however, I think there is a strong case for revision. Three of four of these criterion score sets (and we need to wonder about why there even were four reviews instead of three) look to me like scores that would get an app discussed. The ND seems to be a bit of an unfair result, based on the one hater. The PI agreed, apparently, and resubmitted a revised application. In this case the criterion scores were:

Reviewer #1: 1, 2, 2, 5, 1
Reviewer #2: 2, 2, 2, 2, 1
Reviewer #3: 1, 1, 2, 2, 1
Reviewer #4: 2, 1, 1, 2, 1
Reviewer #5: 1, 1, 4, 7, 1

I remind you that we cannot assume any overlap in reviewers, nor any identity of reviewer number in the case of re-assigned reviewers. In this case the grant was discussed at study section and ended up with a voted impact score of 26. The PI noted that a second direct competitor on the science had been included on the review panel this time, in addition to the aforementioned first person in direct competition.

Oh Brother.

I assure you, Dear Reader, that I understand the pain of getting reviews like this. Three reviewers throwing 1s and 2s is not only a "surely discussed" outcome but is a "probably funded" zone, especially for a revised application. Even the one "5" from Reviewer #1 on Approach is something that perhaps the other reviewers might talk him/her down from. But to have two obviously triage numbers thrown on Approach? A maddening split decision, leading to a score that is most decidedly on the bubble for funding.

My seat-of-the-pants estimate is that this may require Program intervention to fund. I don't know for sure; I'm not familiar with the relevant paylines and likely success rates for this IC for this fiscal year.

Now, if this doesn't end up winning funding, I think the PI most certainly has to take advantage of the new A2 as A0 policy and put this sucker right back in. To the same study section. Addressing whatever complaints were associated with Reviewer #1's and #5's criticisms of course. But you have to throw yourself on the mercy of the three "good" reviewers and anyone they happened to convince during discussion. I bet a handful of them will be sufficient to bring the next "A0" of this application to a fundable score even if the two less-favorable reviewers refuse to budge. I also bet there is a decent chance the SRO will see that last reviewer as a significant outlier and not assign the grant to that person again.

I wish this PI the best of luck in getting the award.

29 responses so far

The NIH says investigators must incorporate sex-differences analyses in their studies

May 14 2014 Published by under Drug Abuse Science, NIH, NIH funding, Sex Differences

For some reason I am having a DOI error on the actual comment from Clayton and Collins. So until that is resolved, the sourcing is from the journalists who got the embargoed version.

Apparently Janine Clayton and Francis Collins have issued a commentary on a new policy that the NYT describes as:

The N.I.H. is directing scientists to perform their experiments with both female and male animals and include both sexes in sufficient numbers to see statistically significant differences. Grant reviewers will be instructed to take the sex balance of each study design into account when awarding grants.

Yeah, that sounds pretty clear. My studies just doubled...which means really that they were just cut in half. I'm cool with that. I actually agree that it would be good if we did almost everything as a sex-differences study.

There's the money though. Sex difference studies in a behaving animal are not just a doubling as it happens (and as I inaccurately described it just above). From a prior post on this topic entitled: The funding is the science II, "Why do they always drop the females?"

As nicely detailed in Isis' post, the inclusion of a sex comparison doubles the groups right off the bat but even more to the point, it requires the inclusion of various hormonal cycling considerations. This can be as simple as requiring female subjects to be assessed at multiple points of an estrous cycle. It can be considerably more complicated, often requiring gonadectomy (at various developmental timepoints) and hormonal replacement (with dose-response designs, please) including all of the appropriate control groups / observations. Novel hormonal antagonists? Whoops, the model is not "well established" and needs to be "compared to the standard gonadectomy models", LOL >sigh<.

The money and the progress.

Keep in mind, if you will, that there is always a more fundamental comparison or question at the root of the project, such as "does this drug compound ameliorate cocaine addiction?" So all the sex comparisons, designs and groups need to be multiplied against the cocaine addiction/treatment conditions. Suppose it is one of those cocaine models that requires a month or more of training per group? Who is going to run all those animals? How many operant boxes / hours are available? And at what cost?
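To see why "not just a doubling" is the right way to put it, here is some toy group arithmetic. Every factor count below is a hypothetical illustration, not anyone's actual design:

```python
# Back-of-envelope group counts for adding a full sex-differences arm to a
# drug self-administration study. All numbers are invented for illustration.
drug_conditions = 4    # e.g., vehicle plus three doses of the candidate compound
estrous_phases = 4     # females assessed at multiple points of the cycle
gonadectomy_arms = 3   # intact; gonadectomized; gonadectomized + hormone replacement

# Males: drug conditions crossed with the gonadectomy/replacement arms.
male_groups = drug_conditions * gonadectomy_arms

# Females: the same, further crossed with estrous-cycle phase.
female_groups = drug_conditions * gonadectomy_arms * estrous_phases

print(male_groups, female_groups)  # 12 male groups, 48 female groups
```

A single-sex study with the same drug conditions and arms would be 12 groups; the "doubled" version is 60. If each group needs a month or more of operant training, the money and the progress follow directly.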

Oh, don't worry bench jockeys. According to the NYT article:

Researchers who work with cell cultures are also being encouraged to study cells derived from females as well as males, and to do separate analyses to tease out sex differences at the cellular level.

“Every cell has a sex,” Dr. Clayton said in a telephone interview. “Each cell is either male or female, and that genetic difference results in different biochemical processes within those cells.”

“If you don’t know that and put all of the cells together, you’re missing out, and you may also be misinterpreting your data,” Dr. Clayton added. For example, researchers recently discovered that neurons cultured from males are more susceptible to death from starvation than those from females, because of differences in the ways their cells process nutrients.

"Encouraged". Okay, maybe you CultureClowns have an escape clause here. Animal model folks are facing "demanded" language.

Final observations are ridiculous:

But [the new policies] are likely to be met with resistance from scientists who fear increased costs and difficulty in performing their experiments. Studying animals of both sexes may potentially double the number required in order to get significant results.

“There’s incredible inertia among people when it comes to change, and the vast majority of people doing biological research are going to think this is a huge inconvenience,” Dr. Zucker said.

...

Margaret McCarthy, a neuroscientist at University of Maryland School of Medicine who studies sex differences, agreed. “The reactions will range from hostile — ‘You can’t make me do that’ — to ‘Oh, I don’t want to control for the estrous cycle,'” she said.

This has nothing to do with whether a scientist "wants" to or not.

Let me be clear, I want to do sex-differences studies. I am delighted that this will be a new prescription. I agree with the motivating sentiments.

What I "fear" is that grant applications will be kicked in the teeth if they include sex differences comparisons. What I "fear" is that my research programs will be even less productive on the main area of interest, to the tune of a lot of extra work that will simply confirm a lot of what we already know. For example, female rats tend to self-administer more drug than males do. A lot of my colleagues have been working on these topics for a long time. The identification of those areas where it actually matters (i.e., sex difference effects that haven't yet been detected) are going to come along with a lot of negative findings. What I "fear" is that when we are interested in a certain thing, there is a bit of sex-differences literature and the hypothesis is going to be "males and females are the same" or even "females are more/less sensitive to drug" that this is going to bring down the holy hells of reviewer wrath over what hypothesis we are testing.

I fear a lot of things about this. What I don't fear is my own interest in the topic. What I don't fear is the "inconvenience". I don't even fear "difficulty". It just isn't that difficult to add female groups to my studies.

What it takes is additional grant funding. Or tolerance on the part of P&T committees, hiring committees and grant review panels for apparently reduced progress on a scientific topic of interest. And those things are not at all easy to come by.

The funny thing is, we've been taking steps in the lab toward this direction in the past year anyway. So I should be grateful I have at least that little tiny bit of a head start on this stuff.

21 responses so far

Grantome details the trajectories of the NIH Grant "have" institutions

This is a fascinating read.

Grantome.com is a project of data scientists who have generated a database of grant funding information. This particular blog post focuses on a longitudinal analysis of some of the most heavily NIH-funded Universities and other research institutions. It shows those which are maintaining stable levels of support, those in decline and those which are grabbing an increased share of the extramural NIH pie.

The following graph was described thusly:

Each histogram bar represents the range in the percentage of grants that has been held between 1986 and 2013. The current, 2013 level is represented by a black vertical line. Finally, arrows inform on the latest trend in how these values are changing, where their length and direction reflect predictions in the level of funding that each institution will have over the next 3 years. These predictions were made from linear extrapolation of the average rate of change that was observed over the last 3 years.
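As I read that description, the arrow predictions amount to nothing fancier than the following sketch (the funding-share series is invented for illustration; Grantome's actual pipeline is not public in this post):

```python
# Minimal sketch of the prediction Grantome describes: linear extrapolation
# of the average rate of change over the last 3 years. Numbers are made up.
share = [2.10, 2.05, 1.95, 1.90]   # % of NIH grants held, last four years

# Average year-over-year change across the last three intervals.
avg_change = (share[-1] - share[-4]) / 3

# Project that average change forward three years.
predicted = share[-1] + 3 * avg_change
print(round(predicted, 2))  # 1.7 -- a downward-pointing arrow for this institution
```

So an institution's arrow is just its recent trend, run forward; nothing about the method guarantees the trend continues.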

This serves as an interesting counterpoint to the discussion of the "real problem" at the NIH, which has typically centered around too many awards per PI, too much funding per PI, the existence of soft-money job categories, the overhead rates enjoyed by certain Universities, etc.

I am particularly fascinated by their searchable database, in which you can play around with the funding histories of various institutions. Searches by fiscal year, funding IC are illuminating, as is toggling between total costs and the number of awards on the graphical display.

23 responses so far

Your Grant in Review: The Biosketch Research Support Section

Apr 21 2014 Published by under Grant Review, Grantsmanship, NIH, NIH funding

A question came up on the twitts about the Research Support section of the NIH Biosketch: in essence, whether you have to list the budget and effort details for each award.

The answer is that no, you do not. I will note that I am not entirely sure whether this changed over the years or whether my understanding of this rule was incomplete at the start. However, the instructions on the Sample Biosketch [PDF] provided by the NIH are clear.

D. Research Support
List both selected ongoing and completed research projects for the past three years (Federal or non-Federally-supported). Begin with the projects that are most relevant to the research proposed in the application. Briefly indicate the overall goals of the projects and responsibilities of the key person identified on the Biographical Sketch. Do not include number of person months or direct costs.

The last bit is the key bit for dr24Hours' question, but I include the full description for a reason.

dr24Hours also asked a couple of followup questions after my initial negative response.

Together, these questions seem to indicate a misunderstanding of what this section is for, and what it is trying to communicate.

Note the use of the term "selected" and "most relevant" in the above passage.

The Biosketch is, in total, devoted to convincing the reviewers that the PI and other Investigative staff have the chops to pull off the project under review. It is about bragging on how accomplished they all are. Technically, it is not even a full recitation of all the support one has secured in the past three years. This is similar to how the Peer-reviewed Publications section is limited to 15 items, regardless of how many total publications that you have.

Inclusion of items in the Research Support section is to show that the Investigators have run projects of similar scope with acceptable success. Yes, the definition of acceptable success is variable, but this concept is clear. The goal is to show off the Investigator's accomplishments to the best possible advantage.

The Research Support is not about demonstrating that the PI is successful at winning grants. It is not about demonstrating how big and well-funded the lab has been (no direct costs). It is not even about the reviewers trying to decide if the PI is spread too thinly (no listing of effort). This is not the point*.

In theory, one would just put forward a subset of the best elements on one's CV. The most relevant and most successful grant awards. If space is an issue (the Biosketch is limited to 4 pages) then the PI might have to make some selections. Obviously you'd want to start with NIH R01s (or equivalent) if the application is an R01. Presumably you would want to supply the reviewer with what you think are your most successful projects- in terms of papers, scientific advance, pizzaz of findings or whatever floats your boat.

You might also want to "selectively" omit any of your less-successful awards or even ones that seem like they have too much overlap with the present proposal.

Don't do this.

If it is an NIH award, you can be assured at least one of the reviewers will have looked you up on RePORTER and will notice the omission. If it is a non-NIH award, perhaps the odds are lower but you just never know. If the reviewer thinks you are hiding something...this is not good. If your award has been active in the past three years and is listed somewhere Internet-accessible, particularly on your University or lab website, then list it on the Biosketch.

Obviously this latter advice only applies to people who have a lot of different sources of support. The more of them you have, the tighter the space. Clearly, you are going to have to make some choices if your lab operates on a lot of different sources of support. Prioritize by what makes sense to you, sure, but make sure you pay attention to the communication you are trying to accomplish when making your selections. And beware of being viewed with a suspicion that you are trying to conceal something.
__
*Yes, in theory. I grasp that there will be reviewers using this information to argue that the PI is spread too thinly or has "too much" grant support.

10 responses so far

Thought of the Day: The NIH Can't Win

Apr 18 2014 Published by under Careerism, NIH, NIH Careerism, NIH funding

A comment over at Rock Talk made a fairly traditional complaint about the NIH funding system. Dan C stated that: "NIH is to be criticized that it funds 'usual suspects.'"

Today, I find this funny. Because after all, most of the people complaining about the NIH system want to become one of the usual suspects!

Right? They want to get a grant, one. They want to have some reasonable stability of that grant funding in a program-like sustained career. Most of them don't want to have to struggle too hard to get that funding either....I doubt anyone would refuse the occasional Program pickup of their just-missed grant.

Once you cobble together a bit of success under the NIH extramural grant system, those who feel themselves to be on the outs call you a "usual suspect". For any number of reasons it is just obvious to them that you are a total Insider (and couldn't actually deserve what you've managed to accomplish, of course). This may be based on the mere fact that you've acquired a grant, because you work in a Department or University where a whole lot of other people are similarly successful. This may be because it appears that POs actually talk to the person in question. It may be because a FOA has appeared in a research domain that you work within.

Anyone who sees the duck floating serenely on the water at a given point in time concludes that this is one most usual suspect waterfowl indeed.

I used to be annoyed at my approximate lateral peers in science who appeared to be having an easier time of it than I did. I had my Insider attributes as a younger faculty member, make no mistake, but I also had considerable Outsider traits, considering where I was seeking funding and for what topics of research. Some of those folks, over there, well boy didn't they get an easy ride because of being such Insiders to the subpart of the NIH system!

I still have those thoughts. Even though I've seen many of the people I thought had it made in the shade go through their dry spells and funding down-cycles. Despite the fact that as each year goes by and my lab remains funded, I become more and more one of the "usual suspects".

I believe that if I ever feel like I am one of the usual suspects, if I feel like I deserve special treatment and stop fighting so hard to keep my lab going, that will be the end.

I advise you to try to retain the same feeling of "outsider" that you feel as a noob PI for as long as you can into your career.

Getting back to the point, however, the NIH simply cannot win with these criticisms. Those who are feeling unsuccessful will always carp about how the NIH just funds "their" people. And if the NIH does happen to fund one of these outsiders, this very act makes them a usual suspect to the next complainer.

The NIH can't win.

14 responses so far

NIH backs down on resubmitting unfunded A1 grant applications

Apr 17 2014 Published by under Grant Review, Grantsmanship, NIH, NIH Careerism, NIH funding

The rumors were true. NOT-OD-14-074 says:

Effective immediately, for application due dates after April 16, 2014, following an unsuccessful resubmission (A1) application, applicants may submit the same idea as a new (A0) application for the next appropriate due date. The NIH and AHRQ will not assess the similarity of the science in the new (A0) application to any previously reviewed submission when accepting an application for review. Although a new (A0) application does not allow an introduction or responses to the previous reviews, the NIH and AHRQ encourage applicants to refine and strengthen all application submissions.

So, for all intents and purposes you can revise and resubmit your failed application endlessly. Maybe they will pick you up on the A6 or A7 attempt!

Sally Rockey has a blog entry up which gives a bit more background and rationale.

While the change in policy had the intended result of a greater number of applications being funded earlier,

I really wonder if she believes this or has to continue to parrot the company line for face saving reasons. There is no evidence this is true. Not until and unless she can show definitively that the supposed A0s being funded were not in fact re-workings of proposals that had been previously submitted. I continue to assert that a significant number of PIs were submitting "A0" applications that were directly and substantially benefited by having been previously reviewed in different guise.


As a result, we heard increasing concerns from the community about the impact of the policy on new investigators because finding new research directions can be quite difficult during this phase of their career.

If the true concern here was the ESI or NI, then they could have simply allowed them to pass the filter as a category.

The resubmission of an idea as new means the application will be considered without an association to a previous submission; the applicant will not provide an introduction to spell out how the application has changed or respond to previous reviews; and reviewers will be instructed to review it as a new idea even if they have seen it in prior cycles.

The only way this is remotely possible is to put it in a different study section and make sure there are no overlapping ad hocs. If they don't do this, then this idea is nonsense. Surely Dr. Rockey is aware you cannot expect "instruction" to stick and force reviewers to behave themselves. Not with perfect fidelity.

However, we will monitor this new policy closely.

HA! If they'd decided to allow endless amendments (and required related apps to be submitted as such) then they would have been able to monitor the policy. The way they did this, there is no way to assess the impact. They will never know how many supposed "A0" apps are really A2, A4, A6, nor how many "A1" apps are really A3, A5, A7...etc. So what on earth could they possibly monitor? The number of established PIs who call up complaining about the unfundable score they just received on their A1?

71 responses so far

Side thought on the NIH issuing project grants versus program grants

Apr 16 2014 Published by under NIH, NIH Careerism, NIH funding

I asked a poorly worded question on the Twitts

in which what I was trying to ask was this. From the perspective of awarding NIH grants, does it matter that a given proposal fits into a larger whole? If a brand new investigator, do we assume that he or she is applying for the first grant among many? For the greybeard for whom this might be a last-award, do we recognize that it is the capstone to a lengthy program? For the mid-career investigator do we assume this is only one of the many parts that will eventually form a large body of work?

Or is it all good if this is a singleton? One grant, awarded for 5 years and that is all.

The interesting thing is that nobody on the Twitts thought that I meant this. The answers went to various places- funding from non-NIH sources, relatively inexpensive research that didn't actually require an R01 to be vibrant, the idea of a single R01 that was continued beyond a mere 5 year interval. Many people assumed that what I was really talking about was assessing the merits and qualities of the PI.

After I got done kicking myself for not asking the question properly, a simple thought struck me.

Perhaps the very fact that people assumed I meant just about anything other than a single 5 year award, period, for a given PI was my answer. We do tend to expect that an R01 award fits into a larger research program. It does not stand alone as a single project.

20 responses so far

On resubmitting unfunded A1 NIH grant applications

Apr 08 2014 Published by under Grant Review, Grantsmanship, NIH, NIH funding

Well, well, well.

The NIH limited applicants to a single revision ("amendment", hence -01A1 version) of an unfunded "new" grant submission (the -01 version, sometimes called "A0") in 2009.

This followed the action in 1997 to limit revisions to two (see RockTalk chart), which hurt PIs like Croce and Perrin. (Six revisions? Wow, that is some serious persistence, guys; my hat is off.)

I wasn't really paying attention to such matters in 1997 but there was some screaming in 2009, let me tell you.
Delusional Biomedical Researchers Seek Repeal Of Arithmetic
More on the new NIH policy on grant application revisions

Initial outcome of limiting NIH apps to a single revision?


NIH re-evaluating ‘two strikes’ rule – Updated

Crocodile tears from experienced NIH investigators over the discontinued A2 revision

I don't know how many people actually got stuck in the filter for submitting an A0 that was too similar to their prior, unfunded A1. I heard of a few, so it did happen. On the flip side of that, I've sure as heck been putting in more than two versions of a proposal designed to fund the same area of interest in my laboratory. I have not yet been flagged for it. My initial reaction, that any PI who has an ounce of creativity ought to be able to come up with a credible alternative take on their project, is still my current take.

Nevertheless, rumor has it that changes are in the wind.

Pinko Punko made an interesting comment on the blog:

DM, I heard the craziest thing today- the possibility of removing the "substantial revision" criterion for new A0 related to previous A1. Supposedly announcement soon- I was kind of surprised.

This was news to me but I have heard things from about five independent sources in the past few days that tend to confirm that changes are being considered.

The most consistent rumor is that new grants will no longer be checked for similarity to prior unfunded proposals. They will be called new grants, but in name only. In all ways I can see, this is going to be a return to the days prior to 1997, when you could just endlessly revise until the study section relented.

The supposed benefit of reduced "time to award from original proposal" is now going totally by the wayside. I mean, the NIH will still be able to lie and say "look it was an A0!" if they want to but this is even less credible.

More dangerously, the will of study sections to endlessly queue applications will be released from whatever tepid effect the A1 limit has produced.

This is a very BadThing.

__
whoa. I found three A7 projects. All three are competing continuations. I can't EVEN....five and six year apparent funding gaps for two of them. For the other, I can't work out why there is no apparent gap in funding.

31 responses so far

The NIH Grant "Have" States Resist Sharing

Apr 04 2014 Published by under NIH, NIH Budgets and Economics, NIH funding

From the Boston Globe (of course):

Two dozen rural states stretching from Maine to Mississippi and Montana are clamoring to increase their share of federal research dollars now disproportionately awarded to Boston-area institutions and scientists.

Whaddaya mean, "disproportionately"? WE DESERVE IT!!!

“There’s a battle between merit and egalitarianism,” said Dr. David Page, director of the Whitehead Institute, a prestigious research institution in Cambridge affiliated with MIT.

Yeah, pure merit versus affirmative action quotas for lame ass science from Universities we've never heard of maaaang. There couldn't possibly be any bias in grant review and award that puts a finger on the scale could there?

In one of the efforts, Senator Susan Collins, a Maine Republican on the Appropriations Committee, is proposing that funding for the special program to benefit rural states, formally called the NIH’s Institutional Development Award, be raised to $310 million, up from the current $273 million. The current amount equals just 1 percent of the institute’s research grants — a drop in the bucket compared with what Boston researchers win each year.

Last time I checked Massachusetts Congressional District 8 for NIH funding (probably a number of FY ago), Brigham and Women's Hospital was pulling in $253,333,482 in NIH grants. MIT? $172,184,305. Harvard Medical School? $168,648,847. The list goes on in this single Congressional district.

and while the Globe has this scare passage near the top:

The coalition of states that benefits from the NIH special program for rural states doubled the amount of money it spent on lobbying in the last decade, to $590,000 in 2013 from $300,000 in 2003. That number does not include direct lobbying by universities in those states.

this is going to barely manage to tread water against the combined might of the richest of "have" Universities and institutions:

Representative Michael Capuano, whose district encompassing the Boston-area research hospitals wins more NIH money than any other congressional district, said the Massachusetts delegation is playing defense right now.

“The system works reasonably well but it’s under attack in a serious way,” Capuano said.

Massachusetts is mobilizing. Hospital executives, university presidents, and Washington lobbyists make routine trips to the Capitol. Their not-so-subtle message: Boston is on top because its elite institutions offer the best chances of big scientific breakthroughs.

then there is classic misdirection and the usual conceit that the NIH award process is purely about merit, uncontaminated by self-reinforcing vicious cycles of the rich getting richer.

“There are people in Boston who deserve more than a million dollars in NIH money because that is the best use of those dollars,” said Dr. Barrett Rollins, chief scientific officer at Dana-Farber Cancer Institute, a top recipient of federal research funds. “Congress has a responsibility to spend taxpayer money in the best possible way, and to me, the most straightforward way to do that is to make sure the dollars are invested in the most meritorious work without regard to geographic distribution.”

Because the quality of science is not evenly distributed across the country, researchers should not expect federal dollars to be either, said Harry Orf, senior vice president for research at Massachusetts General Hospital, another top recipient of NIH grants.

“You have congressmen who can’t evaluate science sending money to places not rated for innovation,” Orf said. “As funds get more and more scarce, you want to make sure you’re betting on the best science.”

It is beyond asinine to pretend that NIH grant money is distributed by geographic affirmative action to any extent that squeezes the elite coastal research institutions. The above numbers, and any current search on RePORTER, verify that the kind of money being proposed for this geographical affirmative action is a drop in the bucket. One or two of the larger institutions funded by the NIH (and keep in mind that a place such as "Harvard" is made up of multiple institutions which are named as independent awardees in the NIH records) account for the entire outlay of the NIH's Institutional Development Award program. Even if the increase to $310M goes through.
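To put the scale in perspective, a quick back-of-the-envelope comparison of the district figures quoted above against the proposed IDeA outlay. The award amounts and the $310M figure come from the post itself; the calculation is just illustrative arithmetic:

```python
# Figures quoted earlier in the post for a single Boston-area
# Congressional district (one fiscal year, per the post's RePORTER check).
district_awards = {
    "Brigham and Women's Hospital": 253_333_482,
    "MIT": 172_184_305,
    "Harvard Medical School": 168_648_847,
}

# Proposed national outlay for the IDeA program.
idea_program_total = 310_000_000

# The single largest awardee in the district covers most of the whole
# proposed national IDeA budget on its own...
largest = max(district_awards.values())
print(round(largest / idea_program_total, 2))  # → 0.82

# ...and just these three institutions together nearly double it.
print(round(sum(district_awards.values()) / idea_program_total, 2))  # → 1.92
```

In other words, three awardees in one district outspend the entire proposed geographic set-aside for the whole country.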

There is considerable debate about "the best science" and about the best way to hedge our scientific bets. The NIH works, haltingly, in a way by which the serendipity of chance discovery from a diversity of approaches is balanced against predictable brute-force progress from exceptionally well funded Universities, Medical Schools and research institutions. I find myself citing papers from the very biggest institutions, sure, but I have numerous critical findings that I cite in my work that have come from smaller research programs in smaller Universities and (gasp) Colleges. Don't you? If you do not, I question your scholarship. Seriously.

I suggest a purely self-interested goal, for those of you who are elite-coastal-University die hards. Every Congress Critter gets a more or less equal vote. The ones from Maine (Susan Collins, see above), from Alabama....

“It’s hard to compete against MIT or Harvard. . . . They’ve had their share. A lot of state colleges and universities all over the country, from Idaho to Maine, have some ideas too, and I think we should give these people from smaller schools in other states an opportunity,” said Senator Richard Shelby of Alabama, the top Republican on the powerful Senate Appropriations Committee. “It’s time to fix that.”

from West Virginia...

“The program stipulates that not everything goes to Harvard, Yale, and Stanford,” said Senator Jay Rockefeller, a West Virginia Democrat.

and from Oklahoma, among others.

Representative Tom Cole, a Republican from Oklahoma who serves on the House Appropriations Committee, said he’s simply interested in supporting research that occurs “outside the normal corridors of power.”

Rep Cole seems to understand why geographical affirmative action is necessary, doesn't he?


“There is a network where you tend to reward peers and people you know, and I think the distribution of funds, not intentionally, is skewed a bit toward places like Boston,” Cole said. “We just want to make sure that the playing field is fair.”

We need all these Critters to be on board if we expect Congress to listen to our pleas on behalf of the NIH.

It is politically stupid to fail to understand this.


Ask DrugMonkey: How do we focus the reviewer on 'Innovation'?

Mar 18 2014 Published by under Fixing the NIH, Grant Review, NIH, NIH funding

As you are aware, Dear Reader, despite attempts by the NIH to focus the grant reviewer on the "Innovation" criterion, the available data show that the overall Impact score for a NIH Grant application correlates best with Significance and Approach.

Jeremy Berg first posted data from NIGMS showing that Innovation was a distant third behind Significance and Approach. See Berg's blogposts for the correlations with NIGMS grants alone and a followup post on NIH-wide data broken out for each IC. The latter emphasized how Approach is much more of a driver than any other of the criterion scores.
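Berg's comparison is straightforward to reproduce in spirit: across a set of applications, correlate each criterion score with the Overall Impact score. Here is a minimal sketch with invented scores (the numbers below are illustrative only, not NIGMS or NIH data):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical applications: (Significance, Innovation, Approach, Impact),
# criterion scores on the 1-9 scale, Impact on the 10-90 scale.
apps = [
    (2, 3, 2, 20), (1, 2, 1, 12), (4, 2, 5, 45),
    (3, 4, 3, 30), (2, 5, 2, 22), (5, 3, 6, 55),
]
impact = [a[3] for a in apps]
for i, name in enumerate(["Significance", "Innovation", "Approach"]):
    crit = [a[i] for a in apps]
    print(name, round(pearson(crit, impact), 2))
```

With these made-up numbers, Significance and Approach track Impact closely while Innovation barely moves it, which is the shape of the pattern Berg reported.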

This brings me to a query recently directed to the blog which wanted to know if the commentariat here had any brilliant ideas on how to effectively focus reviewer attention on the Innovation criterion.

There is a discussion to be had about novel approaches supporting innovative research. I can see that the Overall Impact score is correlated better with the Approach and not very well with the Innovation criterion score. This is the case even for funding mechanisms which are supposed to be targeting innovative research, including specific RFAs (i.e., not only the R21).

On one side, this is understandable because of reviewers' concerns over the high risk associated with innovative research and the lack of solid preliminary data. On the other side, risk is the very nature of innovative research, and the application should not be criticized heavily for this supposed weakness. In my view, for innovative research, the overall score should correlate well with the Innovation score.

So, I am wondering whether the language for these existing review criteria should be revised, whether additional review criterion instructing reviewers to appropriately evaluate innovation should be added and how this might be accomplished. (N.b. heavily edited for anonymity and other reasons. Apologies to the original questioner for any inaccuracies this introduced -DM)

My take on NIH grant reviewer instruction is that the NIH should do a lot more of it, instead of issuing ill-considered platitudes and then wringing their hands about a lack of results. My experience suggests that reviewers are actually really good (on average) about trying to do a fair job of the task set in front of them. The variability and frustration that we see applicants express about significantly divergent reviews of their proposals reflect, I believe, differential reviewer interpretation about what the job is supposed to be. This is a direct reflection of the uncertainty of instruction, and the degree to which the instruction cannot possibly fit the task.

With respect to the first point, Significance is an excellent example. What is "Significant" to a given reviewer? Well, there is wide latitude.

Does the project address an important problem or a critical barrier to progress in the field? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

Well? What is the reviewer to do with this? Is the ultimate pizza combo of "all of the above" the best? Is the reviewer's pet "important problem" far more important than any sort of attempt to look at the field as a whole? For that matter, why should the field as a whole trump the Small Town Grocer interest...after all, the very diversity of research interests is what protects us from group-think harms. Is technical capability sufficient? Is health advance sufficient? Does the one trump the other? How the hell does anyone know what will prove to be a "critical" barrier and what will be a false summit?

To come back to my correspondent's question, I don't particularly want the NIH to get more focused on this criterion. I think any and all of the above CAN represent a highly significant aspect of a grant proposal. Reviewers (and applicants) should be allowed to wrangle over this. Perhaps even more important for today's topic, the Significance recommendations from NIH seem to me to capture almost everything that a peer scientist might be looking for as "Significance". It captures the natural distribution of what the extramural scientists feel is important in a grant proposal.

You may have noticed over the years that for me, "Significance" is the most important criterion. In particular, I would like to see Approach de-emphasized because I think this is the most kabuki-theatre-like aspect of review. (The short version is that I think nitpicking well-experienced* investigators' description of what they plan to do is useless in affecting the eventual conduct of the science.)

Where I might improve reviewer instruction in this area is in trying to get reviewers to be clear about which of these suggested aspects of Significance are being addressed, and then encouraging them to state more clearly why these sub-criteria should, or should not, be viewed as strengths.

With respect to the second point raised by the correspondent, the Innovation criterion is a clear problem. One NIH site says this about the judgment of Innovation:

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

The trouble is not a lack of reviewer instruction, however. The fact is that many of us extramural scientists simply do not buy into the idea that every valuable NIH Grant application has to be innovative. Nor do we think that mere Innovation (as reflected in the above questions) is the most important thing. This makes Innovation a different kind of problem: it sits co-equal with criteria whose very status as major criteria is not in debate.

I think a recognition of this disconnect would go a long way to addressing the NIH's apparent goal of increasing innovation. The most effective thing that they could do, in my view, is to remove Innovation as one of the five general review criteria. This move could then be coupled to increased emphasis on FOA criteria and an issuance of Program Announcements and RFAs that were highly targeted to Innovation.

For an SEP convened in response to an RFA or PAR that emphasizes innovation....well, this should be relatively easy. The SRO simply needs to hammer relentlessly on the idea that the panel should prioritize Innovation as defined by...whatever. Use the existing verbiage quoted above, change it around a little....doesn't really matter.

As I said above, I believe that reviewers are indeed capable of setting aside their own derived criteria** and using the criteria they are given. NIH just has to be willing to give very specific guidance. If the SRO or Chair of a study section makes it clear that Innovation is to be prioritized over Approach, then it is easy during discussion to hammer down an "Approach" fan. Sure, it will not be perfect. But it would help a lot. I predict.

I'll leave you with the key question though. If you were to try to get reviewers to focus on Innovation, how would you accomplish this goal?

___
*Asst Professor and above. By the time someone lands a professorial job in biomedicine they know how to conduct a dang research project. Furthermore, most of the objections to Approach in grant review are the proper province of manuscript review.

**When it comes to training a reviewer how to behave on study section, the first point of attack is the way that s/he has perceived the treatment of their own grant applications in the past***. The second bit of training is the first round or two of study section service. Every section has a cultural tone. It can even be explicit during discussion such as "Well, yes it is Significant and Innovative but we would never give a good score to such a crappy Approach section". A comment like that makes it pretty clear to a new-ish reviewer on the panel that everything takes a back seat to Approach. Another panel might be positively obsessed with Innovation and care very little for the point-by-point detailing of experimental hypotheses and interpretations of various predicted outcomes.

***It is my belief that this is a significant root cause of "All those Assistant Professors on study section don't know how to review! They are too nitpicky! They do not respect my awesome track record! What do you mean they question my productivity because I list three grants on each paper?" complaining.

