Managers at the National Institutes of Health are increasingly ignoring the advice of scientific review panels and giving hundreds of millions of dollars a year to scientists whose projects are deemed less scientifically worthy than those denied money.
The article gets a little better. It goes on to detail the NIH's defense against the charge (see the writedit link below), which boils down to "we're saving the new investigators". But it also continues in the skeptical tone, implying that something is ... wrong about Program re-shuffling the order of initial review when funding grants.
There is nothing wrong with this per se, and in fact it is a good thing to have a multi-layered decision process.
I have commented before on the role of NIH Program Staff in making funding decisions. See the repost here, because there is some foofraw about the differences between "payline" and "success rate" that plagued my original formulation. The problem is that study sections are not perfect and are subject to certain biases (entirely unintentional in many cases). This is the deal when human decision making is involved. So I endorse multi-layered decision processes.
After defending the general structure of NIH's Program interests:
Programmatic priorities dictate something other than the "best possible science" gets funded all the time. An Institute may decide that any of a whole host of issues are underrepresented in their portfolio for various reasons both internally scientific (e.g., Council recommendations, meeting or symposium discussions (Program attends meetings!), influential reviews, etc) or external (e.g., big media splash on some issue, Congressional "interest" via inquiry, Congressional mandate, etc). The Institute may decide that their portfolio is underrepresented with PIs of various gender, ethnic and geographic descriptions, under/overrepresented with grant mechanisms, New Investigators, etc. The Institute may decide that they "have an investment" in a given research program or resource and choose to keep it running. This really outrages people who fall just off the funding line and don't get their applications "picked up", as you can imagine.
Grow up. This is why the Institutes exist. The notion of pure investigator-initiated science is a good one but, much like "democracy", it can't be carried to the extreme. Scientists, and the scientific enterprise, exhibit a well-discussed conservatism in many ways; see the Nature editorial about Nobel-Prize-destined work being passed over. This is unsurprising given that we are human. We have a tendency to understand best the scientific models and domains that relate to our own work. We have a tendency to stick to these models and domains, particularly as our scientific careers mature. This is natural. But it means that funding science according to the priorities of those doing the science leads to a suppression of innovation and novelty. Not to mention health domain coverage, the interest of the National Institutes of Health.
I concluded with a criticism:
With that said, there is a problem with Program's behavior in that it is almost perfectly opaque. There is very little way to determine how many grants have been "picked up" at all. Imprecision in the budgeting/prediction/score-outcome process means that the number of grants funded in perfect line with the priority scores can vary due to unexpectedly low numbers of high-scoring grants per round (percentiling is across three rounds), high-scoring grants that meet Program priorities, etc. In any case, Program is very loath to explain their "pick up" reasoning in specific terms, no doubt hoping to avoid lengthy debates, Congressional inquiry and even lawsuits from someone who didn't get funded. On balance this seems silly. If Program is going to assert a priority, do so honestly and forthrightly. Just say: we picked up X number of women PIs and Y number of New Investigators and Z grants between the Mississippi and the Sierra Nevada! And then explain why. If the reason is good enough to use, it is good enough to defend, no?
Well, later events provided a defense of the Program interest in funding New Investigator grants. I'm not a big fan of the "affirmative action" language but it is apt: New Investigators were getting screwed by study section and the NIH finally got serious about redressing the bias against New Investigators. In the words of the prior NIH head, the Great Zerhouni:
[program directors] came on board when NIH noticed a change in behavior by peer reviewers. Told about the quotas, study sections began "punishing the young investigators with bad scores," says Zerhouni. That is, a previous slight gap in review scores for new grant applications from first-time and seasoned investigators widened in 2007 and 2008, Berg says. It revealed a bias against new investigators, Zerhouni says.
The same argument is the NIH's defense against the current foofraw laid out in the NYT article. Fortunately, the incomparable writedit already snooped out the background on this story and wrote a post on a Government Accountability Office (GAO) probe [pdf; go read] of the NIH's behavior vis-à-vis funding "exceptions" to the priority score order.
Looking at the NIH's response to the GAO appended at the bottom of the report, I was struck by how ineffectual* their graph was at making their point. So I re-plotted the data. I think this is much more intuitive in making their point that the majority of the 2007 OMG-WTF effect on the total number of out-of-order funded grants comes from New Investigator pickups. Which were a response, I will note, to a decrease in New Investigator success at the point of primary review; see the first graph in the post.
I buy this defense of the charge that FY 2007 was somehow a huge increase in the number of grants pulled out of line for funding. But the analysis from the GAO seems to ignore the dramatically changed funding (and therefore "payline") climate. There are more issues here to discuss. Many. And such issues would go a long way towards 'splaining whether the tone of the critique is warranted.
Still, I am a little suspicious of the NIH's position that they do not know, and do not care to know, more about their systematic processes and results for picking up apps out of line. That sort of thing, intentional ignorance of the actual function of your enterprise, is not cool. Maybe it's the data geek in me, but if you had all these data about how the units under your managerial responsibility perform, wouldn't you want to know?
*[added] It would also be of interest to mention these data showing that the NIH funds about 9,500-10,000 Research Project Grants (of which around 6,000 are R01s) every year. Not entirely sure we're talking the same population (R01s versus RPGs versus R01-equivalent) as the denominator for the above described exception data so it would be nice to have the workup in the same place. What fraction of RPG and/or R01s are within payline, how many are skipped and how many are exceptions?
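The workup I'm asking for is simple arithmetic once the denominator is pinned down. A minimal sketch of what it would look like; note that only the RPG and R01 totals come from the figures above, while the exception and skip counts below are hypothetical placeholders, not numbers from the GAO report:

```python
# Back-of-the-envelope: what share of NIH Research Project Grants (RPGs)
# are funded as out-of-order "exceptions", and what share of within-payline
# applications are skipped? The RPG/R01 totals come from the post; the
# exception and skip counts are HYPOTHETICAL placeholders for illustration.
total_rpgs = 9750    # midpoint of the ~9,500-10,000 RPGs funded per year
total_r01s = 6000    # approximate R01 count from the post
exceptions = 500     # hypothetical number of out-of-order pickups
skipped = 150        # hypothetical number of within-payline apps skipped

exception_rate = exceptions / total_rpgs
skip_rate = skipped / total_rpgs
print(f"exception rate: {exception_rate:.1%}")   # ~5.1% under these assumptions
print(f"skip rate: {skip_rate:.1%}")             # ~1.5% under these assumptions
```

Whether the right denominator is all RPGs, all R01s, or "R01-equivalents" is exactly the ambiguity flagged above, which is why the workup belongs in one place alongside the exception data.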
UPDATE: The director of NIGMS came by to point out that his Institute goes ahead and publishes their distribution of funded R01s by percentile rank of the initial priority score. I am so enthused about this! These are exactly the sort of workups on the funding behavior that would form the basis of the type of oversight the GAO report is demanding. I don't see where there is any harm and it would really focus the auditing eye. We drones don't necessarily need to know but I'd think someone should be asking about those 3-5 grants being funded at ranks north of 35%ile. There may be very good reasons but they should be on the table.
FY 2008 data are below, click here for FY2007, FY2006 and FY2005.
Figure 1. NIGMS R01 applications reviewed (white rectangles) and funded (black bars) in Fiscal Year 2008. All competing applications eligible for funding are included. (source)