"Oh, we know nobody is actually going to buy anything. We're just here to make connections with postdocs for down-the-road."
Archive for the 'NIH Budgets and Economics' category
Can be found here.
Lots of cheerleading and nice talk. Can they back it up in the actual budget fight?
Later on DrugMonkey, we will be discussing this.
So this linked set of slides describes an analysis of the DP1 award mechanism. Said DP1 was created to "address concerns that high risk, visionary research was not being supported due to the conservative nature of existing NIH funding mechanisms". So, instead of, gee, I dunno, FIXING this problem, they did what they always do and created a new mechanism.
It was supposed to reinvent review:
Based on the premise that “Person Based” application and review processes would reward past creativity and encourage innovators to go in new directions
The DP1 was open to all career stages and required only a 5-page essay describing how awesome and visionary you are. There was no requirement to submit a budget and the awards were for $500K direct costs for 5 years.
This analysis compares the 2004-2006 DP1 awardees with: 1) matched R01s (matched on PI stage and background, topic, local institution and time frame; combined budget was 50% of the DP1 Pioneers'), 2) random all-NIH portfolios of the same total costs (but not matched on other PI characteristics), 3) HHMI investigators from the 2005 competition ($600K direct costs; reappointment rate is 80% after 5 yrs, and HHMI appointments average 15 yrs total duration) and 4) the DP1 finalists who weren't selected.
As you will see from the slides, there is a tremendous degree of overlap in the distributions of the outcome measures they are using. Tremendous. So mean differences need to be taken with a huge dose of "yeah, but all is never held completely equal". Still:
1) DP1s produce the same number of publications per dollar as matched R01s, in higher Scimago-ranked journals, and the awards have a higher h-index rating (DM: interesting to give a grant award an h-index, isn't it?). "Experts assess DP1 research as having more impact and innovation".
Yeah. Big whoop. You select a group for innovation and awesomeness, take them off the cycle of grant churning by handing them a 2-3-grant-sized award all at once (for the cost of a 5-page essay and a Biosketch, plus some big swanging reference letters) and they look incrementally better. See the overlap: they aren't awesomely better. Compared with labs unselected, fighting the regular grant wars and with half the money. Color me severely underimpressed by this analysis of the DP1 program. All it does is tell us to give more people the same damn deal. One might even suggest this deal approximates the one that many of our older colleagues had in effect for much of their careers, when success rates for competing continuations from established investigators were north of 45%.
2) Now, when you match these Pioneers on R01 direct costs, there is no difference in pubs or cites. Impact factor is higher for the Pioneers but the h-index doesn't differ. Experts assess Pioneers as higher impact and innovation.
These matched direct-costs PIs were in lower-ranked institutions (by some margin) and were longer past their terminal degrees. Since there was no matching on topic, institution or background characteristics, I'm going to suggest that the h-index really comes to the fore in this analysis. You simply cannot compare glamour-chasing molecular eleventy labs with, say, clinical operations. There are too many differences in citation practices, GlamourHumping and what is conventionally viewed as "high impact". The h-index gives us a better approximation of real impact. So meh on this analysis too.
3) HHMI folks published more papers and had more citations, but not if one accounted for the direct-cost differential. HHMI folks published in higher impact journals but the h-index was the same. Experts assess the impact and innovation as the same. Interestingly (to me) the HHMI folks were closer to their terminal degrees than the mean of the other groups. The spread was tighter too; this is by design of the HHMI, but I guess I was a little surprised the DP1 time-since-degree was so diverse. Institutional rankings did not differ between HHMI and DP1.
4) The only data reported for the loser-finalists were the Institution rankings and the time-since-degree, neither of which differed from the successful DP1 awardees.
Very frustrating, this one! An absolutely critical comparison group in my view. How did these folks do? Did they get other funding? Did they suck in terms of productivity? Did they get their money anyway and compete successfully?
The overall conclusion slide nails it in the first bullet point. We really don't need to go any further than:
It appears that higher funding leads to higher portfolio‐level impact.
Look, I'm not saying that other factors don't contribute. But this is an "all else held equal" analysis. Or an attempt at one. If you match the PIs on their approximate fields, type of work, background, local institutions, etc., and give them the same amount of research support, they do the same. Even the much-vaunted HHMI sinecure funding (approximately one R21, or half an R01, greater in value per year vs the DP1), which can be expected to last for 15 years of programmatic support, doesn't make a radical difference! Note that the DP1 doesn't come with any such guarantee of longer-term funding for the PI. S/he knows full well that they are right back in the hunt 2-3 years down the road. So I guess it is worth hammering the final bullet point:
DP1 vs HHMI: likely not attributable to flexibility of research, or riskiness of ideas, but may be due to funding level and stability, differences in PIs, or differences in areas of science.
What a disaster.
Bunch of reinforcement that this is all about a bunch of senior dudes (mostly male dewds too) in neuron-recording neuroscience who used to make out like bandits from NIMH support. Now that we've undergone a long slide in funding levels and Insel's push to translational-ize the NIMH portfolio has gained the upper hand...these folks are struggling to get grants. JUST. LIKE. THE. REST. OF. US.
and they can't come up with anything amazing by themselves so they need $100M cash money to build some new recording tools to....you guessed it, record some more neurons.
Outside of the regular grant process because they find it hard to compete these days. JUST. LIKE. THE. REST. OF. US.
I have a proposal. Let's throw down, what, maybe $1M to record symposia and meetings of these people for the next year. Maybe have a few more of these summits. And after all that, if they've come up with something that is ACTUALLY NEW AND INTERESTING, then and only then do we give them the $99M.
He wants to hear about the impact of the sequester on your NIH funded research. Follow the #NIHSequesterImpact tag on the Twitters.
Personally, my biggest takeaway is that I should have ignored all the doom and gloom about R21s last year and submitted a few of them.
NIAID is one of the NIH ICs that actually publishes a payline. According to their website, as of April 19 the R01s from experienced investigators will have a payline at the 8th percentile. The payline for new investigators will be the 12th percentile. By way of comparison, these were the 10th and 14th percentiles in the prior two Fiscal Years at NIAID.
Mechanisms such as the R03, R21 and R15 will have to get an overall impact score of 20, or better, to be funded, but these are still listed as "interim" criteria.
So on a statistical basis, you would need to have put in 13 proposals to NIAID this year in order to have a fighting chance of getting one.
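The "13 proposals" figure is just the reciprocal of the payline. A minimal sketch of that arithmetic, under the (unrealistic) simplifying assumption that each application is an independent draw funded at the payline rate:

```python
import math

# Back-of-the-envelope arithmetic behind the "13 proposals" figure,
# assuming each application is an independent draw at the payline rate.

def proposals_for_expected_award(payline):
    """Smallest n such that the expected number of awards, n * payline,
    reaches 1 -- i.e. the reciprocal of the payline, rounded up."""
    return math.ceil(1 / payline)

def p_at_least_one(payline, n):
    """Probability of at least one award from n independent applications."""
    return 1 - (1 - payline) ** n

n = proposals_for_expected_award(0.08)    # 8th-percentile payline
print(n)                                  # 13
print(round(p_at_least_one(0.08, n), 2))  # about 0.66
```

Note that even at 13 submissions, the chance of coming up completely empty is about one in three; that's the "fighting chance" part.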
While we should all continue to explore and discuss questions about the scientific direction, it is important that our community be perceived as positive about the incredible opportunity represented in the President’s announcement. If we are perceived as unreasonably negative or critical about initial details, we risk smothering the initiative before it gets started.
This is the kind of thought enforcement that should send academics and scientists round the bend, and graduate student Justin Kiggins of UCSD has offered up an excellent rejoinder, which reads in part:
To summarize your request, you think that we should disagree only in “our scientific communications channels” while ensuring that, to the taxpayers who will be funding this initiative, “our community be perceived as positive” about it. Not only do I find it offensive and patronizing that you would ask us to be disingenuous to the very public which supports our efforts, but I think that your request is short-sighted and undermines the work of neuroscientists who seek to cultivate a public that is informed and literate in matters of the brain.
The debate has already begun in the public sphere, whether you like it or not. And the public is looking to neuroscientists to make sense of the vague official announcements that have happened thus far. Will we actually fix Alzheimer’s in five years? Will we record from every neuron in the human brain? Why do we want to do this? Without our informed input to the debate, “we risk smothering the initiative before it gets started” due to bad reporting. While you ask us to stick to “our” channels of scientific discourse, like the paywalled journals and exclusive conferences that the public cannot access, it was only 4 days after the New York Times story broke that this gem of fear-mongering claimed that the Brain Initiative would allow Barack Obama to read people’s minds. If we don’t talk about the Brain Initiative, bad reporters will. And if bad reporters talk about the Brain Initiative, we risk creating a public which is fearful of the very work that we do.
Now, I didn't read the part about official communications channels quite in the same way, although I don't know what Swanson intended when he wrote:
SfN encourages healthy debate and rigorous dialogue about the effort’s scientific directions. Testing of assumptions, methodological debate, and constructive competition are central to scientific progress. I urge you to bring all this to the table through our scientific communications channels and venues, including the SfN annual meeting in San Diego this fall and The Journal of Neuroscience.
I'm going to choose to read the "our" as "anything available to the membership" as opposed to "SfN's". And my blog is my primary venue for discussing matters of my professional life. So "Cheers", Executive Committee! Bravo for encouraging us, the membership of the Society for Neuroscience, to engage in healthy debate and rigorous dialogue.
First up, what is the NIH's skin in this particular game? All the news media report this as a $100M effort. The NIH site on the BRAIN Initiative provides a partial clue.
In total, NIH intends to allocate $40 million in FY14. Given the cross-cutting nature of this project, the NIH Blueprint for Neuroscience Research—an initiative spanning 14 NIH Institutes and Centers—will be the leading NIH contributor to its implementation in FY14.
There's some blah-blah there about DARPA and NSF, so presumably some other outlay will be going in their direction (UPDATE: the infographic from Obama's White House says $50M to DARPA and $20M to NSF... so they need some math lessons). It remains unclear to me (perhaps a Reader knows?) whether these agencies will be making up the rest of the $60M for FY2014; let's assume that for now.
$40M for the NIH Brain-related Institutes to divvy up, to be administered by the Blueprint, which has been in operation since 2004 and has produced this sort of outcome.
Blueprint Grand Challenges
- The Human Connectome Project
- The Grand Challenge on Pain
- The Blueprint Neurotherapeutics Network
- Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC)
- Neuroscience Information Framework (NIF)
- Blueprint Resources Antibodies Initiative for Neurodevelopment (BRAINdev)
- NIH Toolbox for Assessment of Neurological and Behavioral Function
- Cre Driver Network
- Gene Expression Nervous System Atlas (GENSAT)
- Blueprint Non-Human Primate Brain Atlas
- Blueprint Training Programs
- Blueprint Science Education Awards
So you can see that the BRAIN Initiative is really only $40M for more of the same. Right? Back to the NIH site on the BRAIN Initiative.
Despite the many advances in neuroscience in recent years, the underlying causes of most neurological and psychiatric conditions remain largely unknown, due to the vast complexity of the human brain. If we are ever to develop effective ways of helping people suffering from these devastating conditions, researchers will first need a more complete arsenal of tools and information for understanding how the brain functions both in health and disease.
"A more complete arsenal of tools and information" is the operating concept here. Just like has already been produced.....
We have witnessed the sequencing of the human genome, the development of new tools for mapping neuronal connections, the increasing resolution of imaging technologies, and the explosion of nanoscience. These discoveries have yielded unprecedented opportunities for integration across scientific fields. For instance, by combining advanced genetic and optical techniques, scientists can now use pulses of light in animal models to determine how specific cell activities within the brain affect behavior. What’s more, through the integration of neuroscience and physics, researchers can now use high-resolution imaging technologies to observe how the brain is structurally and functionally connected in living humans.
Very true. Some of it funded by the Roadmap, no doubt. But read this history of the development of optogenetics, one of the hottest tools going at the moment. It is a classic weaving-together of scientific information and techniques developed by many labs over an extended period of time. Not, I will note, by labs that set out to make optogenetics work. Different parts of the puzzle came together, yes, eventually in an interval of single-minded focus. In laboratories that were very well funded in the absence of any particular grants to develop optogenetics. This particular story is merely the latest in a long line of major innovations that were cobbled together around the edges of existing (robust) funding. The common denominator is well-funded laboratories that managed to use the NIH project-based funding system to sustain what is in essence a de facto program-based funding reality.
And this passage from the NIH site has a really embarrassing confession of the bait-and-switch of basic science's interaction with the people who control the purse strings, right? The TIME IS NOW! Yes, we've done all this AWESOME stuff with your money but it isn't ENOUGH! We need MORE money to develop more AWESOME TOOLS (a veritable arsenal) and then we promise we'll solve
Neurological and psychiatric disorders, such as Alzheimer’s disease, Parkinson’s disease, autism, epilepsy, schizophrenia, depression, and traumatic brain injury, [which] exact a tremendous toll on individuals, families, and society.
The Head of the NIH OER, Sally Rockey, posted another set of data on the extramural research population, this time focused on applicant institutions, aka Universities, Med Schools, Research Institutes, etc.
my staff and I took a look at the number of institutions that submitted competing research project grant (RPG) applications each fiscal year, going back to 1995. In addition to looking at all RPGs, we also looked at data for R01s only.
At least with respect to R01s, it would seem to argue against the "a bunch of middling non-research-intensive institutions jumped on the extramural bandwagon during the doubling" theory that's occasionally been floated here.
I don't agree that these data "argue against" at all. Not in the least. Unique Research Project Grant applicant* institutions went up 80%; if you limit the analysis only to R01s, 40%. This was the maximum effect of the doubling, and the numbers have subsequently subsided from the peak. Still, the most pertinent observation is that RPG-seeking institutions remain 50% more numerous than they were in the late 1990s. As we've previously discussed, the unrelenting pace of inflation has resulted in an effective un-doubling, putting the NIH budget back on the trendline established in the decades prior to the doubling interval (and again, inflation means it never really doubled; 50% more purchasing power at best). That un-doubling analysis is a bit old (2008), so we could be in quite a bit worse shape right now, following a few more years and the sequester.
Any way you look at it, it seems a significant increase in competition from the *institutional* perspective to me. There are half again as many institutions fighting over what is very likely less than 150% of the purchasing power of the late-1990s budgets.
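The un-doubling point is easy to check with round numbers. In this sketch, the budget figures (roughly $13.6B in FY1998 doubling to roughly $27.2B by FY2003) and the ~3.5%/yr BRDPI-style biomedical inflation rate are my approximations for illustration, not numbers pulled from the linked analysis:

```python
# Rough un-doubling arithmetic. Budget figures (~$13.6B FY1998,
# ~$27.2B FY2003) and the ~3.5%/yr inflation rate are illustrative
# assumptions, not figures from the linked analysis.

def real_ratio(start_nominal, end_nominal, annual_inflation, years):
    """End-year budget in start-year dollars, as a multiple of the
    start-year budget."""
    deflator = (1 + annual_inflation) ** years
    return (end_nominal / deflator) / start_nominal

# The nominal "doubling" over FY1998-FY2003, deflated to FY1998 dollars:
print(round(real_ratio(13.6, 27.2, 0.035, 5), 2))   # about 1.68x, not 2x

# Hold the budget nearly flat in nominal terms for another five years
# and the real gain erodes toward 1.5x:
print(round(real_ratio(13.6, 29.0, 0.035, 10), 2))  # about 1.51x
```

Which is the "50% more purchasing power at best" arithmetic in miniature: a nominal doubling never survives compounding biomedical inflation.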
Is there anyone out there who believes that the institutions already seeking NIH funding in the late 1990s have since shrunk the number of PIs they each have applying?
*Awardee RPG institutions went from 600 to 800 during the doubling. R01 awardee institutions went from about 450 to 550. A 33% increase versus a 22% increase. Not much better than the applicant-institution numbers. I argue that the applicant-institution number is more relevant to the low paylines, increased grant churning and overall dismality of the NIH situation at present.
Select your favorite ICs of interest in the Agency/Institute/Center field.
Enter %R56% in the Project Number field.
Click on various grants and hit the History tab
Grind teeth in impotent rage.
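The click-path above can also be scripted against the NIH RePORTER web API instead of the search form. A sketch; the endpoint and criteria field names here are my assumptions based on the current v2 service, so verify them against the API docs before relying on this:

```python
import json

def r56_query(ic_codes, fiscal_years, limit=50):
    """Build a RePORTER v2 search payload for R56 (bridge) awards at the
    given ICs. Field names assumed from the v2 API documentation."""
    return {
        "criteria": {
            "activity_codes": ["R56"],
            "agencies": ic_codes,        # e.g. ["NIAID"]
            "fiscal_years": fiscal_years,
        },
        "limit": limit,
    }

payload = r56_query(["NIAID"], [2013])
print(json.dumps(payload, indent=2))
# POST the payload to https://api.reporter.nih.gov/v2/projects/search
# and walk each hit's award history for the teeth-grinding step.
```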