I've noticed something. It is now a standard comment from any BSD getting an award. It runs something like this.
"The NIH rejected one of my proposals once so it is all flawed and fucked!"
Try to have some class, people.
oh, did I blow the lede on that title?
A letter to the blog points to the NIGMS FeedbackLoop blog and an entry from Jon Lorsch on their "Large-Scale Research Initiatives and Centers". One of these, the Protein Structure Initiative, is being readied for decommissioning.
At last week’s National Advisory General Medical Sciences Council meeting, Council members and staff discussed the future of one existing large-scale program, the Protein Structure Initiative (PSI). The Council heard the results of a midpoint evaluation of the PSI’s third 5-year phase, PSI:Biology. The evaluators found that PSI investigators have determined an impressive number of high-quality protein structures and that some of the program’s accomplishments, including methodological ones, could not have been readily achieved through R01-type investigator-initiated grants.
The evaluators concluded that the PSI will reach a point that no longer justifies set-aside funding and, as a result, strongly recommended that NIGMS begin planning the sunset of the PSI, being careful to identify resources developed by the initiative that should be retained for use by the biomedical community.
The blog entry details how the NIGMS only got into Big Mech science during the doubling and generally questions the value of massive projects focused on one topic or theme.
The obvious debate item for this and all ICs that throw money at BigMech science (Centers and Program Projects, assuredly; sometimes big cooperative agreement U-mechs and contracts as well) is that the more plebeian R01s, investigator-initiated and solicited alike, must be sacrificed to pay for the larger projects. The ICs have to balance the relative value of a Center or P01 Program Project against the loss of 3-4 individual R01s. Maybe even more of them, depending on size.
First, my disclaimer. I've had benefits from BigMech science for a good number of my months of support from the NIH as a PI. Sometimes the bennies amounted to less than an R03's worth of funding and sometimes it has been as large as R21 or small R01 money. I am also at the stage of my career where I not only can be a semi-credible substantial participant in someone's BigMech but I should be planning for a not-too-distant future in which I try to head one up. And by "should" I mean for the benefit of not just my own lab and career but in a more generational sense for the benefit of my scientific subfields of interest.
I occasionally wring my hands with the observation that as the Baby Boomer contingent ages, there are not enough of the following generation to take over the large-scale projects that we have at present. This is both because of broader demographic issues (I was born during a particularly low ebb in total US births) and because of the over-shadowing and squeezing of my generation of scientists out of academic jobs. So if the larger mechanisms of science funding are to continue, my tiny generation needs to step up.
There's a final knock-on problem. Because the few of us who managed to transition did so later, and with less assurance of continual funding, we are not ready. Our labs are less stable at this stage of the process. We have not been running them as long at a given age of life, either. We've had less time to reach a stage of comfort and (slight) boredom with our own thing that might motivate us to think of the larger picture. Why bother captaining a P01 application that will eventually fund ourselves and four of our colleagues with R01 money when we could just write two or three new projects for our own labs?
I see these pressures. And I see it as a challenge to my generation. Which means a challenge to me personally.
This all hinges on the assumption and stipulation that Big Mech Projects are a GoodThing.
I am ambivalent.
If you can get one, they are a very GoodThing. That is the tragedy of the Commons answer that governs our existence under the NIH-funded extramural system. I see Big Mechs as barely living up to their promise in the best case scenario. The promise of "the whole is bigger than the parts and SYNERGY". I've been around a BigMech or two and I'm not sure that I believe in the hype. As a general rule, I mean. What the BigMech does in my opinion is provide a very tasty carrot to pull together some investigators around one project. And let a subset (even just one) of those investigators herd the cats a little harder than would otherwise be possible. But there is nothing fancy about the unified project. A group of friendly, collaborating laboratories can do the same thing with individual R01 funding. The extra, add-on Cores that you get with a Center or Big Mech sound good in theory but in my experience may not be much in the way of value added.
And all the monthly Project Meetings and Advisory Board Lunches and other box-ticking stuff? It's about 10% about the science and 90% about making the next competing renewal go well. IME.
So there is some waste to balance out the theoretical synergistic advantages.
Of course, what is sacrificed is unknown. The Big Mech has to be unified under a common set of Aims or it is dead in the water. So obviously it is throwing a lot of cheddar at one particular topic or theoretical orientation. That might be wrong, or less valuable compared with the hypotheses generated by two or three other groups.
Those other groups will either fail to seek (because those Big Swaaanging GreyBeards have the Center) or fail to secure (ditto) funding for their own, more modest, R01 projects. We know this. What we can never know is who has the best approach. The funds have to be directed to one strategy or another.
I am a big believer in the democratic competition of ideas from all comers that underlies the NIH system of funding. Lots of smart people offer their take on the world and collectively we make sure to cover our bases. Great ideas can be recognized no matter where they come from. This is the superior way to get the best of the best ideas at the table.
I am not naive. I know it doesn't work exclusively this way in practice. But it is the ideal to which we should aspire.
Big Mech projects contradict this ideal. So I am reflexively against them.
Even if it would be in my present best interest to back an expansion of Big Mechs at the NIH ICs of my greatest interest.
I don't know about you, but as we near the end of the Fiscal Year I like to start looking at RePORTER and SILK to keep an eye on what is getting picked up for NIH funding. This is when the ICs have to allocate their total FY outlay, so any conservatism from the prior three rounds of regular funding policy gets adjusted with the leavings.
Look for big mega-mechs being picked up, Rs that clearly didn't make the regular paylines and most fascinatingly of all, the R56 handouts.
One of the little career games I hope you know about is to cite as many of your funding sources as possible for any given manuscript. This, btw, is one way that the haves and the rich of the science world keep their "fabulously productive" game rolling.
Grant reviewers may try to parse this multiple-attribution fog if they are disposed to criticize the productivity of a project up for competing renewal. This is rarely successful in dismantling the general impression of the awesome productivity of the lab, however.
Other than this, nobody ever seems to question, assess or limit this practice of misrepresentation.
Here we are in an era in which statements of contribution from each author are demanded by many journals. Perhaps we should likewise demand a brief accounting as to the contribution of each grant or funding source.
Interesting exchange on the twitts today with someone who is intimating that the process of selecting peers to serve as grant reviewers on NIH study sections requires some transparency and fixing.
As my longer term Readers are aware, my main objection along these lines is that I think Assistant Professors should not be excluded and that the purge urged on by Toni Scarpa back some years ago was misguided. I will also venture that I think it is ridiculous that the peer review pool is limited to those Professorial rank people who have already won funding from the NIH (for the most part). If really pressed, I've been known to suggest that it is even unfair that the more senior postdoc types who have not yet won a faculty-level appointment cannot review grants.
Other than that, I am generally down with the official mandates to seek ethnic/racial, gender and geographic representation on panels. My personal experience has been that the SROs do a pretty good job at this. Also, because of these factors, I have found that the types of institutions represented span the range pretty well: small mostly-teaching profs, big Research Uni profs, research institutes of various sizes, public Unis, private Unis, Med Schools and academic departments.
So it is with some confusion that I read someone asserting that there is a problem with who is selected.
My query of the day, therefore, is to ask you if you know of people who seek to serve on study section but cannot seem to land an invite. Alternately, do you know of categories of investigators that are routinely overlooked?
Geez, are we here already? As you know, the 1st of July is the first possible funding date for NIH grants that were submitted in Oct-Nov and reviewed in Feb-Mar. I like to see what my favorite ICs are doing in terms of funding new grants. Everything is interesting. The total number, the ratio of R01 to R21 and/or R03, number of K99/R00s, BigMech grants, etc.
It may take a few days for RePORTER to update but in the mean time you can stalk SILK for the very latest of new NIH grant awards.
April 1 was an unmitigated disaster, coming within weeks of the sequester falling and the uncertainty of a continuing resolution (which passed mere days before April 1). Grants slated to start were delayed substantially although at least one of the ICs I follow seems to have caught up by now.
Another of the ICs of my closest interest has barely funded any new grants all Fiscal Year. It's amazing and I have no idea what the hell is going on. Perhaps the floodgates will open in July? I'm sure their applicants are hoping so!
Anyway, happy browsing. Try not to notice how HUGE the NCI is and what a disproportional number of awards are made by that IC....
This is my query of the day to you, Dear Reader.
We've discussed the basics in the past, but here's a quick overview.
1) Since the priority score and percentile rank of your grant application are all-important (not exclusively so, but HEAVILY so), it is critical that it be reviewed by the right panel of reviewers.
2) You are allowed to request in your cover letter that the CSR route your NIH grant application to a particular study section for review.
3) Standing study section descriptions are available at the CSR website as are the standing rosters and the rosters for the prior three rounds of review (i.e., including any ad hoc reviewers).
4) RePORTER allows you to search for grants by study section which gives you a pretty good idea of what they really, really like.
5) You can, therefore, use this information to slant your grant application towards the study section in which you hope it will be reviewed.
A couple of Twitts from @drIgg today raised the question of study section "fit". Presumably this is related to an applicant concluding that despite all the above, he or she has not managed to get many of his or her applications reviewed by the properly "fitting" panel of reviewers.
This was related to the observation that despite one's request and despite hitting what seem to be the right keywords, it is still possible that CSR will assign your grant to some other study section. It has happened to me a few times and it is very annoying. But does this mean these applications didn't get the right fit?
I don't know how one would tell.
As I've related on occasion, I've obtained the largest number of triages from a study section that has also handed me some fundable scores over the past *cough*cough*cough* years. This is usually by way of addressing people's conclusion after the first 1, 2 or maybe 3 submissions that "this study section HATES me!!!". In my case I really think this section is a good fit for a lot of my work, and therefore proposals, so the logic is inescapable. Send a given section a lot of apps and they are going to triage a lot of them. Even if the "fit" is top notch.
It is also the case that there can be a process of getting to know a study section. Of getting to know the subtleties of how they tend to feel about different aspects of the grant structure. Is it a section that is really swayed by Innovation and could give a fig about detailed Interpretations, Alternatives and Potential Pitfalls? Or is it an orthodox StockCritiqueSpewing type of section that prioritizes structure over the content? Do they like to see it chock full of ideas or do they wring their hands over feasibility? On the other side, I assert there is a certain sympathy vote that emerges after a section has reviewed a half dozen of your proposals and never found themselves able to give you a top score. Yeah, it happens. Deal. Less perniciously, I would say that you may actually convince the section of the importance of something that you are proposing through an arc of many proposal rounds*.
This leaves me rather confused as to how one would be able to draw strong conclusions about "fit" without a substantial number of summary statements in hand.
It also speaks to something that every applicant should keep in the back of his or her head. If you can never find what you think is a good fit with a section there are only a few options that I can think of.
1) You do this amazing cross-disciplinary shit that nobody really understands.
2) Your applications actually suck and nobody is going to review them well.
3) You are imagining some Rainbow Fairy Care-a-lot Study section that doesn't actually exist.
What do you think are the signs of a good or bad "fit" with a study section, Dear Reader? I'm curious.
*I have seen situations where a proposal was explicitly mentioned to have been on the fourth or fifth round (this was in the A2 days) in a section.
Admittedly I hadn't looked all that hard, but I was previously uncertain as to how NIH grants with tied scores were percentiled. Since the percentiles are incredibly important* for funding decisions, this was a serious question after the new scoring approach (reviewers vote integer values from 1-9; the average is multiplied by 10 for the final score; lower is better), which seemed destined to generate more tied scores.
The new system raises the chance that a lot of "ranges" for an application are going to be 1-2 or 2-3 and, in some emerging experiences, a whole lot more applications where the three assigned reviewers agree on a single number. Now, if that is the case and nobody from the panel votes outside the range (which panel members rarely do), you are going to end up with a lot of tied 20 and 30 priority scores. That was the prediction anyway.
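To make the tie mechanics concrete, here is a minimal sketch of the scoring arithmetic described above. The rounding convention on the final averaged score is my assumption; the point is simply that reviewer concurrence on a single vote pins every such application to the same priority score.

```python
def priority_score(votes):
    """Overall impact: each panelist votes an integer from 1 (best) to 9;
    the mean of the votes is multiplied by 10 to give the priority score.
    Rounding the result to a whole number is assumed here."""
    return round(sum(votes) / len(votes) * 10)

# Three reviewers concurring on a '2' is common, and every such
# application lands on exactly the same priority score:
priority_score([2, 2, 2])  # 20
priority_score([1, 2, 2])  # 17
```

Hence the predicted pile-up of tied 20s (and 30s) whenever panels converge on a single integer.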
NIAID has data from one study section that verifies the prediction.
Percentiles range from 1 to 99 in whole numbers. Rounding is always up, e.g., 10.1 percentile becomes 11.
So you should be starting to see that the number of applications assigned to your percentile base** and the number that receive tying scores is going to occasionally throw some whopping discontinuities into the score-percentile relationship.
Rock Talk explains:
However, as you can see, this formula doesn’t work as is for applications with tied scores (see the highlighted cells above) so the tied application are all assigned their respective average percentile
In her example, the top applications in a 15-application pool received impact scores of 10, 11, 19, 20, 20, 20.... This is a highly pertinent example, btw, since reviewers concurring on a 2 overall impact is very common and represents a score that is potentially in the hunt for funding***.
In Rockey's example, these tied applications block out the 23, 30 and 37 percentile ranks in this distribution of 15 possible scores. (The top score gets a 3%ile rank, btw. Although this is an absurdly small example of a base for calculation, you can see the effect of base size...10 is the best possible score and in an era of 6-9%ile paylines the rounding-up takes a bite.) The average is assigned so all three get 30%ile. Waaaaay out of the money for an application that has the reviewers concurring on the next-to-best score? Sheesh. In this example, the next-best-scoring application averaged a 19, only just barely below the three tied 20s and yet it got a 17%ile for comparison with their 30%ile.
You can just hear the inchoate screaming in the halls as people compare their scores and percentiles, can't you?
Rockey lists the next score above the ties as a 28 but it could have just as easily been a 21. And it garners a 43%ile.
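A sketch of the percentiling arithmetic may help here. I'm assuming the commonly cited formula of 100 × (rank − 0.5) / N over the percentile base, with tied scores all assigned the average of the percentiles their ranks would have occupied; conventional rounding is used below (that choice is my assumption, and note the round-up rule quoted above would nudge some values by one). With a hypothetical 15-application base containing the top scores from Rockey's example, this reproduces the 3, 17, 30 and 43 percentiles under discussion.

```python
def percentile_with_ties(scores):
    """Map each priority score (lower = better) to a percentile.

    Assumptions: percentile = 100 * (rank - 0.5) / N, tied scores all
    receive the average of the percentiles their ranks would have
    occupied, and the result is conventionally rounded."""
    ordered = sorted(scores)
    n = len(ordered)
    raw = {}  # score -> raw percentiles for each rank it occupies
    for rank, s in enumerate(ordered, start=1):
        raw.setdefault(s, []).append(100 * (rank - 0.5) / n)
    return {s: round(sum(p) / len(p)) for s, p in raw.items()}

# Hypothetical 15-application base; only the top scores come from
# Rockey's example, the rest are made up to fill out the pool.
base = [10, 11, 19, 20, 20, 20, 28, 30, 34, 40, 45, 50, 54, 60, 70]
pct = percentile_with_ties(base)
# pct[10] -> 3, pct[19] -> 17, pct[20] -> 30, pct[28] -> 43
```

Note the 19-scoring application at 17%ile sitting right next to the three tied 20s kicked up to 30%ile: one point of priority score, thirteen points of percentile.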
Again, cue screaming.
Heck, I'm getting a little screamy myself, just thinking about sections which are averse to throwing 1s for Overall Impact and yet send up a lot of 20 ties. Instead of putting all those tied apps in contention for consideration they are basically guaranteeing none of them get funded because they are all kicked up to their average percentile rank. I don't assert that people are intentionally putting up a bunch of tied scores so that they will all be considered. But I do assert that there is a sort of mental or cultural block at going below (better than) a 2 and for many reviewers, when they vote a 2 they think this application should be funded.
In closing, I am currently breaking my will to live by trying to figure out the possible percentile base sizes that let X number of perfect scores (10s) receive 1%iles versus being rounded up to 2%iles, and then what would be associated with the next-best few scores. NIAID has posted an 8%ile payline and rumours of NCI working at 5%ile or 6%ile for next year are rumbling. The percentile increments that are permitted, given the size of the percentile base and the round-up policy, become acutely important.
*Rumor of a certain IC director who "goes by score" rather than percentile becomes a little more understandable with this example from Rock Talk. The swing of a 20 Overall Impact score from 10%ile to 30%ile is not necessarily reflective of a tough versus a softball study section. It may have been due to the accident of ties and the size of the percentile base.
**typically the grants in that study section round and the two prior rounds for that study section.
***IME, review panels have a reluctance to throw out impact scores of 1. The 2 represents a hesitation point for sure.
One of the more fascinating things I attended at the recent meeting of the College on Problems of Drug Dependence was a Workshop on "Novel Tobacco and Nicotine Products and Regulatory Science", chaired by Dorothy Hatsukami and Stacey Sigmon. The focus on tobacco is of interest, of course, but what was really fascinating for my audience was the "Regulatory Science" part.
As background the Family Smoking Prevention and Tobacco Control Act became law on June 22, 2009 (sidebar, um...four years later and..ahhh. sigh.) This Act gave "the Food and Drug Administration (FDA) the authority to regulate the manufacture, distribution, and marketing of tobacco products to protect public health."
As the Discussant, David Shurtleff (up until recently Acting Deputy Director at NIDA and now Deputy Director at NCCAM), noted this is the first foray for the NIH into "Regulatory Science". I.e., the usual suspect ICs of the NIH will be overseeing conduct of scientific projects designed directly to inform regulation. I repeat, SCIENCE conducted EXPLICITLY to inform regulation! This is great. [R01 RFA; R21 RFA]
Don't get me wrong, regulatory science has existed in the past. The FDA has whole research installments of its very own to do toxicity testing of various kinds. And we on the investigator-initiated side of the world interact with such folks. I certainly do. But this brings all of us together, brings all of the diverse expert laboratory talents together on a common problem. Getting the best people involved doing the most specific study has to be for the better.
In terms of specifics of tobacco control, there were many on this topic that you would find interesting. The Act doesn't permit the actual banning of all tobacco products and it doesn't permit reducing the nicotine in cigarettes to zero. However, it can address questions of nicotine content, the inclusion of adulterants (say menthol flavor) to tobacco and what comes out of a cigarette (Monoamine Oxidase Inhibiting compounds that increase the nicotine effect, minor constituents, etc). It can do something about a proliferation of nicotine-containing consumer products which range from explicit smoking replacements to alleged dietary supplements.
Replacing cigarette smoking with some sort of nicotine inhaler would be a net plus, right? Well.....unless it lured in more consumers or maintained dependence in those who might otherwise have quit. Nicotine "dietary supplements" that function as agonist therapy are coolio....again, unless they perpetuate and expand cigarette use. Or nicotine exposure...while the drug itself is a boatload less harmful than is the smoking of cigarettes it is not benign.
There are already some grants funded for this purpose.
NIH administers several and there was a suggestion that this is new money coming into the NIH from the FDA. There was also a comment that this was non-appropriated money; it was being taken from some tobacco-tax fund. So don't think of this as competing with the rest of us for funding.
I was enthused. One of the younger guns of my fields of interest has received a LARGE mechanism to captain. The rest of the people who seem to be involved are excellent. The science is going to be very solid.
I really, really (REALLY) like this expansion of the notion that we need to back regulatory policy with good data. And that we are willing as a society to pay to get it. Sure, in this case we all know that it is because the forces *opposing* regulation are very powerful and well funded. And so it will take a LOT of data to overcome their objections. Nevertheless, it sets a good tone. We should have good reason for every regulatory act even if the opposition is nonexistent or powerless.
That brings me to cannabis.
I'm really hoping to see some efforts along these lines [hint, hmmmm] to address both the medical marijuana and the recreational marijuana policy moves that are under experimentation by the States. In the past some US States have used state cigarette tax money (or settlement money) to fund research, so this doesn't have to be at the Federal level. Looking at you, Colorado and Washington.
As always, see Disclaimer. I'm an interested party in this stuff as I could very easily see myself competing for "regulation science" money on certain relevant topics.
It struck me today
— Drug Monkey (@drugmonkeyblog) June 6, 2013
thanks to the referenced comment from Jim Woodgett that we've never really had a discussion of unfunded overhead situations, despite several discussions of overhead rates in the ongoing effort to determine TheRealProblemTM with NIH budgets these days. It is worth bringing up, particularly for anyone who might be job seeking or negotiating in the near future. As we continue, you'll see what you need to ask about, and what you need to get in writing along with your job offer.
As a brief introduction the overhead (or Indirect Costs; IDC) associated with a research grant award is the amount that disappears into the University, research institution (or what have you) instead of going into the PI's account to spend.
When it comes to federal awards from the NIH (and some other agencies beloved of my Readership) the IDC rate varies across the Universities, research institutes and varied other applicant institutions. For discussion's sake, I'll throw out that the general rate for larger public Universities is about 56%. Smaller (private) Universities and not-for-profit research institutes tend to have higher ones with overhead rates of over 80% not uncommon. Rumors abound of 100% overhead rates but I've not directly seen one of those myself. To my recollection. This research crossroads site used to have a handy database of the federally-negotiated overhead rates but it has been down for some time now and I suspect it is defunct. I don't know where they were scraping their data from but presumably these overhead rates are public info.
There are numerous non-federal sources of funding that a given PI might see as appropriate to pursue for her laboratory. Contracts with biotech or Big Pharma companies. Larger or smaller disease focused foundations (American Heart, Michael J Fox). Less-focused foundations (like Bill and Melinda Gates Foundation). Local philanthropic donors. State foundations or funds (like those diverted from tobacco or alcohol taxes). In many, if not most, cases these funding streams do not wish to pay your University the federally-negotiated overhead rate.
The differential can be large. Such as a foundation that will pay 10% maximum...and your federal rate sits at 70%. Perhaps a donor doesn't want to pay any overhead at all and expects the full donation to go into the research lab's coffers.
The ways that Universities and research institutions deal with this issue varies considerably. Across institutions, of course. But also within an institution depending on the money source, the amount of funding involved, the identity of the PI, etc.
The best case scenario for PIs is the institution that doesn't care. Money is money and....they'll take it. I've heard rumor of such things but it is fantasy as far as I am concerned.
What is more common is that the University has a way to cover the "unfunded overhead" situation to make it appear that the full federally negotiated rate is being applied to each and every grant of consequence*. Sometimes this is accomplished through the mumbo-jumbo of money being fungible and the University simply using their endowment proceeds or some other source of funds not easily connected to a grant to "cover" the overhead. This is good, if you can get it. That is, if your University has a default, no-questions-asked way to do this for a given source of grant support. That's a supportive place to be.
Considerably less-good is the situation where the PI is supposed to "cover" this for herself. Now sometimes it is the case that the Chair of the Department covers it through a slush fund and, obviously, this would be a more limited pool of money. Consequently, the Chair has to balance who gets the slush. This leaves a lot of room for shenanigans having to do with departmental politics. A lot of room for problems based on how many faculty are trying to tap this pot of slush money in a given year. This is why you, as a prospective new hire, need to ask how these situations are covered and get as much in writing as you can.
There are two remaining horrible options which I hesitate to rank.
Some Universities will pull the overhead out of the new-hire's startup funds. That's a dicey game for a new faculty member to play. It might be worth it, it might not. Why would it be worth it? Well, that startup is a fixed, nonrenewable pool of money that is supposed to get you launched, right? This means, in essence, to help you secure a grant. Having grant funding awarded to your lab is a good thing and catapults you into the "funded investigator" category. Depending on the size of it, your use of startup to secure that award, instead of continuing the uncertain game of generating more preliminary data, may be advisable. You just have to look at the leverage that contributing startup to the unfunded overhead will give you.
Some places (and here I find the very high overhead, small not-for-profit research institutes to raise their heads) simply refuse to let faculty (even new hires) apply for anything that doesn't come with full overhead.
Yes, this seems an unbelievably stupid policy and a way to cripple the prospects of your newly-hired faculty, but there you have it.
For anybody on the job market that is reading this, the conclusions are clear. If the unfunded overhead policies of your prospective institutions are not handed to you when you visit, ask. Determining what grants you will and will not be allowed to apply for in your first few years (or across your career) should not be left up to the (entirely logical) assumption that any grant available is attractive to your University.
ETA: A comment from Jim Woodgett
In essence, NIH subsidizes those agencies and philanthropists that don't allow or who restrict overhead.
reminded me I forgot to address why the Universities are doing this. My assumption is that if the federal negotiators thought this statement sufficiently true, they would lower the IDC rate for that University. As I said, my assumption. I've never been able to get an institutional official to verify this directly though.