This is soooo friggin cool.
There is now a tool to search all Federal research grants, i.e. across the various funding agencies.
Federal RePORTER awaits!
Reviewing a competing continuation of a longitudinal human subjects study always has a little bit of a whiff of extortion to it. I'm not saying this is intentional but......
The sunk cost fallacy is a monster.
One of the things that is immediately picked up by the typical reader is the conceit we scientists express about having a job paid for by taxpayer funds, that allows us to do whatever we want, unfettered and without any obligation to the people paying for the work.
One example of the type:
I argue that the very presence of government (taxed) money is "free" money to scientists to indulge in directions that perhaps are pointless. When something is free, people line-up to collect it (with bad science or poor quality work). A better approach is no funding at all. Then, only the best science would be a candidate for private funding since that is money that people are voluntarily investing expecting a return.
This is what you call an own-goal, people. We cause it by the way we talk about our jobs.
We usually get into this topic most specifically when we are discussing overhead rates awarded to local Universities by the Federal process and when we are discussing the percentage of faculty salaries that should be paid from Federal grants versus the University pot of MagicLeprechaunFairyMoney.
I am the one who continually makes the point that science funded by the NIH (or DOD, CDC, FDA, NSF and a bunch of other Federal entities) should be viewed EXACTLY the same as any other good or service. I tend to get a lot of push-back on this from those of you who are committed to the argument that Universities need to put "skin in the game" and that the solution to the entire NIH budget problem lies with defunding those Universities who get more than 50% overhead.
Bushwa. Science is no different from any other good or service the Federal government wishes to obtain. Yes, the deliverables are going to differ in terms of how concrete they may be but this makes no difference to the main point. The US Federal government pays Universities, Research Institutes and the occasional small business to conduct research. That is what they want, that is what we extramural, NIH-funded scientists provide them with.
The fact that we find it enjoyable is of no importance. The folks making money off building the latest jet fighter (that doesn't work) or the latest software security package for the FBI (that doesn't work) or the latest armor for the Humvees (that we hope works better) find their profits enjoyable. The people getting paid to send plumbers and truck drivers and "private security contractors" along with our military to help pacify Afghanistan or Iraq enjoy making many times the salary they would get otherwise in the civilian world.
Know anyone in elite military jobs? I have known several in my lifetime. Guess what? They enjoy the everloving blazes out of the opportunity that they had to DO something that they find personally fulfilling. Do we question the SEAL or Ranger or TopGun type duder and ask them to do it for free just because they find their jobs personally fulfilling and the taxpayer is footing the bill? Isn't the fact that they are shoo-ins for much better paid gigs as airline pilots and "private security contractors" in their post-Federal-employment career evidence that we don't need to worry about how they are paid while doing the Nation's business?
In many of these cases, the companies and people responding to the US Government request for a good or service tell the government exactly what and how they choose to respond. They present themselves as available for the task. The Government agencies involved then select the winner via a competitive bidding process or other competitive review. Sounds very similar to the NIH Grant game to me.
The Government very frequently, if I read the newspapers correctly, ends up paying even more than the bid, more than expected, more than reasonable for that good or service. Cost overruns. Ooopsies. Progress not as expected in the wildly optimistic original bid. Stuff happens when trying to build a complex modern fighter jet. Mission creep. Is the variable outcome of a NIH Grant funding interval any different? Why should anyone expect it to be different?
I also note that it has to get really, really bad in terms of excessive payouts and utter failure to provide a semblance of the good or service before the Nation's attention is engaged when it comes to most other areas. Golden toilet seats in my era. Then it was fighter jets. Then Halliburton's war profiteering and Blackstoneriverwtfever "security". FBI software upgrade. Fighter jets again. It goes on and on.
The extramural NIH-funded science area of government contracting for goods and services really doesn't look so bad when you put it up against the proper comparison.
We generate knowledge and we publish it. Just as we are asked to do. By the US taxpayer.
The individual taxpayer may object to the US federal government asking us to provide them with a service. That's fine. I have a problem with the amount of military stuff we ask for.
But don't try to pretend we scientists are grifters, looking for a handout to do whatever the heck we want, purely on our own hook. We choose to work in a particular job sector, true. But a lot of other people choose to work in a federally-funded job sector as well.
We should be viewed the same. We should view ourselves* as the same.
*consistent with the percentage of our effort dedicated to Federal goods and services requests, of course.
The latest in the NIH/science focused series from Richard Harris is:
Patients Vulnerable When Cash-Strapped Scientists Cut Corners
It hits on some of the expected themes. Including:
Most of the experimental ALS drugs, it turns out, undergo very perfunctory testing in animals before moving into human tests — based on flimsy evidence.
In hopes of figuring out why, scientists went back to take a second look at the mouse experiments that were the basis for the human study, and found them to be meager. Additional, more careful tests found no compelling reason to think the experimental drug would have ever worked.
Stefano Bertuzzi, the executive director of the American Society for Cell Biology, says that's partly because there is little incentive for scientists to take the time to go back and verify results from other labs.
"You want to be the first one to show something," he says — not the one to verify or dispute a finding, "because you won't get a big prize for that."
and then the former head of NINDS, Story Landis checks in:
Landis has thought a lot about how those last-chance patients ended up in this untenable situation. There is no single answer, she says, but part of the explanation relates to a growing issue in biomedical science: the mad scramble for scarce research dollars.
"The field has become hypercompetitive," she says.
Many excellent grant proposals get turned down, simply because there's not enough money to go around. So Landis says scientists are tempted to oversell weak results.
"Getting a grant requires that you have an exciting story to tell, that you have preliminary data and you have published," she says. "In the rush, to be perfectly honest, to get a wonderful story out on the street in a journal, and preferably with some publicity to match, scientists can cut corners."
So. The offending comment came from Story Landis. I am shaking my head with dismay.
Remember, SHE is the one who has made the decision on which grants get funded at the National Institute on Neurological Disorders and Stroke since 2003. Specifically and personally.
All that peer review of science and Program Officer priority and National Advisory Council concurrence? That is all process advisory to the Director who makes the ultimate decision on what to fund.
So, if there are any fingers to be pointed about what is driving particular aspects of scientist behavior in their attempts to stay funded merely so that they can work on thorny problems like ALS, well that finger goes right at Story Landis.
It's really simple, Directors of ICs. Simple as pie.
If you want to prioritize meticulously replicated and extended scientific investigations, you fund those proposals that are planning just that with urgent priority. When you are evaluating PIs to support with the usual spectrum of Programmatic priority handouts, select those with a history of meticulous replication instead of those who hit the hot highlights and never flesh out the story.
I'm telling you, this would snap a lot more PIs right into line in this current environment.
We are just exactly like everyone else. We respond to the contingencies under which we operate. When HawtEleventyGlamourScience and InstantlyTranslational is seen as the route to funding, guess what. We are going to "oversell weak results". When meticulous and incremental advance is seen as the province of irrelevant plodders who do not deserve grant funding, nobody in their right mind* is going to propose a project which mentions any such thing.
So, you want my advice? Find projects in your funded portfolio that meet the meticulous replication standard: give them an R37 MERIT extension and say why. Publicly. Next, find some of these types of proposals in your just-missed pile and fund them. Also brag on that.
Look up the PLoS ONE pubs that are associated with your grants.....presumably they are going to be enriched in negative results, confirmatory findings and all the good stuff Story Landis seems to be seeking. Put out a press release on THOSE results. Particularly the negative ones.
In short, put your money where your mouth is, NIH. Don't engage in this double speak when you, yourselves, are a major contributing factor. Don't put this on your extramural investigators and pretend that you played anything other than a central role in their behavior.
*I may possibly have proposed** a grant which was dedicated to replication and sorting out failures-to-replicate with the explicit expectation of a lot of essentially negative or pedestrian results.
**and received funding for***
***yes, I would have been, assuming that this indeed transpired, as amazed as you are****.
****should such a thing have occurred, I have absolutely no explanation for how such a feat was accomplished*****. Really, none.
*****I mean, the 2%ile priority score, if such had been the result, only begs the question, right?
In the event that you missed it, NPR has been running stories on the current situation with NIH-funded biomedical research in the US. These seem to be mostly the work of Richard Harris, so many thanks to him for telling these stories to the public. You will note that these are not issues new to this readership for the most part. The themes are familiar and, perhaps necessarily, latch onto one position and therefore lack breadth and dimension. Those familiar with my views on "the real problem" with respect to NIH funding will see many things I object to in terms of truthy sounding assertions that don't hold water on examination. Still, I am positively delighted that this extensive series is being brought to the NPR audience.
"When I was a very young scientist, I told myself I would only work on the hardest questions because those were the ones that were worth working on," he says. "And it has been to my advantage and my detriment."
Over the years, he has written a blizzard of grant proposals, but he couldn't convince his peers that his edgy ideas were worth taking a risk on. So, as the last of his funding dried up, he quit his academic job.
"I shouldn't be a grocer right now," he says with a note of anger in his voice. "I should be training students. I should be doing deeper research. And I can't. I don't have an outlet for it."
"If I don't get another NIH grant, say, within the next year, then I will have to let some people go in my lab. And that's a fact," Waterland says. "And there could be a point at which I'm not able to keep a lab."
He notes that the hallway in his laboratory's building is starting to feel like a ghost town as funding for his colleagues dries up. He misses the energy of that lost camaraderie.
"The only people who can survive in this environment are people who are absolutely passionate about what they're doing and have the self-confidence and competitiveness to just go back again and again and just persistently apply for funding," Waterland says.

He has applied for eight grants and has been rejected time and again. He's still hoping that his grant for the obesity research will get renewed — next year.
PAULA STEPHAN: In many ways, the research university that's evolved today is much like a shopping mall.
HARRIS: She says think of universities as mall owners and individual scientists as the shopkeepers. Scientists get research grants and then pay rent to the universities out of that money. When grant funding doubled between 1998 and 2003, construction cranes went up all over the country to build more lab space.
STEPHAN: Universities were exuberant. They thought that they could keep running this kind of scheme - where the NIH budget would keep going up, and they could keep hiring more people.
HARRIS: But that didn't happen. After the NIH budget doubled, it stagnated. In fact it's declined more than 20 percent when you take inflation into account.
STEPHAN: We greatly overbuilt the shopping malls.
By The Numbers: Search NIH Grant Data By Institution (support site for the pieces by Richard Harris)
Simple truth of the recent Ebola hysteria and the ensuing media coverage of scientists working on hemorrhagic viruses. Approximately 85% of bioscience now wishing ill on a whole lot of people so as to draw attention to their scientific domain.
One key to determining the right study section to request is to look on RePORTER for funded grants reviewed in your study sections of interest.
Sometimes this is much more informative than the boilerplate description of the study section listed at CSR.
A news bit in Nature overviews Richard Nakamura's plans to investigate the disparity in grant funding that was identified in the Ginther report in 2011. See here, here, here for blog comment on Ginther et al. Nakamura is, of course, the current director (pdf) of the Center for Scientific Review, the entity at NIH that conducts the peer review of most grant applications that are submitted.

It is promising-ish. Nakamura's plans are summarized.
One basic issue that the NIH will address is whether grant reviewers are thinking about an applicant’s race at all, even unconsciously. A team will strip names, racial identification and other identifying information from some proposals before reviewers see them, and look at what happens to grant scores.
Hope they check on the degree to which the blinding works, of course. As you know, Dear Reader, I am always concerned that blinding of academic works cannot always be assumed to have functionally worked to prevent the reviewer from identifying the author or lab group.
The NIH will also study reviewers’ work in finer detail, by analysing successful applications for R01 grants, the NIH’s largest funding programme for individual investigators. The goal is to see whether researchers can spot trends in the language used by reviewers to describe proposals put forward by applicants of different races. There is precedent for detectable differences: in a paper to be published in Academic Medicine, a team led by Molly Carnes, a physician at the University of Wisconsin-Madison, used automated text analysis to show that reviewers’ critiques of R01 grant applications by women tended to include more words denoting praise, as though the writer is surprised at the quality of the work.
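The kind of automated text analysis described can be sketched quite simply. Below is a toy illustration, not the actual Carnes methodology: it tallies how often words from a small "praise" lexicon appear in a critique. The lexicon here is an assumption for demonstration; the real study used a validated dictionary of category terms.

```python
import re

# Tiny illustrative "praise" lexicon -- an assumption for this sketch,
# not the validated dictionary used in the Carnes et al. analysis.
PRAISE = {"outstanding", "excellent", "impressive", "exceptional", "superb"}

def praise_rate(critique: str) -> float:
    """Fraction of words in a critique drawn from the praise lexicon."""
    words = re.findall(r"[a-z']+", critique.lower())
    hits = sum(1 for w in words if w in PRAISE)
    return hits / len(words) if words else 0.0

# Example: 2 praise words out of 7 total
print(praise_rate("An outstanding application with impressive preliminary data."))
```

Run over a corpus of critiques grouped by applicant demographic, rates like this are what you would then compare across groups.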
Very intriguing contribution to the analysis. Nice.
The NIH will also analyse text in samples of reviewers’ unedited critiques. The Center for Scientific Review typically edits the wording and grammar of these reviews before grant proposals are returned to applicants, but even the subtlest details of such raw comments might hold clues about bias. Nakamura says that reviewers will not be told whether their comments will be analysed, because that in itself would bias the sample. “We want them to be sloppy,” he says.
Hmmm. I guess this is just human factors checking on the automated analysis. Together they are stronger.
The NIH’s Study Sections, in which review groups discuss the top 50% of grant applications, might also harbour bias: the 2011 Science paper found that submissions authored by African Americans are less likely to be discussed in the meetings. But when they are, a negative comment arising from even one person’s unconscious bias could have a major impact in such a group setting, says John Dovidio, a psychologist at Yale University in New Haven, Connecticut, and a member of the NIH’s Diversity Working Group. “That one person can poison the environment,” he says.
This is not presented in a context that suggests the NIH plans to investigate this directly. Not sure how this could be done without putting a severe finger on the balance. I mean sure, most meetings have call-in lines open to Program staff and Nakamura could just record and transcribe meetings but.....reviewers won't like it and if you warn them you scare off the fishies. So to speak.
Nakamura expects that the NIH’s effort to identify and root out prejudice, which he says could cost up to $5 million over three years, might prove controversial. “People resent the implication they might be biased,” he says — an idea borne out by some responses to his 29 May blogpost on the initiative. One commenter wrote, “It is absolutely insulting to be accused of review bigotry. Please tell me why I should continue to give up my time to perform peer review?”
But Nakamura believes that the NIH — and reviewers — need to keep open minds.
He, and media covering this, need to focus on the opportunity to communicate that institutional racism does not hinge on whether individual actors are overtly biased. The piece leads with the comment that Nakamura got his butt over to the Implicit Association website and identified his own biases to himself. This is the sort of introspection that needs to happen. Heck, I'd love to see a trial where study section reviewers were told to go over there and complete a few tests prior to receiving their grant assignments*.
The Nature editorial is, for the most part, on the side of goodness and light on this.
The idea that scientists who volunteer time and energy to review NIH grants could be biased against qualified minority researchers is a tough pill to swallow. The NIH is to be commended for not sweeping this possibility under the rug: it has turned to the scientific method to investigate the suggestion.
It is a topic that the NIH will need to broach delicately. Few academics consciously hold any such inclinations, and fewer still would deliberately allow them to affect their grant evaluations. Some are likely to bristle at what might be seen as an accusation of racism, and the NIH plans to conduct at least some of its studies of grant reviews without the reviewers’ knowledge or consent.
But better for the NIH to offend a few people than to make snap judgements and institute blunt policies to address the problem. Fixes such as increasing scholarships and training for minority groups would no doubt be a good thing, but they could be an unhelpful use of money if they do not address the root cause of the disparity.
yes, yes, excellent.....
And policies such as grant-allocation quotas could come at the expense of other researchers.
No. Bad Nature.
Right back to victim blaming. Right back to ignoring what it means to have a BIAS identified. Right back to ignoring what the nature of privilege means.
Those "other researchers" at present enjoy a disparate benefit at the expense of African-American PIs. That's what Ginther means. Period. The onus shifted, upon identification of the disparity, to proving that non-African-American PIs actually deserve their awards.
Ginther, btw, went a long ways toward rejecting some of the more obvious reasons why the disparity was not in fact an unfair bias. Read it, including the supplementary materials before you start commenting with stupid. Also, review this.
But there is also this. The low numbers of African-American scientists submitting applications to the NIH for funding mean that any possible hit to the success rate of non-African-American PIs would be well nigh undetectable. A minuscule effect size relative to all other sources of variance in the funding process.
Another way to look at this issue is to take Berg's triage numbers from above. To move to a 40% triage rate for the African-American PI applications, we need to shift 20% (230 applications) into the discussed pile. This represents a whopping 0.4% of the White PI apps being shifted onto the triage pile to keep the numbers discussed the same.
These are entirely trivial numbers in terms of the "hit" to the chances of White PIs and yet you could easily equalize the success rate or award probability for African-American PIs.
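As a rough sketch of this arithmetic (the application counts here are assumptions for illustration, chosen to be roughly consistent with the numbers in the post, i.e. the Ginther-era applicant pool):

```python
# Back-of-envelope check on the triage-swap arithmetic.
# Assumed totals (illustrative): ~1,150 applications from African-American
# PIs and ~58,000 from White PIs in the review cohort.
aa_apps = 1150
white_apps = 58000

# Move 20% of African-American PI applications into the discussed pile...
shifted = round(0.20 * aa_apps)
print(shifted)  # ~230 applications

# ...and, to hold the discussed count constant, the same number of White PI
# applications move to triage -- as a share of all White PI apps:
print(round(100 * shifted / white_apps, 1))  # a fraction of one percent
```

The point of the sketch is that the denominator on the White-PI side is so large that the swap is lost in the noise of the review process.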
It is even more astounding that this could be done by picking up African-American PI applications that scored better than the White PI applications that would go unfunded to make up the difference.
And of course "grant-allocation quotas" are precisely what the special paylines and other assists for ESI investigators consist of. Affirmative Action for the young and untried.
Did we get this sort of handwringing, call for long-duration "study" of the "true causes" of the disparity?
The NIH just started picking up ESI grants to balance the odds of funding, even when study sections responded to news of this affirmative action by further punishing ESI scores!
So yeah, my call is for the NIH to balance the funding rates first, and then do all their fancy studies to root out the "real cause" later.
Also for editorial teams like the one at Nature to stop repeating this whinging about those who already enjoy disparate privilege who might lose (some of) it.
*yeah, it might backfire. that would itself be interesting.
I ran across a curious finding in a very Glamourous publication. Being that it was in a CNS journal, the behavior sucked. The data failed to back up the central claim about that behavior*. Which was kind of central to the actual scientific advance of the entire work.
So I contemplated an initial, very limited check on the behavior. A replication of the converging sort.
It's going to cost me about $15K to do it.
If it turns out negative, then where am I? Where am I going to publish a one figure tut-tut negative that flies in the face of a result published in CNS?
If it turns out positive, this is almost worse. It's a "yeah we already knew that from this CNS paper, dumbass" rejection waiting to happen.
Either way, if I expect to be able to publish in even a dump journal I'm going to need to throw some more money at the topic. I'd say at least $50K.
Spent from grants that are not really related to this topic in any direct way.
If the NIH is serious about the alleged replication problem then it needs to be serious about the costs and risks involved.
*a typical problem with CNS pubs that involve behavioral studies.
A specific issue that has recently created interesting conversations in the blogosphere is whether female K99/R00 awardees were less likely to receive a subsequent R01 award compared to male K99/R00 awardees. We at NIH have also found this particular outcome among K99/R00 PIs and have noted that those differences again stem from differential rates of application. Of the 2007 cohort of K99 PIs, 86 percent of the men had applied for R01s by 2013, but only 69 percent of the women had applied.
She's referring here to a post over at DataHound ("K99-R00 Evaluation: A Striking Gender Disparity") which observed:
Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).
To investigate this further, I looked at the two cohorts separately. For the FY2007 cohort, 70 of the 108 men (65%) with R00 awards have received R01 grants whereas only 31 of the 62 women (50%) have (P value = 0.07). For the FY2008 cohort, 44 of the 93 men (47%) with R00 awards have received R01s whereas only 22 of the 65 women (34%) have (P value = 0.10). The lack of statistical significance is due to the smaller sample sizes for the cohorts separately rather than any difference in the trends for the separate cohorts, which are quite similar.
And Rockey isn't even giving us the data on the vigor with which a R00 holder is seeking R01 funding. That may or may not make the explanation even stronger.
Seems to me that any mid or senior level investigators who have new R00-holding female assistant professors in their department might want to make a special effort to encourage them to submit R01 apps early and often.