Scientopia

Number of New and Competing Renewal Awards from 1995-2014
Thu, 26 Feb 2015 20:38:51 +0000

In my recent post, I noted in passing that the number of Type 2 (Competing Renewal) awards (R01s and R37s) fell from 2653 in FY1995 to 1532 in FY2014. This led to both comments on the post and a post on this topic from Drugmonkey. Since I was also struck by this observation, I was already working on additional analysis.

Below is a plot of the number of New (Type 1) and Competing Renewal (Type 2) awards (just R01s this time for simplicity) as a function of time from FY1995 to FY2014.

[Figure: Number of Type 1 and Type 2 R01 awards, FY1995-FY2014]

The first striking observation is the dramatic increase in New (Type 1) awards from FY1997 to FY2000 (at the beginning of the NIH "budget doubling"), with no corresponding increase in Type 2 awards. This lack of increase in Type 2 awards is almost certainly due to a lack of increase in applications, although I have found no readily available data for these years. Note, further, that success rates for Type 2 applications were likely around 50% (or perhaps higher) during this period (see below for data from later years).

From NIH RePORT Funding Facts, data are available for the number of applications and awards for Type 1 and Type 2 R01 grants from FY2001 to the present. Note that these data differ slightly from those above and do not appear to include awards associated with the Recovery Act. These data are plotted below.

[Figure: Type 1 and Type 2 R01 applications and awards, FY2001-FY2014]

This plot shows the further increase in Type 1 applications over this period. As shown here and in the first figure, the number of Type 1 awards has been relatively flat (after the 75% increase just prior to FY2000). The number of Type 2 applications increased gradually from FY2001 to FY2006 (by 35%), slowly fell from FY2006 to FY2010 (by 15%), and then fell somewhat more dramatically (by 25%) from FY2010 to FY2014. The number of Type 2 awards decreased by 11% from FY2001 to FY2006, by 4% from FY2006 to FY2010, and then dramatically (by 31%) from FY2010 to FY2014.

These trends are reflected in success rates for Type 1 and Type 2 R01 grants over this period shown below:

[Figure: Success rates for Type 1 and Type 2 R01 applications, FY2001-FY2014]


The success rates fell dramatically shortly after the end of the "budget doubling" and then stabilized to some extent from FY2007 to FY2014.

Taken together, these data reveal that there has been a sharp drop in the number of Competing Renewal awards, particularly over the past 4 years. This has been driven in large part by a drop in the number of Type 2 applications. This, in turn, may be due to the "No A2" policy or to changes in application behavior around and after the Recovery Act.

What would you ask Sally Rockey?
Thu, 26 Feb 2015 19:56:52 +0000

Apparently Sally Rockey, NIH Deputy Director in charge of the Office of Extramural Research, is on some sort of University tour, because I have received several queries lately that go something like this:

Dear Drugmonky:
Sally Rockey will be visiting our University soon and I have the opportunity to ask a question or two if I can get a word in edgewise between all our local BigWig voices. Do you or your Readers have any suggestions for me to add to my list of potential things to ask her?
A. Reader

I have my thoughts and suggestions, of course, but mostly my Readers know what those are.

How about you folks in the commentariat? What would you ask Sally Rockey if you had her in a small room with your peers?

Significance, p-values, and what to publish
Thu, 26 Feb 2015 16:55:39 +0000

I gave a Big Talk once on the difference between statistical significance and clinical significance. It was poorly received by the editor of the society's journal. He said to me that now he was going to be inundated with papers that say "Dr. Theron sez I don't need no stinkin' numbers". He didn't actually quote The Treasure of the Sierra Madre, but the implication was there.

My simple point was that effect size is as important as significance. I've talked about this some before. Something can be statistically significant (because of pseudo-replication, oversampling, etc.) while the effect size is trivial, or less than your resolution of measurement. For a (made-up) example, say you find a difference in a variable measuring "age at disease onset" between males and females, but that difference is on the order of days, i.e., males: 60 years, 2 months and 3 days; females: 60 years, 2 months and 4 days. What does that mean by any measure of effect size, other than indicating that there is probably some other bias lurking in your data? This one is obvious because we have an intuitive sense of what the data mean. Other problems are often less obvious.
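The made-up example above is easy to demonstrate in code. Here is a minimal Python sketch, with entirely invented numbers, showing how a huge sample can make a clinically meaningless group difference "statistically significant" while the standardized effect size stays negligible:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)

# Invented "age at disease onset" data: a true group difference of
# ~5 weeks (0.1 years) on a ~60-year scale, with SD of 5 years.
n = 200_000
males = rng.normal(loc=60.0, scale=5.0, size=n)
females = rng.normal(loc=60.1, scale=5.0, size=n)

diff = females.mean() - males.mean()
se = sqrt(males.var(ddof=1) / n + females.var(ddof=1) / n)
z = diff / se
p = erfc(abs(z) / sqrt(2))  # two-sided p-value from a z-test

pooled_sd = sqrt((males.var(ddof=1) + females.var(ddof=1)) / 2)
d = diff / pooled_sd        # Cohen's d, a standardized effect size

print(f"p = {p:.2e}")  # tiny: "significant" by any conventional cutoff
print(f"d = {d:.3f}")  # ~0.02: the effect is about 2% of one SD
```

With n this large the p-value sails below any threshold, yet Cohen's d is around 0.02, far below even the usual "small effect" benchmark of 0.2 — exactly the kind of gap between significance and meaning the talk was about.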

Now comes the journal Basic and Applied Social Psychology, which is banning significance testing. Specifically, they have banned the null hypothesis significance testing procedure (NHSTP). Steve Novella, who writes at the excellent Science-Based Medicine blog, has a great discussion. He links to a video by Geoff Cumming that is similar to what I showed in the talk I gave.

All of this is well and good. Many Good Thinking People are cheering. But most of us publish in journals that require estimates of significance. My clinical society judges abstracts (which are severely limited in number of both talks and posters) based on some numskull criteria, which include "testable hypothesis" and "testing of hypothesis". They have not yet learned that horrible research comes in many flavors, and that one rule (must have p-value) does not a good piece of research make.

Another person whose view I respect is Andrew Gelman, who has a good but short post on this. Here's the close of his post:

Actually, I think standard errors, p-values, and confidence intervals can be very helpful in research when considered as convenient parts of a data analysis .... The problem comes when they're considered as the culmination of the analysis, as if "p less than .05" represents some kind of proof of something. I do like the idea of requiring that research claims stand on their own without requiring the (often spurious) support of p-values.

The bottom line always is: understand your data. Go back to what the effect size means. What are the data saying to you? What is the story? Use statistics to support what you do, but if they become the science of your research, you are in trouble.

Sustaining NIH funding then and now: 58% as many Type 2 awards in FY2014
Thu, 26 Feb 2015 04:48:26 +0000

Datahound has a cool new analysis posted on the distribution of competing continuation R01/R37 awards (Type 2 in NIH grant parlance).

There is one thing that I noticed that makes for a nice simple soundbite to go along with your other explanations to the willfully blind old guard about how much harder the NIH grant game is at the moment.

Datahound reports that in FY 1995 there were 2653 Type 2 competing continuation R01/R37 awards funded by the NIH. In FY 2014 there were only 1532 Type 2 competing continuation R01/R37 grants awarded.

I make this out to be 58% of the 1995 total.

This is a huge reduction. I had no idea that this was the case. I mean sure, I predicted that there would be a big decline in Type 2 following the ban of A2 revisions*. And I would have predicted that the post-Doubling, Undoubling, Defunding dismality would have had an impact on Type 2 awards. And I complained for years that the increasing odds of A0 apps being sent into the traffic holding pattern itself put a kibosh on Type 2 because PIs simply couldn't assume a competing continuation would be funded in time to avoid a gap. Consequently PIs were strategically putting in closely related but "new" apps in say Year 3 of the original noncompeting interval.

But I think if I had been asked to speculate I would have estimated a much smaller reduction.

*I can't wait until Datahound brackets this interval so we can see if this was the major effect or if the trend has developed more gradually since 1995.

Nature is not at all serious about double blind peer review
Wed, 25 Feb 2015 20:10:48 +0000

The announcement for the policy is here.

Before I get into this: it would be a good thing if the review of scientific manuscripts could be entirely blind, meaning that the authors do not know who is editing or reviewing their paper (the latter is almost always true already) and that the editors and reviewers do not know who the authors are.

The reason is simple. Acceptance of a manuscript for publication should be entirely on the basis of what is contained in that manuscript. It should rely in no way on the identity of the people submitting the manuscript. This is not true at present. The reputation and/or perceived power of the authors is hugely influential on what gets published in which journals. Particularly for what are perceived as the best or most elite journals. This is a fact.

The risk is that inferior science gets accepted for publication because of who the authors are and therefore that more meritorious science does not get accepted. Even more worrisome, science that is significantly flawed or wrong may get published because of author reputation when it would have otherwise been sent back for fixing of the flaws.

We should all be most interested in making science publication as excellent as possible.

Blinding of the peer review process is a decent way to minimize biases based on author identity, so it is a good thing.

My problem is that it cannot work, absent significant changes in the way academic publishing operates. Consequently, any attempt to conduct double-blinded review that does not address these significant issues is doomed to fail. And since anyone with half a brain can see the following concerns, if they argue this Nature initiative is a good idea, then I submit to you that they are engaged in a highly cynical effort to direct attention away from certain things. Things that we might describe as the real problem.

Here are the issues I see with the proposed Nature experiment.
1) It doesn't blind their editors. Nature uses a professional editorial staff who decide whether to send a manuscript out for peer review or just to summarily reject it. They select reviewers, make interim decisions, decide whether to send subsequent revised versions to review, select new or old reviewers and decide, again, whether to accept the manuscript. These editors, being human, are subject to tremendous biases based on author identity. Their role in the process is so tremendously powerful that blinding the reviewers but not the editors to the author identity is likely to have only minimal effect.

2) This policy is opt-in. HA! This is absurd. The people who are powerful and thus expected to benefit from their identity will not opt in. They'd be insane to do so. The people who are not powerful and are, as it happens, just exactly those people who are calling for blinded review so their work will have a fair chance on its own merits will opt-in but will gain no relative advantage by doing so.

3) The scientific manuscript as we currently know it is chock full of clues as to author identity. Even if you rigorously excluded "we previously reported..." statements and managed to even out the self-citations to a nonsuspicious level (no easy task on either account), there is still the issue of scientific interest. No matter what the topic, there is going to be a betting gradient for how likely different labs are to have produced the manuscript.

4) The Nature policy mentions no back-checking on whether their blinding actually works. This is key; see the above comment about the betting gradient. It is not sufficient to put formal de-identification in place. It is necessary to check with reviewers over the real practice of the policy to determine the extent to which blinding succeeds or fails. And you cannot simply brandish a less than 100% identification rate either. If the reviewer only thinks that the paper was written by Professor Smith, then the system is already lost, because that reviewer is being affected by the aforementioned issues of reputation and power even if she is wrong about the authors. That's on the tactical, paper-by-paper front. In the longer haul, the more reputed labs are generally going to be more actively submitting to a given journal, and thus the erroneous assumption will be more likely to accrue to them anyway.

So. We're left with a policy that can be put in place in a formal sense. Nature can claim that they have conducted "double blind" review of manuscripts.

They will not be able to show that review is truly blinded. More critically, they will not be able to show that author reputational bias has been significantly decoupled from the entire process, given the huge input from their editorial staff.

So anything that they conclude from this will be baseless. And therefore highly counterproductive to the overall mission.

Press!
Wed, 25 Feb 2015 16:10:57 +0000

All right then. This recent paper has a weird life of its own. I guess what I've learned is that academics really like papers about academics. And reporters are very interested in bias and funding policy. I've been asked to give an interview for radio! I'm apparently going to be on the German version of NPR sometime in March. I don't speak German, so I assume they'll translate or something.

This is exciting and nerve-wracking. I don't know what it means for my career (probably nothing), or if it's good for my ego or bad. I know that I need to be clear and careful about what I say. Don't overclaim; don't offer opinions on things I don't have expertise in. I build simulations. One of them was about peer review. That doesn't make me an expert in peer review. I'm still only an expert in modeling complex systems in health care.

I've made this clear to the interviewer already. We are apparently going to talk tomorrow. I am happy to discuss my work, and what I think the implications are. But I'm not going to put myself forward as an expert in the funding or review of grants. I'm a small-time player in the research game. Less than half a million in career funding. Fewer than 20 papers. H-index: 4. I'm utterly unimportant. But apparently, I've written an interesting paper.

Without having done it yet, here's my advice on giving radio interviews: Ask people who know more than you. This is my advice on pretty much everything in life. Get a diversity of experienced opinions, and then try your best. Err on the side of caution. Act humbly even if you don't feel humble. And remember that in a week, no one will give a flying fuck.

More on Meetings
Wed, 25 Feb 2015 14:51:48 +0000

Morgan Price's comment about going to meetings, which I blogged on the other day, had the following parenthetical end bit:

(Do I not get it because I don’t like going to meetings?)

Lots of people have talked about why go to meetings, and what's hard about them. The upshot was, learn to like going to meetings. Lots of good stuff. DM (natch) had something important to say:

drugmonkey : I think it also takes some time in the field for meetings to become less uncomfortable. The longer you've been going to the meeting(s) with the same old crowd, the more likely to have old friends swing by your poster, to see people in the coffee line to chat up, to go out to eat with. You are more comfortable getting up to make a comment at the microphone, to grab the person in the hallway to discuss your / their data and folks from the platform call you out when they know you can answer a question better than they can.

It's a bit self-reinforcing. Here's the logic: I am uncomfortable at meetings. So I don't like going. But when I do I'm even more uncomfortable so I don't see that I get anything out of them but feeling inadequate in whatever way. So I don't like meetings even more than I didn't like meetings before, and I can now justify the unimportance of meetings because I get nothing out of them.

But this is also self-correcting. If you do go to meetings, and go to meetings purposefully, you will start to meet people and you will have new friends for the next time. It took me a while to realize that yes, I'm going to have lunch alone and that's not as much fun as lunch with friends. But, gird your loins and talk to someone and invite yourself along. You can do a little homework in advance. Find a new young asst prof whose work you admire. Go up to them. Ask about their work. Ask to have lunch or coffee or dinner with them.


One good point Qaz makes is that science is dialog. My PhD advisor (indubitably rolling in hell at this very moment) used to say that science is not an edifice built brick by brick, and if I am your advisor, your thesis will not be a stone in that edifice. It is a living, breathing entity, he said, that if one is lucky, one can grow with and into. The implication, of course, is that we were all very lucky to be growing with him. "Under his care" would imply an involvement on his part that did not exist.

Qaz's point about good presentations is also important. Put your best self forward. Posters in particular open up those opportunities for dialog. If no one is talking to you, make friends with the posters on either side of you, or across the aisle. That is someone with whom you can go have coffee next year.

My answer to M. Price is: could be. We may not love all parts of science, but meetings, for all their pain, are totally worth it. Good meeting experiences are totally different from reading papers. Here's qaz again:

Too often people treat meetings as glorified reading sessions, where they sit in the back and let the text be talked at them. The whole point of a meeting is to engage the other individuals.

That's real science.

Year Distribution of Competing Renewal (Type 2) Grants from 1995-2014
Wed, 25 Feb 2015 14:18:42 +0000

R01 grants may be renewed, typically every 4-5 years. These are called "competing renewals" or "Type 2" grants. In the context of the "Emeritus Award" discussion, I examined the distribution of R01 grants that had been renewed over a long period of time. Here, I look at the distribution of Type 2 grants over the period from 1995 to 2014.

Data for Type 2 R01 grants (as well as the corresponding but much smaller number of R37 MERIT awards) for each year were compiled from NIH RePORTER. Note that these include only grants that were competitively renewed in a given year, not non-competing continuations (Type 5 awards). The distributions over the Support Year of the 2653 Type 2 awards from FY1995 and the 1532 Type 2 awards from FY2014 are shown below:

[Figure: Support-year distributions of Type 2 awards, FY1995 and FY2014]

The distribution for FY1995 shows peaks at Year 4 and Year 6, corresponding to renewals of initial (Type 1) grants of 3 and 5 years, respectively. There are additional peaks at Year 9 and Year 14. These correspond primarily to grants initially made for a period of 3 years and then renewed for 5-year periods.

The distribution for FY2014 is similar but shows some important differences. First, the overall amplitude is smaller, as there were only 58% as many Type 2 R01 grants awarded in FY2014 as in FY1995. Second, the FY2014 distribution shows peaks at Year 6 and Year 11. These correspond to grants awarded initially for a period of 5 years and then renewed for an additional 5-year period. Third, the FY2014 distribution shows a longer tail extending to Year 40 and beyond, corresponding to long-running grants.

The distributions are shown in normalized and integrated form below:

[Figure: Normalized and integrated distributions of Type 2 awards, FY1995 and FY2014]

These curves show more clearly the tail extending to longer grant durations. For FY1995, 20% of the grants are in Year 13 or beyond, whereas for FY2014, 20% of the grants are in Year 18 or beyond.
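The normalized and integrated curves are a simple transformation of the raw counts. For concreteness, here is a sketch with hypothetical counts (invented numbers, not the actual RePORTER data) of how one would compute the cumulative fraction and read off the year beyond which the top 20% of grants lie:

```python
import numpy as np

# Hypothetical counts of Type 2 awards by support year (Year 1 ... Year 20);
# these are invented numbers for illustration, not the actual RePORTER data.
counts = np.array([0, 0, 0, 120, 90, 260, 150, 90, 110, 60,
                   130, 70, 50, 60, 40, 30, 25, 20, 15, 10], dtype=float)

frac = counts / counts.sum()  # normalized distribution
cum = np.cumsum(frac)         # integrated (cumulative) form

# The year beyond which the top 20% of grants lie is the first year
# at which the cumulative fraction reaches 80%.
year_80 = int(np.searchsorted(cum, 0.80)) + 1
print(f"20% of grants are in Year {year_80} or beyond")
```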

The distribution of Type 2 grants can be seen more clearly by looking at the distribution summed over all of the years from FY1995 to FY2014, as shown below.

[Figure: Type 2 award distribution summed over FY1995-FY2014]

This shows peaks from Years 5-6 and Years 10-11 and then the tail extending from Year 15 and beyond.

The structure of this tail can be seen more clearly by replotting the data on a log (base 10) scale as shown below:

[Figure: Summed Type 2 distribution on a log (base 10) scale]

The portion of this graph corresponding to the extended tail is very nearly linear with a slight downward curvature. This indicates that the tail is approximately exponential. Fitting this curve yields an exponent of approximately -0.064/year. This corresponds to a half-life of 4.7 years. In other words, the chance that a long-standing grant is renewed each 4-5 year cycle is approximately 50%. The NIH-reported success rate for all competing renewal grants averaged 38% from FY2001 to FY2014. Thus, it appears that the likelihood that a longstanding R01 is competitively renewed is slightly, but not dramatically, higher than that for R01 grants overall. The slight downward curvature of the log plot likely reflects the fact that there were fewer longstanding grants in the earlier years of this analysis.
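The arithmetic connecting a fitted slope on a log10 plot to a half-life and a per-cycle renewal probability is short enough to spell out. A small sketch using the exponent quoted above:

```python
from math import log10

# Slope of the (nearly linear) tail on the log10 plot: about -0.064/year.
slope = -0.064

# If log10(N(t)) = log10(N0) + slope * t, the half-life t_half satisfies
# slope * t_half = log10(1/2), so t_half = -log10(2) / slope.
t_half = -log10(2) / slope
print(f"half-life ~ {t_half:.1f} years")  # ~4.7 years

# Equivalent survival (renewal) probability over one 4-5 year cycle:
for cycle_years in (4, 5):
    surv = 10 ** (slope * cycle_years)
    print(f"{cycle_years}-year cycle: {surv:.0%}")  # ~55% and ~48%
```

The 4- and 5-year survival probabilities bracket 50%, which is where the "renewed every 4-5 years with roughly even odds" reading comes from.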

An alternative approach to examining trends in the duration of these grants involves looking at the fraction of A0 applications among the funded Type 2 grants. I have previously examined this parameter in other contexts. A plot of the fraction of A0 applications among funded Type 2 grants as a function of the year of the grant is shown below:

[Figure: Fraction of A0 applications among funded Type 2 grants by grant year]


This fraction dips slightly for grant years 4-7 (corresponding to the first renewal) and then reaches a relatively stable level extending out to 40 years. This suggests that there is not a major increase in the likelihood of application success as the year of the grant increases.

Overall, these observations are consistent with the notion that R01 grants reach longstanding status through the perseverance of the principal investigators. Over time, these investigators continue to execute research programs that are sufficiently productive that they compete for renewal with a success rate close to 50% (at least historically). There does not appear to be a substantial advantage for longstanding grant applications above the general advantage for Type 2 versus Type 1 applications based on these publicly available data.

Repost: To a conference in baby-attachmode
Wed, 25 Feb 2015 07:16:12 +0000

Yesterday, Potnia Theron wrote about why it is so important to go to meetings to hear new ideas and stay creative. In response, Christina Lis wrote about how incredibly difficult and expensive this is when you have children. I couldn't agree more, having not gone to meetings (other than our local neuroscience meeting) in the past two years because of being pregnant and having small children. There are only so many options. Taking your kid(s) to a meeting means either bringing enough support so that you can still go to social events in the evenings (which is expensive) or skipping the social events, which takes away much of the usefulness of going. The alternative is leaving your kid(s) at home, which has its own challenges (ask my husband about not sleeping for a couple of nights the first time I left 15-month-old BlueEyes to go to a conference...). That's why I thought it would be fitting to repost my second blog post ever, about going to a conference with a small baby:


Last year's Society for Neuroscience meeting was right when I went back to work after my maternity leave. And since I had patched a whole bunch of cells while very pregnant, I even had something to present there. The meeting was right around the corner from where I live, which is why I decided that even though BlueEyes was only 4 months old, the whole family was going to the meeting (and in this case, by "meeting" I mean the actual science part, not so much the social and drinking part). So on Saturday and Sunday I put BlueEyes in a baby wrap (Girasol Chococabana for those of you interested), and walked around the conference.

SfN turned out to be very baby-friendly, since they even had a specific room for infant care, where you could nurse and change your baby. The only disadvantage was that this was kind of far away from the poster hall, so after I had checked out a poster or two I had to walk back there to nurse a hungry baby or change a diaper. Oh well, most people walk around the poster hall to meet people they know instead of actually looking at the posters anyway, right? A major unexpected disadvantage was that when you show up at someone's poster with a baby attached to you, they automatically assume that you've come to show your cute baby instead of to ask a serious science question. So not much science talk for me that weekend…

On Monday BlueEyes went to his usual daycare, and I traded the baby-in-wrap for my breast pump. This was potentially even bulkier and certainly more annoying to drag around all day. The same sort of thing as before happened where I would check out a bunch of posters (at least now I got to ask science-questions and have people answer them), and then have to walk back to the infant care room to pump milk. And after I presented my own poster I realized that whoever thought of four hour poster sessions had probably never lactated him- or herself….

A last thing to note is that the night after we took BlueEyes to SfN, he had his longest night sleep so far (a 6 hour stretch of sleep!). And mind you, this was in November... So I guess nothing puts our baby to sleep like a couple 1000 neuroscience posters!

So I didn't get a fundable score, either
Tue, 24 Feb 2015 18:15:35 +0000

Here are things that I am grateful for, right now, when I didn't get a fundable score:

1. I am old enough not to be totally incapacitated for 12 to 48 hours with grief and depression.

2. I am mature enough that I am not going to beat myself up and say that I'm stupid, that I'm incompetent, that my science sucks.

3. I have enough self control not to go out and get 3 gallons of expensive ice cream and eat it over a period of a few hours till I feel even more miserable about myself.

4. Nor am I going to take it out, in one form or another, on my current partner. I am not going to pick a fight so that I can say that no one cares at all about me. No one in my life right now deserves that.

5. I am not going to blame: the molecular geneticists, Millennials, clinical idiots who don't like animal models, GenX, Sally Rockey, the study section chair, the reviewers, my mother, men, or anyone else. Not because they aren't at fault, but because it doesn't matter. Fault is irrelevant here. Getting funded is the goal.

These are all things I have done in the past.

It will be a while till I get pink sheets (reviews). I will read them. I will be unhappy. I will try & rewrite (as a new grant, of course).

Meanwhile, it's time to trot out ideas for proposals B and C and work on making them presentable, i.e., submittable.

My heart goes out to all the young people who are in the same boat, folks who haven't learned what I've learned about responding to being trashed by study section. Because it hurts a lot to be rejected.

If my thoughts help, great. That's why I write this blog. If they don't, well, my heart is still with you, whether you want it or not.


