Is "The Big Pitch" testing what NSF thinks it is?

May 25, 2012 | Education & Careers

Say you have an experiment that is working, but you want to see if you can get a better yield. How do you go about troubleshooting the work? Would you change one potential variable at a time so that you could see the effect of each independently before you started to mix and match, or would you change everything at once and sort out the mess later?

NSF seems to be taking the latter approach in its Big Pitch initiative. In an attempt to determine whether potentially "transformative" proposals were being selected against in the review process, the MCB division of NSF BIO chose a panel with 55 proposals on the biological consequences of climate change and asked the PIs to write both a traditional proposal and a 2-page "anonymous" proposal. The carrot was that the awards were going to be split between the panel reviewing the full proposals and the separate panel reviewing the short ones. Guess what? Everyone agreed to write the 2-pager*.

So what happened? The two panels came up with very different "High Priority" lists, and now people are quick to say "ZOMG, normal panels are just a buddy system discriminating against the lowly and unknown!"

To wit:

Shirley Taylor, an awardee during the evolution round of the Big Pitch, says a comparison of the reviews she got on the two versions of her proposal convinced her that anonymity had worked in her favor. An associate professor of microbiology at Virginia Commonwealth University in Richmond, Taylor had failed twice to win funding from the National Institutes of Health to study the role of an enzyme in modifying mitochondrial DNA.

The Big Pitch format could “remove bias and allow better support of smaller, innovative research groups that otherwise might be overlooked,” Taylor adds. “The current system is definitely a ‘buddy system’ where it's not what you know but who you know, where you work, and where you publish. And the rich get richer.”

OF COURSE it was the anonymity! It couldn't have been that Dr. Taylor is better at selling an idea in 2 pages than in 15 (or 12 in NIH's case). It also couldn't have been that an idea can look good until you ask who is going to do it.

Both times, she says, reviewers questioned the validity of her preliminary results because she had few publications to her credit. Some reviews of her full proposal to NSF expressed the same concern. Without a biographical sketch, Taylor says, reviewers of the anonymous proposal could “focus on the novelty of the science, and this is what allowed my proposal to be funded.”

Now, I have to say I'm conflicted on this last part. Whereas I don't believe that you need to have XX number of papers to your name before you can be funded to do something, I have been first-hand witness to shenanigans whereby someone was able to sell science that was beyond them (and with which they had no experience) simply because the reviewers didn't have a CV. It didn't end well and significantly stunted at least one trainee's career. No matter what people claim in terms of "it's all who you know**", you will never convince me that a PI's CV is not a valuable document to a reviewer. We may judge the future on the science, but we judge the likelihood of success, to some degree, on the PI's history. It's all we have to go on, barring advances in time travel.

An interesting observation from this process is something that DrugMonkey has contended for a while with regard to reviewers - just because you are convinced that Dr. Smith reviewed your proposal doesn't make it true. The opposite appears to be true here as well:

In both Big Pitch rounds, reviewers evaluating the anonymous two-pagers were later told the identity of the applicants. In some cases, Chitnis says, panelists were surprised to learn that a highly rated two-pager had come from a researcher they had never heard of. In others, he notes, reviewers “thought they knew who this person is going to be” only to find that the application came from a former student of the presumed bigwig, working at a small institution.

Does this prove anything? It only seems to muddy the waters for me. In two pages I can sell a project simply by being well-read on the topic, and what are the chances that anyone who is well-read on a topic is going to concentrate on anything but what the bigwigs are doing? When an applicant proposes something along the lines of what the trend setters are doing, reviewers may assume that the proposal is coming from the bigwigs themselves, when in reality anyone can write one without preliminary data or a CV to back the work up.

I've thought about this a lot and, in all honesty, I don't know where I come down on it. I know that I don't view this pu-pu platter experiment as "proof" of anything; the interpretation here is entirely in the eye of the beholder. Those who weren't making the jump because of their CVs or their inability to write a convincing traditional proposal will be quick to scream vindication, but time will tell whether these PIs become successful with these grant moneys. Whereas this "experiment" is being pitched as a look at peer review, we won't know the real results for another 3-5 years, when comparisons can be made between the outcomes of each review method.
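
A quick illustration of why the split decision between the two panels is, on its own, hard to interpret: even two panels scoring the same proposals in the same format would not hand back identical "High Priority" lists, simply because of reviewer noise. The toy simulation below is purely illustrative - the pool of 55 proposals comes from the experiment, but the list size and both noise numbers are made up.

```python
# Toy simulation (illustrative only): two independent panels score the
# SAME 55 proposals in the SAME format. How much do their "High Priority"
# lists overlap when the only difference between them is reviewer noise?
# The pool size (55) comes from the Big Pitch; TOP_K and both standard
# deviations are assumptions picked for illustration.
import random

random.seed(1)

N_PROPOSALS = 55      # size of the Big Pitch proposal pool
TOP_K = 10            # hypothetical size of a "High Priority" list
QUALITY_SD = 1.0      # assumed spread in underlying proposal quality
PANEL_NOISE_SD = 0.8  # assumed panel-to-panel scoring noise

# Hypothetical "true" quality of each proposal
quality = [random.gauss(0, QUALITY_SD) for _ in range(N_PROPOSALS)]

def high_priority_list(quality, noise_sd, k):
    """Return the set of proposal indices a noisy panel puts in its top k."""
    scores = [q + random.gauss(0, noise_sd) for q in quality]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

panel_a = high_priority_list(quality, PANEL_NOISE_SD, TOP_K)
panel_b = high_priority_list(quality, PANEL_NOISE_SD, TOP_K)

print(f"Overlap between two same-format panels: {len(panel_a & panel_b)}/{TOP_K}")
# If the overlap falls well short of TOP_K even here, then the 15-page and
# 2-page panels producing different lists doesn't, by itself, tell you that
# anonymity (or proposal length) was the reason.
```

If the overlap comes up well short even with identical formats, then two panels disagreeing across formats doesn't, on its own, pin the difference on anonymity or page length. That is essentially the missing control: a second panel per format, so the panel-to-panel noise can be separated from any format effect.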

One other thing that jumped out at me from this article was the following:

In January, the divisions of Integrative Organismal Systems and Environmental Biology began inviting four-page preproposals that reviewers evaluate before soliciting full proposals from a subset of applicants. (The Big Pitch grant reviewers told NSF officials that four pages of information would be better than two.) Chitnis says his own division, MCB, plans to institute the same system soon.

Get ready MCBers.

*IME, a 2-page "sell" is so incredibly different to write and read than a 15-page traditional proposal that the two are almost not directly comparable as documents.

**And I am speaking as someone who is not in The Club, by any stretch.


  • anon says:

    I like that the NSF is willing to do experiments like this. If done properly, they could be very revealing and helpful for everyone - the panelists, applicants, and POs. In this case, they lacked a proper control. It would have been much more effort, but there should have been FOUR study sections - two for each condition (15p vs 2p). The extra study section for each condition would control for variation between panels when looking at the same proposals in the same format. I'd be happy to volunteer for such an experiment either as a reviewer or as an applicant.

  • Morgan Price says:

    If trainees are the big losers when grants go to underprepared PIs, then should the trainees have known better? And should more of the $$ follow the trainees rather than go to PIs who then hire trainees?

  • MediumPriority4Life says:

    I did the 2 page big pitch for MCB's 2nd go-around at this. I come from a less well known university but a big post-doc lab. I got medium priority for the big pitch and medium priority for the 15 pager. The 15 pager placed 2% of proposals in high priority and 27% in medium. The 2 pager, though, placed 4% in high priority and 55% in medium priority. All my comments for the 2 pager said that they wanted more information but loved the idea.

    To me it seems the 2 pager just is not sufficient; it makes everything, or at least 59% of things, sound great and worthy of funding (maybe they are), but perhaps the cream can't separate out at the top.

  • proflikesubstance says:

    MP4L, sounds like you ended up in roughly the same spot both ways. I would be curious how many went to each extreme and why.

  • Meghan says:

    I was just looking through NSF's Award and Administration Guide and noticed this:
    "The NSF decision to support or not to support a proposed project is based to a considerable extent upon its evaluation of the proposed PI/PD and any identified co-PI/co-PD’s knowledge of the field of study and his/her capabilities to conduct the project in an efficient and productive manner."
    http://www.nsf.gov/pubs/policydocs/pappguide/nsf11001/aag_2.jsp#IIB2

    I wonder how they reconcile that with the anonymous review?

  • proflikesubstance says:

    They would probably argue that the PI would have to demonstrate that in the 2 page pitch, but I agree that seems unlikely.

  • MediumPriority4Life says:

    Right back at you, Dude: not too soon on MCB!

    Taken from the MCB page at NSF: "January 28, 2013 is the next deadline for proposals submitted to core programs in the Division of Molecular and Cellular Biosciences (MCB). The Division will accept full proposals submitted in response to solicitation 11-545. MCB has no plans to require preliminary proposals before the submission of full proposals."

  • proflikesubstance says:

    MP4L, if that's the new decision, then good. It also suggests the demise of the one submission deadline, because if DEB and IOS stay with the preproposals, there is no chance of co-review with MCB. Alternatively, if all three are working on something that is in the middle, then MCB has no reason to go through what DEB and IOS just did.
