Scientopia (http://scientopia.org)

That study of peer review perplexes me
http://drugmonkey.scientopia.org/2015/04/24/that-study-of-peer-review-perplexes-me/ | Fri, 24 Apr 2015

I just can't understand what is valuable about showing that a 1%ile difference in voted score leads to a 2% difference in total citations of papers attributed to that grant award. All discussions of whether NIH peer review is working or broken center on the supposed failure to fund meritorious grants and the alleged funding of non-meritorious grants.

Please show me one PI who is upset that her funded 4%ile grant really deserved a 2%ile, and who claims that this shows peer review is horribly broken.

The real issue, how a grant overlooked by the system would fare *were it to be funded*, is actually addressed to some extent by the graph of citations to clearly outlying grants funded by exception.

This is cast as Program rescuing the rare, exceptionally brilliant proposal. But again, how do we know that the ones Program fails to rescue wouldn't have performed just as well?

Science Article with an Analysis of NIH Peer Review
http://datahound.scientopia.org/2015/04/23/science-article-with-an-analysis-of-nih-peer-review/ | Thu, 23 Apr 2015

In the current issue of Science, Li and Agha present an analysis of the ability of the NIH peer review system to predict subsequent productivity (in terms of publications, citations, and patents linked to particular grants). These economists obtained access to the major NIH databases in a manner that allowed them to associate publications, citations, and patents with particular R01 grants and their priority scores. They analyzed R01 grants from 1980 to 2008, a total of 137,215 grants. This follows on studies (here and here) that I did while I was at NIH with a much smaller data set from a single year and a single institute, as well as a publication from NHLBI staff.

The authors' major conclusions are that peer review scores (percentiles) do predict subsequent productivity metrics in a statistically significant manner at a population level. Because of the large data set, the authors are able to examine other potentially confounding factors (including grant history, institutional affiliation, degree type, and career stage), and they conclude that the statistically significant result persists even when correcting for these factors.

Taking a step back, how did they perform the analysis?

(1) They assembled lists of funded R01 grants (both new (Type 1) and competing renewal (Type 2)) from 1980 to 2006.

(2) They assembled publications (within 5 years of grant approval) and citations (through 2013) linked to each grant.

(3) They assembled patents linked either directly (cited in the patent application) or indirectly (cited in a publication listed in the application) for each grant.

There are certainly challenges in assembling this data set, and some of these are discussed in the supplementary material to the paper. For example, not all publications cite grant support, and other methods must be used. Also, some publications are supported by more than one grant; in this case, the publication was linked to both grants.

The assembled data set (for publications) is shown below:

[Figure: linked publications per grant versus percentile score, from Li and Agha]

By eye, this shows a drop in the number of linked publications with increasing percentile score. But this is due primarily to the fact that more grants were funded with lower (better) percentile scores over this period. What does this distribution look like?

I had assembled an NIH-wide funding curve for FY2007 as part of the Enhancing Peer Review study (shown below):

[Figure: NIH-wide R01 funding curve for FY2007, from the Enhancing Peer Review study]

To estimate this curve for the full period, I used success rates and numbers of grants funded to produce the following:

[Figure: estimated R01 funding curve for the full period]

Of course, after constructing this graph, I noticed that Figure 1 in the supplementary material for the paper includes the actual data on this distribution. While the agreement is satisfying, I was reminded of a favorite saying from graduate school: A week in the lab can save you at least an hour in the library. This curve accounts (at least partially) for the overall trend observed in the data. The ability of peer review scores to predict outcomes lies in more subtle aspects of the data.

To extract the information about the role of peer review, the authors used Poisson regression methods. These methods assume that the distribution of values (i.e., publications or citations) at each x-coordinate (i.e., percentile score) can be approximated as a Poisson distribution. The occurrence of such distributions in these data makes sense, since they are based on counting numbers of outputs. The Poisson distribution has the characteristic that its expected value is the same as its variance, so that only a single parameter is necessary to fit the trends in an entire curve that follows such a distribution. The formula for a Poisson distribution at a point k (an integer) is f(k) = λ^k e^(−λ) / k!. Here, λ corresponds to the expected value on the y axis and k corresponds to the value on the x axis.
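
As a minimal numerical sketch of these properties (the parameter value here is an arbitrary choice of mine, not a number from the paper):

```python
from scipy.stats import poisson

lam = 5.0  # arbitrary expected value, for illustration only

# f(k) = lambda^k * e^(-lambda) / k!, evaluated at a few counts
for count in range(10):
    print(count, poisson.pmf(count, lam))

# The mean and variance of a Poisson distribution are both lambda,
# which is why a single parameter pins down the whole curve.
print(poisson.mean(lam), poisson.var(lam))  # 5.0 and 5.0
```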

Table 1 in the paper presents "the coefficient of regression on scores for a single Poisson regression of grant outcomes on peer review scores." These coefficients have values from -0.0076 to -0.0215. These values are the β coefficients in a fit of the form ln(λ) = α + βk where k is the percentile score from 1 to 100 and λ is the expected value for the grant outcome (e.g. number of publications).
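
To make that functional form concrete, here is a minimal sketch of fitting ln(λ) = α + βk by Poisson regression with statsmodels, using synthetic data generated from the model (the data, and the coefficient values used to generate them, are illustrative stand-ins rather than the paper's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for the grant data: percentile scores k in 1..100
# and outcome counts drawn from ln(lambda) = alpha + beta * k.
alpha_true, beta_true = 3.7, -0.0158
k = rng.integers(1, 101, size=10_000)
outcomes = rng.poisson(np.exp(alpha_true + beta_true * k))

X = sm.add_constant(k.astype(float))  # column of ones (alpha) plus k (beta)
fit = sm.GLM(outcomes, X, family=sm.families.Poisson()).fit()
print(fit.params)  # recovers approximately [3.7, -0.0158]
```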

From the paper, for the model that includes corrections for five additional factors (subject-year, PI publication history, PI career characteristics, PI grant history, and PI institution/demographics; see below and the supplementary material for how these corrections are included), the coefficient of regression for both publications and citations is β = -0.0158. A plot of the value of λ as a function of percentile score (k) for publications (with α estimated to be 3.7) is shown below:

[Figure: λ as a function of percentile score k for publications]

The shape of this curve is determined primarily by the value of β.
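
Generating this curve from those two numbers is straightforward; as a sketch (note that the value at k=1 matches the λ quoted below):

```python
import numpy as np

alpha, beta = 3.7, -0.0158
k = np.arange(1, 101)           # percentile scores
lam = np.exp(alpha + beta * k)  # expected publications at each score

print(lam[0])   # ~39.8 at k=1, the value used in the next figure
print(lam[49])  # ~18.4 at k=50
```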

The value of λ at each point determines the Poisson distribution at that point. For example, in this model at k=1, λ=39.81, and the expected Poisson distribution is shown below:

[Figure: Poisson distribution for k=1 (λ=39.81)]

There will be a corresponding Poisson distribution at each percentile score (value of k). These distributions for k=1 and k=50 superimposed on the overall curve of λ as a function of k (from above) are shown below:

[Figure: Poisson distributions at k=1 and k=50 superimposed on the curve of λ versus k]

This represents the model of the distributions. However, it does not take into account the number of grants funded at each percentile score, shown above. Including that funding distribution yields an overall distribution of the expected number of publications as a function of percentile score for this model, shown as a contour plot below (where the contours represent 75%, 50%, 25%, 10%, and 1% of the maximum density of publications):

[Figure: contour plot of the modeled density of publications versus percentile score]
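
As a rough sketch of how such a plot can be constructed, one can weight the Poisson distribution at each percentile score by the number of grants funded at that score. The funding curve below is a hypothetical stand-in for the estimated curve shown earlier:

```python
import numpy as np
from scipy.stats import poisson
import matplotlib.pyplot as plt

alpha, beta = 3.7, -0.0158
k = np.arange(1, 101)           # percentile scores
lam = np.exp(alpha + beta * k)  # expected publications at each score

# Hypothetical funding curve: most awards at good (low) percentiles,
# falling off by roughly the 25th percentile. A stand-in, not the real data.
n_funded = 1000.0 / (1.0 + np.exp((k - 20.0) / 4.0))

pubs = np.arange(0, 81)
# density[i, j]: expected number of grants at percentile k[j] with pubs[i] papers
density = poisson.pmf(pubs[:, None], lam[None, :]) * n_funded[None, :]

levels = density.max() * np.array([0.01, 0.10, 0.25, 0.50, 0.75])
plt.contour(k, pubs, density, levels=levels)
plt.xlabel("percentile score")
plt.ylabel("publications")
plt.show()
```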

This figure can be compared with the first figure above showing the data from the paper. The agreement appears reasonable, although there appear to be more grants with a smaller number of publications than would be expected from this Poisson regression model. This may reflect differences in publication patterns between fields, the unequal value of different publications, and differences in productivity between PIs.

With this (long-winded) description of the analysis methods, what conclusions can be drawn from the paper?

First, there does appear to be a statistically significant relationship between peer review percentile scores and subsequent productivity metrics for this population. This relationship was stronger for citations than it was for publication numbers.

Second, the authors studied the effects of correcting for various potential confounding factors. These included:

(i) "Subject-year", determined by correcting for differences in metrics by study section and by year, as well as by funding institute. This should at least partially account for differences in fields, although some study sections review grants from fields with quite different publication patterns (e.g. chemistry versus biochemistry or mouse models versus human studies).

(ii) "PI publication history", determined by the PI's publication history for the five years prior to the grant application, including the number of publications, the number of citations up to the time of grant application, the number of publications in the top 0.1%, 1%, and 5% in terms of citations in the year of application, and these same factors limited to first-author or last-author publications.

(iii) "PI career characteristics", determined by degree type (Ph.D., M.D., or both) and the number of years since completion of her/his terminal degree.

(iv) "PI grant history", categorized as one previous R01 grant, more than one previous R01 grant, one other type of NIH grant, or two or more other NIH grants.

(v) "PI institution/demographics", determined by whether the PI's institution falls within the top 5, top 10, top 20, or top 100 institutions within this data set in terms of the number of awards, together with demographic parameters (gender; Asian or Hispanic ethnicity) estimated from PI names.

Including each of the factors sequentially in the regression analysis did not affect the value of β substantially, particularly for citations as an output. This was interpreted to mean that the statistically significant relationship between percentile score and subsequent productivity metrics persists even when correcting for these factors. In addition, examining results related to these factors revealed that (from the supplementary material):

"In particular, we see that competing renewals receive 49% more citations, which may be reflective of more citations accruing to more mature research agendas (P<0.001). Applicants with M.D. degrees amass more citations to their resulting publications (P<0.001), which may be a function of the types of journals they publish in, citation norms, and number of papers published in those fields. Applicants from research institutions with the most awarded NIH grants garner more citations (P<0.001), as do applicants who have previously received R01 grants (P<0.001). Lastly, researchers early in their career tend to produce more highly cited work than more mature researchers (P<0.001)."

So what is the bottom line? This paper does appear to demonstrate that NIH peer review does predict subsequent productivity metrics (numbers of publications and citations) at a population level, even correcting for many potential confounding factors in reasonable ways. In my opinion, this is an important finding given the dependence of the biomedical enterprise on the NIH peer review system. At the same time, one must keep in mind the relatively shallow slope of the overall trend and the large amount of variation at each percentile score. A 1 percentile point change in peer review score resulted in, on average, a 1.8% decrease in the number of citations attributed to the grant. By my estimate (based on the model in this paper), the odds that funding a grant with a 1 percentile point better peer review score over an alternative will result in more citations are 1.07 to 1. The shallow slope and the large amount of "scatter" are not at all surprising given that grant peer review is largely about predicting the future, which is a challenging process, and that the NIH portfolio includes many quite different areas of science.
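
That odds estimate can be checked with a quick Monte Carlo simulation under the paper's Poisson model (a sketch under stated assumptions: the baseline λ is my guess, not a number from the paper, and real citation counts are more dispersed than a pure Poisson):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = -0.0158
lam = 20.0  # assumed baseline expected citations; the result depends on this

n = 1_000_000
# A 1-point better (lower) percentile multiplies lambda by exp(-beta) ~ 1.016.
better = rng.poisson(lam * np.exp(-beta), size=n)
worse = rng.poisson(lam, size=n)

odds = np.mean(better > worse) / np.mean(worse > better)
print(odds)  # ~1.08 for this lambda, close to the ~1.07 odds quoted above
```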

One disappointing aspect of this paper is the title: "Big names or big ideas: Do peer-review panels select the best science proposals?" This is an interesting and important question, but the analysis is not suited to address it except peripherally. The analysis does demonstrate that PI factors (e.g. publication history, institutional affiliation) do not dominate the effects seen with peer review, but this does not really speak to "big names" versus "big ideas" in a more general way. Furthermore, while the authors admit that they cannot study unfunded proposals, it is likely that some of the "best science proposals" fall into this category. The authors do note that some of the proposals funded with poor percentile scores (presumably picked up by NIH program staff) were quite productive.

There is a lot more to digest in this paper. I welcome reactions and questions.

Scientific Publishers
http://drugmonkey.scientopia.org/2015/04/23/scientific-publishers/ | Thu, 23 Apr 2015

Scientific publishers being told they can't keep fleecing the taxpayer so badly are basically Cliven Bundy. Discuss.

New plan for angry authors
http://drugmonkey.scientopia.org/2015/04/23/new-plan-for-angry-authors/ | Thu, 23 Apr 2015

Two years after your paper is published in the Journal of SocietyB, send the rejecting Editor the citation report showing that it quadrupled the JIF of JournalA, the journal that rejected it.

Let's make this a thing, people. 

Thought of the Day
http://drugmonkey.scientopia.org/2015/04/22/thought-of-the-day-34/ | Wed, 22 Apr 2015

You know those clickbait links at the bottom of some dubious "news" website articles, including HuffPo? Usually about the latest celebrity pictures or "hottest NFL wives" or something?

There is a trend for "white celebrity you didn't know was married to a black spouse!" 

Now it's "...and aren't their biracial kids kyooooot?"

This feels like interracial fetish porn to me. 

Icky. 

Discuss.

Newt Gingrich to the rescue! (Again)
http://drugmonkey.scientopia.org/2015/04/22/newt-gingrich-to-the-rescue-again/ | Wed, 22 Apr 2015

Newt has called for substantial increases in the NIH allocation.

The Daily Show is just plain wrong on pot being non-addictive
http://drugmonkey.scientopia.org/2015/04/21/the-daily-show-is-just-plain-wrong/ | Tue, 21 Apr 2015

In the 420 bit from this week, Jessica Williams asserts that marijuana is "a non-addictive proven medical treatment".

Marijuana is most certainly addictive.

In 2012, 17.5% of all substance abuse treatment admissions had marijuana as their primary abused drug. Alcohol alone accounted for 21.5%, heroin 16.3%, and cocaine 6.9%.

Daily marijuana smokers use 3 times a day on average and have little variability from day to day.

Pregnant women are unwilling or unable to stop smoking pot almost daily. Increasing numbers of pregnant women are seeking help to discontinue pot use.

At least one woman found out that her hyperemesis during pregnancy was caused by the pot, not morning sickness.

Marijuana is addictive in adolescents.

When adolescents stop smoking weed, their memory gets better.

About six percent of high school seniors are smoking pot almost every day.

Clinical trials of medications to help people who are addicted to marijuana stop using are far from rare.

Francophones are addicted to pot.

Yes, Dutch people are addicted to pot.

Many people with cannabis hyperemesis syndrome are unable to stop smoking pot, even though it is severely incapacitating them.

Marijuana is addictive.

About 37% of frequent pot users will transition to dependence in three years.

Oh, and pot users are not awesome, friendly, and mellow; in fact, nondependent users are more impulsive and hostile on the days they use pot compared with nonsmoking days.

FLAKKA! (and other failures of the alleged profession of journalism)
http://drugmonkey.scientopia.org/2015/04/20/flakka-and-other-failures-of-the-alleged-profession-of-journalism/ | Mon, 20 Apr 2015

Flakka is just the latest in a long line of stimulant drugs that can, in some very rare cases, result in astonishing public behavior.

Such as running nude through the streets to escape "unknown people trying to kill him".

Such as trying to kick in the door of a police station to get IN so as to escape cars that were supposedly chasing him.

Such as trying to shoot oneself on a rooftop, naked.

Such as trying to have carnal relations with a tree after proclaiming oneself to be Thor.

These stories are like crack to the mainstream media. They have been telling these stories for years, encompassing public scares over PCP, crack cocaine and methamphetamine over the decades past. More recently we've seen these types of stories about synthetic cathinones, in particular under the generic term "bath salts".

Sprinkled amongst the stories about classical psychomotor stimulant effects, we have stories of overdose involving synthetic opioids, MDMA and/or Molly and stories of adverse psychotropic effects of synthetic cannabinoid products. I've addressed some of these issues in prior posts and for today I want to discuss the stimulants of more traditional effect.

My greatest frustration with the reporting is not actually the breathless sensationalism, although that runs a close second. The biggest problem is the lack of verification that the bizarre behavior (or overdose) was actually associated with ingestion of the drug alleged in the initial reporting. I have not seen one single verification of alpha-PVP in the body tissues of these recent Florida cases where the subjects reported consuming Flakka. We still do not know exactly what drugs were consumed by the 11 Wesleyan University students who became ill enough to be hospitalized. We don't know what caused the death of Kimchi Truong at last year's Coachella music festival.

Oftentimes there are multiple media reports which, to their credit, mention that toxicology testing will take some weeks to verify. And yet. Rarely is there ever a follow-up accounting. And when there is a follow-up, well, it gets very poor penetration, and people often parrot the wrong information even years later.

The Florida Causeway Cannibal is a case in point. At the time of the initial event it was almost universally reported to be due to "bath salts", i.e. MDPV. Toxicology reporting found no sign of any synthetic cathinone in Mr. Eugene.

It is long past time for us to hold the media as accountable for accuracy and followup on drug-related stories as we do for, say, sports reporting.

Now, there are a couple of bright lights in this generally dismal area of news reporting. Here's a local story that reported MDA, not MDMA, was to blame for a death (although they still screw up: MDA is not a "parent" drug of MDMA). In 2013 there was follow-up on three music festival deaths in New York confirming that MDMA, methylone, and the combination of the two caused the three fatalities. We need this kind of attention paid to all of these cases.

Getting back to the current media storm over "Flakka", which is alpha-pyrrolidinopentiophenone (alpha-PVP), I have a few links for you if you are interested in additional reading on this drug.

@forensictoxguy posted a list of scientific papers on alpha-PVP at The Dose Makes the Poison blog. It is not a very long list at present! (Marusich et al, 2014 is probably the place to start your reading.)

The Dose Makes the Poison discussed alpha-PVP back in early 2014... this is not a new 2015 drug by any means.

Michael Taffe from The Scripps Research Institute [PubMed; Lab Site] gives a preview of a paper in press showing alpha-PVP and MDPV are pretty similar to each other in rat self-administration.

There was also a post on the Taffe blog suggesting that alpha-PVP samples submitted to ecstasydata.org were more consistently pure than MDPV and some other street drugs.

See also: Wikipedia and the NIDA brief.

Jacob Sullum has written a pretty good opinion piece at Forbes, "Fear Of Flakka: Anti-Drug Hysteria Validates Itself."

Review of the above information will help you to assess claims in the media that Flakka is "[insert more addictive, more dangerous, more powerful, worse] than [insert bath salts, MDPV, methamphetamine, cocaine]".

tldr; It isn't.

It will also assist you in coming to an understanding that Flakka is likely to be just as addictive and problematic as these previously sensationalized stimulants.

tldr; It is.

In my view, the scope of the Flakka problem over the coming years will be dictated by user popularity and availability, and not by anything particularly unique about the molecular structure of alpha-PVP.

What's in your back pocket?
http://blather.scientopia.org/2015/04/20/whats-in-your-back-pocket/ | Mon, 20 Apr 2015

In academic circles it's common to hear junior folks on the TT advised to have a backup plan in their back pocket. You know, just in case. Not that they'll need it, of course. But it's good to have one. Right?

What about the tenured/not-so-junior?


It's the WaterMAN award, after all...
http://proflikesubstance.scientopia.org/2015/04/20/its-the-waterman-award-afterall/ | Mon, 20 Apr 2015

When it comes to early career awards in the NSF world, the Alan T. Waterman Award is about as good as it gets. The awardee gets $1 million over 5 years and a pretty medal. Open to any field of science, the major criteria for winning are listed as:

Candidates should have demonstrated exceptional individual achievements in scientific or engineering research of sufficient quality to place them at the forefront of their peers. Criteria include originality, innovation, and significant impact on the field.

Last week NSF announced the 2015 Waterman Award winner, Andrea Alù. I'm sure he's a good engineer and scientist, in general. I have no doubt that all of the recipients are exceptional at what they do and deserving of the award. However, at this point, the string of men receiving the award is getting a bit hard to ignore. It has now been over a decade since a woman won the Waterman, and aside from a five-year stretch (2000-2004) in which three women were recognized, the award has gone to women only two other times since it was established in 1975!

That's 40 years and five women who can claim to have won. At some point that starts to look a little embarrassing in how blatantly it exposes an undercurrent of sexism in science and in the evaluation of who significantly impacts a field. Apparently NSF doesn't think a <13% awardee rate for women is over that threshold, just yet.
