Archive for the 'Peer Review' category

A can't-miss inquiry to Editor following the initial review of your paper

Jul 23 2014 Published by under Careerism, Conduct of Science, Peer Review

Dear Editor Whitehare,

Do you really expect us to complete the additional experiments that Reviewer #3 insisted were necessary? You DO realize that if we did those experiments the paper would be upgraded enough that we sure as hell would be submitting it upstream of your raggedy ass publication, right?

Collegially,
The Authors

21 responses so far

Sex differences in K99/R00 awardees from my favorite ICs

Jul 21 2014 Published by under Grantsmanship, NIH, NIH Careerism, NIH funding, Peer Review

Datahound has some very interesting analyses up regarding NIH-wide sex differences in the success of the K99/R00 program.

Of the 218 men with K99 awards, 201 (or 92%) went on to activate the R00 portion. Of the 142 women, 127 (or 89%) went on to the R00 phase. The difference between these percentages is not statistically significant.

Of the 201 men with R00 awards, 114 (57%) have gone on to receive at least 1 R01 award to date. In contrast, of the 127 women with R00 awards, only 53 (42%) have received an R01 award. This difference is jarring and is statistically significant (P value=0.009).

Yowza.
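For the curious, here is a minimal sketch (Python with scipy; my assumption of the tests, not Datahound's actual code) of how one might check those two comparisons from the quoted counts:

```python
from scipy.stats import fisher_exact, chi2_contingency

# K99 -> R00 activation: men 201 of 218, women 127 of 142 (from the quote above)
activation = [[201, 218 - 201], [127, 142 - 127]]
print(fisher_exact(activation)[1])   # p-value well above 0.05, i.e. no detectable difference

# R00 -> at least one R01: men 114 of 201, women 53 of 127
r01_conversion = [[114, 201 - 114], [53, 127 - 53]]
print(chi2_contingency(r01_conversion, correction=False)[1])  # p ~ 0.008, near the quoted 0.009
```

The exact p-value shifts a bit depending on which test you run (Fisher's exact versus chi-square, with or without continuity correction), but the punchline is the same either way.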

So per my usual, I'm very interested in what the ICs that are closest to my lab's heart have been up to with this program. Looking at K99 awardees from 07 to 09, I find women PIs to constitute 3/3, 1/3 and 2/4 in one Institute and 1/7, 2/6 and 5/14 in the other Institute. One of these is doing better than the other, and I will just note that this was before the arrival of a Director who has been very vocal about sex discrimination in science and academia.

In terms of the conversion to R01 funding that is the subject of Datahound's post, the smaller Institute has decent outcomes* for K99 awardees from 07 (R01, R21, nothing), 08 (R01, R01, R01) and 09 (P20 component, U component, nothing, nothing).

In the other Institute, the single woman from 07 did not appear to convert to the R00 phase but Google suggests she made Assistant Professor rank anyway. No additional NIH funding. The rest of the 07 class contains four with R01s and two with nothing. In 08, the women PIs are split (one R01, one nothing), similar to the men (two R01s, two with nothing). In 09 the women PIs have two with R01s, one with an R03 and two with nothing.

So from this qualitative look, nothing is out of step with Datahound's NIH-wide stats. There are 14/37 women PIs; this 38% is similar to the NIH-wide 39% Datahound quoted, although there may be a difference between these two ICs (30% vs 60%) that could stand some inquiry. One of the 37 K99 awardees failed to convert to the R00 from the K99 (but seems to be faculty anyway). Grant conversion past the R00 is looking to be roughly half or a bit better.

I didn't do the men for the 2009 cohort in the larger Institute, but otherwise the sex differences in terms of getting/not getting additional funding beyond the R00 seem pretty similar.

I do hope Datahound's stats open some eyes at the NIH, however. Sure, there are reasons to potentially excuse away a sex difference in the rates of landing additional research funding past the R00. But I am reminded of a graph Sally Rockey posted regarding the success rate on R01-equivalent awards. It showed that men and women PIs had nearly identical success rates on new (Type 1) proposals, but that women had slightly lower success on Renewal (Type 2) applications. That maps onto the rates of conversion to R00 and the acquisition of additional funding, if you squint a bit.

Are women overall less productive once they've landed some initial funding? Are they viewed negatively on the continuation of a project but not on the initiation of it? Are women too humble about what they have accomplished?
__
*I'm counting components of P or U mechanisms but not pilot awards.

14 responses so far

Scientific peer review is not broken, but your Glamour humping ways are

I have recently had a not-atypical publishing experience for me. Submitted a manuscript, got a set of comments back in about four weeks. Comments were informed, pointed a finger at some weak points in the paper but did not go all nonlinear about what else they'd like to see or whinge about mechanism or talk about how I could really increase the "likely impact". The AE gave a decision of minor revisions. We made them and resubmitted. The AE accepted the paper.

Boom.

The manuscript had been previously rejected from somewhere else. And we'd revised the manuscript according to those prior comments as best we could. I assume that made the subsequent submission go smoother but it is not impossible we simply would have received major revisions for the original version.

Either way, the process went as I think it should.

This brings me around to the folks who think that peer review of manuscripts is irretrievably broken and needs to be replaced with something NEW!!!!11!!!.

Try working in the normal scientific world for awhile. Say, four years. Submit to regular journals edited by actual working peer scientists. ONLY. Submit to journals of pedestrian and/or unimpressive Impact Factor (that would be the 2-4 range from my frame of reference). Submit interesting stories- whether they are "complete" or "demonstrate mechanism" or any of that bullshit. Then submit the next part of the continuing story you are working on. Repeat.

Oh, and make sure to submit to journals that don't require any page charge. Don't worry, they exist.

Give your trainees plenty of opportunity to be first author. Give them lots of experience writing and allow them to put their own thoughts into the paper. After all, there will be many more of them to go around.

See how the process works. Evaluate the quality of review. Decide whether your science has been helped or hindered by doing this.

Then revisit your prior complaints about how peer review is broken.

And figure out just how many of them have more to do with your own Glamour humping ways than they do with anything about the structure of Editor managed peer-review of scientific manuscripts.

__
Also see Post-publication peer review and preprint fans

16 responses so far

Quality of grant review

Jun 13 2014 Published by under Grant Review, NIH funding, Peer Review

Where are all the outraged complaints about the quality of grant peer review and Errors Of Fact for grants that were scored within the payline?

I mean, if the problem is with bad review it should plague the top scoring applications as much as the rest of the distribution. Right?

47 responses so far

I’m Your Huckleberry

Jun 06 2014 Published by under Fixing the NIH, Grant Review, NIH, Peer Review

This is a guest appearance by the bluebird of Twitter happiness known as My T Chondria. I am almost positive the bird does some sort of science at some sort of US institution of scientific research.

I’m your biased reviewer. I’ve sat on study sections for most of the years I’ve been a faculty member and I’m biased. I’m exactly who Sally Rockey and Richard Nakamura are targeting in their call for proposals to lessen bias and increase impartial reviewing of NIH applications.

Webster’s defines bias as a “mental tendency or inclination,” listing synonyms including “predisposition, preconception, predilection, partiality, and proclivity”. When I review a grant from an African American applicant, I have a preconception of who they are. I refine that judgment based on their training, publications and productivity.

I should share that I’m also biased in my review of applicants who have health issues, are women, are older than 30 and have children. I’ve had every one of these types of trainees in my lab, and my experiences with them have led me to develop partiality and preconceptions that impact my opinions and judgments. Parts of my preconceptions arise from my experiences with these trainees in my lab, as well as with those I interacted with while serving on my University’s admissions committee. I was biased when I performed those duties as well.

Anyone who pretends to be utterly impartial is dangerous and hurtful to those we say we value as a scientific community. I am frankly stunned to see so many tone-deaf and thoughtless comments from people claiming they are deeply offended at this ‘mindless drivel’.
[Screenshot of a comment from the Rock Talk blog]
Dr Marconi is just one of many scientists who claim, “I’ve never seen this, so it must not be true”. Scientists’ careers are based on things that cannot be seen; we collect and interpret data and develop an understanding based on that which we cannot see. Data has been collected and the results are alarming and open for active debate.

Bias is far more insidious than racism. Racists reveal themselves and their ignorance and are often dismissed by ‘educated’ society for their extremist views. Bias is far subtler. Even if it results in an imperceptible change in scoring, we are in a climate where these things matter, where razor-fine funding decisions are being made.

It's the people who are sure they have no bias that I fear. I know I have bias. We are simply incapable of being utterly impartial and anyone who says they are impartial is dangerously obtuse to these problems at best and a liar at worst.

22 responses so far

Put up or shut up time, all ye OpenSciencePostPubReview Waccaloons!

Oct 24 2013 Published by under Academics, Conduct of Science, Open Access, Peer Review

PubMed Commons has finally incorporated a comment feature.

NCBI has released a pilot version of a new service in PubMed that allows researchers to post comments on individual PubMed abstracts. Called PubMed Commons, this service is an initiative of the NIH leadership in response to repeated requests by the scientific community for such a forum to be part of PubMed. We hope that PubMed Commons will leverage the social power of the internet to encourage constructive criticism and high quality discussions of scientific issues that will both enhance understanding and provide new avenues of collaboration within the community.

This is described as being in beta test version and for now is only open to authors of articles already listed in PubMed, so far as I can tell.

Perhaps not as Open as some would wish but it is a pretty good start.

I cannot WAIT to see how this shakes out.

The Open-Everything, RetractionWatch, ReplicationEleventy, PeerReviewFailz, etc acolytes of various strains would have us believe that this is the way to save all of science.

This step of PubMed brings the online commenting to the best place, i.e., where everyone searches out the papers, instead of the commercially beneficial place. It will link, I presume, the commentary to the openly-available PMC version once the 12 month embargo elapses for each paper. All in all, a good place for this to occur.

I will be eager to see if there is any adoption of commenting, to see the type of comments that are offered and to assess whether certain kinds of papers get more commentary than do others. All in all, this is going to be a neat little experiment for the conduct-of-science geeks to observe.

I recommend you sign up as soon as possible. I'm sure the devout and TrueBelievers would beg you to make a comment on a paper yourself so, sure, go and comment on some paper.

You can search out commented papers with this string, apparently.
has_user_comments[sb]

In case you are interested in seeing what sorts of comments are being made.
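If you would rather pull that search programmatically than through the web interface, here is a minimal sketch using Biopython's Entrez module (the email address and retmax are placeholders, and this assumes the has_user_comments[sb] filter stays live beyond the pilot):

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address; placeholder here

# Search PubMed for records that carry PubMed Commons comments
handle = Entrez.esearch(db="pubmed", term="has_user_comments[sb]", retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of commented papers
print(record["IdList"])  # PMIDs for the first 20 hits
```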

25 responses so far

Post-publication peer review and preprint fans

Anyone who thinks this is a good idea for the biomedical sciences has to have served as an Associate Editor for at least 50 submitted manuscripts or there is no reason to listen to their opinion.

28 responses so far

Repost: Study Section, Act I

I think it has been some time since I last reposted this. This originally appeared Jun 11, 2008.


Time: February, June or October
Setting: The Washington Triangle National Hotel, Washington DC

    Dramatis Personæ:

  • Assistant Professor Yun Gun (ad hoc)
  • Associate Professor Rap I.D. Squirrel (standing member)
  • Professor H. Ed Badger (standing member, second term)
  • Dr. Cat Herder (Scientific Review Officer)
  • The Chorus (assorted members of the Panel)
  • Lurkers (various Program Officers, off in the shadows)

Continue Reading »

No responses yet

If you are going to talk about "tiers", then you'd better own that

Occasionally during the review of careers or grant applications you will see dismissive comments on the journals in which someone has published their work. This is not news to you. Terms like "low-impact journals" are wonderfully imprecise and yet deliciously mean. Yes, the term reflects the fact that the reviewer himself couldn't be bothered to actually review the science IN those papers, nor to acquaint himself with the notorious skew of real-world impact that exists within and across journals.

More hilarious to me is the use of the word "tier". As in "The work from the prior interval of support was mostly published in second tier journals...".

It is almost always second tier that is used.

But this is never correct in my experience.

If we're talking Impact Factor (and these people are, believe it) then there is a "first" tier of journals populated by Cell, Nature and Science.

In the Neurosciences, the next tier is a place (IF in the teens) in which Nature Neuroscience and Neuron dominate. No question. THIS is the "second tier".

A jump down to the IF 12 or so of PNAS most definitely represents a different "tier" if you are going to talk about meaningful differences/similarities in IF.

Then we step down to the circa IF 7-8 range populated by J Neuroscience, Neuropsychopharmacology and Biological Psychiatry. Demonstrably fourth tier.

So for the most part when people are talking about "second tier journals" they are probably down at the FIFTH tier (IF 4-6, in my estimation).

I also argue that the run-of-the-mill society-level journals extend below this fifth tier into a "rest of the pack" zone in which there is a meaningful perception difference from the fifth tier. So... six tiers.

Then we have the paper-bagger dump journals. Demonstrably a seventh tier. (And seven is such a nice number isn't it?)

So there you have it. If you* are going to use "tier" to sneer at the journals in which someone publishes, for goodness sake do it right, will ya?
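If you like your sneering operationalized, here is a rough sketch of the scheme above as code. The post pins only a few tiers to explicit IF ranges (the teens, circa 12, 7-8, 4-6), so the remaining cutoffs are my assumptions, not gospel:

```python
def journal_tier(impact_factor: float) -> int:
    """Map a journal Impact Factor to a tier per the seven-tier scheme above."""
    if impact_factor >= 28:   # Cell, Nature, Science territory (assumed cutoff)
        return 1
    if impact_factor >= 13:   # the teens: Nature Neuroscience, Neuron
        return 2
    if impact_factor >= 10:   # PNAS, circa IF 12
        return 3
    if impact_factor >= 7:    # J Neuroscience, Neuropsychopharmacology, Biological Psychiatry
        return 4
    if impact_factor >= 4:    # where most "second tier" sneers actually land
        return 5
    if impact_factor >= 2:    # run-of-the-mill society journals (assumed cutoff)
        return 6
    return 7                  # the paper-bagger dump journals

print(journal_tier(5.2))  # -> 5, i.e. what usually gets mislabeled "second tier"
```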

___
*Of course it is people** who publish frequently in the third and fourth tiers and only rarely in the second tier that use "second tier journal" to refer to what is in the fifth or sixth tier of IFs. Always.

**For those rare few that publish extensively in the first tier, hey, you feel free to describe all the rest as "second tier". Go nuts.

22 responses so far

GrantRant XI

Jan 31 2013 Published by under Grant Review, Grantsmanship, NIH, NIH funding, Peer Review

Combative responses to prior review are an exceptionally stupid thing to write. Even if you are right on the merits.

Your grant has been sunk in one page, you poor, poor fool.

12 responses so far
