Check out this new blog by @boehninglab
In email chatting with PP, as is our wont, I had the following query.
Do you think these "do it to Julia" muppethuggers really think they have the best objective solution? Or do they really know they are just looking out for número uno?
What do you think, Dear Reader?
Relevant to Sci's recent ranting about the paper chase in science...
Sorry reviewers, I am not burning a year and $250K to satisfy your curiosity about something stupid for a journal of this IF.
Why do many urban jurisdictions ban the keeping of a rooster but you think banning established dangers to life and limb (instead of mere sleep/wake cycle) is the equivalent of racial discrimination?
*having just learned at PhysioProf's blog that Jack Russell terriers are "statistically" dangerous just like pitbull terriers perhaps we are getting at the real problem. Terriers.
You will have noticed that I repost my old blog entries with some frequency. I do so mostly when I think it has been long enough that the blog readership has changed enough that they will be new to some eyes. This is related to the fact that I am convinced blog readership is more like news readership...ephemeral and current. The majority of the viewer traffic lands on the blog through current links rather than through google searches that land on older content.
My view of the reading of scientific content is different. Sure, new and topical stuff will get the most eyes, but this is not, precisely, where the primary value of academic papers lies. Is this particularly true when it comes to review articles? I think so.
I've run across a most curious situation. I noticed this in one of my various feeds.
Cosyns B, Droogmans S, Rosenhek R, Lancellotti P. Republished: Drug-induced valvular heart disease. Postgrad Med J. 2013 Mar;89(1049):173-8. doi: 10.1136/postgradmedj-2012-302239rep.
Since "Republished" caught my eye, I clicked on the first author and found:
Cosyns B, Droogmans S, Rosenhek R, Lancellotti P. Drug-induced valvular heart disease. Heart. 2013 Jan;99(1):7-12. doi: 10.1136/heartjnl-2012-302239. Epub 2012 Aug 8.
Tracking over to the journal page for the Republished one I found it has the following "Footnote".
This is a reprint of a paper that first appeared in Heart, 2013, Volume 99, pages 7–12.
That note appears prominently on the PDF of the article (in the sidebar block for author details and the submission/acceptance dates) and there is a header on every page of the article that reads "Republishedreview" (sic).
I then did a PubMed search for "Republished" and found that the Postgrad Med J really is quite fond of this strategy. There are some other players in this game too, though. The Br J Sports Med seems to like the "Republished research" tag, for example.
I've seen the occasional retracted-and-republished strategy for dealing with errata. But this was a new one for me, to my recollection. I scanned through the Postgrad Med J Instructions to Authors and it wasn't really clear if these are unsolicited submissions or requested by the Editorial staff. I'd tend to suspect the latter but both versions of the review say "Provenance and peer review Not commissioned; externally peer reviewed." at the bottom. Interestingly the Republished one has color figures where the original has B/W images....it does look nicer. And I can make out no indication in the Republished one that it has the permission of the original journal Heart to republish the work. They are both in the BMJ Group, however, so maybe this issue* is irrelevant?
I find myself curious about the advantages and disadvantages for both authors and the journals/publishers for doing this sort of thing. To be honest, what I'd like to see is the bloggy "Update:" tag added to the title of those reviews from authors that seem to publish essentially the same review over and over. Particularly when it is just an updating of progress since they last wrote a review. That would be a great service to the reader.
*I.e., if the Publisher, not the journal, holds copyright and the Publisher is the same for both journals...the "permission" is implied or implicit? But then we have the issue of "self-plagiarism" that seems to bother the humanities majors' sentiments which are insinuating themselves into the business of science lately.
Time: February, June or October
Setting: The Washington Triangle National Hotel, Washington DC
- Dramatis Personæ:
- Assistant Professor Yun Gun (ad hoc)
- Associate Professor Rap I.D. Squirrel (standing member)
- Professor H. Ed Badger (standing member, second term)
- Dr. Cat Herder (Scientific Review Officer)
- The Chorus (assorted members of the Panel)
- Lurkers (various Program Officers, off in the shadows)
A new paper has been published that purports to refute the conclusion of the Ginther report (also see this, this, this, this) that there exists substantial bias in the awarding of NIH grants to white versus black PIs.
Jiansheng Yang, Michael W. Vannier, Fang Wang, Yan Deng, Fengrong Ou, James Bennett, Yang Liu, Ge Wang. A bibliometric analysis of academic publication and NIH funding. Journal of Informetrics 7 (2013) 318–324 [ journal link ]
My biggest concern here has to do with the sampling...otherwise I guess we should view it as data that contributes to the overall picture. Much as Ginther et al drew a host of "oh it must really be..." alternative explanations, so should this.
The authors targeted 92 medical schools (1) and selected 31 odd-number-rank schools (2). They identified white and African American faculty members (from, ah, web page pictures and, um "names". also "resumes as needed".(3)) They then did a 1:2 pairing of black with white faculty in the same discipline, with the same degree and within the same medical school (4), same sex and title/academic rank.
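The pairing procedure described above amounts to exact matching on the listed attributes. Here is a minimal Python sketch of that step; the field names are hypothetical, since the paper's actual variables are not reproduced in this post:

```python
# Sketch of the paper's 1:2 exact-matching step as described: each black
# faculty member is paired with two white faculty sharing the same medical
# school, discipline, degree, sex, and title/rank. Field names below are
# hypothetical illustrations, not the paper's actual variables.
from collections import defaultdict

def match_1_to_2(black_faculty, white_faculty):
    """Return (matched_triples, unmatched) using exact-key matching."""
    key = lambda f: (f["school"], f["discipline"], f["degree"],
                     f["sex"], f["rank"])
    # Index the white faculty pool by the matching key.
    pool = defaultdict(list)
    for w in white_faculty:
        pool[key(w)].append(w)
    triples, unmatched = [], []
    for b in black_faculty:
        candidates = pool[key(b)]
        if len(candidates) >= 2:
            triples.append((b, candidates.pop(), candidates.pop()))
        else:
            # Excluded for lack of matches, as happened for two cases
            # in the paper.
            unmatched.append(b)
    return triples, unmatched
```

Note that exact matching like this silently discards anyone without two eligible counterparts, which is part of why the final analytic sample got so small.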
So. They were able to identify 130 black professors of which only 14 were funded by the NIH from 2008 to 2011(5). Two were excluded because they couldn't find matching white faculty and one for failing to have any SCI/Web of Science presence (this was used to generate h-index, citations etc).
Eleven. Eleven faculty (out of 130) members, plus an additional 22 matched white faculty, comprise the sample for the correlation of scientific productivity with grant award. Kinda thin.
They took the rankings of the medical schools from US News and World Report and divided the institutions into thirds ("Tiers"). Ten of the grant sample pairs came from the top third of medical schools and one from the second tier. (6)
In Table 2 the paper lists the mean (7) papers, citations and a couple of productivity indices they made up (8). Black investigators had fewer papers (but not significantly different), significantly fewer citations (9) and significantly lower Pc-index.
Second, the productivity measure in terms of peers' citations, or the Pc-index, is the sum of the numbers of citations to one's papers weighted by his/her a-indices respectively. While the Pr-index is useful for immediate productivity measurement, the Pc-index is retrospective and generally more relevant.
There was no difference in the PcXImpactFactor index. Interesting how they describe the one that identified a difference as "generally more relevant", isn't it?
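For concreteness, here is a minimal Python sketch of the Pc-index as the quoted definition describes it. The a-index formula below is my assumption (a harmonic positional-credit split over the byline); the post does not reproduce the paper's actual weighting:

```python
# Pc-index per the quoted definition: the sum of citations to each of an
# author's papers, weighted by his/her a-index for that paper. The a-index
# here is a HYPOTHETICAL positional-credit weight (harmonic share by byline
# position); the paper's exact formula is not given in the post.

def a_index(position, n_authors):
    """Hypothetical credit share for an author at `position` of `n_authors`."""
    harmonic = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / position) / harmonic

def pc_index(papers):
    """papers: iterable of (citations, author_position, n_authors) tuples."""
    return sum(cites * a_index(pos, n) for cites, pos, n in papers)

# Illustration with invented numbers: first-author papers count for more.
papers = [(120, 1, 3), (45, 2, 4), (10, 1, 1)]
print(round(pc_index(papers), 2))
```

Whatever the exact weighting, the key feature is that the measure inherits all the citation-rate quirks of subfield and author-position conventions, which matters for point 9 below.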
Then we move on to tables 3 and 4 in which the authors show that if you "normalize" the PIs' award funding by the various performance measures (10) there is no difference between black and white professors.
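The "normalization" move in tables 3 and 4, as I read it, is simply award dollars divided by a productivity measure, compared across groups. A trivial sketch, with invented numbers for illustration only:

```python
# Tables 3 and 4 "normalize" funding by productivity: dollars per unit of
# output. All numbers below are invented; the point is that two PIs with
# very different raw totals can show identical normalized funding.

def normalized_funding(award_dollars, productivity):
    return award_dollars / productivity

print(normalized_funding(1_500_000, 3000))  # dollars per citation
print(normalized_funding(750_000, 1500))    # same ratio, half the money
```

Of course, this only makes the disparity vanish if you accept that funding *should* scale linearly with these particular productivity measures, which is exactly the assumption questioned in point 10 below.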
There are a few more complaints about the earlier part of the study, but that part isn't really focused on the grant-getting so I'll leave it for now. It reflects the entire 130-pair sample and examines the productivity measures. There are interesting tidbits in the fact that they only had significant differences in the Asst professor ranks. In the larger NIH-grant picture, perhaps their excuse of too few black Full and Associate professors for analysis is highly meaningful for the overall disparity of grant award? Then there was the observation of differences only in the Assistant professors at the top one-third of medical schools but not in the bottom two thirds.
I'll end with my observations:
1) why not academic departments? what proportion of the NIH PI population is at medical schools versus regular academic departments? what about non-University institutions?
2) why not all of them?
3) really? like they never heard of passing. Also "white"? What sort of "white" are we talking here? How do we know their sample of white medical school faculty matches the overall NIH sample of white PIs?
4) so the sample had to be really narrow here because they had to find disciplinary descriptions broad enough that they even had African American professors represented. This will not be the case everywhere.
5) isn't the whole issue that is at the heart of Ginther those investigators who were NOT funded by the NIH? That's what assessing the disparity is about....figuring out if there are "missing" investigators who should have been funded but were not. Right? Determining whether those funded black investigators are as good as a sample of white investigators is beside the point. I really need to chase down the exact quote but one of the ERA era leaders said something to the effect that women will enjoy true equality not when they can succeed by being better than all the men but when all they have to do is be as good as the worst men in a given workplace. The same logic applies here. The focus should be on the whole distribution of funded investigators. It is irrelevant if, say, black investigators who "should" be at Tier 2b Med school are really employed at Tier 1c Med schools. What matters is if there are black scientists who are just as good as Tier 3f Med school white investigators but are not getting the funding their counterparts are enjoying.
6) ok, whut? why this skew for the top end? if they sought to focus on the elite, why not just sample all of the schools in the top third? or once you get past this the NIH grants are few and thin on the ground? particularly for black investigators perhaps? or for both white and black professors?
7) all of a sudden the white sample is down to 11, should have been 22. I can't figure out what they did here.
8) the a-index they base much of this on seems to be an attempt to parse author credit depending on position in the author list, number of authors, etc. yeah....that's not resting on a bunch of subfield(9) practice equivalencies, is it?
9) yeah, the disciplinary "matching" isn't working for me here. if the pairs were within Medical School and within discipline presumably this means within Department. This is almost certain to mean that the pairs differed in subdisciplinary issues like model, technical approaches, etc. Differences that can be even more significant contributors to citations than are the broad disciplinary labels. Now true, we'd want to know if there was any evidence that black investigators were more likely to be in lower citation, slower pub rate subfields...
10) This also depends on there being a direct and positive correlation between funding and "productivity". As one example, human imaging research is really expensive, generates papers slowly, rarely ends up in CNS journals and probably isn't cited that highly. People who do such work are living in the same pharmacology, psychiatry and neuroscience departments that contain bench jockey labs shivving each other in the back to race to the latest CNS scoop job. Same title, same department but....comparable? please. oh yeah, see 9) again.
Against my usual principles in such matters, I've been reading pro-cyclist Tyler Hamilton's confessional book. It isn't the finest writing in the world, as you might imagine. There are also a lot of specifics about particular races, events and participants/characters that will be of interest only to those who followed pro cycling during Tyler's career.
There is a great deal of parallel here for the top level ranks of competitive sciencing. A great deal. And if we do not clamp down hard on where the Glamour Game has been taking science lately, this is where we are headed.
A place where "everybody is doing it, so we're just leveling the playing field by photoshopping bands" is true, if not an excuse.
I suggest you read Hamilton's book with a constant eye on science fraud.