For me, last night brought awareness of a new low point in the dismal, embarrassing behavior of the rank and file of the Republican party in these fair Uuuuunited States. It was noticeably more depressing than usual because it was so tawdry and pathetic. No, not AP wire tapping. Not Benghazi.
I refer to umbrellagate.
First, the idiot mouthbreathing knuckledraggers were delighting in the notion that Obama "had" to have someone else hold an umbrella over him. Complete with anecdata showing other Presidents holding their own umbrellas.
I concluded this morning that it is really rather remarkable, and a testament to basic American decency, that despite all their machinations the Republicans have not been able to produce the rampant, postapocalyptic movie fantasy USA that they seem to desire for some reason.
Since I know many of my readers are comparative children who may have missed the legendary sketch comedy show....
There's some Twittage today about the Glamour Science situation and what we (meaning the relatively established professoriat) are doing to back up our fine criticisms. Particularly in the face of younger and transitioning scientists who realize that they need to play the GlamourChase game as hard as they can if they expect to make it.
Personally, I don't think we need some overt revolution of radical shunning of anything having to do with high Impact Factor journals to have a substantial effect. Refusing to play the game has its advantages. I ran off a couple of quick Twitts having to do with choices we can make.
First, never let data go unpublished for lack of impact.
To me the absolutely most corrosive part of GlamourIdiot science is that lots and lots of perfectly fine data go unpublished. Forever. This is for several reasons, including the fact that at least five person-years of work go into the CNS paper and, even with ridiculous amounts of Supplementary Figures, only a fraction gets into press. There's a lot of dross that nobody wants to see, sure, but there's also a lot of stuff that would help other people out. Save them some blind alleys if nothing else. (Did we mention this is being done on the federal taxpayer dime? And that grant dollars are scarce? Wouldn't the NIH want most of the work they paid for made available...?)

Then there's the scoopage factor: if someone else gets there first, it automatically downgrades your work...so the GlamourDouche lab goes in another direction to try to salvage another high-profile publication. So there's another bunch of figures trashed. Figures that, save for the scooping, would have been in the same damn high-IF journal! Jesus, this is INSANE, right? Yeah, well, welcome to GlamourScience.

Then we have projects that just aren't cool enough in terms of the result. Some PIs simply won't let their labs publish them for fear of diminishing the aggregate lab JIF level. Again...crazy, right? Why the hell does a PI with 5 CNS papers a year give a flying fig if a postdoc sneaks out an IF 5 paper?

There's an instructional part here for postdocs: some of this lack of publication is your own damn fault. Yes, you who have drunk the FlavorAde participate in this too. Why? Because you don't force the PI to see sense. For one thing, let me tell you, the hard-hearted PI's heart tends to soften when an essentially ready-to-submit manuscript crosses her desk with a clear rationale for why it is okay (and necessary) to publish the data and why this particular journal is perfect, save for the IF. Don't be afraid to play on her scoop fears now...
"We gotta get this in somewhere, I hear Postdoc Lin has her story ready to go in our competitor lab!" Some mentors will be susceptible to the "I need X first author pubs to get a shot at a job and I already have the two CNS papers so...." argument.
Second, never ever decide what to cite based on JIF.
Ever. It's hard. I know. You are steeped in turning first to the big papers in high-reputation, single-word-title journals. This is unnecessary, you know. Cite the right paper that makes the right point for which you are citing it.
Third, if you can't cite first/best/recent...go with best over first
I tend to, all else equal, go with a citation strategy that pays homage to the first paper for a given point, the best one, and then maybe a recent one to show the continuation of the theme, topicality, etc. The best is rarely the GlamourMag one, although when you get down to the sub-10 IF level in my fields you might see a bit of a correlation. The first observation, especially if it is coolio stuff, tends to have been in a GlamourMag, which is why I make the point. But hey, if it isn't, cite the first one. Give some cred to the overlooked person who published a finding 10 years before some big lab jumped all over it.
Fourth- review manuscripts on your principles. Get your peers into high IF journals
You know what they want to hear, those GlamourEditors. Impact, importance and eleventy-six kinds of pizazz. Write your reviews accordingly to get your peers' solid, if not really Glamourous, stuff into those journals. Destabilize the system from within. Just be subtle about it or the Associate Editors will no longer send you stuff to review.
A few weeks ago, I attended the Women and Leadership conference on campus that featured a conversation between President Shirley Tilghman and Wilson School professor Anne-Marie Slaughter, and I participated in the breakout session afterward that allowed current undergraduate women to speak informally with older and presumably wiser alumnae. I attended the event with my best friend since our freshman year in 1973. You girls glazed over at preliminary comments about our professional accomplishments and the importance of networking. Then the conversation shifted in tone and interest level when one of you asked how have Kendall and I sustained a friendship for 40 years. You asked if we were ever jealous of each other. You asked about the value of our friendship, about our husbands and children. Clearly, you don’t want any more career advice. At your core, you know that there are other things that you need that nobody is addressing. A lifelong friend is one of them. Finding the right man to marry is another.
Jesus. The "MRS degree"? What fucking year is this again? 2013, right?
Oh, right. It's because the elite of this world have such special problems in this regard, isn't it?
As Princeton women, we have almost priced ourselves out of the market. Simply put, there is a very limited population of men who are as smart or smarter than we are. And I say again — you will never again be surrounded by this concentration of men who are worthy of you.
Of course, once you graduate, you will meet men who are your intellectual equal — just not that many of them. And, you could choose to marry a man who has other things to recommend him besides a soaring intellect. But ultimately, it will frustrate you to be with a man who just isn’t as smart as you.
So Princeton has cornered the market on smart men, eh? What easily falsifiable claptrap. Maybe once these Precious Princetonian Princesses are out in the world they find that the "smart men" aren't enamored of elitist, pretentious twits who have fully embraced their ILAF snobbery? naaahh.... couldn't be.
Here is another truth that you know, but nobody is talking about. As freshman women, you have four classes of men to choose from. Every year, you lose the men in the senior class, and you become older than the class of incoming freshman men. So, by the time you are a senior, you basically have only the men in your own class to choose from, and frankly, they now have four classes of women to choose from. Maybe you should have been a little nicer to these guys when you were freshmen?
If I had daughters, this is what I would be telling them.
I don't even know where to start. The assumption that you can only marry a man your age or older if you are a woman? This woman has basically failed to mature past the high school prom level. My goodness, what a twit. Or is this really about the underclassmen failing to put out enough for her darling boys, who allegedly have their pick of any woman in the world?
I am the mother of two sons who are both Princetonians. My older son had the good judgment and great fortune to marry a classmate of his, but he could have married anyone. My younger son is a junior and the universe of women he can marry is limitless.
Rest easy, o ye Editors of Glamour Magazines of Science. I have been reminded that there are many who will be up against the wall before you, come the revolution.
(ILAF = Ivy League Asshole Factories, coined by our good blog friend bill.)
To be absolutely clear, I use the term "dump journal" without malice. Some do, I know, but I do not. I use it to refer to journals of last resort. The ones where you and your subfield are perfectly willing to publish stuff and, more importantly, perfectly willing to cite other papers. Sure, it isn't viewed as awesome, but it is....respectable. The Editor and sub-editors, probably the editorial board, are known people. Established figures who publish most of their own papers in much, much higher IF journals. It is considered a place where the peer review is solid, conducted by appropriate experts who, btw, review extensively for journals higher up the food chain.
What interests me today, Dear Reader, are the perceptions and beliefs of those people who are involved in the dump journal: authors who submit work there, the Editor and any sub-editors...and the reviewers. Do we all commonly view the venue in question as a "dump journal"? Or are there those who are surprised and a bit offended that anyone else would consider their solid, society-level journals to be such a thing?
Are there those who recognize that others view the journal as a dump journal but wish to work to change this reputation? By being harsher during the review process than is warranted given the history of the journal? That approach is a game of chicken though...if you think a dump journal is getting too uppity for its current IF then you are going to just move on to some other journal for your data-dumping purposes, are you not? If a publisher or journal staff wanted to make a serious move up the relative rankings, they'd better have a plan and a steely nerve if you ask me.
This brings me around to my fascination with PLoS ONE and subjective notions of its quality and importance. What IS this journal? Is it a dumping ground for stuff you had rejected elsewhere on "importance" and "impact" grounds and you just want the damn data out there already? That would qualify as a dump journal in my view. Or do you view it as a potential primary venue...because it enjoys an IF in the 4s and that's well into run-of-the-mill decent for your subfield?
Furthermore, how does this color your interaction with the journal? I know we have a few folks around here who function as Academic Editors. Are you one of those who think PLoS ONE should be ever upping its "quality" in an attempt to improve its reputation? Do you fear it becoming a "dump journal"? Or do you embrace that status?
Are you involved with another journal that some might consider a dump journal for your field? Do you think of it this way yourself? Or do you see it as a solid journal, and it is that other journal, 0.245 IF points down, which is the real dump journal?
An example of a rejected descriptive manuscript would be a survey of changes in gene expression or cytokine production under a given condition. These manuscripts usually fare poorly in the review process and are assigned low priority on the grounds that they are merely descriptive; some journals categorically reject such manuscripts (B. Bassler, S. Bell, A. Cowman, B. Goldman, D. Holden, V. Miller, T. Pugsley, and B. Simons, Mol. Microbiol. 52: 311–312, 2004). Although survey studies may have some value, their value is greatly enhanced when the data lead to a hypothesis-driven experiment. For example, consider a cytokine expression study in which an increase in a specific inflammatory mediator is inferred to be important because its expression changes during infection. Such an inference cannot be made on correlation alone, since correlation does not necessarily imply a causal relationship. The study might be labeled “descriptive” and assigned low priority. On the other hand, imagine the same study in which the investigators use the initial data to perform a specific experiment to establish that blocking the cytokine has a certain effect while increasing expression of the cytokine has the opposite effect. By manipulating the system, the investigators transform their study from merely descriptive to hypothesis driven. Hence, the problem is not that the study is descriptive per se but rather that there is a preference for studies that provide novel mechanistic insights.
But how do you choose to block the cytokine? Pharmacologically? With gene manipulations? Which cells are generating those cytokines and how do you know that? Huh? Are there other players that regulate the cytokine expression? Wait, have you done the structure of the cytokine interacting with its target?
The point is that there is always some other experiment that really, truly explains the "mechanism". Always.
Suppose some species of laboratory animal (or humans!) are differentially affected by the infection and we happen to know something about differences in that "mediator" between species. Is this getting at "mechanism" or merely descriptive? How about if we modify the relevant infectious microbe? Are we testing other mechanisms of action...or just further describing the phenomenon?
This is why people who natter on with great confidence that they are the arbiters of what is "merely descriptive" and what is "mechanistic" are full of stuff and nonsense. And why they are the very idiots who compliment the Emperor on his fine new Nature publication clothes.
They need to be sent to remedial philosophy of science coursework.
The authors end with:
Descriptive observations play a vital role in scientific progress, particularly during the initial explorations made possible by technological breakthroughs. At its best, descriptive research can illuminate novel phenomena or give rise to novel hypotheses that can in turn be examined by hypothesis-driven research. However, descriptive research by itself is seldom conclusive. Thus, descriptive and hypothesis-driven research should be seen as complementary and iterative (D. B. Kell and S. G. Oliver, Bioessays 26:99–105, 2004). Observation, description, and the formulation and testing of novel hypotheses are all essential to scientific progress. The value of combining these elements is almost indescribable.
They almost get it. I completely agree with the "complementary and iterative" part as this is the very essence of the "on the shoulders of giants" part of scientific advance. However, what they are implying here is that the combining of elements has to be in the same paper, certainly for the journal Infection and Immunity. This is where they go badly wrong.