Archive for the 'Impact Factor' category

BJP pulls a neat little self-citation trick

Sep 24 2013 Published by under Academics, Impact Factor

As far as I can tell, the British Journal of Pharmacology has taken to requiring that authors who use animal subjects report their studies in accordance with the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines. These are conveniently detailed in their own editorial:

McGrath JC, Drummond GB, McLachlan EM, Kilkenny C, Wainwright CL. Guidelines for reporting experiments involving animals: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1573-6. doi: 10.1111/j.1476-5381.2010.00873.x.

and the guidelines paper itself:

Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG; NC3Rs Reporting Guidelines Working Group. Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol. 2010 Aug;160(7):1577-9. doi: 10.1111/j.1476-5381.2010.00872.x.

The editorial has been cited 270 times. The guidelines paper has been cited 199 times so far and the vast, vast majority of these are in, you guessed it, the BRITISH JOURNAL OF PHARMACOLOGY.

One might almost suspect the journal now demands that authors indicate they have followed these ARRIVE guidelines by citing the 3-page paper listing them. The journal IF is 5.067, so having an item cited 199 times since it appeared in the August 2010 issue represents a considerable outlier. I don't know if a "Guidelines" category of paper (as this is described on the PDF) goes into the ISI calculation. For all we know they had to exempt it. But why would they?

And I notice that some other journals seem to have published the guidelines under the byline of the selfsame authors! Self-Plagiarism!!!

Perhaps they likewise demand that authors cite the paper from their own journal?

Seems a neat little trick to run up an impact factor, doesn't it? Given the JIF and publication rate of real articles in many journals, a couple of hundred extra cites in the sampling interval can have a real effect on the JIF.
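A back-of-the-envelope sketch of that effect (the citable-item count below is my assumption, picked only to be in the right ballpark for a journal this size):

```python
# Rough sketch: how a couple hundred extra in-window citations move a JIF
# of ~5. The citable-item count is an assumed round number, NOT BJP's
# actual figure.
citable_items = 500                     # assumed items in the 2-year window
baseline_jif = 5.067
baseline_cites = baseline_jif * citable_items   # ~2534 cites in the counting year
extra_cites = 200                       # on the order of the guidelines-paper effect

new_jif = (baseline_cites + extra_cites) / citable_items
print(f"{new_jif:.3f}")                 # ~5.467 -- a +0.4 bump from one item
```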

11 responses so far

The 2012 Journal Impact Factors are out

Jun 24 2013 Published by under Impact Factor, Scientific Publication

Naturally this is a time for a resurgence of blathering about how Journal Impact Factors are a hugely flawed measure of the quality of individual papers or scientists. Also it is a time of much bragging about recent gains....I was alerted to the fact that they were out via a society I follow on Twitter bragging about their latest number.

whoo-hoo!

Of course, one must evaluate such claims in context. Seemingly the JIF trend is for unrelenting gains year over year. Which makes sense, of course, if science continues to expand. More science, more papers and therefore more citations seems to me to be the underlying reality. So the only thing that matters is how much a given journal has changed relative to other peer journals, right? A numerical gain, sometimes ridiculously tiny, is hardly the stuff of great pride.

So I thought I'd take a look at some journals that publish drug-abuse type science. There are a ton more in the ~2.5-4.5 range but I picked out the ones that seemed to actually have changed at some point.
[Figure: 2012 Journal Impact Factor trends for selected drug abuse journals]
Neuropsychopharmacology, the journal of the ACNP and subject of the above-quoted Twitt, has closed the gap on arch-rival Biological Psychiatry in the past two years, although each of them trended upward in the past year. For NPP, putting the sadly declining Journal of Neuroscience (the Society for Neuroscience's journal) firmly behind them has to be considered a gain. J Neuro is more general in topic and, as PhysioProf is fond of pointing out, does not publish review articles, so this is expected. NPP invented a once-annual review journal a few years ago and it counts in their JIF, so I'm going to score the last couple of years' gain to this, personally.

Addiction Biology is another curious case. It is worth special note for both the large gains in JIF and the fact it sits atop the ISI Journal Citation Reports (JCR) category for Substance Abuse. The first jump in IF was associated with a change in publisher so perhaps it started getting promoted more heavily and/or guided for JIF gains more heavily. There was a change in editor in there somewhere as well which may have contributed. The most recent gains, I wager, have a little something to do with the self-reinforcing virtuous cycle of having topped the category listing in the ISI JCR and having crept to the top of a large heap of ~2.5-4.5 JIF behavioral pharmacology / neuroscience type journals. This journal had been quarterly up until about two years ago when it started publishing bimonthly and their pre-print queue is ENORMOUS. I saw some articles published in a print issue this year that had appeared online two years before. TWO YEARS! That's a lot of time to accumulate citations before the official JIF window even starts counting. There was news of a record number of journals being excluded from the JCR for self-citation type gaming of the index....I do wonder why the pre-print queue length is not of concern to ISI.

PLoS ONE is an interest of mine, as you know. Phil Davis has an interesting analysis up at Scholarly Kitchen which discusses the tremendous acceleration in papers published per year in PLoS ONE and argues a decline in JIF is inevitable. I tend to agree.

Neuropharmacology and British Journal of Pharmacology are examples of journals which are near the top of the aforementioned mass of journals that publish normal scientific work in my fields of interest. Workmanlike? I suppose the non-pejorative use of that term would be accurate. These two journals bubbled up slightly in the past five years but seem to be enjoying different fates in 2012. It will be interesting to see if these are just wobbles or if the journals can sustain the trends. If real, it may show how easily one journal can suffer a PLoS ONE type of fate, whereby a slightly elevated JIF draws more papers of a lesser eventual impact, while BJP may be showing the sort of virtuous cycle that I suspect Addiction Biology has been enjoying. One slightly discordant note for this interpretation is that Neuropharmacology has managed to get the online-to-print publication lag down to some of the lowest amongst its competition. This is a plus for authors who need to pad their calendar-year citation numbers but it may be a drag on the JIF since articles don't enjoy as much time to acquire citations.

28 responses so far

Placing PLoS ONE in the appropriate evaluative context

Jan 14 2013 Published by under Impact Factor, PLoS ONE, Ponder

As you know, I have a morbid fascination with PLoS ONE and what it means for science, careers in science and the practices within my subfields of interest.

There are two complaints that I see offered as supposedly objective reasons for old school folks' easy complaining about how it is not a real journal. First, that they simply publish "too many papers". It was 23,468 in 2012. This particular complaint always reminds me of the "too many notes" scene from Amadeus,

which is to say that it is a sort of meaningless throwaway comment: a person who has a subjective distaste simply making something up on the spot to cover it over. More importantly, however, it brings up the fact that people are comparing apples to oranges. That is, they are looking at a regular print type of journal (or several of them) and identifying the disconnect. My subfield journals of interest maybe publish something between about 12 and 20 original reports per issue. One or two issues per month. So anything from about 144 to 480 articles per year. A lot lower than PLoS ONE, eh? But look, I follow at least 10 journals that are sort of normal, run of the mill, society level journals in which stuff that I read, cite and publish myself might appear. So right there we're up to something on the order of 3,000 articles per year.

PLoS ONE, as you know, covers just about all aspects of science! So multiply my subfield by all the other subfields (I can get to 20 easy without even leaving "biomedical" as the supergroup) with their respective journals and.... all of a sudden the PLoS ONE output doesn't look so large.
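Making the multiplication explicit, here is a quick sketch using only the rough counts from the text (none of these are measured figures):

```python
# The post's seat-of-the-pants arithmetic on journal output, written out.
reports_per_issue = (12, 20)      # original reports per issue, low/high
issues_per_year = (12, 24)        # one or two issues per month

low = reports_per_issue[0] * issues_per_year[0]     # 144 articles/year
high = reports_per_issue[1] * issues_per_year[1]    # 480 articles/year
print(low, high)

journals_followed = 10
print(journals_followed * 300)    # ~3,000 articles/year in one subfield

biomedical_subfields = 20         # "I can get to 20 easy"
print(biomedical_subfields * 3000)   # ~60,000/year vs PLoS ONE's 23,468
```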

Another way to look at this would be to examine the output of all of the many journals that a big publisher like Elsevier puts out each year. How many do they publish? One hell of a lot more than 23,000, I can assure you. (I mean really, don't they have almost that many journals?) So one answer to the "too many notes" type of complaint might be to ask if the person also discounts Cell articles for that same reason.

The second theme of objection to PLoS ONE was recently expressed by @egmoss on the Twitts:

An 80% acceptance rate is a bit of a problem.

So this tends to overlook the fact that much more ends up published somewhere, eventually, than is reflected in a per-journal acceptance rate. As noted by Conan Kornetsky back in 1975 upon relinquishing the helm of Psychopharmacology:

"There are enough journals currently published that if the scientist perseveres through the various rewriting to meet style differences, he will eventually find a journal that will accept his work".

Again, I ask you to consider the entire body of journals that are normal for your subfield. What do you think the overall acceptance rate for a given manuscript might be? I'd wager it is competitive with PLoS ONE's 80% and probably even higher!

49 responses so far

Reviewing your CV by Journal Impact Factor

So one of the Twitts was recently describing a grant funding agency that required listing the Impact Factor of each journal in which the applicant had published.

No word on whether or not it was the IF for the year in which the paper was published, which seems most fair to me.

It also emerged that the applicant was supposed to list the Journal Impact Factor (JIF) for subdisciplines, presumably the "median impact factor" supplied by ISI. I was curious about the relative impact of listing a different ISI journal category as your primary subdiscipline of science. A sample of ones related to the drug abuse sciences would be:

Neurosciences 2.75
Substance Abuse 2.36
Toxicology 2.34
Behavioral Sciences 2.56
Pharmacology/Pharmacy 2.15
Psychology 2.12
Psychiatry 2.21

Fascinating. What about...
Oncology 2.53
Surgery 1.37
Microbiology 2.40
Neuroimaging 1.69
Veterinary Sciences 0.81
Plant Sciences 1.37

aha, finally a sub-1.0. So I went hunting for some usual suspects mentioned, or suspected, as low-citation-rate disciplines...
Geology 0.93
Geosciences, multidisc 1.33
Forestry 0.87
Statistics and Probability 0.86
Zoology 1.06
Meteorology 1.67

This is a far from complete list of the ISI subdisciplines (and please recognize that many journals can be cross-listed), just a non-random walk conducted by YHN. But it suggests that the range is really restricted, particularly when it comes to closely related fields, like the ones that would fall under the umbrella of substance abuse.

I say the range is restricted because as we know, when it comes to journals in the ~2-4 IF range within neuroscience (as an example), there is really very little difference in subjective quality. (Yes, this is a discussion conditioned on the JIF, deal.)

It requires, I assert, at least the JIF ~6+ range to distinguish a manuscript acceptance from the general herd below about 4.

My point here is that I am uncertain that the agency which requires listing disciplinary median JIFs is really gaining an improved picture of the applicant. Uncertain if cross-disciplinary comparisons can be made effectively. You still need additional knowledge to understand if the person's CV is filled with journals that are viewed as significantly better than average within the subfield. About all you can tell is that they are above or below the median.

A journal which bests the Neurosciences median by a point (3.75) really isn't all that impressive. You have to add something on the order of 3-4 IF points to make a dent. But maybe in Forestry if you get to only a 1.25 this is a smoking upgrade in the perceived awesomeness of the journal? How would one know without further information?
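One would need something like a field-normalized number. As a hedged illustration (my own construction, not anything the agency requested), dividing a journal's JIF by its ISI category median puts the two cases on the same footing:

```python
# Hypothetical normalization: a journal's JIF as a multiple of its ISI
# category median, using the medians listed above.
category_median = {"Neurosciences": 2.75, "Forestry": 0.87}

print(3.75 / category_median["Neurosciences"])   # ~1.36x the field median
print(1.25 / category_median["Forestry"])        # ~1.44x -- the bigger relative jump
```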

15 responses so far

A smear campaign against Impact Factors...and the Sheep of Science

Aug 13 2012 Published by under Impact Factor, Scientific Publication

Stephen Curry has a nice lengthy diatribe against the Impact Factor up over at the occam's typewriter collective. It is an excellent review of the problems associated with the growing dominance of journal Impact Factor in the career of scientists.

I am particularly impressed by:

It is time to start a smear campaign so that nobody will look at them without thinking of their ill effects, so that nobody will mention them uncritically without feeling a prick of shame.

Well, of course I would be impressed, wouldn't I? I've been on the smear campaign for some time.

The problem I have with Curry's post is the suggestion that we continue to need some mechanism, previously filled by journal identity/prestige, as a way to filter the scientific literature. As he quoted from a previous Nature EIC:

“nobody wants to have to wade through a morass of papers of hugely mixed quality, so how will the more interesting papers […] get noticed as such?”

This is the standard bollocks from those who have a direct or indirect interest in the GlamourMag game. Stephen Curry responds a bit too tepidly for my taste:

The trick will be to crowd-source the task.

Ya think?

Look, one of the primary tasks of a scientist is to sift through the literature. To review data that has been presented by other scientists and to decide, for herself, where these data fit. Are they good quality but dull? Exciting but limited? Need verification? Require validation in other assays? Gold-plated genius ready for Stockholm?

This. Is. What. We. Do!!!!!!

And yeah, we "crowdsource" it. We discuss papers with our colleagues. Lab heads and trainees alike. We come back to a paper we've read 20 times and find some new detail that is critical for understanding something else.

This notion that we need help "sifting" through the vast literature and that that help is to be provided by professional editors at Science and Nature who tell us what we need to pay attention to is nonsense.

And acutely detrimental to the progress of science.

I mean really. You are going to take a handful of journals and let them tell you (and your several hundred closest sub-field peers) what to work on? What is most important to pursue? Really?

That isn't science.

That's sheep herding.

And guess what scientists? You are the sheep in this scenario.

22 responses so far

A significant change in Impact Factor

Jul 02 2012 Published by under Impact Factor

I received a kind email from Elsevier this morning, updating me on the amazing improvement in 2011 Impact Factor (versus 2010) for several journals in their stable of "Behavioral & Cognitive Neuroscience Journals". There are three funny bits here, first that the style was:

2010 Impact Factor WAS 2.838, 2011 Impact Factor NOW 3.174

You have to admit the all-caps is a crack up. Second, THREE decimal places! Dudes, this shit is totally precise and that means....sciencey.

As you know, however, DearReader, I have a rather unhealthy interest in the hilariousity of the Impact Factor and I was thinking about the more important issue here.

Is this a significant difference? Who gives a hoot if the IF goes up by 0.336? Is this in any way meaningful?

I suspect the number of available citations is ever on the increase. The business of science is ever expanding, the pressure to publish relentless and the introduction of new journals continues. This means that IFs will be on some baseline level of background increase over time. This is borne out, I will note, by my completely unscientific tracking of journals most closely related to my interests over the past *cough*cough* years *cough*cough*. They all seem to have gradually inched up a few decimal points year in, year out.

For the 0.336 increase, let us do a little seat-of-the-pants calculation. Let's say a journal with 20 articles per issue, 12 issues per year....480 items over the 2-year tracking interval for calculating IF. A 0.336 gain across 480 items means roughly 161 extra citations; round it to 160*. If only 17% of the articles got two more citations, this would account for it. If a mere 3% of articles turned out to be AMAZING for the sub-sub-sub field and won an extra 10 citations each....this would account for the change.
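Written out, with the hypothetical journal from the text (a sketch of the arithmetic, nothing more):

```python
# Seat-of-the-pants check of the 0.336 bump, using the post's numbers.
articles_per_issue = 20
issues_per_year = 12
citable_items = articles_per_issue * issues_per_year * 2   # 480 items in 2 years

jif_gain = 3.174 - 2.838                        # the emailed WAS/NOW difference
extra_cites_needed = jif_gain * citable_items   # ~161; "round it to 160"
print(round(extra_cites_needed))

# Two ways to find ~160 extra citations:
print(0.17 * citable_items * 2)    # 17% of articles with 2 extra cites -> ~163
print(0.03 * citable_items * 10)   # 3% gaining 10 cites each           -> 144
```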

For one thing, I can now see why editors would be willing to try the "Cite us a few more times" gambit with authors in the review stage. It doesn't take many intimidated authors throwing in 4-5 more citations of recent work from the journal in question to move a third of an impact factor.

Heck, just one solo operator author could probably make a notable impact over two years. If I put everything we submit into a single journal over two years' time, and did my level best to make sure to cite everything plausibly relevant from that journal, I could generate 40 extra citations in two years easily. Probably without anyone so much as noticing what I was up to!

The fact that the vast majority of society rank journals that I follow fail to experience dramatic IF gains suggests that nobody is trying to game the system like this and that seemingly universal increases are a reflection of overall trends for total number of publications. But it does make you wonder about those few journals that managed to gain** a subjective rank over a few years' time, say from the 2-4 to the 6-8 range, and just how they pulled it off.

Additional:
This tool permits you to search some citation trends by journal.
__
*Yes, I realize the overlap year for adjacent annual IFs. For our thought exercise, imagine it is non-overlapping years if this bothers you.

**My hypothesis is that an editorial team would only have to pull shenanigans for 2-4 years and after that the IF would be self-sustaining.

14 responses so far

Reviewing academic papers: Observation

When you are reviewing papers for a journal, it is in your best interest to stake out papers most like your own as "acceptable for publication".

If it is a journal with a higher IF than you usually reach, it is in your interest to argue for acceptance of manuscripts that are somewhat below that journal's usual standard.

If it is a journal in which you have published, it is in your interest to crap on any manuscript that is lesser than your typical offerings.

16 responses so far

Another reason why journals maintain those lengthy pre-publication queues...

Apr 04 2012 Published by under Impact Factor, Science Publication

So you finally got your paper accepted, the proofs have come and been returned in 48 hrs (lest some evil, unspecified thing happen). You waited a little bit and BOOM, up it pops on PubMed and on the pre-publication list at the journal. The article is, for almost all intents and purposes, published.

YAY!

Now get back to work on that next paper.

But there's that nagging little thought.....it isn't really published until it gets put in a print issue. Most importantly, you don't know for sure which year it will be properly published in, so the citation is still in flux. So you look at the number of items below yours in the pre-print list, figure out approximately how many articles are published per issue in the journal and game it out. Ugh.... four months? Six? EIGHT????

WHY O WHY gods of publishing?? WHY must it take so long???????

Whenever I've heard a publishing flack address this it has been some mumblage about making sure they have a smooth publication track. That they are never at a loss to publish enough in a given issue. And they have to stick to the schedule don't you know!

(except they don't. Volumes are pretty fixed but you'll notice a "catch up" extra issue of a volume now and again.)

Well, well, well. Something I've never considered was raised in a blog post at Scholarly Kitchen. An article by Frank Krell in Learned Publishing (I swear I'm not making that journal title up) asks if publishers might be using this to game the Impact Factor of their journals.

Dammit! Totally true. Think about it...

Now, before I get started, the Scholarly Kitchen, good publisher flacks that they are, caution:

To me, there needs to be some evidence — even anecdotal — that editors are purposefully post-dating publication for the purposes of citation gaming. Large January issues may be one piece of evidence; however, it may also signal the funding and publication cycle of academics. I’d be more interested to know whether post-dating conversations are going on within editorial boards, or whether authors have been told that the editor is holding back their article to maximize its contribution to the journal’s impact factor.

But this only really addresses the specific point that Krell made about pushing issues around with respect to the start of a new year.

There's a larger point at hand. One of the points of objection I've always had about the IF calculation is that the two-year window puts a serious constraint on the types of citations that are available in certain kinds of science. The kind where it just takes a lot of time to come up with a publishable data set.

Take normal old, run of the mill behavioral experiments that can be classified as behavioral pharmacology (within which a lot of substance abuse studies live). Three to four months, easily, just to get an animal experiment done. Ordering, habituating, pre-training, surgeries and recovery...it takes time. A typical study might be 3-4 groups of subjects, aka, experiments. That's if you get lucky. Throw in some false avenues and failed experiments and you are easily up to 6 or 8 groups. Keep in mind that physical resources like operant boxes, constraints such as the observation window (could be a 6 hr behavioral experiment, no problemo) and available staff (not everyone has a tech) really narrow down the throughput. You can't just "work faster" or "work harder" as is supposedly possible at the bench. The number of "experiments" you can do doesn't scale up with time spent in the lab if you are doing behavioral studies with some sort of timecourse. The PI may not even be able to do much by throwing more people into the project even if the laboratory does have this sort of flexibility.

Right?

So here you are, Joe Scholar, reading your favorite journal when BLAM! You see an awesome paper that gives you a whole line of new ideas that you could and should set out to study. Like, RIGHT FREAKING NOW!!! Okay, so suppose money is not an issue and you don't have anything else particularly pressing. Order some animals and off you go.

It's going to be a YEAR minimum to complete the studies. A month to write up the draft, throw in three months for peer review and another month for the journal to get its act together. Thus, if things go really, really well for you there is only a 6 month window of slack to get a citation in for that original motivating paper before the 2 year IF citation window elapses.

Things never go that well.

In my view this makes it almost categorically impossible for a publication to garner IF credit for a citation that is the most meaningful of all. A citation from a paper motivated almost entirely by said prior work.

The principle extends, though. Even if you only see the paper and realize you need to incorporate it into your Discussion or Introduction, the length of time the paper is available with respect to the IF window matters. If there were just some way journals could extend that window between general availability of a work and the expiration of the IF window, then this would, statistically, boost the number of citations. If the clock doesn't start running until the paper has been visible for 6 months....say, how could we do that? How....? Hmm.

Ah yes. Let it languish in the "online first" archive! Brilliant! It goes up on PubMed and people can read the paper. Get their experiments rolling. Write the findings into their Intros and Discussions.

Right.
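To see how much this could matter, here is a toy model. Every parameter is invented for illustration; the only structural assumption is that an article's citation rate ramps up over its first year or so of visibility while the JIF clock is keyed to the print date:

```python
# Toy model: citations accrue from the ONLINE date, but the 2-year JIF
# window is keyed to the PRINT date. A long online-first lag shifts the
# counting window into the article's mature, higher-citing phase.
def monthly_cite_rate(months_online, peak=0.4, ramp_months=18):
    # rate ramps linearly to a plateau; shape and numbers are invented
    return peak * min(months_online / ramp_months, 1.0)

def cites_in_jif_window(lag_months, window_months=24):
    return sum(monthly_cite_rate(lag_months + m) for m in range(window_months))

for lag in (0, 6, 13):    # the six-to-thirteen-month offsets described below
    print(f"lag {lag:>2} months -> {cites_in_jif_window(lag):.1f} counted cites")
# lag 0 -> 5.8; lag 6 -> 7.9; lag 13 -> 9.3: roughly a 60% boost at 13 months
```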

I agree with the Scholarly Kitchen post that we don't know that this is why some journals keep such a hefty backlog of articles in their pre-print queue. Having watched a handful of my favorite journals maintain anywhere from six to thirteen month offsets over periods of many months to years, however, I have my suspicions. The journals I pay attention to have maintained their offsets over at least a decade if you assume the lower bound of about 4-5 months (and trust that my spot-checking is valid as a general rule). The idea that they do this to avoid publication dryspells is nonsense, they have plenty of accepted articles on a frequent enough basis so that they could trim down to, say, 2-3 months of backlog. So there must be another reason.

32 responses so far

Explaining resistance to the Elsevier boycott: Practicalities.

As you are aware, calls to boycott submitting articles to, and reviewing manuscripts for, journals published by Elsevier are growing. The Cost of Knowledge petition stands at 4694 as of this writing. Of these some 623 signatories have identified themselves as being in Biology, 380 in Social Sciences, 260 in Medicine and 126 in Psychology.

These disciplines cover the sciences and the scientists I know best, including my own work.

There seems to be some dismay in certain quarters with the participation of people in these disciplines. This is based, I would assume, on a seat of the pants idea that there are way more active scientists in these disciplines than seems represented by signatures on the petition. Also, I surmise, based on the host of journals published by Elsevier that cater to various aspects of these broader disciplinary categories.

Others have pointed out that in certain cases, such as Cell or The Lancet, there is no way a set of authors are going to give up the cachet of a possible paper acceptance in that particular journal.

I want to address some more quotidian concerns.

I already mentioned the notion of academic societies which benefit from their relationship with Elsevier. Like it or not, they host a LOT of society journals. Sometimes this is just ego and sometimes the society might really be making some ca-change from the relationship. For those scientists who really love the notion that their society has its own journal, this needs to be addressed before they will get on board with a boycott.

Moving along we deal with the considerations that go into selection of a journal to publish in. Considerations that are not driven by Impact Factor since within the class of society journals, such concerns fade. The IFs are all really close, even if they do like to brag about incremental improvement, or about their numerical advantage over a competitor. Yes, 4.5 is better than 4.3 but c'mon. Other factors come into play.

Cost: Somewhere or other (was it Dr. Zen?) someone in this discussion brought up the notion that paying Open Access fees upfront is a big stumbling block. Yes, in one way or another the taxpayers (state and federal in the US) are footing the bill but from the perspective of the PI, increasing library fees to the University don't matter. What matters are the Direct Cost budgets of her laboratory (and possibly the Institutional funds budget). Sure, OA journals allow you to ask for a fee waiver...but who knows if they will give it? Why would you go through all that work (and time) to get the manuscript accepted just to have to pull it if they refuse to let you skip out on the fee? I mean, heck, $1,000 is always handier to have in the lab than being shunted off to the OA publisher, right? I don't care how many R01s you have...

Convenience: The online manuscript handling system of Elsevier is good. I've had experience with a few others, Scholar ONE based systems, etc. Just heard a complaint about the PLoS system on the Twitts the other day, as it happens. Bottom line is that the Elsevier one works really well. Easy file uploading, fast PDF creation, reasonably workable input of all the extraneous info....and good progress/status updating as the manuscript undergoes peer review and decision-making at the editorial offices. This is not the case for all other publishers/journals. And what can I say? I like easy. I don't like fighting with file uploads. I don't like constantly having to email the managing editorial team to find out if my fucking manuscript is out for review, back from review, sitting on the Editor's desk or what. And yeah, we didn't have that info back in the day. And knowing the first two reviews are in but the journal is still waiting for the third one doesn't really change a damn thing. But you know what? I like to see the progress.

Audience: One of the first things I do, when considering submitting to a journal in which I do not usually publish, is to keyword search for recent articles. Do they publish stuff like the one we're about to submit? If yes, then I feel more comfortable in a general sense about editorial decision making and the selection of relevant reviewers. If no...well, why waste the time? Why start off with the dual problem of arguing the merits of both the specific paper and the general topic of interest? Now note, this is not always a valid assumption. I have a clear example in which the journal description seemed to encompass our work...but if you looked at the papers they generally published you'd think we were crazy to submit there. "But they only publish BadgerDigging Studies, not a BunnyHopper to be seen" you'd say. Well, turns out we didn't have one lick of trouble about topic "fit" from that journal. Go figure. But even with that experience under my belt, I'm still gonna hesitate.

Editor (friendly): Yes, yes, I frequently point out how stupid and wrong we are when trying to game out who is going to respond favorably to our grant proposals. Same thing holds for paper review. But still. I can't help but feel that I've gotten more editorial rulings going my way from editors that I know personally, know they know my work/me and suspect that they are at least 51% favorable towards me/my submissions. The hit rate from people that I'm pretty convinced don't really know who I am seems somewhat reduced. So yeah, you are damn right I am going to scrutinize the Editorial board of a journal for signs of a friendly name.

Editor (unfriendly): Again, I know it is a fool's errand. I know that just because I think someone is critical of our work, or has a personal dislike for me, this means jack-all. Heck, I've probably given really nice manuscript and/or grant reviews to scientists who I personally think are complete jerks, myself. But still... it is common enough that biomedical scientists see pernicious payback lurking behind every corner. Perhaps with justification?

I don't intend to just stay mad, but to get fucken EVEN the next time I'm reviewing one of theirs. Which will fucken happen. It will.

So yeah, many biomedical scientists are going to put "getting the damn paper accepted already" way up above any considerations about Elsevier's support for closing off access to tax-payer funded science. Because they feel it is not their fight, yes, but also because it has the potential to cost 'em. This is going to have to be addressed.

On a personal note, PLoS ONE currently fails the test. There are some papers starting to come out in the substance abuse and behavioral pharmacology areas. Some. But not many. And it is hard to get a serious feel for the whole mystique over there about "solid study, not concerned about impact". Because opinions vary on what represents a solid demonstration. Considerably. Then I look at the list of editors that claim to handle substance abuse. It isn't extensive and I note at least a few.....strong personalities. Surely these individuals are going to trigger friendly/unfriendly issues for different scientists in their fields. Even worse, however, is the fact that many of them are not listed as having edited any papers published in PLoS ONE yet. And that would be totally concerning to me if I were considering submitting to that journal instead of one of the many Elsevier titles that might work for us.

46 responses so far

On giving advice to newly transitioned Assistant Professors

Dr Becca has a post up in which she ponders a perennial issue for newly established labs....and many other labs as well.

The gist is that which journal you manage to get your work published in is absolutely a career concern. Absolutely. For any newcomers to the academic publishing game that stumbled on this post, suffice it to say that there are many journal ranking systems. These range from the formal to the generally-accepted to the highly personal. Scientists, being the people that they are, tend to take shortcuts when evaluating the quality of someone else's work, particularly once it ranges afield from the highly specific disciplines which the reviewing individual inhabits. One such shortcut is inferring something about the quality of a particular academic paper by knowledge of the reputation of the journal in which it is published.

One is also judged, however, by the rate at which one publishes and, correspondingly, the total number of publications given a particular career status.

Generally speaking there will be an inverse correlation between rate (or total number) and the status of the journals in which the manuscripts are published.

This is for many reasons, ranging from the fact that a higher-profile work is (generally) going to require more work. More time spent in the lab. More experiments. More analysis. More people's expertise. Also from the fact that the manuscript may need to be submitted to more higher-profile journals (in sequence, never simultaneously), on average, to get accepted than to get picked up by so-called lesser journals.

This negative correlation of profile/reputation with publishing rate is Dr Becca's issue of the day. When to keep bashing your head against the "high profile journal" wall and when to decide that the goal of "just getting it published" somewhere/anywhere* takes priority.

I am one who advises balance. The balance that says "don't bet the entire farm" on unknowables like GlamourMag acceptance. The balance that says to make sure a certain minimum publication rate is obtained. And for a newly transitioning scientist, I think that "at least one pub per year" needs to be the target. And I mean, per year, in print, pulled up in PubMed for that publishing year. Not an average, if you can help it. Not Epub in 2011, print in 2012. Again, if you can help it.

The target. This is not necessarily going to be sufficient...and in some cases a gap of a year or two can be okay. But I think this is a good general rubric for triaging your submission strategy.

It isn't that one C/N/S pub won't trump a sustained pub rate and a half-dozen society level publications. It will. The problem is that it is a far from certain outcome. So if you end up with a three year publication gap, no C/N/S pubs and you end up dumping the data in a half-dozen society level journal pubs anyway...well, in grant-getting and tenure-awarding terms, a 2-3 year publication gap with "yeah but NOW we're submitting this stuff to dump journals like wildfire, so all good, k?" just isn't smart.

My advice is to take care of business first, get that 1-2 pub per year in bare minimum or halfway decent journals track going, and then to think about layering high-profile risky business on top of that.

Dang, I got all distracted. What I really meant to blog about was a certain type of comment popping up in Dr. Becca's thread.

The kind of comment that I think pushes the commenter's pet agenda, vis a vis academic publishing, over what is actually good advice for someone that is newly transitioned to an independent laboratory position. I have my own issues when it comes to this stuff. I think the reification of IF and the pursuit of GlamorMag publication is absolutely ruining the pursuit of knowledge and academic science.

But it is absolutely foolish and bad mentoring to ignore the realities of our careers and the judging of our talents and accomplishments. I'd rather nobody *ever* submitted to a journal solely because of the journal's reputation. I long for the end of each and every academic journal in which the editors are anything other than actual working scientists. The professional journal "editors" will be, as they say, the first against the wall come the revolution in my glorious future. Etc.

But you would never catch me telling someone in Dr. Becca's position that she should just ignore IF and journal status and publish everything in the easiest venue to get accepted. Never.

You wackaloon Open Access Nazdrul and followers need to dissociate your theology from your advice giving.
__
*there are minimum standards. "Peer Reviewed" is one such standard. I would argue that "indexed in PubMed" (or your relevant major database) is another such. Also, my arbitrary sub-field snobbery** starts at an Impact Factor of around 1.something.....however I notice that the IF of my touchstone journals for "the bottom" have inched up over the years. Perhaps "2" is my lower bound now.

**see? for some fields this is snobbery. for others, a ridiculous, snarky statement. Are you getting the message yet?

12 responses so far
